CSC/ECE 517 Fall 2017/E1787 OSS project Bronze Score calculations

From Expertiza_Wiki
Revision as of 22:38, 27 October 2017 by Skatta3

Introduction

This project fixes a score-calculation bug in Expertiza's homework-review mechanism.

Scenario

Sometimes when a reviewer fills out the review form for a project or homework, he or she may leave certain review questions blank. When calculating the score for the project or homework, the application fills in 0 for the blank answers. This behavior is incorrect: blank review answers should be excluded from the final-score calculation entirely.
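To illustrate the problem with hypothetical scores (not taken from the source), treating a blank answer as 0 drags the average down, while excluding it gives the intended result:

```ruby
# Hypothetical review scores: two answered questions and one left blank (nil).
scores = [8, 9, nil]

# Buggy behavior: the blank becomes 0 and still counts toward the divisor.
buggy_average = scores.map { |s| s || 0 }.sum / scores.size.to_f
# (8 + 9 + 0) / 3, roughly 5.67

# Correct behavior: blanks are excluded from both the sum and the divisor.
answered = scores.compact
correct_average = answered.sum / answered.size.to_f
# (8 + 9) / 2 = 8.5
```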

Solution

Github Link

Explanation

Average Calculation

The scores

Testing

We ran unit tests using RSpec and Capybara. The test file path is spec/models/vm_question_response_row_spec.rb.

For the test, we identified two scenarios. The first concerns the correctness of the average-score calculation: we create a VmQuestionResponseRow object containing sample test scores, call its average_score_for_row method, and compare the returned average to the expected average.

The second checks whether the average-score calculation includes nil values: we create a VmQuestionResponseRow object containing one or more nil sample scores, call its average_score_for_row method, and compare the returned average to the expected average to confirm that nil values are excluded from the calculation.

Our Work

For the OSS project, our topic is fixing the Expertiza team-score calculation mechanism. We met once a week for two weeks, sometimes with the TA. During the first week and a half, each of us set up the development environment on our own machine. Notable setup issues included a theRubyRacer dependency problem in the project, Node packages not being recognized by Rails (on Windows), and Font-Awesome-Rails path issues; we worked together to identify these issues and find ways around them.

Over the remainder of this roughly two-week period, we identified several issues in the project. The main issue for our part was Expertiza's incorrect score-calculation mechanism, in which empty scores are automatically assumed to be 0 and included in the calculation; this in turn yields an incorrect score, since empty scores should not be set to 0 and counted. Through debugging and defining the problem, we found that the cause lies in the average_score_for_row method of vm_question_response_row.rb in the model, as shown below in red:

where the divisor of row_average_score was not being computed correctly.

We solved the issue by modifying the calculation of the divisor, using a counter to exclude nil values, and by modifying the constructor of the VmQuestionResponseRow class so that it can dynamically accept 5 or 6 parameters, as shown below:

(post code)
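A minimal sketch of the counter-based fix follows. This is a simplified stand-in, not the actual Expertiza source: the real VmQuestionResponseRow carries question metadata and other fields, and only the method name average_score_for_row is taken from the source.

```ruby
# Simplified stand-in for Expertiza's VmQuestionResponseRow model class.
class VmQuestionResponseRow
  # scores: array of numeric scores, with nil for blank answers.
  def initialize(scores)
    @score_row = scores
  end

  def average_score_for_row
    sum = 0
    counter = 0 # counts only the questions that were actually answered
    @score_row.each do |score|
      next if score.nil? # skip blank answers entirely
      sum += score
      counter += 1
    end
    return 0 if counter.zero? # no answered questions: avoid division by zero
    sum.to_f / counter
  end
end
```

With this divisor, VmQuestionResponseRow.new([8, 9, nil]).average_score_for_row returns 8.5 rather than an average diluted by a phantom zero.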

The first modification fixes the issue while keeping the code as simple as possible; the second accommodates the scenario where, if a given score is above an upper limit x or below a lower limit y, the score is automatically reset to x or y respectively, so that the score always stays between x and y.
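In Ruby, accepting either arity is typically done with a default argument value rather than true overloading. The sketch below is illustrative only: the class name, parameter names, and default bound are assumptions, not the actual Expertiza signature.

```ruby
# Illustrative class, not the actual Expertiza constructor: the optional
# last parameter lets callers pass one fewer argument, which is how a Ruby
# initializer "dynamically accepts" two different arities.
class ScoreRow
  attr_reader :scores

  def initialize(question, weight, raw_scores, min, max = 5)
    @question = question
    @weight = weight
    # Clamp each answered score into [min, max]; leave blanks (nil) alone.
    @scores = raw_scores.map { |s| s.nil? ? nil : s.clamp(min, max) }
  end
end
```

For example, ScoreRow.new("Q1", 1, [7, -2, nil], 0).scores clamps the out-of-range entries to [5, 0, nil] while preserving the blank answer.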

Afterward, we tested the model-class modifications using RSpec, with code as shown below:

(post code)

For this project, we used techniques such as unit testing and constructor overloading, and technologies such as Bower, Capybara, and RSpec.