CSC/ECE 517 Fall 2017/E1787 OSS project Bronze Score calculations


Revision as of 22:03, 27 October 2017

==Introduction==

This project addresses a score-calculation bug in Expertiza's homework-review mechanism.

==Scenario==

Sometimes when a reviewer fills out the review form for a project or homework assignment, he or she may leave certain review questions blank. When the score for the project or assignment is calculated, the application fills in 0 for the blank answers. This behavior is incorrect: blank review answers should never be included in the final-score calculation.

==Solution==

==Github Link==


==Our Work==

For the OSS project, our topic is fixing and modifying Expertiza's team-score calculation mechanism. We met once a week for two weeks, sometimes with the TA. During the first week and a half, each of us set up the development environment on our own machine. Notable setup issues included the therubyracer dependency problem, Node packages not being recognized by Rails on Windows, and font-awesome-rails path issues; we worked together to identify these issues and find ways around them. Over the remaining two weeks, we identified several issues with the project. The main issue for our part is Expertiza's incorrect score-calculation mechanism, in which empty scores are automatically assumed to be 0 and included in the calculation; this produces a false result, because empty scores should be excluded from the calculation rather than set to 0. Through debugging, we found that the cause lies in the average_score_for_row method of the vm_question_response_row.rb model, shown below in red:

(post code)

where the divisor used to compute row_average_score was not being calculated correctly.
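The effect of the bug can be illustrated with a minimal sketch (this is not the actual Expertiza source; the data shape and method names here are simplified stand-ins): when a blank answer is treated as a 0 and still counted in the divisor, the row average is dragged down.

```ruby
# Buggy behavior: nil answers become 0 but still count toward the divisor.
def buggy_average(scores)
  scores.map { |s| s.nil? ? 0 : s }.sum.to_f / scores.length
end

# Fixed behavior: only questions that were actually answered are counted.
def fixed_average(scores)
  answered = scores.compact   # drop the nil (blank) answers
  return 0.0 if answered.empty?
  answered.sum.to_f / answered.length
end

scores = [4, 5, nil, 3]        # one question left blank by the reviewer
puts buggy_average(scores)     # 3.0 -- the blank answer counted as a 0
puts fixed_average(scores)     # 4.0 -- the blank answer is excluded
```

The `compact` call is what the counter in the actual fix accomplishes: both the sum and the count are taken only over real answers.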

We proceeded to solve the issue by modifying the calculation of the divisor, using a counter to exclude nil values, and by modifying the constructor of the VmQuestionResponseRow class so that it can dynamically accept either 5 or 6 parameters, as shown below:

(post code)

The first modification was chosen because it fixes the issue while keeping the code as simple as possible; the second accommodates the scenario in which a given score exceeds an upper limit x or falls below a lower limit y: the score is automatically reset to x or y, respectively, so that it always stays between y and x.
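In Ruby, a constructor that accepts either 5 or 6 arguments is typically written with a default value for the last parameter. The sketch below is hypothetical (the parameter names and the add_score helper are illustrative, not Expertiza's actual code); only the class name VmQuestionResponseRow comes from the project.

```ruby
# Hypothetical sketch of the constructor change: a default value for the
# last parameter lets callers pass either 5 or 6 arguments.
class VmQuestionResponseRow
  attr_reader :question_text, :question_id, :weight,
              :max_score, :min_score, :scores

  def initialize(question_text, question_id, weight, max_score, min_score = 0)
    @question_text = question_text
    @question_id   = question_id
    @weight        = weight
    @max_score     = max_score
    @min_score     = min_score
    @scores        = []
  end

  # Reset out-of-range scores to the nearest limit so that every stored
  # score stays between min_score and max_score.
  def add_score(score)
    @scores << score.clamp(@min_score, @max_score)
  end
end

five_arg = VmQuestionResponseRow.new("Code quality?", 1, 1, 5)  # 5 arguments
six_arg  = VmQuestionResponseRow.new("Effort?", 2, 1, 5, 1)     # 6 arguments
five_arg.add_score(7)  # above the upper limit, stored as 5
six_arg.add_score(0)   # below the lower limit, stored as 1
```

Ruby has no true constructor overloading, so a default parameter (or a variable-length argument list) is the idiomatic way to let one `initialize` handle both arities.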

Afterward, we tested the model-class modifications using RSpec, with code as shown below:

(post code)
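Until the actual spec code is posted, the stdlib-only sketch below approximates the behavior the RSpec examples verify (the real specs use RSpec's describe/it/expect syntax; the stand-in model and the average_score_for_row signature here are assumptions):

```ruby
# Stand-in model mirroring the fixed behavior under test: the row average
# ignores nil (blank) answers instead of counting them as 0.
class RowUnderTest
  def initialize(scores)
    @scores = scores
  end

  def average_score_for_row
    answered = @scores.compact
    return 0.0 if answered.empty?
    answered.sum.to_f / answered.length
  end
end

# Behavior the specs assert:
# 1. blank answers are excluded from the average
raise unless RowUnderTest.new([4, nil, 5]).average_score_for_row == 4.5
# 2. an all-blank row averages to 0 instead of dividing by zero
raise unless RowUnderTest.new([nil, nil]).average_score_for_row == 0.0
```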

For this project, we used techniques such as unit testing and constructor overloading, and technologies such as Bower, Capybara, and RSpec.