CSC/ECE 517 Fall 2017/E1762 Test various kinds of response-map hierarchies


Expertiza Background

Expertiza is a web application where students can submit and peer-review learning objects (articles, code, websites, etc.). Instructors add and edit assignments in Expertiza, and students can be assigned to teams based on the topics they select. Expertiza supports team projects and the submission of almost any document type, including URLs and wiki pages, and its goal is to create reusable learning objects through peer review. The project is supported by the National Science Foundation.

Introduction

Problem Statement

In Expertiza, there are several types of response maps (model files in app/models). A response map ties together a response, the party who submitted it (the reviewer), and the party it is directed to (the reviewee). The parent class is ResponseMap, and the subclasses are (from most used to least used) ReviewResponseMap, TeammateReviewResponseMap, FeedbackResponseMap, QuizResponseMap, AssignmentSurveyResponseMap, BookmarkRatingResponseMap, MetareviewResponseMap, SelfReviewResponseMap, CourseSurveyResponseMap, and GlobalSurveyResponseMap. Each response map record has a reviewer_id and a reviewee_id, but you need to read the code to learn what those IDs actually refer to (see the foreign key constraints at the beginning of each model; depending on the model they may point to a participant, a team, etc.). These models did not have any unit tests.
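For instance, these foreign-key declarations appear as belongs_to associations at the top of each subclass. The sketch below is illustrative only (abridged, not the exact Expertiza source); it shows how reviewee_id can point to a team rather than an individual user in ReviewResponseMap:

    # Illustrative sketch only -- the exact associations differ per model.
    class ReviewResponseMap < ResponseMap
      belongs_to :reviewee, class_name: 'Team', foreign_key: 'reviewee_id'
      belongs_to :assignment, class_name: 'Assignment', foreign_key: 'reviewed_object_id'
    end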


Work to be done

• Create a factory for each response map model in the spec/factories/factories.rb file (a review_response_map factory already exists).

• Then create test files in spec/models (refer to answer_spec.rb and due_date_spec.rb for examples of how to write specs) and write model specs for the methods. If you find a method that has no caller, remove the method instead of writing tests for it. Good tests maximize coverage, so try to cover as many methods and conditions as possible.

Files to be used or created

To write the unit tests for the models, we need to define spec files for them. Before that, however, we need to understand how their methods work. For this we use the following files (in app/models):


• response_map.rb

• review_response_map.rb

• teammate_review_response_map.rb

• feedback_response_map.rb

• quiz_response_map.rb

• assignment_survey_response_map.rb

• bookmark_rating_response_map.rb

• metareview_response_map.rb

• self_review_response_map.rb

• course_survey_response_map.rb

• global_survey_response_map.rb


While writing our tests, we need a way to set up database records against which different scenarios can be tested. This is done by creating a factory for each response map model in the spec/factories/factories.rb file.


Now, we use RSpec to create the test cases for these models. These files are added to the spec/models/ folder. The naming convention is MODELNAME_spec.rb, which gives us the following files (a minimal skeleton shared by all of them is sketched after this list):


• response_map_spec.rb

• review_response_map_spec.rb

• teammate_review_response_map_spec.rb

• feedback_response_map_spec.rb

• quiz_response_map_spec.rb

• assignment_survey_response_map_spec.rb

• bookmark_rating_response_map_spec.rb

• metareview_response_map_spec.rb

• self_review_response_map_spec.rb

• course_survey_response_map_spec.rb

• global_survey_response_map_spec.rb
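
Each of these spec files follows the same basic skeleton. A minimal sketch, using the #map_id example that appears later on this page (the attribute values are illustrative):

    require 'rails_helper'
    describe ResponseMap do
      # built from the :response_map factory defined in spec/factories/factories.rb
      let(:response_map) { create(:response_map, id: 1) }
      describe '#map_id' do
        it 'should return the id of the record' do
          expect(response_map.map_id).to eq 1
        end
      end
    end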


After studying the models to be tested, we realized that several of them contain only blank/unimplemented methods or standard Ruby methods inherited from the framework. The RSpecs for such methods are therefore not part of the required coverage, and consequently no factories were created for them. The following models did not contain any relevant method, so tests were not necessary for them:


• assignment_survey_response_map.rb

• teammate_review_response_map.rb

• bookmark_rating_response_map.rb

• self_review_response_map.rb

• course_survey_response_map.rb

• global_survey_response_map.rb

Also, the ReviewResponseMap model has a few private methods which, per standard unit-testing practice, need not be tested directly. Those methods are therefore outside the scope of the given problem statement.


Creating Factories

Each response map model needs its own factory so that tests can create the mock objects needed by the specs. Here is a sample of the factory we created for the response_map.rb model:

   factory :response_map, class: ResponseMap do
     reviewed_object_id 1
     reviewer_id 1
     reviewee_id 1
     type 'ResponseMap'
     calibrate_to 0
   end

Here, we create a mock object for the ResponseMap model in order to test its methods. The factory assigns default values to the necessary attributes (reviewer ID, reviewee ID, and so on); these defaults can be overridden as needed when objects are created in a spec.
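Within a spec, objects are then created from this factory, overriding only the attributes a particular example cares about. For example (the attribute values shown are illustrative):

   # Uses the :response_map factory defined above; values here are illustrative.
   @response_map = create(:response_map, id: 1, reviewer_id: 5, reviewee_id: 2, type: "ReviewResponseMap")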

Writing Tests

The test cases implemented in the project can be seen in the respective spec files of the models. A sample test written for some of the methods of response_map.rb is shown here for reference.

   describe '.get_reviewer_assessments_for' do
     let(:team) { Team.create name: 'team', id: 2, parent_id: 1, type: "AssignmentTeam" }
     it 'should return the responses given to the team by the reviewer' do
       @reviewer = create(:student, id: 5)
       @response_map1 = create(:response_map, id: 1, reviewer_id: 5, reviewed_object_id: 1, reviewee_id: 2, type: "ReviewResponseMap")
       @response1 = create(:response, id: 1, map_id: 1, is_submitted: true)
       expect(ResponseMap.get_reviewer_assessments_for(team, @reviewer)).to eql @response1
     end
   end
   describe '#map_id' do
     it 'should return the id' do
       # the map under test is created from the factory, since it is not set up elsewhere in this snippet
       @response_map = create(:response_map, id: 1)
       expect(@response_map.map_id).to be == 1
     end
   end

Test Plan

The test plan for this project involves running the newly written RSpec files for the respective models and checking whether the expected output is obtained. How to run the files is explained in the next section.

Testing the RSpec files

To run the test cases for a particular model, run the following command in a terminal:

   rspec spec/models/<name of model_spec file>
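
For example, to run only the ReviewResponseMap specs, or all model specs at once (the optional --format documentation flag prints each example description as it runs):

   rspec spec/models/review_response_map_spec.rb
   rspec spec/models --format documentation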

Explanation of a few unit tests

Consider the given RSpec for the export method in the ReviewResponseMap model:

   describe '.export' do
        it 'should return an array containing the mapping between reviewee name and reviewer name' do
          @assignment = create(:assignment,id: 1)
          @assignment_team = create(:assignment_team,id: 2,name: "teamxyz",parent_id: 1)
          @student = create(:student,id: 5, name: "abcd")
          @reviewer = create(:participant,id: 1, user_id: 5,parent_id: 1)
          @review_response_map = create(:review_response_map,assignment: @assignment,reviewee: @assignment_team, reviewer_id: 1)
          expect(ReviewResponseMap.export([],1,nil)).to eql [@review_response_map]
        end
        it 'should return an array sorted according to reviewee name' do
          @assignment = create(:assignment,id: 1)
          @assignment_team1 = create(:assignment_team,id: 2,name: "teamxyz",parent_id: 1)
          @assignment_team2 = create(:assignment_team,id: 3,name: "abcdefg",parent_id: 1)
          @student = create(:student,id: 5, name: "abcd")
          @reviewer = create(:participant,id: 1, user_id: 5,parent_id: 1)
          @review_response_map1 = create(:review_response_map,assignment: @assignment,reviewee: @assignment_team1, reviewer_id: 1)
          @review_response_map2 = create(:review_response_map,assignment: @assignment,reviewee: @assignment_team2, reviewer_id: 1)
          expect(ReviewResponseMap.export([],1,nil)).to eql [@review_response_map2,@review_response_map1]
        end
      end

The export method is as follows:

    def self.export(csv, parent_id, _options)
        mappings = where(reviewed_object_id: parent_id).to_a
        mappings.sort! {|a, b| a.reviewee.name <=> b.reviewee.name }
        mappings.each do |map|
          csv << [
            map.reviewee.name,
            map.reviewer.name
          ]
        end
      end 

The export method takes an empty array and the assignment ID as input. It returns a list containing the mapping between reviewee name and reviewer name, sorted by reviewee name. In the unit test, we need to check two conditions: 1) it should return the mapping between reviewee name and reviewer name, and 2) the mapping must be sorted by reviewee name.

In the RSpec we test the two conditions as shown in the code. For the first condition, we created mock objects for the assignment, the assignment_team (reviewee), the review_response_map, and the reviewer, and set up their proper associations. When we run the export method, we get the expected mapping between the reviewee and the reviewer.

For the second condition, we created a single mock object each for the assignment and the reviewer, and two mock objects each for the assignment_team and the review_response_map. When the export method is executed, the mapping is returned in sorted order based on reviewee name, as can be seen in the RSpec code.
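To make the expected output concrete: export appends a [reviewee_name, reviewer_name] pair to the csv argument for each mapping and returns the sorted mappings, so calling it with an empty array in the second scenario would leave that array roughly as follows (a sketch using the mock data above; the reviewer name is shown as it would resolve through the participant's user):

    csv = []
    ReviewResponseMap.export(csv, 1, nil)
    # csv now holds the rows, sorted by reviewee name, roughly:
    # [["abcdefg", "abcd"], ["teamxyz", "abcd"]]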

For another example, consider the given RSpec for the feedback_response_report method in the FeedbackResponseMap model:

    describe '.feedback_response_report' do
        it "should return authors and responses by rounds" do
          @assignment = create(:assignment,id: 1)
          @student1 = create(:student,id: 5, name: "abcd")
          @student2 = create(:student,id: 3, name: "efgh")
          @student3 = create(:student,id: 7, name: "wxyz")
          @student4 = create(:student,id: 8, name: "pqrs")
          @student5 = create(:student,id: 9, name: "lmno")
          @reviewer1 = create(:participant,id: 1, user_id: 5,parent_id: 1)
          @reviewer2 = create(:participant,id: 4, user_id: 8,parent_id: 1)
          @reviewer3 = create(:participant,id: 5, user_id: 9,parent_id: 1)
          @reviewee1 = create(:participant,id: 2, user_id: 3,parent_id: 1)
          @reviewee2 = create(:participant,id: 3, user_id: 7,parent_id: 1)
          @team = create(:assignment_team,id: 2,name: "teamxyz",parent_id: 1)
          @teamuser1 = create(:team_user,team: @team,user: @student2)
          @teamuser2 = create(:team_user,team: @team,user: @student3)
          @review_response_map1 = create(:review_response_map,id: 1, assignment: @assignment,reviewee: @team, reviewer_id: 1)
          @review_response_map2 = create(:review_response_map,id: 2, assignment: @assignment,reviewee: @team, reviewer_id: 4)
          @review_response_map3 = create(:review_response_map,id: 3, assignment: @assignment,reviewee: @team, reviewer_id: 5)
          @response1 = create(:response,id: 1, response_map: @review_response_map1, round: 1)
          @response2 = create(:response,id: 2, response_map: @review_response_map2, round: 2)
          @response3 = create(:response,id: 3, response_map: @review_response_map3, round: 3)
          expect(FeedbackResponseMap.feedback_response_report(1,nil)).to eql ([[@reviewee1,@reviewee2],[1,2,3]])
        end
      end


The feedback_response_report method is as follows:

    def self.feedback_response_report(id, _type)
        # Example query
        # SELECT distinct reviewer_id FROM response_maps where type = 'FeedbackResponseMap' and
        # reviewed_object_id in (select id from responses where
        # map_id in (select id from response_maps where reviewed_object_id = 722 and type = 'ReviewResponseMap'))
        @review_response_map_ids = ReviewResponseMap.where(["reviewed_object_id = ?", id]).pluck("id")
        teams = AssignmentTeam.where(parent_id: id)
        @authors = []
        teams.each do |team|
          team.users.each do |user|
            participant = AssignmentParticipant.where(parent_id: id, user_id: user.id).first
            @authors << participant
          end
        end
        @temp_review_responses = Response.where(["map_id IN (?)", @review_response_map_ids]).order("created_at DESC")
        # we need to pick the latest version of review for each round
        @temp_response_map_ids = []
        if Assignment.find(id).varying_rubrics_by_round?
          @all_review_response_ids_round_one = []
          @all_review_response_ids_round_two = []
          @all_review_response_ids_round_three = []
          @temp_review_responses.each do |response|
            next if @temp_response_map_ids.include? response.map_id.to_s + response.round.to_s
            @temp_response_map_ids << response.map_id.to_s + response.round.to_s
            @all_review_response_ids_round_one << response.id if response.round == 1
            @all_review_response_ids_round_two << response.id if response.round == 2
            @all_review_response_ids_round_three << response.id if response.round == 3
          end
        else
          @all_review_response_ids = []
          @temp_review_responses.each do |response|
            unless @temp_response_map_ids.include? response.map_id
              @temp_response_map_ids << response.map_id
              @all_review_response_ids << response.id
            end
          end
        end
        # @feedback_response_map_ids = ResponseMap.where(["reviewed_object_id IN (?) and type = ?", @all_review_response_ids, type]).pluck("id")
        # @feedback_responses = Response.where(["map_id IN (?)", @feedback_response_map_ids]).pluck("id")
        if Assignment.find(id).varying_rubrics_by_round?
          return @authors, @all_review_response_ids_round_one, @all_review_response_ids_round_two, @all_review_response_ids_round_three
        else
          return @authors, @all_review_response_ids
        end
      end

The feedback_response_report method takes an assignment ID as input and returns the list of authors along with the lists of response IDs for each round of review (1, 2, or 3). In the RSpec code, we need to test that the function returns the list of authors and the responses by rounds. We create mock objects for the reviewers, reviewees, team, and responses, then execute the function with the assignment ID as the parameter. The expected result is the list of reviewees in the team and the lists of responses associated with the individual rounds, i.e. 1, 2, and 3.
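One detail worth noting for this test is how the method keeps only the latest response per (map_id, round) pair: responses are ordered newest first, and a key built from map_id and round is remembered so that older duplicates are skipped. A simplified sketch of that de-duplication step (responses_newest_first is a placeholder for the ordered query result):

    # Simplified from feedback_response_report: keep only the newest response
    # for each (map_id, round) combination. responses_newest_first is hypothetical.
    seen_keys = []
    latest_ids = []
    responses_newest_first.each do |response|
      key = response.map_id.to_s + response.round.to_s
      next if seen_keys.include? key
      seen_keys << key
      latest_ids << response.id
    end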

For the other methods and their RSpecs, please refer to the GitHub repo.

Important links

The link to our GitHub repo is:

https://github.com/urmilparikh95/expertiza

The link to our pull request is:

https://github.com/expertiza/expertiza/pull/1050

The link to the screencast of running the tests is:

https://youtu.be/Iu7hx7Ncc_M

Due to the time constraint on the video, we have not explained every RSpec individually. Please refer to the GitHub code to see the other RSpec files.