CSC/ECE 517 Spring 2022 - E2211: Testing for summary helper


About Expertiza

Expertiza is an assignment/project management portal that can be used by both instructors and students for collaborative learning and feedback. It is an open-source project based on the Ruby on Rails framework. It allows instructors not only to create and customize new or existing assignments but also to create a list of topics that students can sign up for. Students can form teams to work on various projects and assignments. Expertiza also lets students peer-review other students' submissions, enabling them to work together to improve each other's learning experience.

Team

Mentor

  • Vinay Deshmukh

Team Members

  • Saswat Priyadarshan
  • Rachel Son
  • Bhuwan Bhatt

Description and Testing

Our project deals with writing test cases for the summary_helper.rb file. The summary helper is responsible for the following tasks:

  • Get all answers for each question and send them to the summarization web service (WS).
  • Get the average scores and a summary for each question in a review by a reviewer.

Files Involved

  • summary_helper.rb
  • summary_helper_spec.rb

Running Tests

To run the summary helper spec on a local machine, run the RSpec command below.

  rspec spec/helpers/summary_helper_spec.rb
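
To run only one of the example groups described below, RSpec's example filter can be used, for example:

  rspec spec/helpers/summary_helper_spec.rb -e 'get_sentences'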

Requirement

The summary_helper class had methods that were not covered by tests. Our job was to write tests for these methods so that we can verify that the summary_helper class works correctly.


Test Plan

After going through all the methods in the class, we found that some of them were never called anywhere, so we removed those redundant methods. That left nine methods in the summary_helper class that needed to be tested:

  • summarize_reviews_by_reviewee
  • summarize_reviews_by_reviewee_question
  • get_max_score_for_question
  • summarize_sentences
  • get_sentences
  • break_up_comments_to_sentences
  • calculate_avg_score_by_criterion
  • calculate_round_score
  • calculate_avg_score_by_round


For the tests, we mocked the following objects:

  let(:answer) { Answer.new(answer: 1, comments: 'This is a sentence. This is another sentence.', question_id: 1) }
  let(:answer1) { Answer.new(answer: 2, comments: 'This is a sentence1. This is another sentence1.', question_id: 2) }
  let(:question) { build(:question, weight: 1, type: 'Criterion') }
  let(:avg_scores_by_criterion) { { a: 2.345 } }
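
All of the snippets below call methods on a @summary object. A minimal setup sketch, assuming the helper logic is exposed as SummaryHelper::Summary (the exact class name and constructor in summary_helper.rb may differ), would be:

  before(:each) do
    # Instantiate the helper under test once per example (assumed class name).
    @summary = SummaryHelper::Summary.new
  end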


  • get_sentences
  To test this method, we mocked an answer object and checked whether the number of sentences returned equals the number of sentences in the answer's comment.
  We identified two test cases for this method:
  1. When the answer is nil
  2. When the comment contains two sentences

Code Snippet:

 describe '#get_sentences' do
   context 'when the answer is nil' do
     it 'returns a nil object' do
       expect(@summary.get_sentences(nil)).to eq(nil)
     end
   end
   context 'when the comment is two sentences' do
     it 'returns an array of two sentences' do
       sentences = @summary.get_sentences(answer)
       expect(sentences.length).to be(2)
     end
   end
 end


  • get_max_score_for_question
  To test this method we mocked two different types of questions:
  1. let(:questionOne) { Question.new(type: 'Checkbox') }
  2. let(:questionTwo) { build(:question, questionnaire: questionnaire1, weight: 1, id: 1) }
  Based on the question type, we then tested two scenarios for this method:
  1. When the question type is Checkbox
  2. When the question type is not Checkbox

Code Snippet:

 describe 'get_max_score_for_question' do
   context 'When question type is Checkbox' do
     let(:questionOne) { Question.new(type: 'Checkbox') }
     it 'returns 1' do
       max_score = @summary.get_max_score_for_question(questionOne)
       expect(max_score).to be(1)
     end
   end
   context 'When question type is not Checkbox' do
     let(:questionnaire1) { build(:questionnaire, id: 2) }
     let(:questionTwo) { build(:question, questionnaire: questionnaire1, weight: 1, id: 1) }
     it 'return the max score for the provided question' do
       allow(Questionnaire).to receive(:where).with(id: 2).and_return(questionnaire1)
       allow(questionnaire1).to receive(:first).and_return(questionnaire1)
       expect(@summary.get_max_score_for_question(questionTwo)).to eql(5)
     end
   end
 end


  • summarize_sentences
  To test this method we mocked a comments array with the values ["Hello this is first comment", "This is second comment"]. This method makes a call to the following web service:
  'http://peerlogic.csc.ncsu.edu/sum/v1.0/summary/8/lsa'
  We use expect to compare the expected result with the actual result from the web service call.
  Disclaimer: This web service call results in a bad gateway error when the test case runs. We have already informed our mentor and the professor about this.

Code Snippet:

 describe '#summarize_sentences' do
   context 'successful webservice call' do
     comments = ["Hello this is first comment", "This is second comment"]
     summary_ws_url = WEBSERVICE_CONFIG['summary_webservice_url']
     it 'returns success' do
       expect(@summary.summarize_sentences(comments, summary_ws_url)).not_to eql(nil)
     end
   end
 end


  • break_up_comments_to_sentences
  This method takes an array of question answers and breaks their comments up into individual sentences.
  In our test we passed a single answer whose comment contains two sentences and checked that the resulting array contains two entries.

Code Snippet:

  describe '#break_up_comments_to_sentences' do
    context 'when the question_answers array is not empty' do
      it 'adds the comments to an array to be converted into a JSON request' do
        comments = @summary.break_up_comments_to_sentences([answer])
        expect(comments.length).to be(2)
      end
    end
    context 'when the question_answers array is empty' do
      it 'returns an empty array' do
        comments = @summary.break_up_comments_to_sentences([])
        expect(comments.length).to be(0)
      end
    end
  end


  • calculate_avg_score_by_criterion

This test checks that, when question answers are given, the method's output is correctly calculated as a percentage of the maximum question score. Code Snippet:

  describe '#calculate_avg_score_by_criterion' do
    context 'when question_answers are available' do
      it 'calculate percentage question_score & no float' do
        expect(@summary.calculate_avg_score_by_criterion([answer, answer1], 3)).to be_within(0).of(50)
      end
    end


This test checks that, when no question answers are provided, the method correctly returns 0.0. Code Snippet:

    context 'when question_answers are not available' do
      it 'gives question scores 0.0' do
        expect(@summary.calculate_avg_score_by_criterion([], 3)).to eq(0.0)
      end
    end


This test checks that, when q_max_score is 0, the method returns the raw (unscaled) question score. Code Snippet:

    context 'when q_max_score = 0' do
      it 'gives pure question_score' do
        expect(@summary.calculate_avg_score_by_criterion([answer,answer1], 0)).to eq(3.0)
      end
    end
  end


  • calculate_round_score

This test checks that, when the criterion input is nil, the method returns the default round score, which is 0.0 in this case. Code Snippet:

  describe '#calculate_round_score' do
    context 'when criteria not available' do
      it 'returns 0.0 since round_score = 0.0' do
        expect(@summary.calculate_round_score(avg_scores_by_criterion, nil)).to eq(0.to_f)
      end
    end


This test checks that, when the criterion input is not nil, the method returns the round score rounded to two decimal places. The criterion input here is the question object defined in the mock section. Code Snippet:

    context 'when criteria not nil' do
      it 'returns the round score rounded to 2 decimal places' do
        expect(@summary.calculate_round_score(avg_scores_by_criterion, question)).to be_within(0.01).of(2.345)
      end
    end
  end


  • calculate_avg_score_by_round

This test checks that the method rounds the average score to two decimal places. The input avg_scores_by_criterion holds the value 2.345, and we verify that the method returns 2.35. Code Snippet:

  describe '#calculate_avg_score_by_round' do
    context 'when avg_scores_by_criterion available' do
      it 'returns the average rounded to 2 decimal places' do
        expect(@summary.calculate_avg_score_by_round(avg_scores_by_criterion, question)).to eq(2.35)
      end
    end
  end

Test Execution

We divided the work among the teammates and tackled the problems in parallel. We stubbed data using factories, and we mocked internal method calls so that methods which call other methods internally return the desired output.
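
As an illustration of this pattern, a minimal sketch (the factory attributes and the stubbed method here are illustrative choices, not the exact calls from summary_helper_spec.rb) looks like:

  # Stubbed data built with FactoryBot instead of persisted records.
  let(:question) { build(:question, weight: 1, type: 'Criterion') }

  # Mock an internal collaborator so the outer method can be verified in isolation.
  before do
    allow(@summary).to receive(:get_max_score_for_question).and_return(5)
  end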

Test Coverage

The summary helper spec file is newly created. According to SimpleCov, this project increased test coverage by 72.86%.
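
Coverage is reported by SimpleCov; a typical configuration sketch (the project's actual setup and filters may differ) is:

  # spec/rails_helper.rb (or spec_helper.rb), before the application code is loaded
  require 'simplecov'
  SimpleCov.start 'rails' do
    add_filter '/spec/'   # keep the spec files themselves out of the coverage report
  end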

Conclusion

While testing the methods in the summary_helper.rb class, we faced some blockers, all of which are described below:

  • summarize_sentences method

The URL summary_webservice_url: 'http://peerlogic.csc.ncsu.edu/sum/v1.0/summary/8/lsa' is used in the summary_helper.rb file, and we were not able to mock the call. Hitting that URL returns a bad gateway error when the test runs.
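
One possible way to unblock this test, assuming an HTTP-stubbing gem such as WebMock is (or could be) available in the test environment, would be to stub the request instead of hitting the live service:

  require 'webmock/rspec'

  # Return a canned response so the spec never reaches the real endpoint.
  # The HTTP verb and response body shape are assumptions; adjust them to
  # whatever summarize_sentences actually sends and parses.
  stub_request(:post, 'http://peerlogic.csc.ncsu.edu/sum/v1.0/summary/8/lsa')
    .to_return(status: 200,
               body: '{"summaries": ["This is a sentence."]}',
               headers: { 'Content-Type' => 'application/json' })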

  • summarize_reviews_by_reviewee method

In this method's loop, questions[round] is not the correct way to access an individual question: questions is an array with zero-based indexing, but the surrounding logic treats it as a hash keyed by round.
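
A minimal, self-contained illustration of the mismatch (variable names are ours, not Expertiza's):

  # If questions were a hash keyed by round number, this lookup would be correct:
  questions_by_round = { 1 => 'question for round 1', 2 => 'question for round 2' }
  questions_by_round[1]   # => "question for round 1"

  # But questions is an array, so indexing by round is off by one:
  questions = ['question for round 1', 'question for round 2']
  questions[1]            # => "question for round 2"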

  • summarize_reviews_by_reviewee_question method

This method uses instance variables that are set in a different method (summarize_reviews_by_reviewee) and are not passed in as arguments. This kind of implicit initialization is bad coding practice, and the method needs refactoring.
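
A sketch of the suggested refactoring, using illustrative method bodies rather than the real implementation:

  # Current pattern (illustrative): the method reads state set by another method.
  def summarize_reviews_by_reviewee_question
    @question_answers.map(&:comments)   # breaks unless summarize_reviews_by_reviewee ran first
  end

  # Refactored pattern: the caller passes the data in explicitly,
  # so the method can be tested in isolation.
  def summarize_reviews_by_reviewee_question(question_answers)
    question_answers.map(&:comments)
  end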

We have informed our mentor and the professor about these issues, and they said these will be addressed as future work.

We tested all of the other remaining methods in the summary_helper.rb class. All the unit tests passed and the methods work as expected. We tried to cover corner cases, though there is room for improvement.