CSC/ECE 517 Fall 2013/oss E816 cyy

=Introduction to Refactoring plagiarism_check.rb and sentence_state.rb =
Expertiza is a web application that allows students to submit assignments and peer-review each other's work.<ref> [https://github.com/expertiza/expertiza Expertiza github]</ref> Expertiza also supports team projects, and any document type may be submitted.<ref> [http://wikis.lib.ncsu.edu/index.php/Expertiza Expertiza wiki]</ref> Expertiza has been deployed for years to help professors and students engage in the learning process. It is an open-source project; each year, students in CSC 517 (Object-Oriented Programming) at North Carolina State University contribute to it along with the teaching assistants and professor.

For this year, we are responsible for refactoring plagiarism_check.rb and sentence_state.rb in the Expertiza project. Expertiza is built using Ruby on Rails with the [http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller MVC design pattern]. plagiarism_check.rb and sentence_state.rb are part of the automated_metareview functionality inside the models. The responsibility of sentence_state.rb is to determine the state of each clause of a sentence, and the responsibility of plagiarism_check.rb is to determine whether a review has simply been copied from other sources.


=Project description=
===Classes===
The classes we are refactoring are plagiarism_check.rb (155 lines) and sentence_state.rb (293 lines).
===What it Does===
The two class files perform functions needed in the NLP analysis of reviews in the underlying research. plagiarism_check.rb and sentence_state.rb are used to check whether a review has been copied from somewhere else. Checking for plagiarism is important because reviewers tend to game the automated review system to get a high score instead of writing a high-quality review. The classes compare the review text with text from the Internet and other sources to determine whether any copy-and-paste has occurred<ref> [http://www.lib.ncsu.edu/resolver/1840.16/8813 Automated Assessment of Reviews]</ref>.
===Our Job===
The code has many code smells. First, the methods are long and complex, and the code is not well structured, which makes it hard to read and understand. Second, extremely long if-else branches and loops appear everywhere. Third, sentence_state.rb carries too many responsibilities; some of its functions should be moved elsewhere. Finally, there is some duplicated code.
Our job is to refactor the two classes so that they are more object-oriented and have a clear structure. To eliminate the code smells, we need to remove the duplicated code, restructure the long if-else branches and loops, create new methods and classes to encapsulate the functionality of complex methods, and clarify the responsibilities of classes and methods. After refactoring, we also need to test the two classes thoroughly and make sure no errors remain.


=Design=
==sentence_state.rb==
To see the original code please go to this link: [https://github.com/expertiza/expertiza/blob/master/app/models/automated_metareview/sentence_state.rb sentence_state.rb]
 
===Main Responsibility===
The responsibility of SentenceState is to determine the state of each clause of a sentence. The possible states include positive, negative, and suggestive. To find the state of a sentence, SentenceState first splits the sentence into sentence clauses, then splits each clause into sentence tokens (words). It then iterates through the tokens and determines the new state of the sentence based on the previous state and the token's state.
 
Take for example the sentence_state_test Identify State 8:
 
sentence = “We are not not musicians.”
 
First token: We => Positive, state => positive
 
Second token: are => Positive and prev_state => positive, state => positive
 
Third token: not => Negative and prev_state => positive, state => negative
 
Fourth token: not => Negative and prev_state => negative, state => positive (double negative!)
 
Fifth token: musicians => positive and prev_state => positive, state => positive
 
Therefore the sentence state is positive.
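To make this token-by-token walk concrete, here is a minimal, self-contained Ruby sketch of the state-toggling idea (an illustration only; it does not use the real SentenceState constants or the real token-type classifier):

#Toy illustration of the positive/negative toggle walked through above.
#NEGATION_WORDS and the symbol states stand in for the real constants.
NEGATION_WORDS = %w(not no never none)

def toy_sentence_state(sentence)
  state = :positive
  sentence.downcase.scan(/[a-z']+/).each do |token|
    #each negation token flips the state, so "not not" cancels out
    state = (state == :positive ? :negative : :positive) if NEGATION_WORDS.include?(token)
  end
  state
end

puts toy_sentence_state("We are not not musicians.")  # => positive
puts toy_sentence_state("We are not musicians.")      # => negative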
 
===Design Ideas===
The original code had several design smells, mostly deeply nested if-else statements and duplicated code. Often it was possible to remove these smells using the Strategy design pattern, as explained in more detail below. Another design smell was that SentenceState had too many responsibilities. It first had to parse the sentence into separate sentence clauses and then separate the clauses into tokens before iterating through the tokens to determine the state of the sentence. This responsibility of parsing the sentence and its tokens can be moved into another class, because the functionality could be useful elsewhere in the future and should be kept decoupled from SentenceState. The worst problem was a deeply nested if-else statement that determined the next state of the sentence clause based on the previous state and the next sentence token. Instead of the SentenceState class being responsible for all of these relationships, it is better to have subclasses of SentenceState which each know the relationship between their own state and any sentence token type.
 
===Refactor Steps===
The first step in refactoring was to get the tests to pass. This required some debugging, which revealed that some constants were defined in two different files, [https://github.com/expertiza/expertiza/blob/master/app/models/automated_metareview/constants.rb constants.rb] and [https://github.com/expertiza/expertiza/blob/master/app/models/automated_metareview/negations.rb negations.rb], and that the NEGATIVE_DESCRIPTORS definition was incomplete in the negations.rb file. After fixing this, all 18 original tests in sentence_state_test.rb passed.
 
The first place to refactor was the longest method in SentenceState, sentence_state(str_with_pos_tags). It contained three for loops, each with deeply nested if-else statements inside. To make this method more readable, I extracted three of these for/if-else blocks into their own methods to clean up the code. This changed the method from 164 lines of if-else and for statements to:
def sentence_state(str_with_pos_tags)
    state = POSITIVE
    prev_negative_word ="" 
    tagged_tokens, tokens = parse_sentence_tokens(str_with_pos_tags)
    for j  in (0..tokens.length-1)
      current_token_type = get_token_type(tokens[j..tokens.length-1])
      state = next_state(state, current_token_type, prev_negative_word, tagged_tokens)
      if(tokens[j].casecmp("NO") == 0 or tokens[j].casecmp("NEVER") == 0 or tokens[j].casecmp("NONE") == 0)
        prev_negative_word = tokens[j]
      end
    end #end of for loop
    if(state == NEGATIVE_DESCRIPTOR or state == NEGATIVE_WORD or state == NEGATIVE_PHRASE)
      state = NEGATED
    end
    return state
end
 
This was much easier to read and it revealed that the class SentenceState had too many responsibilities. The class now had two methods which parse the sentence: break_at_coordinating_conjunctions and parse_sentence_tokens. However, parsing a sentence into its sentence clauses and individual tokens (words) could potentially be useful elsewhere, so it is better to decouple this responsibility from SentenceState into a new class. So these two methods were refactored into a TaggedSentence class with the methods break_at_coordinating_conjunctions and parse_sentence_tokens. Now when SentenceState is called, it makes a new TaggedSentence and then calls break_at_coordinating_conjunctions which returns the sentence clauses as arrays of sentence tokens. See the new TaggedSentence class here: [https://github.com/shanfangshuiyuan/expertiza/blob/master/app/models/automated_metareview/tagged_sentence.rb tagged_sentence.rb]
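In other words, the call site inside SentenceState now reads roughly like the sketch below (illustrative usage based on the description above, not a verbatim excerpt; the constructor argument and exact method signatures may differ in the committed code):

#Sketch of how SentenceState delegates parsing to TaggedSentence.
tagged_sentence = TaggedSentence.new(str_with_pos_tags)
#each clause comes back as an array of sentence tokens
sentence_clauses = tagged_sentence.break_at_coordinating_conjunctions
sentence_clauses.each do |clause_tokens|
  state = sentence_state(clause_tokens)  #determine the state of each clause
end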
 
Once this change was done, the next step was to refactor each of the three new methods created earlier to clean up the sentence_state method, because they still contained deeply nested if statements. The first method created was parse_sentence_tokens(str_with_pos_tags). Originally this method was as shown below:
def parse_sentence_tokens(str_with_pos_tags)
    tokens = Array.new
    tagged_tokens = Array.new
    i = 0
    interim_noun_verb  = false #0 indicates no interim nouns or verbs
       
    #fetching all the tokens
    for k in (0..st.length-1)
      ps = st[k]
      if(ps.include?("/"))
        ps = ps[0..ps.index("/")-1]
      end
      #removing punctuations
      if(ps.include?("."))
        tokens[i] = ps[0..ps.index(".")-1]
      elsif(ps.include?(","))
        tokens[i] = ps.gsub(",", "")
      elsif(ps.include?("!"))
        tokens[i] = ps.gsub("!", "")
      elsif(ps.include?(";"))
        tokens[i] = ps.gsub(";", "")
      else
        tokens[i] = ps
        i+=1
      end
    end #end of for loop
    return tokens
end
 
One of the problems with this code is the duplication of ps[0..ps.index(punctuation)-1] and ps.gsub(punctuation, "") caused by the if-else statement. To remove this duplication, the Strategy design pattern can be used to turn each of the duplicated operations into lambda blocks, or commands, and iterate over a punctuation array to remove undesired punctuation. Also, inspecting the code shows that tokens[i] is only truly kept when there is no punctuation in the string, because i is not increased unless execution reaches the end of the if-else statement. To fix this we can set a valid_token boolean if no punctuation is found, and then save the value into tokens. The refactored code is shown below.
 
def parse_sentence_tokens(str_with_pos_tags)
    sentence_pieces = str_with_pos_tags.split(' ')
    num_tokens = 0
    tokens = Array.new
    tag = '/'
    punctuation = %w(. , ! ;)
    sentence_pieces.each do |sp|
      #remove tag from sentence word
      if sp.include?(tag)
        sp = sp[0..sp.index(tag)-1]
      end
      valid_token = true
      punctuation.each do |p|
        if sp.include?(p)
          valid_token = false
          break
        end
      end
      if valid_token
        tokens[num_tokens] = sp
        num_tokens+=1
      end
    end
    #end of the for loop
    tokens
  end
 
This does not shorten the code, but it makes it more readable and extendable. If anyone wants to check for an additional punctuation mark, they do not have to update an if-else statement (which could easily introduce a bug); instead, they only have to add the new punctuation to the punctuation array. Also, the variables are named so that the reader can understand their purpose. Future refactoring could include moving the line sp = sp[0..sp.index(tag)-1] into a lambda called remove_tag_from_sp[sp, tag] so that the code is even more readable.
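As a small illustration of that suggested follow-up (the lambda itself is not part of the current code, and the sample sentence_pieces below are made up), it might look like this:

#Possible future refactoring (not in the current code): wrap the
#tag-stripping step in a named lambda so the loop body reads clearly.
tag = '/'
remove_tag_from_sp = lambda { |sp, t| sp.include?(t) ? sp[0..sp.index(t) - 1] : sp }

sentence_pieces = ["We/PRP", "are/VBP", "musicians/NNS"]  #made-up example input
sentence_pieces.each do |sp|
  sp = remove_tag_from_sp[sp, tag]  #e.g. "We/PRP" becomes "We"
  puts sp
end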
 
The next method to refactor was get_token_type(current_token) in the SentenceState class; its original body is shown below.
 
      if(is_negative_word(tokens[j]) == NEGATED) 
        returned_type = NEGATIVE_WORD
      #checking for a negative descriptor (indirect indicators of negation)
      elsif(is_negative_descriptor(tokens[j]) == NEGATED)
        returned_type = NEGATIVE_DESCRIPTOR
      #2-gram phrases of negative phrases
      elsif(j+1 < count && !tokens[j].nil? && !tokens[j+1].nil? &&
        is_negative_phrase(tokens[j]+" "+tokens[j+1]) == NEGATED)
        returned_type = NEGATIVE_PHRASE
        j = j+1     
      #if suggestion word is found
      elsif(is_suggestive(tokens[j]) == SUGGESTIVE)
        returned_type = SUGGESTIVE
      #2-gram phrases suggestion phrases
      elsif(j+1 < count && !tokens[j].nil? && !tokens[j+1].nil? &&
        is_suggestive_phrase(tokens[j]+" "+tokens[j+1]) == SUGGESTIVE)
        returned_type = SUGGESTIVE
        j = j+1
      #else set to positive
      else
        returned_type = POSITIVE
      end
 
Problems with this code included a nested if-else statement and ambiguous variable names. Digging deeper into each of the methods called by the if-else conditions revealed a lot of duplicated code. Each of these methods iterated through an array of words and returned one token_type if the current_token was found in that array, and token_type = POSITIVE if it was not. The only major differences between the methods were the array that was searched, the type that was returned, and whether the input was a word or a phrase (1 or 2 tokens, respectively). An example of one of these methods is shown below:
 
def is_negative_word(word)  <== input could be word or phrase
  not_negated = POSITIVE        <== type always POSITIVE
  for i in (0..NEGATED_WORDS.length - 1)      <== different array of words
    if(word.casecmp(NEGATED_WORDS[i]) == 0)
      not_negated = NEGATIVE_WORD      <== different type matching array
      break
    end
  end
  return not_negated
end
 
This was refactored using the Strategy design pattern. Lambda blocks get_word and get_phrase were created to handle parsing the current_token into a word or a phrase. A hash called types was created which holds the relationship between a token type, the array that is searched to check for that type, and whether the values in that search array are words or phrases. Iterating through the types hash calls get_word or get_phrase to parse the input current_token into a word or phrase, then checks whether that word or phrase is in the word_or_phrase_array, and returns the type associated with that word_or_phrase_array if it is found. This collapsed five methods with duplicated code into one method, as shown below.
 
def get_token_type(current_token)
    #input parsers
    get_word = lambda { |c| c[0]}
    get_phrase = lambda {|c| c[1].nil? ? nil : c[0]+' '+c[1]}
    #types holds relationships between word_or_phrase_array_of_type => [input parser of type, type]
    types = {NEGATED_WORDS => [get_word, NEGATIVE_WORD], NEGATIVE_DESCRIPTORS => [get_word, NEGATIVE_DESCRIPTOR], SUGGESTIVE_WORDS => [get_word, SUGGESTIVE], NEGATIVE_PHRASES => [get_phrase,NEGATIVE_PHRASE], SUGGESTIVE_PHRASES => [get_phrase, SUGGESTIVE]}
    current_token_type = POSITIVE
    types.each do |word_or_phrase_array, type_definition|
      get_word_or_phrase, word_or_phrase_type = type_definition[0], type_definition[1]
      token = get_word_or_phrase.(current_token)
      unless token.nil?
        word_or_phrase_array.each do |word_or_phrase|
            if token.casecmp(word_or_phrase) == 0
              current_token_type = word_or_phrase_type
              break
            end
        end
      end
    end
    current_token_type
  end
 
This refactoring made the method more readable because it moved code that was duplicated across five methods into a single method, so there is only one place to read and understand it. It also made the method easily extensible. If you want to check for a new type, just add the relationship to the types hash. If you need a different kind of input, just write a new input-parser lambda.
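As a standalone illustration of that extensibility (the _DEMO arrays and the :positive_phrase type below are invented for this example and are not Expertiza constants), adding a new type only means adding one entry to the hash:

#Standalone illustration: the lookup logic stays the same no matter
#how many entries the types hash gains.
get_word   = lambda { |c| c[0] }
get_phrase = lambda { |c| c[1].nil? ? nil : c[0] + ' ' + c[1] }

NEGATED_WORDS_DEMO    = %w(not no never)           #stand-in word list
POSITIVE_PHRASES_DEMO = ["well done", "good job"]  #new, invented word list

types = {NEGATED_WORDS_DEMO    => [get_word,   :negative_word],
         POSITIVE_PHRASES_DEMO => [get_phrase, :positive_phrase]}  #new entry

current_token = ["well", "done"]
current_token_type = :positive
types.each do |word_or_phrase_array, (parser, type)|
  token = parser.(current_token)
  next if token.nil?
  current_token_type = type if word_or_phrase_array.any? { |w| token.casecmp(w) == 0 }
end
puts current_token_type  # => positive_phrase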
 
Finally, the biggest if-else statement to refactor was in the next_state() method, as shown below:
 
 
  if((state == NEGATIVE_WORD or state == NEGATIVE_DESCRIPTOR or state == NEGATIVE_PHRASE) and returned_type == POSITIVE)
        if(interim_noun_verb == false and (tagged_tokens[j].include?("NN") or tagged_tokens[j].include?("PR") or tagged_tokens[j].include?("VB") or tagged_tokens[j].include?("MD")))
          interim_noun_verb = true
        end
      end
     
      if(state == POSITIVE and returned_type != POSITIVE)
        state = returned_type
      #when state is a negative word
      elsif(state == NEGATIVE_WORD) #previous state
        if(returned_type == NEGATIVE_WORD)
          #these words embellish the negation, so only if the previous word was not one of them you make it positive
          if(prev_negative_word.casecmp("NO") != 0 and prev_negative_word.casecmp("NEVER") != 0 and prev_negative_word.casecmp("NONE") != 0)
            state = POSITIVE #e.g: "not had no work..", "doesn't have no work..", "its not that it doesn't bother me..."
          else
            state = NEGATIVE_WORD #e.g: "no it doesn't help", "no there is no use for ..."
          end 
          interim_noun_verb = false #resetting       
        elsif(returned_type == NEGATIVE_DESCRIPTOR or returned_type == NEGATIVE_PHRASE)
          state = POSITIVE #e.g.: "not bad", "not taken from", "I don't want nothing", "no code duplication"// ["It couldn't be more confusing.."- anomaly we dont handle this for now!]
          interim_noun_verb = false #resetting
        elsif(returned_type == SUGGESTIVE)
          #e.g. " it is not too useful as people could...", what about this one?
          if(interim_noun_verb == true) #there are some words in between
            state = NEGATIVE_WORD
          else
            state = SUGGESTIVE #e.g.:"I do not(-) suggest(S) ..."
          end
          interim_noun_verb = false #resetting
        end
      #when state is a negative descriptor
      elsif(state == NEGATIVE_DESCRIPTOR)
        if(returned_type == NEGATIVE_WORD)
          if(interim_noun_verb == true)#there are some words in between
            state = NEGATIVE_WORD #e.g: "hard(-) to understand none(-) of the comments"
          else
            state = POSITIVE #e.g."He hardly not...."
          end
          interim_noun_verb = false #resetting
        elsif(returned_type == NEGATIVE_DESCRIPTOR)
          if(interim_noun_verb == true)#there are some words in between
            state = NEGATIVE_DESCRIPTOR #e.g:"there is barely any code duplication"
          else
            state = POSITIVE #e.g."It is hardly confusing..", but what about "it is a little confusing.."
          end
          interim_noun_verb = false #resetting
        elsif(returned_type == NEGATIVE_PHRASE)
          if(interim_noun_verb == true)#there are some words in between
            state = NEGATIVE_PHRASE #e.g:"there is barely any code duplication"
          else
            state = POSITIVE #e.g.:"it is hard and appears to be taken from"
          end
          interim_noun_verb = false #resetting
        elsif(returned_type == SUGGESTIVE)
          state = SUGGESTIVE #e.g.:"I hardly(-) suggested(S) ..."
          interim_noun_verb = false #resetting
        end
      #when state is a negative phrase
      elsif(state == NEGATIVE_PHRASE)
        if(returned_type == NEGATIVE_WORD)
          if(interim_noun_verb == true)#there are some words in between
            state = NEGATIVE_WORD #e.g."It is too short the text and doesn't"
          else
            state = POSITIVE #e.g."It is too short not to contain.."
          end
          interim_noun_verb = false #resetting
        elsif(returned_type == NEGATIVE_DESCRIPTOR)
          state = NEGATIVE_DESCRIPTOR #e.g."It is too short barely covering..."
          interim_noun_verb = false #resetting
        elsif(returned_type == NEGATIVE_PHRASE)
          state = NEGATIVE_PHRASE #e.g.:"it is too short, taken from ..."
          interim_noun_verb = false #resetting
        elsif(returned_type == SUGGESTIVE)
          state = SUGGESTIVE #e.g.:"I too short and I suggest ..."
          interim_noun_verb = false #resetting
        end
      #when state is suggestive
      elsif(state == SUGGESTIVE) #e.g.:"I might(S) not(-) suggest(S) ..."
        if(returned_type == NEGATIVE_DESCRIPTOR)
          state = NEGATIVE_DESCRIPTOR
        elsif(returned_type == NEGATIVE_PHRASE)
          state = NEGATIVE_PHRASE
        end
        #e.g.:"I suggest you don't.." -> suggestive
        interim_noun_verb = false #resetting
      end
     
      #setting the prevNegativeWord
      if(tokens[j].casecmp("NO") == 0 or tokens[j].casecmp("NEVER") == 0 or tokens[j].casecmp("NONE") == 0)
        prev_negative_word = tokens[j]
      end 
         
    end #end of for loop
   
    if(state == NEGATIVE_DESCRIPTOR or state == NEGATIVE_WORD or state == NEGATIVE_PHRASE)
      state = NEGATED
    end
  return state
 
The key to refactoring this code was recognizing that the next state of the sentence depends only on the current state of the sentence and the current_token_type. Understanding this revealed that a better design would be to have SentenceState subclasses (PositiveState, NegativeWordState, etc.). The superclass SentenceState would hold the interim state variables, such as interim_noun_verb and prev_negative_word, and the current sentence clause state, while each subclass would only know the relationship between its own state, the current_token_type, and the next state of the sentence. The superclass SentenceState would also be in charge of creating these states in a factory method using only the current state of the sentence. An example of one of the subclasses is shown below:
 
class NegativeDescriptorState < SentenceState
  def negative_word
    @state = if_interim_then_state_is(NEGATIVE_WORD, POSITIVE)
    #puts "next token is negative"
  end
  def positive
    set_interim_noun_verb(true)
    @state = NEGATIVE_DESCRIPTOR
    #puts "next token is positive"
  end
  def negative_descriptor
    @state = if_interim_then_state_is(NEGATIVE_DESCRIPTOR, POSITIVE)
    #puts "next token is negative"
  end
  def negative_phrase
    @state = if_interim_then_state_is(NEGATIVE_PHRASE, POSITIVE)
    #puts "next token is negative phrase"
  end
  def suggestive
    @state = SUGGESTIVE #e.g.:"I hardly(-) suggested(S) ..."
                        #puts "next token is suggestive"
  end
  def get_state
    #puts "negative_descriptor"
    NEGATED
  end
end
 
Every other subclass also has the same methods, so that each subclass can be responsible for knowing what to do for any current_token_type. (These methods differ in every subclass because the next state is different for every combination of current state and current_token_type.) Methods such as if_interim_then_state_is(thistype, elsethistype) are implemented in the superclass to remove duplicate code from the subclasses.
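For reference, a minimal sketch of what such shared helpers might look like in the SentenceState superclass is shown below (illustrative; the committed implementation may differ in detail):

#Sketch of the shared helpers in the SentenceState superclass.
def if_interim_then_state_is(state_if_interim, state_otherwise)
  #if nouns/verbs appeared in between, keep the first interpretation;
  #otherwise the two indicators interact (e.g. a double negative cancels out)
  @interim_noun_verb ? state_if_interim : state_otherwise
end

def set_interim_noun_verb(value)
  @interim_noun_verb = value
end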
 
This simplifies the next_state method so the superclass doesn't have to know anything about the relationships of the subclasses to find the next state of the sentence as shown below:
 
def next_state(current_token_type)
    method = {POSITIVE => self.method(:positive), NEGATIVE_DESCRIPTOR => self.method(:negative_descriptor), NEGATIVE_PHRASE => self.method(:negative_phrase), SUGGESTIVE => self.method(:suggestive), NEGATIVE_WORD => self.method(:negative_word)}[current_token_type]
    method.call()
    if @state != POSITIVE
      set_interim_noun_verb(false) #resetting
    end
    @state
  end
 
The method variable looks up and calls the correct method in the subclass based on the current_token_type. Now the code is much more extensible. Instead of having to edit that awfully long if-else statement, a programmer only has to write a new SentenceState subclass which defines the relationships of that state with every possible current_token_type. Future refactoring would include renaming the variable method to something more descriptive.
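For example, adding support for a hypothetical new clause state would only require a new subclass along these lines (QuestionState and its transitions are invented purely for illustration), plus one new entry in the factory method shown further below:

#Hypothetical new state (illustration only): one subclass with the same
#five transition methods plus get_state; no long if-else has to change.
class QuestionState < SentenceState
  def positive
    @state = POSITIVE
  end
  def negative_word
    @state = NEGATIVE_WORD
  end
  def negative_descriptor
    @state = NEGATIVE_DESCRIPTOR
  end
  def negative_phrase
    @state = NEGATIVE_PHRASE
  end
  def suggestive
    @state = SUGGESTIVE
  end
  def get_state
    SUGGESTIVE  #placeholder final-state mapping for this illustration
  end
end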
 
So now the sentence_state method has to be modified once more to use the new SentenceState subclasses:
 
def sentence_state(sentence_tokens) #str_with_pos_tags)
    #initialize state variables so that the original sentence state is positive
    @state = POSITIVE
    current_state = factory(@state)
    @@prev_negative_word = false
    @interim_noun_verb = false
    sentence_tokens.each_with_next do |curr_token, next_token|
      #get current token type
      current_token_type = get_token_type([curr_token, next_token])
      #Ask State class to get current state based on current state, current_token_type, and if there was a prev_negative_word
      current_state = factory(current_state.next_state(current_token_type))
      #setting the prevNegativeWord
      NEGATIVE_EMPHASIS_WORDS.each do |e|
        if curr_token.casecmp(e) == 0
          @@prev_negative_word = true
        end
      end
    end #end of for loop
    current_state.get_state()
  end
 
The factory method is implemented so that the SentenceState class only has to know what type of state it wants to create. At any one time, a SentenceState instance uses only one SentenceState subclass instance, so that the interim instance variables are not overridden by multiple subclasses. The factory method is shown below:


def factory(state)
    {POSITIVE => PositiveState, NEGATIVE_DESCRIPTOR => NegativeDescriptorState, NEGATIVE_PHRASE => NegativePhraseState, SUGGESTIVE => SuggestiveState, NEGATIVE_WORD => NegativeWordState}[state].new()
end
 
Finally, there were some other simple refactorings applied across the code. These included changing for loops into iterations over the objects themselves, renaming variables to make them more readable for other programmers, and removing "return" from the end of methods, because the last expression of a Ruby method is returned implicitly. Overall I think these refactorings and new designs make the code much more readable, extensible, and maintainable.
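The general pattern of those idiom changes looks like this (a generic before/after illustration, not lines taken from the Expertiza code):

#Before: index-based for loop with an explicit return
def upcase_tokens_before(tokens)
  result = Array.new
  for i in (0..tokens.length - 1)
    result[i] = tokens[i].upcase
  end
  return result
end

#After: iterate over the collection directly; the last expression is
#returned implicitly, so the "return" keyword is unnecessary
def upcase_tokens_after(tokens)
  tokens.map { |t| t.upcase }
end

puts upcase_tokens_after(%w(not bad)).inspect  # => ["NOT", "BAD"]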


==plagiarism_check.rb==


To see the original code please go to this [https://github.com/expertiza/expertiza/blob/master/app/models/automated_metareview/plagiarism_check.rb link].
===Main Responsibility ===


The main responsibility of the PlagiarismChecker class (plagiarism_check.rb) is to determine whether the reviews are simply copied from other sources.


Basically, there are four kinds of plagiarism that need to be checked:


1. whether the review is copied from the submissions of the assignment

2. whether the review is copied from the review questions

3. whether the review is copied from other reviews

4. whether the review is copied from the Internet or other sources; this may be detected through a Google search


For example, in the test file expertiza/test/unit/automated_metareview/plagiarism_check_test.rb, the first test shows:

test "check for plagiarism true match" do
   review_text = ["The sweet potatoes in the vegetable bin are green with mold. These sweet potatoes in the vegetable bin are fresh."]
   subm_text = ["The sweet potatoes in the vegetable bin are green with mold. These sweet potatoes in the vegetable bin are fresh."]
   
   instance = PlagiarismChecker.new
   assert_equal(true, instance.check_for_plagiarism(review_text, subm_text))
end


The check_for_plagiarism method compares the review text with the submission text. In this case, the review does not quote the words or sentences properly and the reviewer simply copies what the author wrote, which constitutes plagiarism.
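For intuition only, a highly simplified Ruby sketch of this kind of comparison is shown below; it assumes that exact sentence matches count as plagiarism, whereas the real check_for_plagiarism works on sentence segments and is considerably more involved:

#Naive stand-in for check_for_plagiarism (assumption: an exact sentence
#match counts as plagiarism; the real method compares phrase segments).
def naive_plagiarism_check(review_text, subm_text)
  review_sentences = review_text.join(' ').split('.').map { |s| s.strip.downcase }
  subm_sentences   = subm_text.join(' ').split('.').map { |s| s.strip.downcase }
  #flag plagiarism if any review sentence appears verbatim in the submission
  review_sentences.any? { |s| !s.empty? && subm_sentences.include?(s) }
end

review_text = ["The sweet potatoes in the vegetable bin are green with mold."]
subm_text   = ["The sweet potatoes in the vegetable bin are green with mold."]
puts naive_plagiarism_check(review_text, subm_text)  # => true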
 
===Design Ideas===


From this point of view, the refactoring needs to produce four fundamental methods, each of which does exactly one thing. However, as the initial plagiarism_check.rb file shows, the compare_reviews_with_questions_responses method has roughly two functions: comparing reviews with the review questions and comparing reviews with others' responses, which is confusing. As the refactoring proceeds, we need to split these two functions apart and make sure such bad smells disappear.
Based on the statement above, the first thing to do is to define four methods with distinct functions.


They are compare_reviews_with_submissions, compare_reviews_with_questions, compare_reviews_with_responses, and compare_reviews_with_google_search; each method has its own specific function.
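A rough outline of the resulting class is sketched below (the method names come from the list above; the parameter lists and bodies are placeholders, not the actual implementation):

#Outline of the refactored plagiarism checker (sketch only: parameter
#names and bodies are placeholders for the real implementations).
class PlagiarismChecker
  def compare_reviews_with_submissions(review_text, subm_text)
    #check whether the review is copied from the assignment submissions
  end

  def compare_reviews_with_questions(review_text, questions)
    #check whether the review is copied from the review questions
  end

  def compare_reviews_with_responses(review_text, responses)
    #check whether the review is copied from other reviewers' responses
  end

  def compare_reviews_with_google_search(review_text)
    #check whether the review is copied from the Internet or other sources
  end
end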




===Refactor Steps===


Next, we need to extract the parts that the long methods have in common and turn each common part into an individual method that can be called within the class. For example, compare_reviews_with_questions and compare_reviews_with_responses share a common part: checking whether the review is copied in full from the responses/questions.
  ...
  review_text.each do |review_arr| #iterating through the review's sentences
    review = review_arr.to_s
    subm_text.each do |subm_arr|
      #iterating through the submission's sentences
      submission = subm_arr.to_s
      ...
      rev_len, rev_phrase = skip_empty_array(array, rev_len)
      ...

  def skip_empty_array(array, rev_len)
    if (array[rev_len] == " ") #skipping empty
      rev_len+=1
    end
    #generating the sentence segment you'd like to compare
    rev_phrase = array[rev_len]
    return rev_len, rev_phrase
  end


Please see the code after refactoring in detail on this [https://github.com/shanfangshuiyuan/expertiza/blob/master/app/models/automated_metareview/plagiarism_check.rb page].

All the tests have passed without failures since the refactoring.

=Testing=
==Link to VCL==
The purpose of running the VCL server is to let you make sure that Expertiza still works properly with our refactored code. The first VCL link is seeded with the expertiza-scrubbed.sql file, which includes questionnaires, courses, and assignments, so it is easy to verify that reviews work. You only need to create users and then have them review one another. The second link uses only the test.sql file, but you can still verify that the functionality of Expertiza works. If neither of these links works, please do not rush your review; shoot us an email and we will fix it as soon as possible (yhuang25@ncsu.edu, ysun6@ncsu.edu, grimes.caroline@gmail.com). Thank you so much!
https://github.com/shanfangshuiyuan/expertiza <ref> [https://github.com/shanfangshuiyuan/expertiza Expertiza fork]</ref>


==Steps to Setup Project==
1. Clone the git repository shown above.
2. Use Ruby 1.9.3.
3. Set up MySQL and start the server.
4. Command line: bundle install
5. Download http://dev.mysql.com/get/Downloads/Connector-C/mysql-connector-c-noinstall-6.0.2-win32.zip/from/pick and copy all files from the lib folder of the download into <Ruby193>\bin
6. Change /config/database.yml according to your MySQL root password and MySQL port.
7. Command line: rake db:create:all
8. Command line: mysql -u root -p<YOUR_PASSWORD> pg_development < expertiza-scrubbed_2013_07_10.sql
9. Command line: rake db:migrate
10. Command line: rails server


==Test Our Code==
7. plagiarism_check_test.rb


=Future Work=
 
 
Through refactoring, we have made the code easier to understand by applying design patterns, which meets the requirements of this project. From our perspective, however, there is more work to be done to improve the code as a whole, including:
 
1. There are bugs in the original compare_reviews_with_questions_responses and google_search_response methods which could not be fixed so far. We hope that the people responsible for this project can fix them and make these methods work as expected.
 
2. Once those bugs are fixed, more tests regarding plagiarism can be written, which will make further development of this code easier.
 
3. While running tests, we found errors in methods of the text_preprocessing.rb file which may conflict with the plagiarism-check functionality. Bug fixes are needed.
 
4. The TaggedSentence class could be refactored to allow a sentence to be broken either into sentence clauses or into sentence-clause arrays of parsed sentence tokens.


=References=
<references/>
*https://github.com/expertiza/expertiza
*http://wikis.lib.ncsu.edu/index.php/Expertiza

Latest revision as of 21:21, 30 October 2013

Expertiza is a web application, which allows students to submit assignments and do peer review of each other's work<ref> Expertiza github</ref>. Expertiza also supports team projects and any document type of submission is acceptable<ref> Expertiza wiki</ref>. Expertiza has been deployed for years to help professors and students engaging in the learning process. Expertiza is an open source project, for each year, students in the course of CSC517-Object Oriented Programmning of North Carolina State University will contributes to this project along with teaching assistant and professor.

For this year, we are responsible for refactoring plagiarism_check.rb and sentence_state.rb of the Expertiza project. Expertiza is built using Ruby on Rails with MVC design pattern. plagiarism_check.rb and sentence_state.rb are parts of the automated_metareview functionality inside models. The responsibility of sentence_state.rb is to determine the state of each clause of a sentence, and the responsibility of plagiarism_check.rb is to determine whether the reviews are just copied from other sources.

Project description

Classes

Classes we are going to refactor are plagiarism_check.rb (155 lines). sentence_state.rb (293 lines)

What it Does

The two class files performs functions which are needed in NLP analysis of reviews in the research. plagiarism_check.rb and sentence_state.rb are used to check whether the reviews are copied from other places. To check whether plagiarism happens is important because the reviewers are tends to game with the automated review system to get a high score instead of writing a high quality review. The classes compare the review text with text from Internet to determine whether and copy-paste happens<ref> Automated Assessment of Reviews</ref>.

Our Job

The code has many code smells. First, the methods are long and complex, and the codes are not structured well, which makes it not readable and understandable. Second, extremely long if-else branch and loop exists everywhere. Third, the responsibility of sentence_state.rb is heavy, some functions of sentence_state should be given to others. Finally, there are some duplicate codes.

Our job is to refactor the two classes and make it more O-O style and have a clear structure. To eliminate the code smells, we need removing the duplicate code, reconstruct long if-else branch and loop, generating new methods and classes to encapsulate functionality of other complex methods, clear the responsibility of classes and methods. After refactoring, we also need to test the two classes throughoutly without error.

Design

sentence_state.rb

To see the original code please go to this link: sentence_state.rb

Main Responsibility

The responsibility of SentenceState is to determine the state of each clause of a sentence. The possible states include positive, negative, and suggestive. In order to find the state of the sentence, SentenceState first splits the sentence into sentence clauses, then splits each clause into sentence tokens (words). Then it iterates through the tokens, and determines the new state of the sentence dependent on the previous state and the token state.

Take for example the sentence_state_test Identify State 8:

sentence = “We are not not musicians.”

First token: We => Positive, state = > positive

Second token: are => Positive and prev_state = >positive, state => positive

Third token: not => Negative and prev_state => positive, state => negative

Fourth token: not => Negative and prev_state => negative, state => positive (double negative!)

Fifth token: musicians => positive and prev_state => positive, state => positive

Therefore the sentence state is positive.

Design Ideas

The original code had several design smells, mostly deeply-nested if-else statements and duplicated code. Often it was possible to remove these smells using the Strategy Design Pattern, as will be explained in more detail below. Another design smell was that SentenceState had too many responsibilities. It had to first parse the sentence into separate sentence clauses and then separate the sentence clauses into tokens before iterating through the tokens to determine the state of the sentence. These responsibility of parsing the sentence and sentence tokens can be split into another class because this functionality could be useful elsewhere in the future and should be kept decoupled from SentenceState. The worst problem was the a deeply nested if-else statement which determined the next state of the sentence clause based on the previous state and the next sentence token. Instead of the SentenceState class being responsible for all of these relationships, it is better to have subclasses of SentenceState which each know the relationship between there own state and any sentence token type.

Refactor Steps

The first step in refactoring is to get the tests to pass. This required some debugging to find that some constants were defined in two different files, constants.rb and negations.rb, and the NEGATIVE_DESCRIPTORS definition was incomplete in negations.rb file. After updating this, all 18 original tests in sentence_state_test.rb passed.

The first place to refactor was the longest method in SentenceState, the method sentence_state(str_with_pos_tags). There were three for loops, each with deeply nested if-else statements inside of them. To make this method more readable, I extracted three for or if-else statements into their own method to clean up the code. This changed the code from 164 lines of if-else and for statements to:

def sentence_state(str_with_pos_tags)
   state = POSITIVE
   prev_negative_word =""  
   tagged_tokens, tokens = parse_sentence_tokens(str_with_pos_tags)
   for j  in (0..tokens.length-1)
     current_token_type = get_token_type(tokens[j..tokens.length-1)
     state = next_state(state, current_token_type, prev_negative_word, tagged_tokens)
     if(tokens[j].casecmp("NO") == 0 or tokens[j].casecmp("NEVER") == 0 or tokens[j].casecmp("NONE") == 0)
       prev_negative_word = tokens[j]
     end
   end #end of for loop
   if(state == NEGATIVE_DESCRIPTOR or state == NEGATIVE_WORD or state == NEGATIVE_PHRASE)
     state = NEGATED
   end
   return state
end

This was much easier to read and it revealed that the class SentenceState had too many responsibilities. The class now had two methods which parse the sentence: break_at_coordinating_conjunctions and parse_sentence_tokens. However, parsing a sentence into its sentence clauses and individual tokens (words) could potentially be useful elsewhere, so it is better to decouple this responsibility from SentenceState into a new class. So these two methods were refactored into a TaggedSentence class with the methods break_at_coordinating_conjunctions and parse_sentence_tokens. Now when SentenceState is called, it makes a new TaggedSentence and then calls break_at_coordinating_conjunctions which returns the sentence clauses as arrays of sentence tokens. See the new TaggedSentence class here: tagged_sentence.rb

Once this change was done, the next step was to refactor each of the three new methods that were created earlier to clean up the sentence_state method, because these still contain deeply nested if statements. The first method created was parse_sentence_tokens(str_with_pos_tags). Originally this method was as shown below:

def parse_sentence_tokens(str_with_pos_tags)
   tokens = Array.new
   tagged_tokens = Array.new
   i = 0
   interim_noun_verb  = false #0 indicates no interim nouns or verbs
       
   #fetching all the tokens
   for k in (0..st.length-1)
     ps = st[k]
     if(ps.include?("/"))
       ps = ps[0..ps.index("/")-1] 
     end
     #removing punctuations 
     if(ps.include?("."))
       tokens[i] = ps[0..ps.index(".")-1]
     elsif(ps.include?(","))
       tokens[i] = ps.gsub(",", "")
     elsif(ps.include?("!"))
       tokens[i] = ps.gsub("!", "")
     elsif(ps.include?(";"))
       tokens[i] = ps.gsub(";", "")
     else
       tokens[i] = ps
       i+=1
     end     
   return tokens
end

One of the problems of this code is the duplication of ps[0..ps.index(punctuation)-1] and ps.gsub(punctuation, "") because of the use of an if else statement. To remove this duplication, the strategy design pattern can be used to make each of the duplicated functions into lambda blocks, or commands, and iterate over a punctuation array to remove undesired punctuation. Also, after inspecting the code it was seen that tokens[i] is only truly updated when there is not a punctuation in the string because i is not increased unless it gets to the end of the if-else statement. To fix this we can set a valid_token boolean if no punctuation is found, and then save that value into tokens. This refactored code is shown below.

def parse_sentence_tokens(str_with_pos_tags)
   sentence_pieces = str_with_pos_tags.split(' ')
   num_tokens = 0
   tokens = Array.new
   tag = '/'
   punctuation = %w(. , ! ;)
   sentence_pieces.each do |sp|
     #remove tag from sentence word
     if sp.include?(tag)
       sp = sp[0..sp.index(tag)-1]
     end
     valid_token = true
     punctuation.each do |p|
       if sp.include?(p)
         valid_token = false
         break
       end
     end
     if valid_token
       tokens[num_tokens] = sp
       num_tokens+=1
     end
   end
   #end of the for loop
   tokens
 end

This does not shorten the code but it makes it more readable and extendable. If anyone wants to check for an additional punctuation, they do not have to update the if-else statement which could easily make a bug, instead they only have to update the punctuation array with the new punctuation. Also, the variables are defined so that the reader can understand their functionality. Future refactoring could include moving the line sp = sp[0..sp.index(tag)-1] into a lambda called remove_tag_from_sp[sp, tag] so that the code is even more readable.

The next method to refactor was get_token_type(current_token) in the SentenceState class as shown below.

     if(is_negative_word(tokens[j]) == NEGATED)  
       returned_type = NEGATIVE_WORD
     #checking for a negative descriptor (indirect indicators of negation)
     elsif(is_negative_descriptor(tokens[j]) == NEGATED)
       returned_type = NEGATIVE_DESCRIPTOR
     #2-gram phrases of negative phrases
     elsif(j+1 < count && !tokens[j].nil? && !tokens[j+1].nil? && 
       is_negative_phrase(tokens[j]+" "+tokens[j+1]) == NEGATED)
       returned_type = NEGATIVE_PHRASE
       j = j+1      
     #if suggestion word is found
     elsif(is_suggestive(tokens[j]) == SUGGESTIVE)
       returned_type = SUGGESTIVE
     #2-gram phrases suggestion phrases
     elsif(j+1 < count && !tokens[j].nil? && !tokens[j+1].nil? &&
        is_suggestive_phrase(tokens[j]+" "+tokens[j+1]) == SUGGESTIVE)
       returned_type = SUGGESTIVE
       j = j+1
     #else set to positive
     else
       returned_type = POSITIVE
     end

Problems with this code included a nested if else statement and ambiguous variable names. Digging deeper into each of the methods called by the if-else conditions, there was a lot of duplicated code. Each method called iterated through an array of words, and returned one token_type if the current_token was found in that array, and token_type = POSITIVE if it was not. The only major differences between the methods was the array that was searched, the type that was returned, and whether the input was a word or a phrase (1 or 2 tokens, respectively.) An example of one of these methods is shown below:

def is_negative_word(word)  <== input could be word or phrase
 not_negated = POSITIVE         <== type always POSITIVE
 for i in (0..NEGATED_WORDS.length - 1)      <== different array of words
   if(word.casecmp(NEGATED_WORDS[i]) == 0)
     not_negated = NEGATIVE_WORD      <== different type matching array
     break
   end
 end
 return not_negated
end

This was refactored using the strategy design pattern. Lambda blocks get_word and get_phrase were created to handle parsing the current_token into a word or a phrase. An array called types was created which holds the relationship between a type of token, which array it searches through to check for that type, and if the values in the search array are words or phrases. Iterating through the types array calls get_word or get_phrase to parse the input current_token into a word or phrase, then checks if that word or phrase is in the word_or_phrase_array, and returns the type associated with that word_or_phrase_array if it is found. This refactored five methods with duplicated code into one method as shown below.

def get_token_type(current_token)
   #input parsers
   get_word = lambda { |c| c[0]}
   get_phrase = lambda {|c| c[1].nil? ? nil : c[0]+' '+c[1]}
   #types holds relationships between word_or_phrase_array_of_type => [input parser of type, type]
   types = {NEGATED_WORDS => [get_word, NEGATIVE_WORD], NEGATIVE_DESCRIPTORS => [get_word, NEGATIVE_DESCRIPTOR], SUGGESTIVE_WORDS => [get_word, SUGGESTIVE], NEGATIVE_PHRASES => [get_phrase,NEGATIVE_PHRASE], SUGGESTIVE_PHRASES => [get_phrase, SUGGESTIVE]}
   current_token_type = POSITIVE
   types.each do |word_or_phrase_array, type_definition|
     get_word_or_phrase, word_or_phrase_type = type_definition[0], type_definition[1]
     token = get_word_or_phrase.(current_token)
     unless token.nil?
       word_or_phrase_array.each do |word_or_phrase|
           if token.casecmp(word_or_phrase) == 0
             current_token_type = word_or_phrase_type
             break
           end
       end
     end
   end
   current_token_type
 end

This refactoring made the method more readable because it moved duplicated code in five methods into a single method so there is only one place to read and understand the code. It also made it easily extensible. If you want to check for a new type, just add the relationship to the types array. If you need a different input, just make a new input_parser lambda.

Finally the biggest if-else statement to refactor is in the next_state() method as shown below:


  if((state == NEGATIVE_WORD or state == NEGATIVE_DESCRIPTOR or state == NEGATIVE_PHRASE) and returned_type == POSITIVE)
       if(interim_noun_verb == false and (tagged_tokens[j].include?("NN") or tagged_tokens[j].include?("PR") or tagged_tokens[j].include?("VB") or tagged_tokens[j].include?("MD")))
         interim_noun_verb = true
       end
     end 
     
     if(state == POSITIVE and returned_type != POSITIVE)
       state = returned_type
     #when state is a negative word
     elsif(state == NEGATIVE_WORD) #previous state
       if(returned_type == NEGATIVE_WORD)
         #these words embellish the negation, so only if the previous word was not one of them you make it positive
         if(prev_negative_word.casecmp("NO") != 0 and prev_negative_word.casecmp("NEVER") != 0 and prev_negative_word.casecmp("NONE") != 0)
           state = POSITIVE #e.g: "not had no work..", "doesn't have no work..", "its not that it doesn't bother me..."
         else
           state = NEGATIVE_WORD #e.g: "no it doesn't help", "no there is no use for ..."
         end  
         interim_noun_verb = false #resetting         
       elsif(returned_type == NEGATIVE_DESCRIPTOR or returned_type == NEGATIVE_PHRASE)
         state = POSITIVE #e.g.: "not bad", "not taken from", "I don't want nothing", "no code duplication"// ["It couldn't be more confusing.."- anomaly we dont handle this for now!]
         interim_noun_verb = false #resetting
       elsif(returned_type == SUGGESTIVE)
         #e.g. " it is not too useful as people could...", what about this one?
         if(interim_noun_verb == true) #there are some words in between
           state = NEGATIVE_WORD
         else
           state = SUGGESTIVE #e.g.:"I do not(-) suggest(S) ..."
         end
         interim_noun_verb = false #resetting
       end
     #when state is a negative descriptor
     elsif(state == NEGATIVE_DESCRIPTOR)
       if(returned_type == NEGATIVE_WORD)
         if(interim_noun_verb == true)#there are some words in between
           state = NEGATIVE_WORD #e.g: "hard(-) to understand none(-) of the comments"
         else
           state = POSITIVE #e.g."He hardly not...."
         end
         interim_noun_verb = false #resetting
       elsif(returned_type == NEGATIVE_DESCRIPTOR)
         if(interim_noun_verb == true)#there are some words in between
           state = NEGATIVE_DESCRIPTOR #e.g:"there is barely any code duplication"
         else 
           state = POSITIVE #e.g."It is hardly confusing..", but what about "it is a little confusing.."
         end
         interim_noun_verb = false #resetting
       elsif(returned_type == NEGATIVE_PHRASE)
         if(interim_noun_verb == true)#there are some words in between
           state = NEGATIVE_PHRASE #e.g:"there is barely any code duplication"
         else 
           state = POSITIVE #e.g.:"it is hard and appears to be taken from"
         end
         interim_noun_verb = false #resetting
       elsif(returned_type == SUGGESTIVE)
         state = SUGGESTIVE #e.g.:"I hardly(-) suggested(S) ..."
         interim_noun_verb = false #resetting
       end
     #when state is a negative phrase
     elsif(state == NEGATIVE_PHRASE)
       if(returned_type == NEGATIVE_WORD)
         if(interim_noun_verb == true)#there are some words in between
           state = NEGATIVE_WORD #e.g."It is too short the text and doesn't"
         else
           state = POSITIVE #e.g."It is too short not to contain.."
         end
         interim_noun_verb = false #resetting
       elsif(returned_type == NEGATIVE_DESCRIPTOR)
         state = NEGATIVE_DESCRIPTOR #e.g."It is too short barely covering..."
         interim_noun_verb = false #resetting
       elsif(returned_type == NEGATIVE_PHRASE)
         state = NEGATIVE_PHRASE #e.g.:"it is too short, taken from ..."
         interim_noun_verb = false #resetting
       elsif(returned_type == SUGGESTIVE)
         state = SUGGESTIVE #e.g.:"I too short and I suggest ..."
         interim_noun_verb = false #resetting
       end
     #when state is suggestive
     elsif(state == SUGGESTIVE) #e.g.:"I might(S) not(-) suggest(S) ..."
       if(returned_type == NEGATIVE_DESCRIPTOR)
         state = NEGATIVE_DESCRIPTOR
       elsif(returned_type == NEGATIVE_PHRASE)
         state = NEGATIVE_PHRASE
       end
       #e.g.:"I suggest you don't.." -> suggestive
       interim_noun_verb = false #resetting
     end
     
     #setting the prevNegativeWord
     if(tokens[j].casecmp("NO") == 0 or tokens[j].casecmp("NEVER") == 0 or tokens[j].casecmp("NONE") == 0)
       prev_negative_word = tokens[j]
     end  
         
   end #end of for loop
   
   if(state == NEGATIVE_DESCRIPTOR or state == NEGATIVE_WORD or state == NEGATIVE_PHRASE)
     state = NEGATED
   end
 return state

The key to refactoring this code was recognizing that the next state of the sentence depended on the current state of the sentence and the current_token_type. Understanding this revealed that a better design would be to have SentenceState subclasses (PositiveState, NegativeWordState, etc). The superclass SentenceState would contain information about interim state variables such as interim_noun_verb and prev_negative_word and the current sentence clause state, while the subclasses would only know their relationship between themselves the current_token_type, and the next state of the sentence. The superclass SentenceState would also be in charge of making these states in a factory method using only the current state of the sentence. An example of one of the subclasses is shown below:

class NegativeDescriptorState < SentenceState
 def negative_word
   @state = if_interim_then_state_is(NEGATIVE_WORD, POSITIVE)
   #puts "next token is negative"
 end
 def positive
   set_interim_noun_verb(true)
   @state = NEGATIVE_DESCRIPTOR
   #puts "next token is positive"
 end
 def negative_descriptor
   @state = if_interim_then_state_is(NEGATIVE_DESCRIPTOR, POSITIVE)
   #puts "next token is negative"
 end
 def negative_phrase
   @state = if_interim_then_state_is(NEGATIVE_PHRASE, POSITIVE)
   #puts "next token is negative phrase"
 end
 def suggestive
   @state = SUGGESTIVE #e.g.:"I hardly(-) suggested(S) ..."
                       #puts "next token is suggestive"
 end
 def get_state
   #puts "negative_descriptor"
   NEGATED
 end
end

Every other subclass also has the same methods, so that each subclass can be responsible for knowing what to do for any current_token_type. (These methods are different in every subclass because the next state is different for every current state and current_token_type). The methods such as if_interim_then_state_is(thistype, elsethistype) are implemented in the superclass to remove duplicate code from the subclasses.

This simplifies the next_state method so the superclass doesn't have to know anything about the relationships of the subclasses to find the next state of the sentence as shown below:

def next_state(current_token_type)
   method = {POSITIVE => self.method(:positive), NEGATIVE_DESCRIPTOR => self.method(:negative_descriptor), NEGATIVE_PHRASE => self.method(:negative_phrase), SUGGESTIVE => self.method(:suggestive), NEGATIVE_WORD => self.method(:negative_word)}[current_token_type]
   method.call()
   if @state != POSITIVE
     set_interim_noun_verb(false) #resetting
   end
   @state
 end

The method variable calls the correct method in the subclass based on the current_token_type. Now the code is much more extensible. Instead of having to edit that awfully long if-else statement, now a programmer only has to make a new SentenceState subclass which defines all the relationships of that subclass with any possible current_token_types. Future refactoring would include changing the variable method to a more descriptive name.

So now the sentence_state method has to be modified once more to use the new SentenceState subclasses:

def sentence_state(sentence_tokens) #str_with_pos_tags)
   #initialize state variables so that the original sentence state is positive
   @state = POSITIVE
   current_state = factory(@state)
   @@prev_negative_word = false
   @interim_noun_verb = false
   sentence_tokens.each_with_next do |curr_token, next_token|
     #get current token type
     current_token_type = get_token_type([curr_token, next_token])
     #Ask State class to get current state based on current state, current_token_type, and if there was a prev_negative_word
     current_state = factory(current_state.next_state(current_token_type))
     #setting the prevNegativeWord
     NEGATIVE_EMPHASIS_WORDS.each do |e|
        if curr_token.casecmp(e) == 0
         @@prev_negative_word = true
       end
     end
   end #end of for loop
   current_state.get_state()
 end

The factory method is implemented so that the SentenceState class only has to know which type of state it wants to create. At any one time, a SentenceState instance delegates to only one SentenceState subclass instance, so the interim instance variables are not overwritten by multiple subclasses.

def factory(state)
   {POSITIVE => PositiveState, NEGATIVE_DESCRIPTOR => NegativeDescriptorState, NEGATIVE_PHRASE => NegativePhraseState, SUGGESTIVE => SuggestiveState, NEGATIVE_WORD => NegativeWordState}[state].new()
end

Finally, there were some other simple refactorings across the code: changing for loops into iterations over collections, renaming variables to make them more readable for other programmers, and removing explicit "return" statements from the ends of methods, since the value of the last expression is returned implicitly in Ruby. Overall, I think these refactorings and the new design make the code much more readable, extensible, and maintainable. An illustrative before/after sketch is given below.
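
The following fragment shows the kind of change involved; contains_negative_word and NEGATIVE_WORDS are hypothetical names used only for this illustration, not code from the repository:

#before: index-based for loop with explicit returns
def contains_negative_word(tokens)
  for i in 0..tokens.length - 1
    if(NEGATIVE_WORDS.include?(tokens[i].downcase))
      return true
    end
  end
  return false
end

#after: iterating over the collection and relying on the implicit return value
def contains_negative_word(tokens)
  tokens.any? { |token| NEGATIVE_WORDS.include?(token.downcase) }
end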

==plagiarism_check.rb==

To see the original code please go to this link.


===Main Responsibility===

The main responsibility of the PlagiarismChecker class (plagiarism_check.rb) is to determine whether reviews have simply been copied from other sources.


Basically, there are four kinds of plagiarism that need to be checked:


1. whether the review is copied from the submissions of the assignment

2. whether the review is copied from the review questions

3. whether the review is copied from other reviews

4. whether the review is copied from the Internet or other sources, which can be detected through a Google search


For example, the first test in the test file expertiza/test/unit/automated_metareview/plagiarism_check_test.rb shows:

test "check for plagiarism true match" do
   review_text = ["The sweet potatoes in the vegetable bin are green with mold. These sweet potatoes in the vegetable bin are fresh."]
   subm_text = ["The sweet potatoes in the vegetable bin are green with mold. These sweet potatoes in the vegetable bin are fresh."]
  
   instance = PlagiarismChecker.new
   assert_equal(true, instance.check_for_plagiarism(review_text, subm_text))
end

The check_for_plagiarism method compares the review text with the submission text. In this case, the review does not quote the author's words or sentences properly; the reviewer has simply copied what the author wrote, which constitutes plagiarism.

===Design Ideas===

From this point of view, the refactoring should produce four focused methods, each of which does exactly one thing. As the initial plagiarism_check.rb shows, the compare_reviews_with_questions_responses method has roughly two functions: comparing reviews with the review questions and comparing reviews with others' responses, which is confusing. As part of the refactoring, we split these two functions apart so that this bad smell disappears.

Based on the statement above, the first thing to do is to define four methods with distinct functions.

They are: compare_reviews_with_submissions,

compare_reviews_with_questions,

compare_reviews_with_responses, and

compare_reviews_with_google_search. Each method has a single, specific function.


As shown above, we have to split the method compare_reviews_with_questions_responses into two methods:

def compare_reviews_with_questions(auto_metareview, map_id)
…
end
def compare_reviews_with_responses(auto_metareview, map_id)
…
end

===Refactor Steps===

Next, we extract the parts shared by the long methods into an individual method that can be called within the class. For example, compare_reviews_with_questions and compare_reviews_with_responses have a common part: checking whether the reviews are copied entirely from the responses/questions:

if(count_copies > 0) #resetting review_array only when plagiarism was found
  auto_metareview.review_array = rev_array
end

if(count_copies > 0 and count_copies == scores.length)
  return ALL_RESPONSES_PLAGIARISED #plagiarism, with all other metrics 0
elsif(count_copies > 0)
  return SOME_RESPONSES_PLAGIARISED #plagiarism, while evaluating other metrics
end

To avoid this duplication, we extract this part into a method that checks the plagiarism state:

def check_plagiarism_state(auto_metareview, count_copies, rev_array, scores)
  if count_copies > 0 #resetting review_array only when plagiarism was found
    auto_metareview.review_array = rev_array
    if count_copies == scores.length
      return ALL_RESPONSES_PLAGIARISED #plagiarism, with all other metrics 0
    else
      return SOME_RESPONSES_PLAGIARISED #plagiarism, while evaluating other metrics
    end
  end
end
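
With this helper in place, each caller can delegate the duplicated branch to it. A minimal sketch of how the end of compare_reviews_with_responses could use it, assuming count_copies, rev_array, and scores have been computed earlier in the method (as in the snippet above):

def compare_reviews_with_responses(auto_metareview, map_id)
  #... compute count_copies, rev_array and scores for the responses ...
  result = check_plagiarism_state(auto_metareview, count_copies, rev_array, scores)
  #a nil result means no plagiarism was found, so evaluation of the other metrics continues
  return result unless result.nil?
  #... continue evaluating the other metrics ...
end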

The next thing to do is to extract long loops or if-else statements into individual methods, so that the original method is not too long or confusing for other readers.

Taking the first method, compare_reviews_with_submissions, as an example, we noticed that this part:

if(array[rev_len] == " ") #skipping empty
  rev_len+=1
  next
end

#generating the sentence segment you'd like to compare
rev_phrase = array[rev_len]

can be extracted into a new method, skip_empty_array, since these lines are responsible for skipping empty entries and producing the sentence segment used for comparison. Once we extract the method, the code of compare_reviews_with_submissions changes:

expertiza/app/models/automated_metareview/plagiarism_check.rb

...
review_text.each do |review_arr| #iterating through the review's sentences
   
   review = review_arr.to_s
   subm_text.each do |subm_arr|
     
     #iterating though the submission's sentences
     submission = subm_arr.to_s
     rev_len = 0
     #review's tokens, taking 'n' at a time
     array = review.split(" ")
     while(rev_len < array.length) do
       rev_len, rev_phrase = skip_empty_array(array, rev_len)
     ...

def skip_empty_array(array, rev_len)
  if(array[rev_len] == " ") #skipping empty
    rev_len+=1
  end
  #generating the sentence segment you'd like to compare
  rev_phrase = array[rev_len]
  return rev_len, rev_phrase
end

Please see the code after refactoring in detail on this page.

All tests have passed without failures since the refactoring.

=Testing=

==Link to VCL==

The purpose of running the VCL server is to let you verify that Expertiza still works properly with our refactored code. The first VCL link is seeded with the expertiza-scrubbed.sql file, which includes questionnaires, courses, and assignments, so it is easy to verify that reviews work; you only need to create users and have them review one another. The second link uses only the test.sql file, but you can still verify that Expertiza's functionality works. If neither of these links works, please do not rush your review; send us an email (yhuang25@ncsu.edu, ysun6@ncsu.edu, grimes.caroline@gmail.com) and we will fix it as soon as possible. Thank you!

1. http://152.46.20.30:3000/ Username: admin, password: password

2. http://vclv99-129.hpc.ncsu.edu:3000 Username: admin, password: admin

==Git Forked Repository URL==

https://github.com/shanfangshuiyuan/expertiza<ref> [https://github.com/shanfangshuiyuan/expertiza Expertiza fork]</ref>


==Test Our Code==

1. Set up the project following the steps above

2. From the command line, run rake db:test:prepare.

3. Run plagiarism_check_test.rb and sentence_state_test.rb, which are under /test/unit/automated_metareview. After refactoring, all tests pass without errors.

4. Review the refactored files: sentence_state.rb and plagiarism_check.rb are under /app/models/automated_metareview. Other changed files are shown below.

==Files Changed==

1. text_preprocessing.rb

2. plagiarism_check.rb

3. sentence_state.rb

4. tagged_sentence.rb

5. constants.rb

6. negations.rb

7. plagiarism_check_test.rb

=Future Work=

Through refactoring, we have made the code easier to understand by introducing design patterns, which meets the requirements of this project. From our perspective, however, more work should be done to improve the code as a whole, including:

1. There are bugs in the original methods compare_reviews_with_questions_responses and google_search_response, which cannot be run correctly at the moment. We hope that the people responsible for this project can fix them so that the methods perform their intended functions.

2. Building on item 1, more tests regarding plagiarism can then be added, which would improve further development of the code.

3. While running the tests, we found errors in methods of the text_preprocessing.rb file, which may conflict with the plagiarism-check functionality. Bug fixing is needed.

4. The TaggedSentence class could be refactored so that a sentence can be broken either into sentence clauses or into arrays of parsed sentence tokens for each clause.

=References=

<references/>