CSC/ECE 517 Fall 2021 - E2150. Integrate suggestion detection algorithm


Problem Definition

Peer-review systems like Expertiza rely heavily on students’ input to assess each other’s performance. At the same time, students learn from the reviews they receive and use them to improve their own work. For this to work well, everyone should give substantive reviews rather than generic ones. Currently, Expertiza has a few classifiers that can detect useful features of review comments, such as whether they contain suggestions. The suggestion-detection algorithm has been coded as a web service, and other detection algorithms, such as problem detection and sentiment analysis, also exist as newer web services. We need to make the UI more intuitive by letting users view the feedback on specific review comments, and the code needs to be refactored to remove redundancy, in keeping with the DRY principle.


Previous Implementation

Overview

The previous implementation added the following features:

  1. Set up a config file, 'review_metric.yml', in which the instructor can select which review metrics to display for the current assignment (a sketch of this file appears after this list).
  2. API calls (sentiment, problem, suggestion) are made, and a table displaying the feedback on the review comments is rendered below the rubric.
  3. The total time taken by the API calls is also displayed.
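
The following is a minimal sketch of what 'review_metric.yml' might contain, assuming per-metric enable flags and service URLs; the keys and endpoint addresses here are illustrative assumptions, not the file's actual contents.

 # Hypothetical review_metric.yml: which metrics to run for an assignment
 # and where each web service lives. Keys and URLs are illustrative only.
 metrics:
   sentiment:
     enabled: true
     url: "https://example.com/analyze_sentiment"   # placeholder endpoint
   suggestion:
     enabled: true
     url: "https://example.com/detect_suggestion"   # placeholder endpoint
   problem:
     enabled: false
     url: "https://example.com/detect_problem"      # placeholder endpoint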

UI Screenshots

The following image shows how a reviewer interacts with the system to get feedback on the review comments.

Control Flow


Issues with Previous Work

With the previous implementation of this project, students can write comments and request feedback on those comments. There are certain issues with the previous implementation that need to be addressed.

  1. The criteria are numbered in the view, and those numbers do not correspond to anything on the rubric form. So if the rubric is long, it would be quite difficult for the reviewer to figure out which automated feedback refers to which comment.
  2. Too much metric-specific information is encoded in the text. While some of the info is in configuration files, the help text for the info buttons is in the code.
  3. Currently there are many repetitive blocks of code. For example, in _response_analysis.html, the getReviewFeedback() function repeats the API call for each type of tag (sentiment, suggestion, etc.); only the API link differs.


Proposed Solution

  1. We need to improve the UI so that if there are many rubric items, the user can easily check the feedback for each comment given on each rubric item. We propose that, instead of showing one table with feedback on all comments, we show the feedback below each rubric item so that it is easily accessible.
  2. Another issue with the current implementation is that the code for making the different API calls (sentiment, suggestion, etc.) is quite repetitive. We propose to store the API links in the config file and look them up in getReviewFeedback(), as in the sketch after this list.
  3. We also plan to remove the global variable response_general, which is currently used to store the response of each API call. We will refactor the makeRequest function to return the response directly so that it can be used in various places. This will resolve implicit coupling in the code and make it more easily extensible.
  4. The previous implementation hardcodes configuration information, such as the help text for the info buttons; this belongs in the config file as well.
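
The following is a minimal sketch of what proposals 2 and 3 could look like in the JavaScript of _response_analysis.html. The metricEndpoints object, its keys, and the URLs are assumptions for illustration; the real code would read them from the config file rather than defining them inline.

 // Hypothetical endpoint map, populated from the config file instead of
 // being hardcoded once per metric (names and URLs are illustrative).
 const metricEndpoints = {
   sentiment:  "https://example.com/analyze_sentiment",
   suggestion: "https://example.com/detect_suggestion",
   problem:    "https://example.com/detect_problem"
 };

 // makeRequest returns the response directly (as a Promise) instead of
 // stashing it in a global variable like response_general.
 function makeRequest(url, text) {
   return fetch(url, {
     method: "POST",
     headers: { "Content-Type": "application/json" },
     body: JSON.stringify({ text: text })
   }).then(function (response) { return response.json(); });
 }

 // One generic loop replaces the repeated per-metric blocks in
 // getReviewFeedback(), since only the endpoint URL differs between metrics.
 function getReviewFeedback(commentText) {
   return Promise.all(
     Object.keys(metricEndpoints).map(function (metric) {
       return makeRequest(metricEndpoints[metric], commentText);
     })
   );
 }

Because makeRequest returns a Promise rather than writing to a global, each caller consumes the response where it needs it, which removes the implicit coupling noted in proposal 3.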

Comments on Prior Team's Implementation

  • The criteria are numbered in the view, and those numbers do not correspond to anything on the rubric form.
    • So if the rubric is long, it would be quite difficult for the reviewer to figure out which automated feedback refers to which comment.
  • There is too much metric-specific information encoded into the text.
    • While some of the info is in configuration files, the help text for the info buttons is in the code. Also, the calls for each metric are programmed into the code, though, depending on how diverse the metrics are, this is perhaps unavoidable.
  • The code could be more modular.
    • Currently there are many repetitive blocks of code. For example, in _response_analysis.html, the getReviewFeedback() function repeats the API call for each type of tag (sentiment, suggestion, etc.); only the API link differs. So it might be more beneficial to store the API link in the config file and use the variable in the function.

API Endpoints

JSON Formatting

  • Input text is passed in the following JSON format:
 { 
     "text": "This is an excellent project. Keep up the great work" 
 }
  • Output is returned in the following JSON format:
 {
     "sentiment_score": 0.9,
     "sentiment_tone": "Positive",
     "suggestions": "absent",
     "suggestions_chances": 10.17,
     "text": "This is an excellent project. Keep up the great work",
     "total_volume": 10,
     "volume_without_stopwords": 6
 }
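  • As a usage illustration, a review comment could be sent to one of these endpoints from JavaScript as follows; the endpoint URL is a placeholder, since the actual service addresses are not listed here.

 // Hypothetical call to one of the detection web services (URL is a
 // placeholder). Request and response bodies follow the formats above.
 fetch("https://example.com/analyze", {
   method: "POST",
   headers: { "Content-Type": "application/json" },
   body: JSON.stringify({ text: "This is an excellent project. Keep up the great work" })
 })
   .then(function (response) { return response.json(); })
   .then(function (result) {
     // e.g. result.sentiment_tone === "Positive", result.suggestions === "absent"
     console.log(result.sentiment_score, result.suggestions_chances);
   });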

Team

  • Prashan Sengo (psengo)
  • Griffin Brookshire ()
  • Divyang Doshi (ddoshi2)
  • Srujan (sponnur)


Relevant Links