CSC/ECE 517 Spring 2022 - E2230. Integrate suggestion detection algorithm



Problem Definition

This project builds on the work done by E2150. It introduces a new feature in Expertiza that helps reviewers get feedback on the reviews they provide. When a reviewer clicks a button to request feedback, APIs are called on the backend and their responses are shown to the reviewer. Currently, there are suggestion, sentiment, and problem detection algorithms coded as web services. We need to resolve the outstanding issues and provide a scalable, extensible solution to this problem. We also need to fix some issues with the UI, such as the ordering of the rubric items, and address what happens when a non-text item appears in the rubric.


Previous Implementation/Issues to Address

The previous team created a response_analysis.html.erb file containing a script that calls the APIs once feedback is requested. The UI has one button for saving the review and another for getting feedback on the review. When the reviewer clicks the feedback button, the script handles all of the steps.

They created .yml files that contain the metric types and the URLs of all of the APIs. The script loads the data from those .yml files and then calls a function that gathers all of the comments the reviewer has provided. After the comments are fetched, they are converted into JSON, and output is requested for each comment object.
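For illustration, the per-metric configuration might look like the sketch below. Only the presence of a tooltip text and a URL per metric is documented; the key names and tooltip wording here are our own assumptions.

 # Hypothetical layout of review_metrics_api_urls.yml; actual keys may differ.
 sentiment:
   url: "https://peerlogic.csc.ncsu.edu/sentiment/analyze_reviews_bulk"
   tool_tip_text: "Sentiment of the review comment"
 problem:
   url: "http://152.7.99.200:5000/problem"
   tool_tip_text: "Whether the comment points out a problem"
 suggestions:
   url: "http://152.7.99.200:5000/suggestions"
   tool_tip_text: "Whether the comment offers a suggestion"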

The real issue arises when the responses come back. A function that combines the API output handles each response with conditional statements: it checks which metric the API corresponds to and processes the output accordingly. The parsing of the output is also not generalized. Because the current APIs return JSON, the output is parsed only as JSON. We propose to generalize the solution to support APIs that return output in formats other than JSON.

So, the issues with the previous implementation are that it does not account for APIs that return non-JSON output, and since our task is to integrate APIs, we want a solution that can handle any new API in the future with minimal code change. A sketch of the generalization we have in mind follows.
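As a rough sketch (the names here are hypothetical, not taken from the existing code), each metric could declare how its raw response is parsed, so the combining function needs neither per-metric conditionals nor a hard-coded JSON assumption:

 // Hypothetical sketch: per-metric parsers instead of one conditional chain.
 // A metric that returns JSON and one that returns plain text both satisfy
 // the same interface, so the combining code never inspects the format.
 const parsers = {
   json: (rawText) => JSON.parse(rawText),
   plain: (rawText) => ({ value: rawText.trim() }),
 };

 function parseMetricResponse(metric, rawText) {
   const parse = parsers[metric.format] || parsers.json; // default to JSON
   return parse(rawText);
 }

Adding a new response format would then mean registering one more parser, with no changes to the combining logic.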

Previous Implementation and its Issues

The following image shows how a reviewer interacts with the system to get feedback on the review comments.

Proposed UI Plan

Because we are making the solution scalable, we intend to build the table so that when new API calls are added, the table's columns are extended correspondingly, making the UI scalable as well as dynamic. We are adhering to the previous design so that, if there are many rubric items, users can easily check the feedback for each comment given to each rubric item. Hovering over a comment displays the user's specific comment on the specific rubric review question, making the feedback easily accessible. As in the previous implementation, when the 'Get Review Feedback' button is clicked, we loop through the dynamically created review question form and save the mapping of questions to the reviewer's corresponding comments, which are displayed as a popup on hover. A sketch of the dynamic column construction follows.
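A minimal sketch of the dynamic-column idea, assuming the metric names come from the .yml configuration (the helper names are placeholders):

 // Hypothetical sketch: one table column per configured metric, so adding a
 // metric to the configuration automatically adds a column to the table.
 function buildHeaderRow(metricNames) {
   const row = document.createElement("tr");
   row.appendChild(makeHeaderCell("Comment"));
   metricNames.forEach((name) => row.appendChild(makeHeaderCell(name)));
   return row;
 }

 function makeHeaderCell(text) {
   const cell = document.createElement("th");
   cell.textContent = text;
   return cell;
 }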

Design of Proposed Scalable Solution



Implementation Plan

There are three major goals that we hope to accomplish in our project:

  • Ensure that the UI code is scalable for future additions of metrics to the system.
  • Ensure that the UI for comment feedback displays comments in the order in which the student filled in their response.
  • Ensure that non-text rubric items are not passed to API calls that will not know how to handle this input form.

Goal 1: Scalable Metrics

Of these three goals, the first will require the most reworking of the previous implementation. Future iterations of the Expertiza project may require the integration of further metrics to collect and display data on student responses. When future programmers need to perform these tasks, it will be important that new API calls for metrics can be easily integrated into our system, such that they are automatically triggered on all student responses and their results are added to the table view shown above.

In order to create a scalable implementation, our team plans to extend the existing implementation of API calls through JavaScript. In the present implementation, metrics are added into the review_metrics_api_urls.yml file. Each metric is listed with a Tool Tip Text and URL to use when performing the API call. After each API is called, it has to have its data formatted into an appropriate form through a single function in the _response_analysis.html.erb file. Our proposed solution would change this such that each metric is associated with its own custom class. These classes would inherit from a generic Metric class that will provide standard operations to call the associated API and format the returned data before passing it along to be included in the process of generating a table. In order to scale this to further metrics in the future, a programmer would then only need to create a custom subclass of the Metric class that implements these pieces of functionality.
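A minimal sketch of the proposed hierarchy follows; the method names are our own placeholders, and the input and output handling assumes the documented JSON formats shown later on this page.

 // Hypothetical sketch of the proposed Metric hierarchy. The base class owns
 // the API call; a subclass only supplies its URL and output formatting.
 class Metric {
   constructor(name, url) {
     this.name = name;
     this.url = url;
   }

   // Shared behavior: POST the comment text and hand the raw response body
   // to the subclass-specific formatter.
   async fetchFeedback(commentText) {
     const response = await fetch(this.url, {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({ text: commentText }),
     });
     return this.formatOutput(await response.text());
   }

   // Subclasses override this to turn the raw body into a table cell value.
   formatOutput(rawBody) {
     throw new Error("formatOutput must be implemented by a subclass");
   }
 }

 class SentimentMetric extends Metric {
   formatOutput(rawBody) {
     const data = JSON.parse(rawBody);
     return `${data.sentiment_tone} (${data.sentiment_score})`;
   }
 }

Under this design, supporting a new metric means writing one small subclass like SentimentMetric and adding its entry to the configuration file.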

Goal 2: UI Comment Ordering

In the previous implementation of this project, there is an issue in which metric feedback for comments is not properly ordered in the generated table, creating confusion for the student as to which metrics apply to which of their comments. In order to solve this issue, we will have to conduct further testing of the output to see what is causing this improper ordering to occur. Upon our team’s initial inspection of the prior implementation, we suspect that this may be caused by the use of JavaScript dictionaries to store the response information. Dictionaries have no implicit ordering, so it could be difficult to define whether the output would be in the proper order when rendering to a table. Due to the lack of depth in testing from the previous implementation, we will have to further investigate this to figure out the true cause once we begin coding for the project.
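If the dictionary hypothesis holds, one candidate fix is to carry the responses in an array that records the rubric order explicitly rather than keying them by question; a sketch under that assumption (the selector and helper names are placeholders):

 // Hypothetical fix sketch: an array preserves rubric order explicitly,
 // whereas iteration order over a plain object is easy to get wrong.
 const orderedComments = []; // filled while looping over the rubric form
 document.querySelectorAll(".review-question textarea").forEach((q, index) => {
   orderedComments.push({ order: index, question: q.id, comment: q.value });
 });
 // Rendering rows from the array keeps the table rows in rubric order.
 // addTableRow is a placeholder for the table-rendering code.
 orderedComments.forEach((entry) => addTableRow(entry));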

Goal 3: Non-Rubric Items

There are several instances in which student responses will be in a non-text format, such as in the instance of selecting a checkbox for a true or false response. Passing this type of information along to the various API calls desired does not make sense for most cases, such as detecting sentiment from the comments. From the previous implementation, it is unclear whether or not these types of questions were being passed along to the API, although our initial inspection of the code base leads our team to believe that they are not. This feature will be something that is prioritized during testing to ensure that we cover this scenario and do not have to worry about undefined behavior in the final implementation in the expertiza project.
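The guard we would test for might look like the sketch below, assuming each rubric response exposes its question type (the type name shown is a placeholder for whatever Expertiza uses to mark free-text items):

 // Hypothetical guard: only free-text rubric responses are forwarded to the
 // text-analysis APIs; checkboxes, dropdowns, and scales are skipped.
 function textOnlyResponses(responses) {
   return responses.filter((r) => r.questionType === "TextArea");
 }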

Sample API Input/Output

At the present moment, there are three metrics that need to be implemented in our system. They will be retrieved from the following API calls:

  • Sentiment: https://peerlogic.csc.ncsu.edu/sentiment/analyze_reviews_bulk
  • Presence of Problems: http://152.7.99.200:5000/problem
  • Presence of Suggestions: http://152.7.99.200:5000/suggestions

Each of these API calls uses the following JSON input format:

 { 
     "text": "This is an excellent project. Keep up the great work" 
 }

Currently, our team is having issues accessing the APIs as they all appear to be returning internal server error messages when called through Postman. From our project description, we were able to identify the following expected JSON return format for the Sentiment metric:

 {          
     "sentiment_score": 0.9,
     "sentiment_tone": "Positive",
     "suggestions": "absent",
     "suggestions_chances": 10.17,
     "text": "This is an excellent project. Keep up the great work",
     "total_volume": 10,
     "volume_without_stopwords": 6
  }

We will be working with our mentor to gain a better understanding of the return format for each of the remaining API calls.

Testing Plan

The test confirms that the correct API URLs are retrieved from the config file and verifies that the values are read and the service is working.

It is to be placed in spec/controllers/response_controller_spec.rb.

The manual tests below were carried over from the previous project. We will handle these at the end of the project.

  • The functionality test will be written in JavaScript.
  • To test the view, students should be able to access the review type.
  • There is a button at the bottom of the review called 'Get Review Feedback'.
  • When the button is pressed, API calls are issued and the metrics show up within the table.
  • Since the API calls take time, 'Loading...' text will appear until the calls are complete.
  • All of the review feedback for the comments will be displayed in a colorful table.
  • In the feedback table, hovering over a comment number shows the rubric item and the review comments associated with that particular rubric item.

In addition to the previous manual tests:

  • Rubric items should appear in the same order as the rubric (see the test sketch after this list).
  • When a non-text item appears in the rubric, the order of the rubric items should still be preserved in the view and the table.
  • Unit tests will determine that the web service communicates correctly with Expertiza.
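Since the functionality tests are planned in JavaScript, the ordering check might look roughly like the sketch below; the framework (Jest-style here) and the buildFeedbackRows helper are placeholders, not settled choices.

 // Hypothetical Jest-style test for the ordering requirement.
 // Minimal stand-in for whatever function will render the table rows;
 // the real implementation will come from the feedback-table code.
 function buildFeedbackRows(comments) {
   return [...comments].sort((a, b) => a.order - b.order);
 }

 test("feedback rows follow rubric order", () => {
   const comments = [
     { order: 0, comment: "First rubric answer" },
     { order: 1, comment: "Second rubric answer" },
   ];
   const rows = buildFeedbackRows(comments);
   expect(rows.map((r) => r.comment)).toEqual([
     "First rubric answer",
     "Second rubric answer",
   ]);
 });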

Important Links

  1. https://docs.google.com/document/d/1H0BjzMBz5it7Wckhegq4LXhMi8RVctTwVzP2x9gnFCo
  2. https://peerlogic.csc.ncsu.edu/sentiment/developer

This section will be updated with the pull request link when the project is done.

Team

  • Dev Gaurangbhai Mehta (dmehta3)
  • Matthew Martin (mkmarti5)
  • Muhammet Mustafa Olmez (molmez)
  • Priyam Garg (pgarg6)