CSC/ECE 517 Spring 2022 - E2230. Integrate Suggestion Detection Algorithm



Problem Definition

This project builds on the work done by E2150, which introduced a feature in Expertiza that lets reviewers get automated feedback on the reviews they write. When a reviewer clicks a button to request feedback, the backend calls several external APIs and returns their responses to the reviewer. Currently, suggestion, sentiment, and problem detection algorithms are available as web services. Our task is to fix the outstanding issues and provide a scalable, extendable integration of these services. We also need to address issues with the UI, such as the ordering of rubric items in the feedback table and the handling of non-text items in the rubric.

Previous Implementation/Issues to Address

  • The previous team created a response_analysis.html.erb file containing a script that calls the APIs once feedback is requested. In the UI there is a button for saving the review and another for getting feedback on the review. When the reviewer clicks the feedback button, this script handles all of the steps.
  • They also created .yml files that list each type of metric along with the URL of its API. The script loads the data from these .yml files, fetches all of the comments the reviewer has provided, converts each comment to JSON, and then requests output for that comment object from each API.
  • The real issue arises when the responses come back: a single function combines the API output using conditional statements, checking which metric each response belongs to and processing the output accordingly. The parsing is not generalized either; because the current APIs return JSON, the output is parsed only as JSON. We propose to generalize the solution so that it can also handle APIs that return output in a form other than JSON.
  • In short, the issues with the previous implementation are that it does not consider APIs that do not return JSON, and, since our task is to integrate APIs, we want a solution that can handle any new API in the future with minimal code change (see the illustrative sketch below).
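For illustration only, the combining logic described above amounts to metric-by-metric branching that assumes JSON everywhere. The function and field names below are hypothetical, not the previous team's actual code:

 // Illustrative sketch of the per-metric branching described above
 // (hypothetical names; not the actual previous implementation).
 function combineApiOutput(metricName, rawResponse) {
   var data = JSON.parse(rawResponse);       // assumes every API returns JSON
   if (metricName === "sentiment") {
     return data.sentiment_tone + " (" + data.sentiment_score + ")";
   } else if (metricName === "suggestions") {
     return data.suggestions;                // each new metric needs another branch
   } else if (metricName === "problems") {
     return data.problems;
   }
   return "";                                // unknown metrics are silently dropped
 }

Adding a fourth metric, or one that returns plain text or XML, means editing this one function, which is exactly the coupling we want to remove.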

Previous UI Implementation

The following image shows how a reviewer interacts with the system to get feedback on the review comments.

Previous Control Flow

Proposed UI Plan

Because we are making the solution scalable, we intend to build the table so that, if new API calls are added, its columns are extended correspondingly, making the UI scalable as well as dynamic. We are adhering to the previous design so that, when there are many rubric items, users can easily check the feedback for each comment given on each rubric item. Hovering over a row displays the user's comment on that specific rubric review question, making the feedback easily accessible. As in the previous implementation, when the Get Review Feedback button is clicked, we loop through the dynamically created review question form and save the mapping of questions to the reviewer's corresponding comments, which are displayed as a popup on hover. A rough sketch of that mapping step is shown below.
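The sketch below assumes jQuery is available on the review form; the CSS selectors and element structure are our own placeholders, not the actual Expertiza markup:

 // Hypothetical sketch: collect each rubric question and the reviewer's comment,
 // preserving the order in which the questions appear on the form.
 function collectQuestionComments() {
   var entries = [];
   $('.review_question').each(function () {          // selector is an assumption
     entries.push({
       question: $(this).find('.question_text').text(),
       comment: $(this).find('textarea').val() || ''
     });
   });
   return entries;  // later used to build table rows and hover tooltips
 }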

Design of Proposed Scalable Solution

Implementation Plan

There are three major goals that we hope to accomplish in our project:

  1. Ensure that the UI code is scalable for future additions of metrics to the system.
  2. Ensure that the UI for comment feedback displays comments in the same order in which the student filled in their responses.
  3. Ensure that non-text rubric items are not passed to API calls that will not know how to handle that form of input.
Goal 1: Scalable Metrics

Of these three goals, the first will require the most reworking of the previous implementation. Future iterations of the Expertiza project may require the integration of further metrics to collect and display data on student responses. When future programmers need to perform these tasks, it will be important that new API calls for metrics can be easily integrated into our system, so that they are automatically triggered on all student responses and their results are added to the table view shown above.

In order to create a scalable implementation, our team plans to extend the existing implementation of API calls through JavaScript. In the present implementation, metrics are added to the review_metrics_api_urls.yml file; each metric is listed with a Tool Tip Text and the URL to use when performing the API call. After each API is called, its data must be formatted into an appropriate form by a single function in the _response_analysis.html.erb file. Our proposed solution would change this so that each metric is associated with its own custom class. These classes would inherit from a generic Metric class that provides standard operations to call the associated API and format the returned data before passing it along to be included in the process of generating the table. To scale this to further metrics in the future, a programmer would then only need to create a custom subclass of the Metric class that implements these pieces of functionality. A minimal sketch of this design follows.
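The following is a minimal sketch of the proposed class hierarchy, assuming the browser's fetch API; the class and method names are working assumptions, not final code:

 // Generic base class: calls the metric's API and normalizes its response.
 class Metric {
   constructor(name, url, toolTipText) {
     this.name = name;
     this.url = url;
     this.toolTipText = toolTipText;
   }

   // Default behaviour: POST the comment as JSON and hand the raw reply to format().
   async call(commentText) {
     const response = await fetch(this.url, {
       method: 'POST',
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({ text: commentText })
     });
     return this.format(await response.text());
   }

   // Subclasses override this to turn the raw reply into a table cell value.
   format(rawBody) {
     return JSON.parse(rawBody);
   }
 }

 // Example subclass for the sentiment metric (field names taken from the
 // sample output shown later on this page).
 class SentimentMetric extends Metric {
   format(rawBody) {
     const data = JSON.parse(rawBody);
     return data.sentiment_tone + ' (' + data.sentiment_score + ')';
   }
 }

 // A future metric that returns plain text or XML would only need its own
 // format() override; nothing else in the table-generation code changes.

Because the base class owns the request logic and each subclass owns only its parsing, adding a new metric no longer requires touching a shared combining function.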

Goal 2: UI Comment Ordering

In the previous implementation of this project, there is an issue in which metric feedback for comments is not properly ordered in the generated table, creating confusion for the student as to which metrics apply to which of their comments. In order to solve this issue, we will have to conduct further testing of the output to see what is causing the improper ordering. Upon our team's initial inspection of the prior implementation, we suspect that it may be caused by the use of JavaScript dictionaries (plain objects) to store the response information: these provide no ordering guarantee that matches the rubric, so it is difficult to ensure the output is rendered to the table in the proper order. Due to the lack of depth in testing from the previous implementation, we will have to investigate further to find the true cause once we begin coding for the project. One possible fix is sketched below.
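If the dictionary storage does turn out to be the culprit, one possible fix is to keep question/comment pairs in an array, which preserves insertion order, and key the API results by array index. This is a hypothetical sketch, not the previous team's code:

 // Hypothetical sketch: preserve rubric order with an array instead of an object.
 var orderedResponses = [];                     // index == position in the rubric
 $('.review_question').each(function (index) {  // selector is an assumption
   orderedResponses[index] = {
     question: $(this).find('.question_text').text(),
     comment: $(this).find('textarea').val(),
     metrics: {}                                // filled in as each API call returns
   };
 });

 // When rendering, iterate the array so rows come out in rubric order.
 orderedResponses.forEach(function (entry) {
   addTableRow(entry);                          // addTableRow is a hypothetical helper
 });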

Goal 3: Non-Rubric Items

There are several instances in which student responses will be in a non-text format, such as selecting a checkbox for a true or false response. Passing this type of information along to the various API calls does not make sense in most cases, such as detecting sentiment from the comments. From the previous implementation, it is unclear whether or not these types of questions were being passed along to the APIs, although our initial inspection of the code base leads our team to believe that they are not. This behavior will be prioritized during testing to ensure that we cover the scenario and do not have to worry about undefined behavior in the final implementation in the Expertiza project. A possible guard is sketched below.
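If testing shows that checkbox answers do reach the APIs, a guard along these lines would keep them out of the metric calls; the selectors and element structure are assumptions for illustration:

 // Hypothetical sketch: only send free-text answers to the metric APIs.
 function isTextResponse(questionElement) {
   // Checkbox-style rubric items have no textarea to analyze.
   return $(questionElement).find('textarea').length > 0;
 }

 $('.review_question').each(function () {        // selector is an assumption
   if (!isTextResponse(this)) {
     return;                                      // skip non-text questions entirely
   }
   // ...queue this comment for the Sentiment/Problem/Suggestion API calls...
 });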

Sample API Input/Output

At the present moment, there are three metrics that need to be implemented in our system. They will be retrieved from the following API calls:

  • Sentiment: https://peerlogic.csc.ncsu.edu/sentiment/analyze_reviews_bulk
  • Presence of Problems: http://152.7.99.200:5000/problem
  • Presence of Suggestions: http://152.7.99.200:5000/suggestions

Each of these API calls uses the following JSON input format:

 { 
     "text": "This is an excellent project. Keep up the great work" 
 }

Currently, our team is having issues accessing the APIs as they all appear to be returning internal server error messages when called through Postman. From our project description, we were able to identify the following expected JSON return format for the Sentiment metric:

 {          
     "sentiment_score": 0.9,
     "sentiment_tone": "Positive",
     "suggestions": "absent",
     "suggestions_chances": 10.17,
     "text": "This is an excellent project. Keep up the great work",
     "total_volume": 10,
     "volume_without_stopwords": 6
  }

We will be working with our mentor to gain a better understanding of the return format for each of the remaining API calls.
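In the meantime, the sketch below shows how a single call could be made from the browser, assuming the input format and the sentiment output fields shown above; it is an assumption on our part, not something verified against the live service:

 // Hedged sketch: call the sentiment endpoint with the documented input format
 // and read two of the documented output fields.
 async function analyzeSentiment(commentText) {
   const response = await fetch(
     'https://peerlogic.csc.ncsu.edu/sentiment/analyze_reviews_bulk',
     {
       method: 'POST',
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({ text: commentText })
     }
   );
   const data = await response.json();
   return { tone: data.sentiment_tone, score: data.sentiment_score };
 }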

Testing Plan

Testing Methodology

We decided to proceed with mostly manual testing for this project. Most of the functionality is written in JavaScript, which cannot be tested with RSpec, the tool used to write most other tests in the project. In addition, much of this functionality relies on outside API calls whose return values can change if the API's behavior changes. For these reasons, we thought it best to define a method of manually testing this project that can be reused in the future without issues.

Preconditions for Every Test:

1. Create a new assignment:
   a. In the navigation bar, click “Manage → Assignments”.
   b. Click the blue “+” button in the top right of the page.
   c. Use the following field values in the “General” tab: Name: “Basic Assignment”; Course: “CSC 216, Fall 2009”; Has Teams?: True.
   d. Click “Create”.
   e. Click on the “Due dates” tab and select the following options:
      • For “Round 1: Submission”, set the Date & Time to be approximately 10 minutes ahead of the current time (you will need to complete the other setup in this time in order to test the review functionality, so choose accordingly).
      • For “Round 1: Review”, set the Date & Time to be approximately 1 week from today (give yourself plenty of time to complete the test).
   f. Click “Save”.
   NOTE: the assignment should default to have a Review rubric. For our testing, this default value was “rubric1,” which had all the coverage we needed for both text and non-text (checkbox) based questions. If your assignment does not default to the same rubric, select a different rubric or create your own.
2. Add students to the assignment:
   a. In the navigation bar, click “Manage → Assignments”.
   b. Find the newly created assignment “Basic Assignment” and click the button to “Add participants”.
   c. Add the following two test users: “student7144” and “student7126”.
3. Submit the assignment as one user:
   a. In the navigation bar, click “Manage → Impersonate User”.
   b. Enter “student7126” and click “Impersonate”.
   c. Open the newly created assignment.
   d. Add yourself to a new team under the “Your Team” tab.
   e. Add a submission under the “Your Work” tab. This can be simple, such as including the URL “www.google.com”.
4. Review the assignment as the other user:
   a. In the navigation bar, click “Revert” to stop impersonating user “student7126”.
   b. Click “Manage → Impersonate User”, enter “student7144”, and click “Impersonate”.
   c. Open the newly created assignment.
   d. Add yourself to a new team under the “Your Team” tab.
   e. Wait for the submission deadline to pass. This is the same deadline as what you set in step 1e.
   f. Click “Others’ work”.
   g. Click “Request” to get access to review the previous student’s work.
   h. Fill out the review rubric. Include comments in each of the text boxes where appropriate.
   i. Click “Get Review Feedback” to submit your reviews to the various API metric calls.



Table of Tests/Results

  • “Get Review Feedback” Button
    Description: Students should see the “Get Review Feedback” button near the bottom of the page. Click on the “Get Review Feedback” button.
    Expected Results: When the button is clicked, there may be a message that appears saying “Loading…” before a table appears containing metrics on the responses given by the student. The metrics at the current moment should be Sentiment, Problem Detection, and Suggestion Detection.
    Actual Results: As expected, when students click the button after completing their review comments, they see a table with API metrics for each of their comments. There are metrics for Sentiment, Problem Detection, and Suggestion Detection.
  • Time Taken
    Description: Students should click the “Get Review Feedback” button. The page will show the message “Loading…” until the API calls are completed, then display the time that was taken to complete the operation.
    Expected Results: The time should be displayed on the page in seconds underneath the table of metric responses. The time should be within the range of 10-60 seconds depending on the machine that Expertiza is running on and the metrics being requested.
    Actual Results: For our tests running Expertiza on the VCL, the time was properly displayed and tended to be around 25 seconds.
  • Order of the Comments
    Description: Students should write comments on each question in the rubric. These should be seen in the metric table in the same order as they appear in the rubric.
    Expected Results: In the table, comments should be ordered the same as they were in the rubric for the student to fill out. This can be verified by hovering over the information tooltip for each question, which shows the question associated with that row and the answer that the student gave.
    Actual Results: The feedback in the table is in the proper order for each of the questions and responses.
  • Tooltip Values
    Description: In the table, the user should be able to hover over the information icon for each of the questions and get more information about the question that the metric values are associated with.
    Expected Results: When hovering over each question, the user should see a tooltip including the question and the response that they provided, in the format “Q) <Question> A) <Answer>”.
    Actual Results: The tooltip text appropriately displays the question and associated answer for all questions.
  • Handling Non-Text Questions
    Description: The rubric used for testing must include at least one non-text (checkbox) question and several other text-based questions.
    Expected Results: These non-text questions should not impact the appearance of the metric table at all, and they will not be processed by the metric APIs. The results table shows rows for each of the text-based questions, but not for the non-text question, and the ordering of the text-based question feedback still aligns with the order in which they were completed in the rubric.
    Actual Results: The results table contains only the appropriate values for text-based questions, and is in the proper order.


Important Links

  1. https://docs.google.com/document/d/1H0BjzMBz5it7Wckhegq4LXhMi8RVctTwVzP2x9gnFCo
  2. https://peerlogic.csc.ncsu.edu/sentiment/developer

This section will be updated with the pull request link when the project is done.

Team

  • Dev Gaurangbhai Mehta (dmehta3)
  • Matthew Martin (mkmarti5)
  • Muhammet Mustafa Olmez (molmez)
  • Priyam Garg (pgarg6)