This page describes an Expertiza-based independent development project.

__TOC__
==Introduction==

===Background===
The web application Expertiza, used by students in CSC 517 and other courses, lets students peer-review each other's work and give suggestions in comments. Students are later asked to participate voluntarily in an extra-credit review-tagging assignment, in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments to earn full participation credit. Researchers are concerned that this workload causes inattentive participants to submit responses that deviate from the truth, corrupting the trained model. By having the machine-learning algorithm pre-compute a confidence level for the presence of each characteristic in a comment, one can ask students to assign only the tags the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.

===Problem Statement===
The goal of this project is to construct a workable infrastructure for active learning: use machine-learning algorithms to evaluate which tags, if assigned manually, would help the model learn most effectively. In particular, the following requirements are fulfilled:

*Incorporate metrics analysis into the review-giving process
*Reduce the number of tags students have to assign
*Surface the gathered information on the report pages
*Update the web-service integration so the confidence level of each prediction is accessible
*Decide a proper tag-certainty threshold that says how certain the ML algorithm must be of a tag value before the author is asked to tag it manually

===Notes===
This project runs in parallel with another project, 'Integrate Suggestion Detection Algorithm.' Whereas that project focuses on building a central outlet to external web services, this project focuses on interpreting the results that come back from them.

==Design==

===Control Flow Diagram===

Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine-learning algorithms and models that compute metrics on reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to

# let students see the quality of their review before submission, and
# selectively query manual tags that are used to further train the models (active learning).

To integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named <code>ReviewMetricsQuery</code>, that converts outputs from external sources into a form our system can understand and use.

Below, the control flow diagram illustrates the usage of the <code>ReviewMetricsQuery</code> model.

[[File:Control_flow_diagram_1.png|800px|center]]

The <code>ReviewMetricsQuery</code> class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Before the system marks their reviews as submitted, the <code>ReviewMetricsQuery</code> class intercepts the reviews' content and sends it to the Peer Logic web service for predictions. After it receives the predicted results, it caches them in the local Expertiza database and then releases the intercept. Instead of being redirected to the list of reviews, students are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments or confirm the submission, depending on whether they are satisfied with the results displayed to them.

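The following is a minimal sketch of this interception step, built around the <code>cache_ws_results</code> method exercised in the test plan; the <code>MetricsController.query</code> call and all parameter names are assumptions, not the actual Expertiza API.

<pre>
# Sketch only: cache Peer Logic predictions before a review is marked
# as submitted. MetricsController.query and the parameter names are
# assumptions.
class ReviewMetricsQuery
  def self.cache_ws_results(answer, tag_deployments)
    comment = answer.de_tag_comments            # strip HTML before sending
    tag_deployments.each do |deployment|
      prediction = MetricsController.query(comment, deployment)
      AnswerTag.create!(
        answer_id: answer.id,
        tag_prompt_deployment_id: deployment.id,
        user_id: nil,                           # inferred tags carry no user_id
        value: prediction[:value],
        confidence_level: prediction[:confidence]
      )
    end
  end
end
</pre>
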
[[File:Control_flow_diagram_2.png|450px|center]]

Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is about that prediction. Whenever a student visits the review-tagging page, before rendering any tags, the system consults the <code>ReviewMetricsQuery</code> class to check whether the web service has previously determined the value of the tag and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident about its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish normal tags from grayed-out tags and focus their attention on the normal ones. This is the essence of active learning: query manual input only where it adds to the algorithm's knowledge.

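A sketch of that rendering decision, using the <code>has?</code> and <code>confident?</code> query methods named in the test plan (the style symbols are our own):

<pre>
# Sketch: pick a slider style for one tag before rendering it.
def slider_style_for(answer, deployment)
  return :normal unless ReviewMetricsQuery.has?(answer, deployment)       # no cached prediction
  ReviewMetricsQuery.confident?(answer, deployment) ? :gray_out : :normal # lighten only confident ones
end
</pre>
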
The cached data are also used in the instructor's report views, which is why they need to be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews, so querying the web service in real time is impractical. We minimize contact with the web service by sending requests only when students decide to submit their reviews. In this way, the predicted value of each tag stays up to date with the stored reviews.

===Database Design===
The only change to the database is adding a <code>confidence_level</code> column to the existing <code>answer_tags</code> table, which originally stored only tags assigned by students. One can view the web service's results as a stack of tags assigned by an outside tool, with a confidence level indicating how confident the tool is in each tag it assigns. The <code>answer_tags</code> table therefore holds two types of tags: those from students, which have a <code>user_id</code> but no <code>confidence_level</code>, and those inferred by the web service, which have a <code>confidence_level</code> but no <code>user_id</code>. The system can determine a tag's type by checking which of these two fields is present.

{| class="wikitable"
|+ answer_tags table
|-
! id !! answer_id !! tag_prompt_deployment_id !! user_id !! value !! confidence_level !! created_at !! updated_at
|-
| 1 || 1066685 || 5 || 7513 || 0 || NULL || 2017-11-04 02:32:59 || 2017-11-04 02:32:59
|-
| 353916 || 1430431 || 149 || NULL || -1 || 0.99700 || 2020-11-16 01:47:29 || 2020-11-16 01:47:29
|-
|}
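
For illustration, the two row types above could be distinguished with predicates like these (the helper names are hypothetical):

<pre>
# Hypothetical helpers on AnswerTag to tell the two row types apart.
class AnswerTag < ApplicationRecord
  def assigned_by_student?
    user_id.present?            # first row above: user_id set, confidence_level NULL
  end

  def inferred_by_web_service?
    confidence_level.present?   # second row above: confidence_level set, user_id NULL
  end
end
</pre>
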
===UI Design===
Four pages need to be modified to reflect the new functionality.

====Metrics Analysis Page====
[[File:Loader.png|800px|center]]

When students click the 'Submit' button on the review-giving page, the button is disabled to prevent them from submitting the request multiple times. Submitting multiple requests would send the same set of comments to the external web service repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently to avoid overloading the system.

[[File:Metrics_analysis_page.png|800px|center]]

About a minute after students click the 'Submit' button, they are redirected to a page that shows the analysis of their submitted reviews. Students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set by the instructor on a per-questionnaire basis. Predictions with a confidence level under the predefined threshold are not rendered in the report, so students do not see uncertain or inaccurate predictions. When students confirm the submission, they return to the list of reviews to perform other actions.

====Review Tagging Page====
[[File:Review_tagging_page.png|800px|center]]

As the image above shows, the slider now takes one of three forms:

[[File:Original_tag.png|100px]]

The original form, meaning the tag needs input from the user.

[[File:Gray_out_tag.png|100px]]

The grayed-out form, presenting a tag inferred by the web service. A tag in this form is still editable, meaning students can override inferred tags if they wish.

[[File:Overridden_tag.png|100px]]

The overridden form, representing a tag that originally had a value assigned by the web service but was overridden by the user.

====Review Report Page====
[[File:Review_report_page.png|800px|center]]

In each row representing a student, a metrics chart is added below the existing volume chart. By looking at these two charts together, graders gain useful information and can offer more accurate feedback to students. Because of space limitations, metric names cannot be displayed in full; graders can hover the cursor over each bar to view its metric name.

====Answer-Tagging Report Page====
[[File:Answer_tagging_report_page.png|800px|center]]

Changes to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated; a short sketch of the arithmetic follows the list.

* % tags applied by author = # tags applied by author / # appliable tags
* # tags applied by author = of the # appliable tags, how many are tagged by the author
* # tags not applied by author = # appliable tags - # tags applied by author
* # appliable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence
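
A sketch of that arithmetic in Ruby, assuming a collection of tag records with <code>comment_length</code>, <code>user_id</code>, and <code>confidence_level</code> attributes (all names and thresholds here are illustrative):

<pre>
# Illustrative only: recompute the report columns for one student.
long_enough       = tags.select { |t| t.comment_length > LENGTH_THRESHOLD }
inferred_by_ml    = long_enough.count { |t| t.confidence_level.to_f >= CONFIDENCE_THRESHOLD }
appliable         = long_enough.size - inferred_by_ml
applied_by_author = long_enough.count { |t| t.user_id.present? }
not_applied       = appliable - applied_by_author
pct_applied       = appliable.zero? ? 0.0 : applied_by_author.fdiv(appliable)
</pre>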

==Implementation==

===Core Changes===
app/models/review_metrics_query.rb
*The single model class responsible for communication between <code>MetricsController</code> and the rest of the Expertiza system where tags are used
*Added the <code>average_number_of_qualifying_comments</code> method, which returns either the metric average for one reviewer or the metric average for the whole class, depending on whether a <code>reviewer</code> is supplied; see the sketch below
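
A sketch of that dual-scope behavior (the internal helper is an assumption):

<pre>
# Sketch: average for one reviewer when given, otherwise for the class.
# qualifying_comment_counts is a hypothetical helper returning one count
# per reviewer for the given metric.
def self.average_number_of_qualifying_comments(metric, reviewer = nil)
  counts = qualifying_comment_counts(metric, reviewer ? [reviewer] : all_reviewers)
  counts.sum.to_f / counts.size
end
</pre>
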
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb
*Added the <code>confidence_level</code> column to the <code>answer_tags</code> table, roughly as follows
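
The migration itself is likely a one-liner along these lines (the column type and precision are our guess, based on values such as 0.99700 in the table above):

<pre>
# db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb
# Minimal sketch; the actual column type may differ.
class AddConfidenceLevelToAnswerTagsTable < ActiveRecord::Migration[5.1]
  def change
    add_column :answer_tags, :confidence_level, :decimal, precision: 6, scale: 5
  end
end
</pre>
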
===Cache Inferred Tags===

app/models/answer.rb
*Added the <code>de_tag_comments</code> method, which strips HTML tags from the submitted review comment (sketched below)
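
A sketch of that stripper, assuming the comment text lives in a <code>comments</code> attribute:

<pre>
# Sketch: remove HTML tags before the comment is sent to the web service.
class Answer < ApplicationRecord
  def de_tag_comments
    ActionController::Base.helpers.strip_tags(comments.to_s)
  end
end
</pre>
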
app/models/answer_tag.rb
*Corrected a typo (<code>tag_prompt_deployment</code> instead of <code>tag_prompts_deployment</code>)
*Added a validation clause that checks for the presence of either the <code>user_id</code> or the <code>confidence_level</code>
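
A sketch of that either/or presence check (illustrative, not the exact clause):

<pre>
# Every tag row must come from a student or from the web service.
class AnswerTag < ApplicationRecord
  validates :user_id, presence: true, unless: -> { confidence_level.present? }
  validates :confidence_level, presence: true, unless: -> { user_id.present? }
end
</pre>
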
app/controllers/metrics_controller.rb
*Created an empty <code>MetricsController</code> class so tests could pass

app/controllers/response_controller.rb
*Altered the redirection so students are redirected to the analysis page after they click to submit their reviews
*Added the <code>confirm_submit</code> method, which marks the review given in the parameters as 'submitted' (sketched below)
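
A sketch of the <code>confirm_submit</code> action; the attribute name and redirect target are assumptions:

<pre>
# Sketch: mark the review as submitted, then return to the list of reviews.
class ResponseController < ApplicationController
  def confirm_submit
    response = Response.find(params[:id])
    response.update(is_submitted: true)   # attribute name is assumed
    redirect_to action: :index            # redirect target is assumed
  end
end
</pre>
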
app/views/response/analysis.html.erb
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric

app/views/response/response.html.erb
*Added code that disables the "Submit" button after it is clicked

app/assets/stylesheets/response.scss
*Added styles for the disabled button and the spinning loader

config/routes.rb
*Added a <code>confirm_submit</code> route (sketched below)
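
The route might be declared as a member route, for example:

<pre>
# config/routes.rb -- sketch; the actual path and verb may differ.
resources :response do
  member do
    post :confirm_submit   # POST /response/:id/confirm_submit
  end
end
</pre>
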
===Show Inferred Tags===

app/models/tag_prompt.rb
*Added code that sets the style of the slider (none, grayed-out, or overridden) when it is about to be rendered

app/assets/javascripts/answer_tags.js
*Implemented the dynamic effect of overriding an inferred tag

app/assets/stylesheets/three_state_toogle.scss
*Added styles for the different forms of tags (grayed-out and overridden)

===Show Summary of Inferred Tags===

app/models/tag_prompt_deployment.rb
*Slightly changed how each column in the answer-tagging report is calculated

app/models/vm_user_answer_tagging.rb & app/helpers/report_formatter_helper.rb
*Added a variable that stores the number of tags inferred by ML

app/views/reports/_answer_tagging_report.html.erb
*Renamed columns in the answer-tagging report
*Added a new column named "# tags inferred by ML" to the table

app/views/reports/_review_report.html.erb & app/helpers/review_mapping_helper.rb
*Added a metrics bar chart to each row of the review report

==Test Plan==

===RSpec Testing===

Since this project changes many places in the system, some existing tests needed to be fixed. These include:

spec/features/peer_review_spec.rb
*The "Submit Review" button no longer redirects students to the list of reviews. We changed the test to click "Save Review" instead of "Submit Review" so the expected behavior could still be tested.

spec/models/tag_prompt_spec.rb
*This spec file tests functionality related to <code>TagPrompt</code>. Some tests broke because we incorporated a call to the <code>confident?</code> method to determine the slider's style. We fixed these tests by stubbing <code>confident?</code> to always return false, stripping that part of the logic out of the tests.
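
The stub might look like this (the receiver is our assumption):

<pre>
# Force the prediction check to be "not confident" so the slider-style
# logic stays out of the TagPrompt tests.
allow(ReviewMetricsQuery).to receive(:confident?).and_return(false)
</pre>
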
One new spec file was written for the new code:

spec/models/review_metrics_query_spec.rb
*Ensures the <code>cache_ws_results</code> method calls <code>MetricsController</code> with the right parameters and saves the results into the <code>answer_tags</code> table
*Ensures the <code>inferred_value</code> method interprets the web service results correctly
*Ensures the <code>inferred_confidence</code> method flips the confidence value for predictions that have a negative meaning
*Ensures the <code>confident?</code>, <code>confidence()</code>, and <code>has?</code> methods access the right column in the <code>answer_tags</code> table
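
For example, the caching expectation could be written as follows, mirroring the hypothetical <code>MetricsController.query</code> API sketched earlier and assuming <code>answer</code> and <code>deployment</code> fixtures:

<pre>
# Illustrative spec: caching one prediction creates one inferred tag.
describe ReviewMetricsQuery do
  it 'saves web service results into answer_tags' do
    allow(MetricsController).to receive(:query)
      .and_return(value: -1, confidence: 0.997)
    expect { ReviewMetricsQuery.cache_ws_results(answer, [deployment]) }
      .to change(AnswerTag, :count).by(1)
  end
end
</pre>
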
===UI Testing===

The following UI tests were performed to ensure that:

# The 'Submit' button is disabled after the student clicks it, to prevent multiple web service queries.
# The student is redirected to the analysis page after the web service request completes.
# The student sees the analysis of their review comments on the analysis page.
# The slider for inferred tags is grayed out.
# The student can override a grayed-out tag with a new value, and the slider changes to the overridden style.
# The instructor sees the new bar chart for review metrics.
# The instructor sees the column summarizing inferred tags in the answer-tagging report.

==Reference==
Yulin Zhang (yzhan114@ncsu.edu)
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137759</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137759"/>
		<updated>2020-11-21T20:32:27Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* RSpec Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review and give suggestive comments to each other's work. Students will later be asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag comments they received for helpfulness, positive tone, and other characteristics interested by researchers. Currently, students have to tag hundreds of comments they received in order to get full participation credits. Researchers are concerned that this amount of work would cause inattentive participants to submit responses deviating from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of the asked characteristic's presence in a comment, one can ask students to assign only tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning, by incorporating machine-learning algorithms in evaluating which tags, by having a manual input, can help the AI learn more effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is simultaneously being held with another project named 'Integrate Suggestion Detection Algorithm.' Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting results from external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their review before submission, and&lt;br /&gt;
*# Selectively query manual tagging that are used to train the models (active learning) further&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It first gets called when students finish and about to submit their reviews on other students' work. Before the system marks their reviews as submitted, we plan that the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts these reviews' content and sends them to the Peer Logic web service for predictions. After it receives the predicted results, it caches them to the local Expertiza database and then releases the intercept. Instead of being redirected to the list of reviews, students are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments or confirm to submit, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is about this prediction. Whenever a student visits the review tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to check whether the web service has previously determined the value of the tag and whether its confidence level exceeds the pre-set threshold. If yes, meaning the algorithm is confident about its prediction, it applies a lightening effect onto the tag to make it less noticeable. Students who do the tagging can easily distinguish between normal tags and gray-out tags and focus their attention more on normal tags. This is what active learning is about, to query manual inputs only if it adds to the algorithm's knowledge.&lt;br /&gt;
&lt;br /&gt;
These cached data would also be used in the instructor's report views, and that's why we need these data to be cached locally. One review consists of about 10 to 20 comments and takes about minutes to process, and a report composes thousands of such reviews. Querying web service results in real-time is impractical concerning the time it consumes. We limit the number of contacts with the web service the least by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag are up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is to add a &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the existing &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table, which is originally used to store tags assigned by students. One can imagine results from web service to be a stack of tags assigned by the outside tool, with a confidence level indicating how confident the outside tool is to each tag it assigns. Therefore, the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table will have two types of tags, one from the student, which has the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; but not the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;, and the other inferred from web service, which has the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; but not the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt;. The system can determine what type of tags they are by checking the presence of values in these two fields.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ answer_tags table&lt;br /&gt;
|-&lt;br /&gt;
! id !! answer_id !! tag_prompt_deployment_id !! user_id !! value !! confidence_level !! created_at !! updated_at&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 1066685 || 5 || 7513 || 0 || NULL || 2017-11-04 02:32:59 || 2017-11-04 02:32:59&lt;br /&gt;
|-&lt;br /&gt;
| 353916 || 1430431 || 149 || NULL || -1 || 0.99700 || 2020-11-16 01:47:29 || 2020-11-16 01:47:29&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages are needed to be modified to reflect the addition of new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review-giving page, the button will be put on the disabled effect to prevent students from submitting requests multiple times. The consequence of submitting requests multiple times is that the same set of comments are sent to the external web service for processing, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message to the bottom of the 'submit' button, asking them to wait patiently to avoid overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About a minute after students click the 'submit' button, they are redirected to a page that shows the analysis of their submitted reviews. Students can see every of their submitted comments along with the analyzed result on each metric. These metrics came from tag prompt deployments set by the instructor in a per questionnaire scope. Predictions with the confidence level under the predefined threshold are not rendered in the report, so students do not see uncertain or inaccurate predictions. When students confirm to submit, they return to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the above image, one can see that the slider has been changed into three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The gray-out form, presenting tag inferred by the web service. The tag in this form is still editable, meaning students can override some of the inferred tags if they wish to.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which is used to represent a tag that originally has a value assigned by the web service but gets overrode by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row representing a student, a metrics chart is added below the volume chart that is already there. Graders get useful information by looking at these two charts combined and can offer more accurate feedback to students. Due to the space limitation, each metric name cannot be fully expanded. Grader could hover the cursor over each bar to view its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include changing column names and adding a column for the number of inferred tags. Below we explained how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # appliable tags&lt;br /&gt;
* # tags applied by author = from # appliable tags, how many are tagged by the author&lt;br /&gt;
* # tags not applied by author = # appliable tags - # tags applied by the author&lt;br /&gt;
* # appliable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Core Changes===&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class that is responsible for communications between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the metric average for one reviewer or the metric average for the whole class, depending on whether the &amp;lt;code&amp;gt;reviewer&amp;lt;/code&amp;gt; is supplied&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
===Cache Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method which strips html tags from the submitted review comment&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (&amp;lt;code&amp;gt;tag_prompt_deployment&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;tag_prompts_deployment&amp;lt;/code&amp;gt;)&lt;br /&gt;
*Added validation clause that checks the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so tests could pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Alternated the redirection so students could be redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method which marks the review in the parameter as 'submitted'&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added codes that disable the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Added styles for the disabled button and the spinning loader&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
===Show Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added codes to set up the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Added styles for different forms of tags (gray-out and overridden)&lt;br /&gt;
&lt;br /&gt;
===Show Summary of Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer-tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer-tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added a bar chart in each row of the review report for metrics&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
===RSpec Testing===&lt;br /&gt;
&lt;br /&gt;
Since this project involves changes to many places of the system, some existing tests needed to be fixed. These include&lt;br /&gt;
&lt;br /&gt;
spec/features/peer_review_spec.rb&lt;br /&gt;
*The button &amp;quot;Submit Review&amp;quot; will no longer redirect students to the list of reviews. We made the test to click &amp;quot;Save Review&amp;quot; instead of &amp;quot;Submit Review&amp;quot; so the expected behavior could still be tested.&lt;br /&gt;
&lt;br /&gt;
spec/models/tag_prompt_spec.rb&lt;br /&gt;
*This spec file tests functionalities regarding &amp;lt;code&amp;gt;TagPrompt&amp;lt;/code&amp;gt;. Some tests break because we incorporated the logic of calling the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to determine the slider's style. We fixed these tests by always letting the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to return false, so as to strip that part of logic out of testing.&lt;br /&gt;
&lt;br /&gt;
One new spec files are written for the new code:&lt;br /&gt;
&lt;br /&gt;
spec/models/review_metrics_query_spec.rb &lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; method calls &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; with the right parameters and saves the results into the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_value&amp;lt;/code&amp;gt; method interprets the web service results correctly&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; method flips the confidence value for predictions that have a negative meaning&lt;br /&gt;
*Ensure &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;confidence()&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;has?&amp;lt;/code&amp;gt; methods access the right column in the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
&lt;br /&gt;
=== UI Testing===&lt;br /&gt;
&lt;br /&gt;
Following UI tests were done to ensure the following:&lt;br /&gt;
&lt;br /&gt;
# The 'Submit' button is disabled after the student clicks it to prevent multiple web service queries.&lt;br /&gt;
# The student gets redirected to the analysis page after the web service request completes.&lt;br /&gt;
# The student sees the analysis of their review comments on the analysis page.&lt;br /&gt;
# The slider for inferred tags is gray-out.&lt;br /&gt;
# The student can override the gray-out tag with a new value, and the slider changes to the overridden style.&lt;br /&gt;
# The instructor sees the new bar chart for review metrics.&lt;br /&gt;
# The instructor sees the column summarizing inferred tags in the answer-tagging report.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137758</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137758"/>
		<updated>2020-11-21T20:31:17Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* UI Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review and give suggestive comments to each other's work. Students will later be asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag comments they received for helpfulness, positive tone, and other characteristics interested by researchers. Currently, students have to tag hundreds of comments they received in order to get full participation credits. Researchers are concerned that this amount of work would cause inattentive participants to submit responses deviating from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of the asked characteristic's presence in a comment, one can ask students to assign only tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning, by incorporating machine-learning algorithms in evaluating which tags, by having a manual input, can help the AI learn more effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is simultaneously being held with another project named 'Integrate Suggestion Detection Algorithm.' Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting results from external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their review before submission, and&lt;br /&gt;
*# Selectively query manual tagging that are used to train the models (active learning) further&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It first gets called when students finish and about to submit their reviews on other students' work. Before the system marks their reviews as submitted, we plan that the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts these reviews' content and sends them to the Peer Logic web service for predictions. After it receives the predicted results, it caches them to the local Expertiza database and then releases the intercept. Instead of being redirected to the list of reviews, students are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments or confirm to submit, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is about this prediction. Whenever a student visits the review tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to check whether the web service has previously determined the value of the tag and whether its confidence level exceeds the pre-set threshold. If yes, meaning the algorithm is confident about its prediction, it applies a lightening effect onto the tag to make it less noticeable. Students who do the tagging can easily distinguish between normal tags and gray-out tags and focus their attention more on normal tags. This is what active learning is about, to query manual inputs only if it adds to the algorithm's knowledge.&lt;br /&gt;
&lt;br /&gt;
These cached data would also be used in the instructor's report views, and that's why we need these data to be cached locally. One review consists of about 10 to 20 comments and takes about minutes to process, and a report composes thousands of such reviews. Querying web service results in real-time is impractical concerning the time it consumes. We limit the number of contacts with the web service the least by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag are up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is to add a &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the existing &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table, which is originally used to store tags assigned by students. One can imagine results from web service to be a stack of tags assigned by the outside tool, with a confidence level indicating how confident the outside tool is to each tag it assigns. Therefore, the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table will have two types of tags, one from the student, which has the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; but not the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;, and the other inferred from web service, which has the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; but not the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt;. The system can determine what type of tags they are by checking the presence of values in these two fields.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ answer_tags table&lt;br /&gt;
|-&lt;br /&gt;
! id !! answer_id !! tag_prompt_deployment_id !! user_id !! value !! confidence_level !! created_at !! updated_at&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 1066685 || 5 || 7513 || 0 || NULL || 2017-11-04 02:32:59 || 2017-11-04 02:32:59&lt;br /&gt;
|-&lt;br /&gt;
| 353916 || 1430431 || 149 || NULL || -1 || 0.99700 || 2020-11-16 01:47:29 || 2020-11-16 01:47:29&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages are needed to be modified to reflect the addition of new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review-giving page, the button will be put on the disabled effect to prevent students from submitting requests multiple times. The consequence of submitting requests multiple times is that the same set of comments are sent to the external web service for processing, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message to the bottom of the 'submit' button, asking them to wait patiently to avoid overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About a minute after students click the 'submit' button, they are redirected to a page that shows the analysis of their submitted reviews. Students can see every of their submitted comments along with the analyzed result on each metric. These metrics came from tag prompt deployments set by the instructor in a per questionnaire scope. Predictions with the confidence level under the predefined threshold are not rendered in the report, so students do not see uncertain or inaccurate predictions. When students confirm to submit, they return to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the above image, one can see that the slider has been changed into three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The gray-out form, presenting tag inferred by the web service. The tag in this form is still editable, meaning students can override some of the inferred tags if they wish to.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which is used to represent a tag that originally has a value assigned by the web service but gets overrode by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row representing a student, a metrics chart is added below the volume chart that is already there. Graders get useful information by looking at these two charts combined and can offer more accurate feedback to students. Due to the space limitation, each metric name cannot be fully expanded. Grader could hover the cursor over each bar to view its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include changing column names and adding a column for the number of inferred tags. Below we explained how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # appliable tags&lt;br /&gt;
* # tags applied by author = from # appliable tags, how many are tagged by the author&lt;br /&gt;
* # tags not applied by author = # appliable tags - # tags applied by the author&lt;br /&gt;
* # appliable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Core Changes===&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class that is responsible for communications between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the metric average for one reviewer or the metric average for the whole class, depending on whether the &amp;lt;code&amp;gt;reviewer&amp;lt;/code&amp;gt; is supplied&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
===Cache Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method which strips html tags from the submitted review comment&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (&amp;lt;code&amp;gt;tag_prompt_deployment&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;tag_prompts_deployment&amp;lt;/code&amp;gt;)&lt;br /&gt;
*Added validation clause that checks the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so tests could pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Alternated the redirection so students could be redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method which marks the review in the parameter as 'submitted'&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added codes that disable the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Added styles for the disabled button and the spinning loader&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
===Show Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added codes to set up the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Added styles for different forms of tags (gray-out and overridden)&lt;br /&gt;
&lt;br /&gt;
===Show Summary of Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer-tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer-tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added a bar chart in each row of the review report for metrics&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
===RSpec Testing===&lt;br /&gt;
&lt;br /&gt;
Since this project involves changes to many places of the system, some existing tests needed to be fixed. These include&lt;br /&gt;
&lt;br /&gt;
spec/features/peer_review_spec.rb&lt;br /&gt;
*The button &amp;quot;Submit Review&amp;quot; will no longer redirect students to the list of reviews. We made the test to click &amp;quot;Save Review&amp;quot; instead of &amp;quot;Submit Review&amp;quot; so the expected behavior could still be tested.&lt;br /&gt;
&lt;br /&gt;
spec/models/tag_prompt_spec.rb&lt;br /&gt;
*This spec file tests functionalities regarding &amp;lt;code&amp;gt;TagPrompt&amp;lt;/code&amp;gt;. Some tests break because we incorporated the logic of calling the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to determine the slider's style. We fixed these tests by always letting the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to return false, so as to strip that part of logic out of testing.&lt;br /&gt;
&lt;br /&gt;
Two new spec files are written for the new code:&lt;br /&gt;
&lt;br /&gt;
spec/models/review_metrics_query_spec.rb &lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; method calls &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; with the right parameters and saves the results into the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_value&amp;lt;/code&amp;gt; method interprets the web service results correctly&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; method flips the confidence value for predictions that have a negative meaning&lt;br /&gt;
*Ensure &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;confidence()&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;has?&amp;lt;/code&amp;gt; methods access the right column in the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
&lt;br /&gt;
=== UI Testing===&lt;br /&gt;
&lt;br /&gt;
Following UI tests were done to ensure the following:&lt;br /&gt;
&lt;br /&gt;
# The 'Submit' button is disabled after the student clicks it to prevent multiple web service queries.&lt;br /&gt;
# The student gets redirected to the analysis page after the web service request completes.&lt;br /&gt;
# The student sees the analysis of their review comments on the analysis page.&lt;br /&gt;
# The slider for inferred tags is gray-out.&lt;br /&gt;
# The student can override the gray-out tag with a new value, and the slider changes to the overridden style.&lt;br /&gt;
# The instructor sees the new bar chart for review metrics.&lt;br /&gt;
# The instructor sees the column summarizing inferred tags in the answer-tagging report.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137754</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137754"/>
		<updated>2020-11-21T20:05:59Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Answer-Tagging Report Page */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review and give suggestive comments to each other's work. Students will later be asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag comments they received for helpfulness, positive tone, and other characteristics interested by researchers. Currently, students have to tag hundreds of comments they received in order to get full participation credits. Researchers are concerned that this amount of work would cause inattentive participants to submit responses deviating from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of the asked characteristic's presence in a comment, one can ask students to assign only tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning, by incorporating machine-learning algorithms in evaluating which tags, by having a manual input, can help the AI learn more effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Choose a suitable tag-certainty threshold that determines how certain the ML algorithm must be of a tag's value before the author is excused from tagging it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being carried out concurrently with another project, 'Integrate Suggestion Detection Algorithm.' Whereas that project focuses on building a central outlet to external web services, this project focuses on interpreting the results returned by those services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
# Let students see the quality of their reviews before submission, and&lt;br /&gt;
# Selectively query manual tags that are then used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
To integrate these algorithms into the Expertiza system, we need a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form the rest of the system can understand and use.&lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Before the system marks their reviews as submitted, we plan for the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to intercept the reviews' content and send it to the Peer Logic web service for predictions. After it receives the predicted results, it caches them in the local Expertiza database and then releases the intercept. Instead of being redirected to the list of reviews, students are presented with an analysis report on the quality of their reviews. Depending on whether they are satisfied with the results, they may go back and edit their review comments or confirm the submission.&lt;br /&gt;
&lt;br /&gt;
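As a rough sketch of this interception, the submit action might look like the following; everything except &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; (listed under Implementation) is illustrative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of the intercepted submit flow (hypothetical controller code)&lt;br /&gt;
def submit_review&lt;br /&gt;
  response = Response.find(params[:id])&lt;br /&gt;
  # Send the review text to the Peer Logic service and cache the&lt;br /&gt;
  # returned predictions and confidence levels in answer_tags.&lt;br /&gt;
  ReviewMetricsQuery.cache_ws_results(response)&lt;br /&gt;
  # Do not mark the response as submitted yet; show the analysis first.&lt;br /&gt;
  redirect_to action: 'analysis', id: response.id&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;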
[[File:Control_flow_diagram_2.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level indicating how confident the algorithm is about that prediction. Whenever a student visits the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to check whether the web service has previously determined the value of the tag and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident about its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish between normal tags and grayed-out tags and focus their attention on the normal ones. This is the essence of active learning: query manual input only when it adds to the algorithm's knowledge.&lt;br /&gt;
&lt;br /&gt;
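A sketch of that rendering decision, assuming the query methods listed under the Test Plan and an illustrative threshold constant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: choose a slider style for one tag (names are illustrative)&lt;br /&gt;
CONFIDENCE_THRESHOLD = 0.8  # pre-set threshold; the real value is configurable&lt;br /&gt;
&lt;br /&gt;
def slider_style(answer, tag_dep)&lt;br /&gt;
  # confident? checks whether a cached prediction exists and its&lt;br /&gt;
  # confidence_level exceeds CONFIDENCE_THRESHOLD&lt;br /&gt;
  if ReviewMetricsQuery.confident?(answer, tag_dep)&lt;br /&gt;
    'gray-out'  # confident prediction: lightened but still editable&lt;br /&gt;
  else&lt;br /&gt;
    'normal'    # uncertain or no prediction: ask the student to tag it&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;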
The cached data are also used in the instructor's report views, which is why they need to be stored locally. One review consists of roughly 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews, so querying the web service in real time is impractical. We minimize contact with the web service by sending requests only when students decide to submit their reviews. In this way, the predicted values of the tags stay up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is adding a &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the existing &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table, which originally stored only tags assigned by students. Results from the web service can be thought of as a stack of tags assigned by an outside tool, each with a confidence level indicating how confident the tool is in that tag. The &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table therefore holds two types of tags: those from students, which have a &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; but no &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;, and those inferred by the web service, which have a &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; but no &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt;. The system determines a tag's type by checking which of these two fields is present. (A sketch of the migration follows the table below.)&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ answer_tags table&lt;br /&gt;
|-&lt;br /&gt;
! id !! answer_id !! tag_prompt_deployment_id !! user_id !! value !! confidence_level !! created_at !! updated_at&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 1066685 || 5 || 7513 || 0 || NULL || 2017-11-04 02:32:59 || 2017-11-04 02:32:59&lt;br /&gt;
|-&lt;br /&gt;
| 353916 || 1430431 || 149 || NULL || -1 || 0.99700 || 2020-11-16 01:47:29 || 2020-11-16 01:47:29&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
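A minimal sketch of the corresponding migration (the Rails migration version is assumed):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb (sketch)&lt;br /&gt;
class AddConfidenceLevelToAnswerTagsTable &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
  def change&lt;br /&gt;
    # NULL for student-assigned tags; a 0..1 float for tags inferred by the web service&lt;br /&gt;
    add_column :answer_tags, :confidence_level, :float&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;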
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'Submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting multiple requests would send the same set of comments to the external web service for processing, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently so as not to overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About a minute after students click the 'Submit' button, they are redirected to a page that shows an analysis of their submitted reviews. Students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set up by the instructor per questionnaire. Predictions with a confidence level below the predefined threshold are not rendered in the report, so students do not see uncertain or inaccurate predictions. When students confirm the submission, they return to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
As the above image shows, the slider now takes one of three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, which still needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. A tag in this form is still editable, so students can override an inferred tag if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, representing a tag that originally had a value assigned by the web service but has been overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row representing a student, a metrics chart is added below the existing volume chart. By reading the two charts together, graders gain useful information and can offer more accurate feedback to students. Due to space limitations, metric names cannot be displayed in full; graders can hover the cursor over each bar to view its metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated; a sketch of the arithmetic follows the list.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the applicable tags, the number the author actually tagged&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by the author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose value is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
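A small sketch of that arithmetic, with hypothetical names for the thresholds and the tag collection:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of the per-author column arithmetic (illustrative variable names)&lt;br /&gt;
long_enough = tags.count { |t| t.comment_length &amp;gt; LENGTH_THRESHOLD }&lt;br /&gt;
inferred    = tags.count { |t| t.confidence_level.to_f &amp;gt;= CONFIDENCE_THRESHOLD }&lt;br /&gt;
applicable  = long_enough - inferred        # tags the author is actually asked to assign&lt;br /&gt;
applied     = tags.count { |t| t.user_id }  # tags the author actually assigned&lt;br /&gt;
not_applied = applicable - applied&lt;br /&gt;
percent     = applicable.zero? ? 0 : 100.0 * applied / applicable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;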
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Core Changes===&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The single model class responsible for communication between the &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the parts of the Expertiza system where tags are used&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the average for one reviewer or the average for the whole class, depending on whether a reviewer is supplied (see the sketch below)&lt;br /&gt;
&lt;br /&gt;
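A rough sketch of that averaging logic; the helper name and the length test are assumptions, not the actual implementation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch only: average number of qualifying comments per review&lt;br /&gt;
def self.average_number_of_qualifying_comments(assignment, reviewer = nil)&lt;br /&gt;
  responses = responses_for(assignment, reviewer)  # hypothetical helper; nil reviewer = whole class&lt;br /&gt;
  counts = responses.map do |r|&lt;br /&gt;
    r.scores.count { |s| s.comments.to_s.length &amp;gt; LENGTH_THRESHOLD }&lt;br /&gt;
  end&lt;br /&gt;
  counts.empty? ? 0 : counts.sum.to_f / counts.size&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;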
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
===Cache Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comment&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that requires the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; (a sketch follows)&lt;br /&gt;
&lt;br /&gt;
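A minimal sketch of such a validation, assuming the association names shown above; the actual clause may be written differently:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# app/models/answer_tag.rb (sketch)&lt;br /&gt;
class AnswerTag &amp;lt; ActiveRecord::Base&lt;br /&gt;
  belongs_to :answer&lt;br /&gt;
  belongs_to :tag_prompt_deployment&lt;br /&gt;
&lt;br /&gt;
  # Every tag must come from somewhere: a student (user_id)&lt;br /&gt;
  # or the web service (confidence_level).&lt;br /&gt;
  validate :user_or_confidence_present&lt;br /&gt;
&lt;br /&gt;
  def user_or_confidence_present&lt;br /&gt;
    if user_id.nil? &amp;amp;&amp;amp; confidence_level.nil?&lt;br /&gt;
      errors.add(:base, 'either user_id or confidence_level must be present')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;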
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so that the tests referencing it pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so that students are sent to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted' (a sketch follows)&lt;br /&gt;
&lt;br /&gt;
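A sketch of what that action might look like; the attribute name and the redirect target are assumptions based on the behavior described above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# app/controllers/response_controller.rb (illustrative sketch)&lt;br /&gt;
def confirm_submit&lt;br /&gt;
  response = Response.find(params[:id])&lt;br /&gt;
  response.update(is_submitted: true)  # mark the review as submitted&lt;br /&gt;
  # return the student to the list of reviews (route assumed)&lt;br /&gt;
  redirect_to action: 'list', controller: 'student_review'&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;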
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Added styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
===Show Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code that sets the slider's style (none, grayed out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Added the logic that drives the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Added styles for different forms of tags (gray-out and overridden)&lt;br /&gt;
&lt;br /&gt;
===Show Summary of Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added a metrics bar chart to each row of the review report&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
===Bug Fixes===&lt;br /&gt;
&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
===RSpec Testing===&lt;br /&gt;
&lt;br /&gt;
Since this project touches many parts of the system, some existing tests needed to be fixed. These include:&lt;br /&gt;
&lt;br /&gt;
spec/features/peer_review_spec.rb&lt;br /&gt;
*The &amp;quot;Submit Review&amp;quot; button no longer redirects students to the list of reviews. We changed the test to click &amp;quot;Save Review&amp;quot; instead of &amp;quot;Submit Review&amp;quot; so the expected behavior could still be tested.&lt;br /&gt;
&lt;br /&gt;
spec/models/tag_prompt_spec.rb&lt;br /&gt;
*This spec file tests functionality of &amp;lt;code&amp;gt;TagPrompt&amp;lt;/code&amp;gt;. Some tests broke because we incorporated a call to the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to determine the slider's style. We fixed these tests by stubbing &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; to always return false, stripping that logic out of the tests (a sketch follows).&lt;br /&gt;
&lt;br /&gt;
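A sketch of such a stub, assuming &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; is a class method on &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# In spec/models/tag_prompt_spec.rb: keep slider-style logic out of these tests&lt;br /&gt;
before(:each) do&lt;br /&gt;
  allow(ReviewMetricsQuery).to receive(:confident?).and_return(false)&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;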
New spec files were written for the new code, including:&lt;br /&gt;
&lt;br /&gt;
spec/models/review_metrics_query_spec.rb &lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; method calls &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; with the right parameters and saves the results into the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_value&amp;lt;/code&amp;gt; method interprets the web service results correctly&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; method flips the confidence value for predictions that have a negative meaning&lt;br /&gt;
*Ensure &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;confidence()&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;has?&amp;lt;/code&amp;gt; methods access the right column in the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
&lt;br /&gt;
===UI Testing===&lt;br /&gt;
&lt;br /&gt;
The following UI tests were performed to verify that:&lt;br /&gt;
&lt;br /&gt;
# The 'Submit' button is disabled after the student clicks it to prevent multiple queries to the web service.&lt;br /&gt;
# The student gets redirected to the analysis page after the web service request completes.&lt;br /&gt;
# The student sees the analysis of their review comments on the analysis page.&lt;br /&gt;
# The sliders for inferred tags are grayed out.&lt;br /&gt;
# The student can override a grayed-out tag with a new value, and the slider changes to the overridden style.&lt;br /&gt;
# The instructor sees the new bar chart for review metrics.&lt;br /&gt;
# The instructor sees the column summarizing inferred tags in the answer-tagging report.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137751</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137751"/>
		<updated>2020-11-21T20:01:50Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Metrics Analysis Page */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of an Expertiza-based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give constructive comments. Students are later asked to participate voluntarily in an extra-credit review-tagging assignment, in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments to earn full participation credit. Researchers are concerned that this workload leads inattentive participants to submit unreliable responses, corrupting the established model. By having the machine-learning algorithm first estimate its confidence that a given characteristic is present in a comment, one can ask students to assign only the tags the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to build a working infrastructure for active learning, using machine-learning algorithms to evaluate which tags, if assigned manually, would help the model learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Choose a suitable tag-certainty threshold that determines how certain the ML algorithm must be of a tag's value before the author is excused from tagging it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being carried out concurrently with another project, 'Integrate Suggestion Detection Algorithm.' Whereas that project focuses on building a central outlet to external web services, this project focuses on interpreting the results returned by those services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their review before submission, and&lt;br /&gt;
*# Selectively query manual tags that are then used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Before the system marks the reviews as submitted, the plan is for the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to intercept the reviews' content and send it to the Peer Logic web service for predictions. After it receives the predicted results, it caches them in the local Expertiza database and then releases the intercept. Instead of being redirected to the list of reviews, students are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level indicating how confident the algorithm is about that prediction. Whenever a student visits the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to check whether the web service has previously determined the value of the tag and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident about its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish normal tags from grayed-out tags and focus their attention on the normal ones. This is what active learning is about: querying manual input only when it adds to the algorithm's knowledge.&lt;br /&gt;
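&lt;br /&gt;
To make the consult concrete, here is a minimal Ruby sketch of that check. &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; is the method described in the sections below; its argument list and the style names are illustrative assumptions, not the exact Expertiza code.&lt;br /&gt;
&lt;br /&gt;
 # Sketch only: argument names and style symbols are assumptions.&lt;br /&gt;
 def slider_style(answer, tag_prompt_deployment)&lt;br /&gt;
   if ReviewMetricsQuery.confident?(answer, tag_prompt_deployment)&lt;br /&gt;
     :gray_out  # confident prediction: render the tag less noticeably&lt;br /&gt;
   else&lt;br /&gt;
     :normal    # no confident prediction: query the student&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;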
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is why they need to be cached locally. One review consists of about 10 to 20 comments and takes on the order of a minute to process, and a report comprises thousands of such reviews, so querying the web service in real time is impractical. We keep contact with the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted value of each tag stays up to date with the stored reviews.&lt;br /&gt;
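&lt;br /&gt;
The caching step can be pictured with the following Ruby sketch. &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; are named elsewhere on this page; the &amp;lt;code&amp;gt;scores&amp;lt;/code&amp;gt; accessor, the &amp;lt;code&amp;gt;query&amp;lt;/code&amp;gt; helper, and the prediction fields are assumptions made for illustration, not the actual Peer Logic API.&lt;br /&gt;
&lt;br /&gt;
 # Sketch of the interception step; accessor, helper, and field names are assumed.&lt;br /&gt;
 class ReviewMetricsQuery&lt;br /&gt;
   def self.cache_ws_results(response)&lt;br /&gt;
     comments = response.scores.map { |answer| answer.de_tag_comments }&lt;br /&gt;
     predictions = MetricsController.query(comments)  # one request per submit&lt;br /&gt;
     predictions.each do |p|&lt;br /&gt;
       AnswerTag.create(answer_id: p.answer_id,&lt;br /&gt;
                        tag_prompt_deployment_id: p.deployment_id,&lt;br /&gt;
                        value: p.value,&lt;br /&gt;
                        confidence_level: p.confidence)  # user_id stays NULL&lt;br /&gt;
     end&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;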
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is the addition of a &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the existing &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table, which was originally used to store tags assigned by students. One can think of the web-service results as a stack of tags assigned by the outside tool, with a confidence level indicating how confident the tool is in each tag it assigns. Therefore, the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table will hold two types of tags: one from the student, which has the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; but not the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;, and the other inferred by the web service, which has the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; but not the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt;. The system can determine which type a tag is by checking the presence of values in these two fields, as sketched after the table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ answer_tags table&lt;br /&gt;
|-&lt;br /&gt;
! id !! answer_id !! tag_prompt_deployment_id !! user_id !! value !! confidence_level !! created_at !! updated_at&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 1066685 || 5 || 7513 || 0 || NULL || 2017-11-04 02:32:59 || 2017-11-04 02:32:59&lt;br /&gt;
|-&lt;br /&gt;
| 353916 || 1430431 || 149 || NULL || -1 || 0.99700 || 2020-11-16 01:47:29 || 2020-11-16 01:47:29&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
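&lt;br /&gt;
Telling the two tag types apart is then a simple presence check, as in this sketch (the helper names are illustrative additions, not existing Expertiza methods; &amp;lt;code&amp;gt;present?&amp;lt;/code&amp;gt; comes from ActiveSupport):&lt;br /&gt;
&lt;br /&gt;
 # Sketch: classify an answer_tags row by which field is present.&lt;br /&gt;
 def inferred_by_ml?(tag)&lt;br /&gt;
   tag.confidence_level.present? &amp;amp;&amp;amp; tag.user_id.nil?&lt;br /&gt;
 end&lt;br /&gt;
 &lt;br /&gt;
 def assigned_by_student?(tag)&lt;br /&gt;
   tag.user_id.present? &amp;amp;&amp;amp; tag.confidence_level.nil?&lt;br /&gt;
 end&lt;br /&gt;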
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'Submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting the request multiple times would send the same set of comments to the external web service for processing, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently so as not to overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About a minute after students click the 'Submit' button, they are redirected to a page that shows the analysis of their submitted reviews. Students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from the tag prompt deployments set up by the instructor on a per-questionnaire basis. Predictions whose confidence level falls below the predefined threshold are not rendered in the report, so students do not see uncertain or inaccurate predictions. When students confirm the submission, they return to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the image above, one can see that the slider now takes three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. A tag in this form is still editable, meaning students can override any of the inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was later overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders get useful information by reading these two charts together and can offer more accurate review grades to students. Because of space limitations, each metric name cannot be fully expanded; graders can hover the cursor over each bar to see the corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated; a short sketch of the arithmetic follows the list.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # appliable tags&lt;br /&gt;
* # tags applied by author = from # appliable tags, how many are tagged by the author&lt;br /&gt;
* # tags not applied by author = # appliable tags - # tags applied by the author&lt;br /&gt;
* # appliable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine learning algorithm with high confidence&lt;br /&gt;
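&lt;br /&gt;
In plain Ruby, the arithmetic above amounts to the following sketch; the variable names are illustrative, with the counts assumed to come from the report query.&lt;br /&gt;
&lt;br /&gt;
 # Sketch of the answer-tagging report columns; names are illustrative.&lt;br /&gt;
 def answer_tagging_columns(long_comment_tags, ml_inferred_tags, tagged_by_author)&lt;br /&gt;
   appliable   = long_comment_tags - ml_inferred_tags&lt;br /&gt;
   not_applied = appliable - tagged_by_author&lt;br /&gt;
   percent     = appliable.zero? ? 0.0 : tagged_by_author.to_f / appliable&lt;br /&gt;
   { appliable: appliable, not_applied: not_applied, percent_applied: percent }&lt;br /&gt;
 end&lt;br /&gt;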
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Core Changes===&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method which returns either the average for one reviewer or the average for the whole class, depending on whether the reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table (see the sketch below)&lt;br /&gt;
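&lt;br /&gt;
A sketch consistent with that migration file name follows; the column type and the Rails version tag are assumptions (a decimal with five decimal places would fit the 0.99700 value shown earlier).&lt;br /&gt;
&lt;br /&gt;
 # Sketch of the migration; column type and version tag are assumed.&lt;br /&gt;
 class AddConfidenceLevelToAnswerTagsTable &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
   def change&lt;br /&gt;
     add_column :answer_tags, :confidence_level, :decimal, precision: 6, scale: 5&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;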
&lt;br /&gt;
===Cache Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that checks for the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;, as sketched below&lt;br /&gt;
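&lt;br /&gt;
A minimal sketch of such a clause, assuming a custom validation method (the method name and error message are invented for illustration):&lt;br /&gt;
&lt;br /&gt;
 # Inside app/models/answer_tag.rb -- sketch only. Every tag row must carry&lt;br /&gt;
 # a user_id (student tag) or a confidence_level (inferred tag).&lt;br /&gt;
 validate :tag_has_origin&lt;br /&gt;
 &lt;br /&gt;
 def tag_has_origin&lt;br /&gt;
   if user_id.blank? &amp;amp;&amp;amp; confidence_level.blank?&lt;br /&gt;
     errors.add(:base, 'needs a user_id or a confidence_level')&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;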
&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class as a placeholder so that tests can pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so that students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted' (sketched below)&lt;br /&gt;
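&lt;br /&gt;
A sketch of that action under conventional parameter handling; the attribute name and redirect target are assumptions.&lt;br /&gt;
&lt;br /&gt;
 # Inside app/controllers/response_controller.rb -- sketch only.&lt;br /&gt;
 def confirm_submit&lt;br /&gt;
   response = Response.find(params[:id])&lt;br /&gt;
   response.update(is_submitted: true)  # mark the review as 'submitted'&lt;br /&gt;
   redirect_to action: 'list'  # back to the list of reviews (target assumed)&lt;br /&gt;
 end&lt;br /&gt;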
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Added styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
===Show Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code that sets the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Added styles for different forms of tags (gray-out and overridden)&lt;br /&gt;
&lt;br /&gt;
===Show Summary of Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart for metrics to each row of the review report&lt;br /&gt;
*Fixed a redirection bug&lt;br /&gt;
&lt;br /&gt;
===Bug Fixes===&lt;br /&gt;
&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
===RSpec Testing===&lt;br /&gt;
&lt;br /&gt;
Since this project involves changes in many places in the system, some existing tests needed to be fixed. These include:&lt;br /&gt;
&lt;br /&gt;
spec/features/peer_review_spec.rb&lt;br /&gt;
*The &amp;quot;Submit Review&amp;quot; button no longer redirects students to the list of reviews. We changed the test to click &amp;quot;Save Review&amp;quot; instead of &amp;quot;Submit Review&amp;quot; so that the expected behavior could still be tested.&lt;br /&gt;
&lt;br /&gt;
spec/models/tag_prompt_spec.rb&lt;br /&gt;
*This spec file tests functionality regarding &amp;lt;code&amp;gt;TagPrompt&amp;lt;/code&amp;gt;. Some tests broke because we incorporated a call to the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to determine the slider's style. We fixed these tests by stubbing &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; to always return false, stripping that part of the logic out of the tests.&lt;br /&gt;
&lt;br /&gt;
New spec files were written for the new code:&lt;br /&gt;
&lt;br /&gt;
spec/models/review_metrics_query_spec.rb &lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; method calls &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; with the right parameters and saves the results into the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_value&amp;lt;/code&amp;gt; method interprets the web service results correctly&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; method flips the confidence value for predictions that have a negative meaning&lt;br /&gt;
*Ensure &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;confidence()&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;has?&amp;lt;/code&amp;gt; methods access the right column in the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
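&lt;br /&gt;
For illustration, a spec in the style of the last bullet might look like the sketch below; the record values and the argument list of &amp;lt;code&amp;gt;confidence&amp;lt;/code&amp;gt; are invented for the example.&lt;br /&gt;
&lt;br /&gt;
 # Illustrative spec only; setup values and argument list are invented.&lt;br /&gt;
 describe ReviewMetricsQuery do&lt;br /&gt;
   it 'reads the confidence_level column of answer_tags' do&lt;br /&gt;
     tag = AnswerTag.create(answer_id: 1, tag_prompt_deployment_id: 1,&lt;br /&gt;
                            value: 1, confidence_level: 0.99)&lt;br /&gt;
     conf = ReviewMetricsQuery.confidence(tag.answer_id, tag.tag_prompt_deployment_id)&lt;br /&gt;
     expect(conf).to eq(0.99)&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;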
&lt;br /&gt;
===UI Testing===&lt;br /&gt;
&lt;br /&gt;
The following UI tests were performed to ensure that:&lt;br /&gt;
&lt;br /&gt;
# The 'Submit' button is disabled after the student clicks it to prevent multiple queries to the web service.&lt;br /&gt;
# The student gets redirected to the analysis page after the web service request completes.&lt;br /&gt;
# The student sees the analysis of their review comments on the analysis page.&lt;br /&gt;
# The sliders for inferred tags are grayed out.&lt;br /&gt;
# The student can override a grayed-out tag with a new value, and the slider changes to the overridden style.&lt;br /&gt;
# The instructor sees the new bar chart for review metrics.&lt;br /&gt;
# The instructor sees the column summarizing inferred tags in the answer-tagging report.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137748</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=137748"/>
		<updated>2020-11-21T19:35:30Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Database Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza-based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give suggestions in comments. Students are later asked to participate voluntarily in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments to get full participation credit. Researchers are concerned that this amount of work would cause inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine, with a confidence level, whether each characteristic is present in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by using machine-learning algorithms to evaluate which tags, if given manual input, would help the model learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag-certainty threshold that says how certain the ML algorithm must be of a tag value before the author is no longer asked to assign it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being carried out simultaneously with another project named 'Integrate Suggestion Detection Algorithm.' Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results from external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their review before submission, and&lt;br /&gt;
*# Selectively query manual tagging that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks their reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of these reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
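Below is a minimal Ruby sketch of what this interception could look like inside the submission action. It assumes &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; accepts the response being submitted; the exact names and signatures in Expertiza may differ.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class ResponseController &amp;lt; ApplicationController&lt;br /&gt;
  # Sketch only: intercept the submission, query Peer Logic, cache the&lt;br /&gt;
  # predictions locally, then show the analysis page instead of the&lt;br /&gt;
  # list of reviews.&lt;br /&gt;
  def submit&lt;br /&gt;
    response = Response.find(params[:id])&lt;br /&gt;
    # Send the review comments to the web service and cache each&lt;br /&gt;
    # prediction (with its confidence level) in the answer_tags table.&lt;br /&gt;
    ReviewMetricsQuery.cache_ws_results(response)&lt;br /&gt;
    # The response is not marked as submitted yet; the student first&lt;br /&gt;
    # inspects the predictions and then confirms via confirm_submit.&lt;br /&gt;
    redirect_to action: 'analysis', id: response.id&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;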
[[File:Control_flow_diagram_2.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish normal tags from grayed-out tags and can focus their attention on the normal ones. This is the essence of active learning: querying manual input only when it adds to the knowledge of the algorithm.&lt;br /&gt;
&lt;br /&gt;
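A minimal sketch of this decision, assuming a fixed threshold constant and the column names from the database design below (the real threshold is configurable):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: should the slider for this tag be grayed out?&lt;br /&gt;
CONFIDENCE_THRESHOLD = 0.8  # assumed value; the project tunes this&lt;br /&gt;
&lt;br /&gt;
def gray_out?(answer_id, tag_prompt_deployment_id)&lt;br /&gt;
  tag = AnswerTag.find_by(answer_id: answer_id,&lt;br /&gt;
                          tag_prompt_deployment_id: tag_prompt_deployment_id,&lt;br /&gt;
                          user_id: nil)  # inferred tags carry no user_id&lt;br /&gt;
  # Gray out only when the web service has already predicted a value&lt;br /&gt;
  # and its confidence level clears the threshold.&lt;br /&gt;
  tag.present? &amp;amp;&amp;amp; tag.confidence_level.to_f &amp;gt;= CONFIDENCE_THRESHOLD&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;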
These cached data are also used in the instructor's report views, which is why they must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report is composed of thousands of such reviews, so querying the web service in real time is impractical. We keep the number of contacts with the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag stay up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is the addition of a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can think of the results from the web service as a stack of tags assigned by an outside tool, with the confidence level indicating how confident the tool is in each tag it assigns. The answer_tags table therefore holds two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system determines the type of a tag by checking which of these two fields is present, as sketched below the table.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ answer_tags table&lt;br /&gt;
|-&lt;br /&gt;
! id !! answer_id !! tag_prompt_deployment_id !! user_id !! value !! confidence_level !! created_at !! updated_at&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 1066685 || 5 || 7513 || 0 || NULL || 2017-11-04 02:32:59 || 2017-11-04 02:32:59&lt;br /&gt;
|-&lt;br /&gt;
| 353916 || 1430431 || 149 || NULL || -1 || 0.99700 || 2020-11-16 01:47:29 || 2020-11-16 01:47:29&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
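Under this scheme, any code that consumes the answer_tags table can tell the two kinds of rows apart with a simple presence check, as in this sketch (the helper name is illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: classify a row of the answer_tags table by its source.&lt;br /&gt;
def tag_source(answer_tag)&lt;br /&gt;
  if answer_tag.user_id.present?&lt;br /&gt;
    :student       # assigned manually by a student&lt;br /&gt;
  elsif answer_tag.confidence_level.present?&lt;br /&gt;
    :web_service   # inferred by the Peer Logic web service&lt;br /&gt;
  else&lt;br /&gt;
    :invalid       # neither field set; rejected by model validation&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;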
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'Submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting the request multiple times would send the same set of comments to the external web service repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently so as not to overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'Submit' button, they are redirected to a page that shows the analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set by the instructor on a per-questionnaire basis. Predictions with confidence levels below the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the above image, one can see that the slider now takes three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. A tag in this form is still editable, meaning students can override inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. By looking at the two charts together, graders get useful information and can assign more accurate review grades. Due to space limitations, metric names cannot be displayed in full; graders can hover the cursor over a bar to see its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated; a worked sketch of the arithmetic follows the list.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the applicable tags, how many were tagged by the author&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
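As a worked sketch of this arithmetic, with made-up numbers rather than real data:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of the answer-tagging report columns, illustrative numbers only.&lt;br /&gt;
tags_above_length_threshold = 120&lt;br /&gt;
tags_inferred_by_ml         = 80   # high-confidence ML predictions&lt;br /&gt;
applicable_tags             = tags_above_length_threshold - tags_inferred_by_ml  # 40&lt;br /&gt;
tags_applied_by_author      = 30   # manually set by the author&lt;br /&gt;
tags_not_applied_by_author  = applicable_tags - tags_applied_by_author           # 10&lt;br /&gt;
percent_applied_by_author   = 100.0 * tags_applied_by_author / applicable_tags   # 75.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;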
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Core Changes===&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the average for one reviewer or the average for the whole class, depending on whether a reviewer is supplied (see the sketch below)&lt;br /&gt;
&lt;br /&gt;
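A sketch of that dual behavior is below; the scoping helpers used here, &amp;lt;code&amp;gt;for_assignment&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;qualifying_comment_count&amp;lt;/code&amp;gt;, are assumptions for illustration, not the actual Expertiza API.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: average for one reviewer when supplied, otherwise for the class.&lt;br /&gt;
def self.average_number_of_qualifying_comments(assignment, reviewer = nil)&lt;br /&gt;
  responses = Response.for_assignment(assignment)        # assumed scope&lt;br /&gt;
  responses = responses.where(reviewer_id: reviewer.id) if reviewer&lt;br /&gt;
  counts = responses.map(&amp;amp;:qualifying_comment_count)  # assumed helper&lt;br /&gt;
  return 0 if counts.empty?&lt;br /&gt;
  counts.sum.to_f / counts.size&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;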
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
===Cache Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that checks for the presence of either &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; (a sketch follows)&lt;br /&gt;
&lt;br /&gt;
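A sketch of that validation clause, written here against &amp;lt;code&amp;gt;ActiveRecord::Base&amp;lt;/code&amp;gt; with an illustrative error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class AnswerTag &amp;lt; ActiveRecord::Base&lt;br /&gt;
  validate :user_or_confidence_present&lt;br /&gt;
&lt;br /&gt;
  private&lt;br /&gt;
&lt;br /&gt;
  # A tag must come either from a student (user_id) or from the web&lt;br /&gt;
  # service (confidence_level); a row with neither is meaningless.&lt;br /&gt;
  def user_or_confidence_present&lt;br /&gt;
    if user_id.blank? &amp;amp;&amp;amp; confidence_level.blank?&lt;br /&gt;
      errors.add(:base, 'either user_id or confidence_level must be present')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;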
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so that the tests pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so that students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted'&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Added styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route (a sketch follows)&lt;br /&gt;
&lt;br /&gt;
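A sketch of the added route; treating it as a member action on the existing response resource is an assumption about how the route is wired:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# config/routes.rb (sketch)&lt;br /&gt;
resources :response do&lt;br /&gt;
  member do&lt;br /&gt;
    post :confirm_submit   # marks the review as submitted&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;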
===Show Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code that sets the style of the slider (none, grayed out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Added JavaScript that controls the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Added styles for different forms of tags (gray-out and overridden)&lt;br /&gt;
&lt;br /&gt;
===Show Summary of Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
===Bug Fixes===&lt;br /&gt;
&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
===RSpec Testing===&lt;br /&gt;
&lt;br /&gt;
Since this project involves changes to many parts of the system, some existing tests needed to be fixed. These include:&lt;br /&gt;
&lt;br /&gt;
spec/features/peer_review_spec.rb&lt;br /&gt;
*The &amp;quot;Submit Review&amp;quot; button no longer redirects students to the list of reviews. We changed the test to click &amp;quot;Save Review&amp;quot; instead of &amp;quot;Submit Review&amp;quot; so the expected behavior could still be tested.&lt;br /&gt;
&lt;br /&gt;
spec/models/tag_prompt_spec.rb&lt;br /&gt;
*This spec file tests functionality regarding &amp;lt;code&amp;gt;TagPrompt&amp;lt;/code&amp;gt;. Some tests broke because we incorporated a call to the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to determine the slider's style. We fixed these tests by stubbing the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to always return false, stripping that part of the logic out of the tests.&lt;br /&gt;
&lt;br /&gt;
A new spec file was written for the new code:&lt;br /&gt;
&lt;br /&gt;
spec/models/review_metrics_query_spec.rb &lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; method calls &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; with the right parameters and saves the results into the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table.&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_value&amp;lt;/code&amp;gt; method interprets the web service results correctly&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; method flips the confidence value for predictions that have a negative meaning (illustrated in the sketch below)&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;confidence()&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;has?&amp;lt;/code&amp;gt; methods access the right column in the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
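As an illustration of the confidence flip, here is a hypothetical RSpec example; the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; signature shown is an assumption, not the exact one in the spec:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReviewMetricsQuery do&lt;br /&gt;
  it 'flips the confidence of predictions with a negative meaning' do&lt;br /&gt;
    # If the service is 97% sure a characteristic is absent, store only&lt;br /&gt;
    # 3% confidence that it is present.&lt;br /&gt;
    confidence = ReviewMetricsQuery.inferred_confidence(0.97, negated: true)&lt;br /&gt;
    expect(confidence).to be_within(1e-6).of(0.03)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;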
===UI Testing===&lt;br /&gt;
&lt;br /&gt;
The following UI tests were performed to ensure that:&lt;br /&gt;
&lt;br /&gt;
# The 'Submit' button is disabled after the student clicks it to prevent multiple queries to the web service.&lt;br /&gt;
# The student gets redirected to the analysis page after the web service request completes.&lt;br /&gt;
# The student sees the analysis of their review comments on the analysis page.&lt;br /&gt;
# The sliders for inferred tags are grayed out.&lt;br /&gt;
# The student can override the gray-out tag with a new value, and the slider changes to the overridden style.&lt;br /&gt;
# The instructor sees the new bar chart for review metrics.&lt;br /&gt;
# The instructor sees the column summarizing inferred tags in the answer-tagging report.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136715</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136715"/>
		<updated>2020-11-11T04:59:06Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review and give suggestive comments to each other's work. Students will later be asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag comments they received for helpfulness, positive tone, and other characteristics interested by researchers. Currently, students have to tag hundreds of comments they received in order to get full participation credits. Researchers are concerned that this amount of work would cause inattentive participants to submit responses deviating from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm to pre-determine the confidence level of the presence of the asked characteristic in a comment, one can ask students to assign only tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning, by incorporating machine-learning algorithms in evaluating which tags, by having a manual input, can help the AI learn more effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is simultaneously being held with another project named 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting results gotten back from external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their review before submission, and&lt;br /&gt;
*# Selectively query manual tagging that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and about to submit their reviews on other students' work. Our plan is that, before the system marks their reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of these reviews and send them to the Peer Logic web service for predictions. After it receives the results, it caches them to the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm to submit, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in this prediction. Whenever a student goes to the review tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If yes, meaning the algorithm is confident of its prediction, then it applies a lightening effect onto the tag to make it less noticeable. Students who do the tagging can easily distinguish the difference between normal tags and gray-out tags, and focus their attention more on normal tags. This is what active learning is about, to query manual inputs only if it adds to the knowledge of the algorithm.&lt;br /&gt;
&lt;br /&gt;
These cached data would also be used in the instructor's report views, and that's the reason why these data must be cached locally. One review consists of about 10 to 20 comments and takes about minutes to process, and a report composes of thousands of such reviews. Querying web service results in real-time is impractical with respect to the time it consumes. We limit the number of contacts with the web service the least, by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag are up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is the addition of a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which originally stored only tags assigned by students. One can think of the web service results as a stack of tags assigned by an outside tool, with the confidence level indicating how certain that tool is of each tag it assigns. The answer_tags table therefore holds two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system determines a tag's type by checking which of these two fields is present.&lt;br /&gt;
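&lt;br /&gt;
As a minimal illustration of telling the two kinds of rows apart (the predicate names are illustrative, not actual Expertiza code):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class AnswerTag &amp;lt; ActiveRecord::Base&lt;br /&gt;
  # Student-assigned tag: has a user_id, no confidence_level.&lt;br /&gt;
  def assigned_by_student?&lt;br /&gt;
    user_id.present?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Web-service-inferred tag: has a confidence_level, no user_id.&lt;br /&gt;
  def inferred_by_web_service?&lt;br /&gt;
    confidence_level.present?&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;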
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting the request multiple times would send the same set of comments to the external web service repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'submit' button, asking them to wait patiently so as not to overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'submit' button, they are redirected to a page that shows the analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analysis result for each metric. These metrics come from tag prompt deployments set by the instructor at the questionnaire level. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or likely inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the image above, one can see that the slider now comes in three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. A tag in this form is still editable, so students can override inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders can combine the information in these two charts to assign more accurate review grades. Due to space limitations, metric names cannot be fully displayed; graders can hover the cursor over each bar to see the corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated; a short sketch of the arithmetic follows the list.&lt;br /&gt;
&lt;br /&gt;
* % tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many were tagged by the author&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
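&lt;br /&gt;
A minimal sketch of that arithmetic, with the counters passed in as plain integers (the method and parameter names are illustrative, not the actual report code):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustration only; the real values are assembled in&lt;br /&gt;
# report_formatter_helper.rb and vm_user_answer_tagging.rb.&lt;br /&gt;
def answer_tagging_columns(long_enough_tags, inferred_tags, author_tags)&lt;br /&gt;
  applicable  = long_enough_tags - inferred_tags  # left for the author to tag&lt;br /&gt;
  not_applied = applicable - author_tags          # tags the author skipped&lt;br /&gt;
  percent     = applicable.zero? ? 0.0 : (100.0 * author_tags / applicable)&lt;br /&gt;
  { applicable: applicable, applied: author_tags,&lt;br /&gt;
    not_applied: not_applied, inferred: inferred_tags,&lt;br /&gt;
    percent_applied: percent.round(2) }&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;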
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Core Changes===&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the average for one reviewer or the average for the whole class, depending on whether a reviewer is supplied (see the sketch below)&lt;br /&gt;
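&lt;br /&gt;
A minimal sketch of that branching; &amp;lt;code&amp;gt;qualifying_comment_count&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;assignment.reviewers&amp;lt;/code&amp;gt; are hypothetical stand-ins for the real counting logic:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class ReviewMetricsQuery&lt;br /&gt;
  # Average per reviewer when one is supplied, otherwise across the class.&lt;br /&gt;
  def self.average_number_of_qualifying_comments(assignment, reviewer = nil)&lt;br /&gt;
    reviewers = reviewer ? [reviewer] : assignment.reviewers  # hypothetical&lt;br /&gt;
    counts = reviewers.map { |r| qualifying_comment_count(assignment, r) }&lt;br /&gt;
    counts.empty? ? 0.0 : counts.sum.to_f / counts.size&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;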
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table (sketched below)&lt;br /&gt;
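&lt;br /&gt;
A minimal sketch of that migration (the Rails migration superclass version is illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class AddConfidenceLevelToAnswerTagsTable &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
  def change&lt;br /&gt;
    # NULL for student-assigned tags; set only for inferred tags.&lt;br /&gt;
    add_column :answer_tags, :confidence_level, :float&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;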
&lt;br /&gt;
===Cache Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comment&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that checks for the presence of either &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; (sketched below)&lt;br /&gt;
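&lt;br /&gt;
A minimal sketch of such an either-or presence check; the exact wording in the real model may differ:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class AnswerTag &amp;lt; ActiveRecord::Base&lt;br /&gt;
  # Every tag must come from somewhere: a student (user_id)&lt;br /&gt;
  # or the web service (confidence_level).&lt;br /&gt;
  validate :tagged_by_user_or_web_service&lt;br /&gt;
&lt;br /&gt;
  def tagged_by_user_or_web_service&lt;br /&gt;
    return if user_id.present? || confidence_level.present?&lt;br /&gt;
    errors.add(:base, 'either user_id or confidence_level must be present')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;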
&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so tests could pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted' (see the sketch below)&lt;br /&gt;
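&lt;br /&gt;
A minimal sketch of that action together with its route; the parameter and attribute names are assumptions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# app/controllers/response_controller.rb (sketch)&lt;br /&gt;
def confirm_submit&lt;br /&gt;
  resp = Response.find(params[:id])&lt;br /&gt;
  resp.update(is_submitted: true)  # attribute name assumed&lt;br /&gt;
  redirect_to action: :list        # back to the list of reviews&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# config/routes.rb (sketch)&lt;br /&gt;
resources :response do&lt;br /&gt;
  member do&lt;br /&gt;
    get :confirm_submit&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;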
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Added styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
===Show Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code that sets the style of the slider (none, grayed-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Added styles for different forms of tags (gray-out and overridden)&lt;br /&gt;
&lt;br /&gt;
===Show Summary of Inferred Tags===&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
===Bug Fixes===&lt;br /&gt;
&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
===RSpec Testing===&lt;br /&gt;
&lt;br /&gt;
Since this project involves changes to many parts of the system, some existing tests needed to be fixed. These include:&lt;br /&gt;
&lt;br /&gt;
spec/features/peer_review_spec.rb&lt;br /&gt;
*The &amp;quot;Submit Review&amp;quot; button no longer redirects students to the list of reviews. We changed the test to click &amp;quot;Save Review&amp;quot; instead of &amp;quot;Submit Review&amp;quot; so the expected behavior could still be tested.&lt;br /&gt;
&lt;br /&gt;
spec/models/tag_prompt_spec.rb&lt;br /&gt;
*This spec file tests functionality related to &amp;lt;code&amp;gt;TagPrompt&amp;lt;/code&amp;gt;. Some tests broke because we incorporated a call to the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to determine the slider's style. We fixed these tests by stubbing the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; method to always return false, stripping that part of the logic out of the tests (see the sketch below).&lt;br /&gt;
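&lt;br /&gt;
A minimal sketch of that stub, assuming &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt; is called on &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; as elsewhere in this write-up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# spec/models/tag_prompt_spec.rb (sketch)&lt;br /&gt;
before(:each) do&lt;br /&gt;
  # Neutralize the web-service branch so the slider logic under&lt;br /&gt;
  # test never takes the grayed-out path.&lt;br /&gt;
  allow(ReviewMetricsQuery).to receive(:confident?).and_return(false)&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;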
&lt;br /&gt;
Two new spec files were written for the new code:&lt;br /&gt;
&lt;br /&gt;
spec/models/review_metrics_query_spec.rb &lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;cache_ws_results&amp;lt;/code&amp;gt; method calls &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; with the right parameters and saves the results into the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_value&amp;lt;/code&amp;gt; method interprets the web service results correctly&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;inferred_confidence&amp;lt;/code&amp;gt; method flips the confidence value for predictions that have a negative meaning (illustrated below)&lt;br /&gt;
*Ensure the &amp;lt;code&amp;gt;confident?&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;confidence()&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;has?&amp;lt;/code&amp;gt; methods access the right column in the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
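&lt;br /&gt;
For instance, the confidence flip might be asserted as below; the method signature and payload shape are assumptions made for illustration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# spec/models/review_metrics_query_spec.rb (sketch)&lt;br /&gt;
describe '.inferred_confidence' do&lt;br /&gt;
  it 'flips the confidence of a prediction with a negative meaning' do&lt;br /&gt;
    # A 0.9-confident prediction that a tag is absent should surface&lt;br /&gt;
    # as roughly 0.1 confidence that the tag is present.&lt;br /&gt;
    flipped = ReviewMetricsQuery.inferred_confidence(value: 0, confidence: 0.9)&lt;br /&gt;
    expect(flipped).to be_within(1e-6).of(0.1)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;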
&lt;br /&gt;
===UI Testing===&lt;br /&gt;
&lt;br /&gt;
The following UI tests were performed to ensure that:&lt;br /&gt;
&lt;br /&gt;
# The 'Submit' button is disabled after the student clicks it to prevent multiple queries to the web service.&lt;br /&gt;
# The student gets redirected to the analysis page after the web service request completes.&lt;br /&gt;
# The student sees the analysis of their review comments on the analysis page.&lt;br /&gt;
# The sliders for inferred tags are grayed out.&lt;br /&gt;
# The student can override the gray-out tag with a new value, and the slider changes to the overridden style.&lt;br /&gt;
# The instructor sees the new bar chart for review metrics.&lt;br /&gt;
# The instructor sees the column summarizing inferred tags in the answer-tagging report.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136646</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136646"/>
		<updated>2020-11-02T04:53:13Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review and give suggestive comments to each other's work. Students will later be asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag comments they received for helpfulness, positive tone, and other characteristics interested by researchers. Currently, students have to tag hundreds of comments they received in order to get full participation credits. Researchers are concerned that this amount of work would cause inattentive participants to submit responses deviating from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm to pre-determine the confidence level of the presence of the asked characteristic in a comment, one can ask students to assign only tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning, by incorporating machine-learning algorithms in evaluating which tags, by having a manual input, can help the AI learn more effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is simultaneously being held with another project named 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting results gotten back from external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their review before submission, and&lt;br /&gt;
*# Selectively query manual tagging that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and about to submit their reviews on other students' work. Our plan is that, before the system marks their reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of these reviews and send them to the Peer Logic web service for predictions. After it receives the results, it caches them to the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm to submit, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|500px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in this prediction. Whenever a student goes to the review tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If yes, meaning the algorithm is confident of its prediction, then it applies a lightening effect onto the tag to make it less noticeable. Students who do the tagging can easily distinguish the difference between normal tags and gray-out tags, and focus their attention more on normal tags. This is what active learning is about, to query manual inputs only if it adds to the knowledge of the algorithm.&lt;br /&gt;
&lt;br /&gt;
These cached data would also be used in the instructor's report views, and that's the reason why these data must be cached locally. One review consists of about 10 to 20 comments and takes about minutes to process, and a report composes of thousands of such reviews. Querying web service results in real-time is impractical with respect to the time it consumes. We limit the number of contacts with the web service the least, by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag are up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is to add a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table which is originally used to store tags assigned by students. One can imagine results from web service being a stack of tags assigned by the outside tool, with confidence level indicating how confident the outside tool is to each tag is assigned. The answer_tags table will therefore have two types of tags, one from the student, which has user_id but not confidence_level, and the other inferred from web service, which has confidence_level but not user_id. The system can determine what type of tags they are by checking the presence of values in these two fields.&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages are needed to be modified to reflect the addition of new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review giving page, the button will be put on the disabled effect to prevent students from submitting requests multiple times. The consequence of submitting request multiple times is that the same set of comments are being sent to the external web service for processing, wasting resources on both sides. To reassure students that the request has been made, we add to the bottom of the 'submit' button a loader and message, asking them to wait patiently to avoid overload the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'submit' button, they are redirected to a page that shows the analysis of their submitted reviews. On that page, students can see every of their submitted comments along with the analyzed result on each metric. These metrics came from tag prompt deployments set by the instructor in a per questionnaire scope. Predictions with confidence levels under the predefined threshold are not rendered to the report so students do not see predictions that are uncertain or inaccurate. When students confirm to submit, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the above image, one can see that the slider has been changed into three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The gray-out form, presenting tag inferred by the web service. Tag in this form is editable, meaning students can override some of the inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which is used to represent a tag that originally has a value assigned by the web service but gets overrode by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row that represents each student, a metrics chart is added below the volume chart that's already there. Graders get useful information by looking at these two charts combined and are able to offer more accurate review grades to students. Due to the space limitation, each metric name cannot be fully expanded. Grader could hover the cursor above each bar to see its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include changing column names and adding an additional column for the number of inferred tags. Below we explained how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # appliable tags&lt;br /&gt;
* # tags applied by author = from # appliable tags, how many are tagged by the author&lt;br /&gt;
* # tags not applied by author = # appliable tags - # tags applied by the author&lt;br /&gt;
* # appliable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method which remove html tags from the submitted review comment&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added validation clause that checks the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class that is responsible for communications between `MetricsController` and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method which returns either the average for one reviewer or the average for the whole class, depending on whether the reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added codes to set up the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so tests could be passed&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Alternated the redirection so students could be redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method which marks the review in the parameter to 'submitted'&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added codes that disable the &amp;quot;Submit&amp;quot; button after it is being clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136645</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136645"/>
		<updated>2020-11-02T04:50:21Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give constructive comments. Students are later asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments in order to get full participation credit. Researchers are concerned that this amount of work causes inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of a characteristic's presence in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by incorporating machine-learning algorithms to evaluate which tags, if assigned manually, would help the AI learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being conducted simultaneously with another project, 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results returned by external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
# Let students see the quality of their reviews before submission, and&lt;br /&gt;
# Selectively query manual tags that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show a control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks these reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of the reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
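&lt;br /&gt;
To make the flow concrete, here is a minimal sketch of the intercept-and-cache step, assuming a hypothetical &amp;lt;code&amp;gt;WebServiceClient&amp;lt;/code&amp;gt; wrapper and a simple prediction format; only the general shape (send comments, receive predictions with confidence levels, cache them as answer tags) follows the design above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of app/models/review_metrics_query.rb -- names are assumed.&lt;br /&gt;
class ReviewMetricsQuery&lt;br /&gt;
  # Send every comment of a review to the web service and cache each&lt;br /&gt;
  # prediction as an inferred tag in the answer_tags table.&lt;br /&gt;
  def self.cache_predictions(response, tag_deployments)&lt;br /&gt;
    comments = response.scores.map { |answer| answer.comments }&lt;br /&gt;
    tag_deployments.each do |deployment|&lt;br /&gt;
      predictions = WebServiceClient.predict(deployment.tag_prompt, comments)&lt;br /&gt;
      response.scores.zip(predictions).each do |answer, prediction|&lt;br /&gt;
        AnswerTag.create(answer_id: answer.id,&lt;br /&gt;
                         tag_prompt_deployment_id: deployment.id,&lt;br /&gt;
                         value: prediction[:value],&lt;br /&gt;
                         confidence_level: prediction[:confidence])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;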
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|400px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish normal tags from gray-out tags and focus their attention on the normal tags. This is what active learning is about: querying manual input only when it adds to the algorithm's knowledge.&lt;br /&gt;
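&lt;br /&gt;
A minimal sketch of that consultation is shown below. The threshold value, the style names, and the helper name are assumptions; the logic (gray out a slider only when a confident inferred tag exists) mirrors the description above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch only -- CONFIDENCE_THRESHOLD and the style names are assumed.&lt;br /&gt;
CONFIDENCE_THRESHOLD = 0.8&lt;br /&gt;
&lt;br /&gt;
# Decide how the slider for one answer/prompt pair should be rendered.&lt;br /&gt;
def slider_style(answer, deployment)&lt;br /&gt;
  inferred = AnswerTag.find_by(answer_id: answer.id,&lt;br /&gt;
                               tag_prompt_deployment_id: deployment.id,&lt;br /&gt;
                               user_id: nil)&lt;br /&gt;
  return 'normal' if inferred.nil?&lt;br /&gt;
  if inferred.confidence_level.to_f &amp;gt;= CONFIDENCE_THRESHOLD&lt;br /&gt;
    'gray-out'  # confident prediction: de-emphasize the tag&lt;br /&gt;
  else&lt;br /&gt;
    'normal'    # uncertain prediction: ask the student&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;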
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is why they must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews, so querying the web service in real time is impractical. We keep contact with the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted value of each tag stays up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is adding a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can imagine the results from the web service as a stack of tags assigned by an outside tool, with the confidence level indicating how confident the tool is in each tag it assigns. The answer_tags table will therefore hold two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system determines the type of a tag by checking which of these two fields is present.&lt;br /&gt;
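&lt;br /&gt;
A minimal sketch of how the model can enforce and distinguish the two tag types (the helper name and error message are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of app/models/answer_tag.rb.&lt;br /&gt;
class AnswerTag &amp;lt; ActiveRecord::Base&lt;br /&gt;
  # Every tag must come either from a student (user_id) or from the&lt;br /&gt;
  # web service (confidence_level).&lt;br /&gt;
  validate :from_student_or_web_service&lt;br /&gt;
&lt;br /&gt;
  def inferred_by_ml?&lt;br /&gt;
    user_id.nil? &amp;amp;&amp;amp; confidence_level.present?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  private&lt;br /&gt;
&lt;br /&gt;
  def from_student_or_web_service&lt;br /&gt;
    if user_id.nil? &amp;amp;&amp;amp; confidence_level.nil?&lt;br /&gt;
      errors.add(:base, 'either user_id or confidence_level must be present')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;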
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting the request multiple times would send the same set of comments to the external web service for processing repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'submit' button, asking them to wait patiently to avoid overloading the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'submit' button, they are redirected to a page that shows an analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set by the instructor on a per-questionnaire basis. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the image above, one can see that the slider now takes three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The gray-out form, presenting a tag inferred by the web service. A tag in this form is still editable, meaning students can override an inferred tag if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders gain useful information by looking at the two charts together and can assign more accurate review grades. Due to space limitations, metric names cannot be fully displayed; graders can hover the cursor over each bar to see the corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer_tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated; a sketch of the arithmetic follows the list.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many the author tagged&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
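&lt;br /&gt;
As a sketch, the arithmetic could look like the following; the slot structure and the length threshold value are illustrative, and only the formulas follow the definitions above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of the answer-tagging report arithmetic -- names are assumed.&lt;br /&gt;
# Each slot is one (comment, tag prompt) pair.&lt;br /&gt;
Slot = Struct.new(:comment_length, :tagged_by_author, :inferred_by_ml)&lt;br /&gt;
&lt;br /&gt;
LENGTH_THRESHOLD = 10  # assumed minimum comment length&lt;br /&gt;
&lt;br /&gt;
def report_columns(slots)&lt;br /&gt;
  long_enough = slots.select { |s| s.comment_length &amp;gt; LENGTH_THRESHOLD }&lt;br /&gt;
  inferred    = long_enough.count { |s| s.inferred_by_ml }&lt;br /&gt;
  applicable  = long_enough.size - inferred&lt;br /&gt;
  by_author   = long_enough.count { |s| s.tagged_by_author }&lt;br /&gt;
  { tags_inferred_by_ml:        inferred,&lt;br /&gt;
    applicable_tags:            applicable,&lt;br /&gt;
    tags_applied_by_author:     by_author,&lt;br /&gt;
    tags_not_applied_by_author: applicable - by_author,&lt;br /&gt;
    percent_applied_by_author:  applicable.zero? ? 0 : by_author.to_f / applicable }&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;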
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added validation clause that checks the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the average for one reviewer or the average for the whole class, depending on whether a reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code to set the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so the tests could pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameter as 'submitted'&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
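&lt;br /&gt;
The migration itself is a one-liner; below is a sketch (the column type and Rails version are assumptions):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
class AddConfidenceLevelToAnswerTagsTable &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
  def change&lt;br /&gt;
    # Float column; NULL for tags assigned by students.&lt;br /&gt;
    add_column :answer_tags, :confidence_level, :float&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;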
&lt;br /&gt;
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Answer_tagging_report_page.png&amp;diff=136644</id>
		<title>File:Answer tagging report page.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Answer_tagging_report_page.png&amp;diff=136644"/>
		<updated>2020-11-02T04:49:29Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Yzhan114 uploaded a new version of File:Answer tagging report page.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136643</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136643"/>
		<updated>2020-11-02T04:49:20Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give constructive comments. Students are later asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments in order to get full participation credit. Researchers are concerned that this amount of work causes inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of a characteristic's presence in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by incorporating machine-learning algorithms to evaluate which tags, if assigned manually, would help the AI learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being conducted simultaneously with another project, 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results returned by external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
# Let students see the quality of their reviews before submission, and&lt;br /&gt;
# Selectively query manual tags that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show a control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks these reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of the reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|400px|center]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish normal tags from gray-out tags and focus their attention on the normal tags. This is what active learning is about: querying manual input only when it adds to the algorithm's knowledge.&lt;br /&gt;
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is why they must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews, so querying the web service in real time is impractical. We keep contact with the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted value of each tag stays up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is adding a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can imagine the results from the web service as a stack of tags assigned by an outside tool, with the confidence level indicating how confident the tool is in each tag it assigns. The answer_tags table will therefore hold two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system determines the type of a tag by checking which of these two fields is present.&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting the request multiple times would send the same set of comments to the external web service for processing repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'submit' button, asking them to wait patiently to avoid overloading the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'submit' button, they are redirected to a page that shows an analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set by the instructor on a per-questionnaire basis. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
From the image above, one can see that the slider now takes three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The gray-out form, presenting a tag inferred by the web service. A tag in this form is still editable, meaning students can override an inferred tag if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders gain useful information by looking at the two charts together and can assign more accurate review grades. Due to space limitations, metric names cannot be fully displayed; graders can hover the cursor over each bar to see the corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer-tagging_report_page.png|800px|center]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many the author tagged&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added validation clause that checks the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the average for one reviewer or the average for the whole class, depending on whether a reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code to set the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so the tests could pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameter as 'submitted'&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_2.png&amp;diff=136642</id>
		<title>File:Control flow diagram 2.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_2.png&amp;diff=136642"/>
		<updated>2020-11-02T04:45:42Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Yzhan114 uploaded a new version of File:Control flow diagram 2.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136641</id>
		<title>File:Control flow diagram 1.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136641"/>
		<updated>2020-11-02T04:44:30Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Yzhan114 uploaded a new version of File:Control flow diagram 1.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136640</id>
		<title>File:Control flow diagram 1.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136640"/>
		<updated>2020-11-02T04:44:23Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Yzhan114 uploaded a new version of File:Control flow diagram 1.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136639</id>
		<title>File:Control flow diagram 1.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136639"/>
		<updated>2020-11-02T04:44:21Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Yzhan114 uploaded a new version of File:Control flow diagram 1.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Answer_tagging_report_page.png&amp;diff=136638</id>
		<title>File:Answer tagging report page.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Answer_tagging_report_page.png&amp;diff=136638"/>
		<updated>2020-11-02T04:41:58Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Yzhan114 uploaded a new version of File:Answer tagging report page.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_2.png&amp;diff=136637</id>
		<title>File:Control flow diagram 2.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_2.png&amp;diff=136637"/>
		<updated>2020-11-02T04:41:46Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136636</id>
		<title>File:Control flow diagram 1.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Control_flow_diagram_1.png&amp;diff=136636"/>
		<updated>2020-11-02T04:41:33Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136635</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136635"/>
		<updated>2020-11-02T04:40:12Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give constructive comments. Students are later asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments in order to get full participation credit. Researchers are concerned that this amount of work causes inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of a characteristic's presence in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by incorporating machine-learning algorithms to evaluate which tags, if assigned manually, would help the AI learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being conducted simultaneously with another project, 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results returned by external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
# Let students see the quality of their reviews before submission, and&lt;br /&gt;
# Selectively query manual tags that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show a control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_1.png|200px]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks these reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of the reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_flow_diagram_2.png|200px]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish normal tags from gray-out tags and focus their attention on the normal tags. This is what active learning is about: querying manual input only when it adds to the algorithm's knowledge.&lt;br /&gt;
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is why they must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews, so querying the web service in real time is impractical. We keep contact with the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted value of each tag stays up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is adding a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can imagine the results from the web service as a stack of tags assigned by an outside tool, with the confidence level indicating how confident the tool is in each tag it assigns. The answer_tags table will therefore hold two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system determines the type of a tag by checking which of these two fields is present.&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:Loader.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. Submitting the request multiple times would send the same set of comments to the external web service for processing repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'submit' button, asking them to wait patiently to avoid overloading the system.&lt;br /&gt;
&lt;br /&gt;
[[File:Metrics_analysis_page.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'submit' button, they are redirected to a page that shows an analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set by the instructor on a per-questionnaire basis. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:Review_tagging_page.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
From the image above, one can see that the slider now takes three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:Original_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The original form, meaning it needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:Gray_out_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The gray-out form, presenting a tag inferred by the web service. A tag in this form is still editable, meaning students can override an inferred tag if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:Overridden_tag.png|100px]]&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:Review_report_page.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders gain useful information by looking at the two charts together and can assign more accurate review grades. Due to space limitations, metric names cannot be fully displayed; graders can hover the cursor over each bar to see the corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:Answer-tagging_report_page.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
Changes made to this page include renaming columns and adding a column for the number of inferred tags. Below we explain how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many the author tagged&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added validation clause that checks the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method, which returns either the average for one reviewer or the average for the whole class, depending on whether a reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code to set the style of the slider (none, gray-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so the tests could pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameter as 'submitted'&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Overridden_tag.png&amp;diff=136634</id>
		<title>File:Overridden tag.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Overridden_tag.png&amp;diff=136634"/>
		<updated>2020-11-02T04:35:07Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_tagging_page.png&amp;diff=136633</id>
		<title>File:Review tagging page.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_tagging_page.png&amp;diff=136633"/>
		<updated>2020-11-02T04:34:49Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_report_page.png&amp;diff=136632</id>
		<title>File:Review report page.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_report_page.png&amp;diff=136632"/>
		<updated>2020-11-02T04:34:27Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Loader.png&amp;diff=136631</id>
		<title>File:Loader.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Loader.png&amp;diff=136631"/>
		<updated>2020-11-02T04:34:14Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Original_tag.png&amp;diff=136630</id>
		<title>File:Original tag.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Original_tag.png&amp;diff=136630"/>
		<updated>2020-11-02T04:34:03Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Metrics_analysis_page.png&amp;diff=136629</id>
		<title>File:Metrics analysis page.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Metrics_analysis_page.png&amp;diff=136629"/>
		<updated>2020-11-02T04:33:50Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Answer_tagging_report_page.png&amp;diff=136628</id>
		<title>File:Answer tagging report page.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Answer_tagging_report_page.png&amp;diff=136628"/>
		<updated>2020-11-02T04:33:24Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Gray_out_tag.png&amp;diff=136627</id>
		<title>File:Gray out tag.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Gray_out_tag.png&amp;diff=136627"/>
		<updated>2020-11-02T04:33:10Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136626</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136626"/>
		<updated>2020-11-02T04:32:27Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give constructive comments. Students are later asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments in order to get full participation credit. Researchers are concerned that this amount of work causes inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level of a characteristic's presence in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by incorporating machine-learning algorithms to evaluate which tags, if assigned manually, would help the AI learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being conducted simultaneously with another project, 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results returned by external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
# Let students see the quality of their reviews before submission, and&lt;br /&gt;
# Selectively query manual tags that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
[[File:control_flow_diagram_1.png]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks their reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of these reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
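As a rough illustration, the interception could take the following shape (a minimal sketch; &amp;lt;code&amp;gt;cache_predictions&amp;lt;/code&amp;gt; is a hypothetical helper name, and the exact hook point may differ):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# app/controllers/response_controller.rb (sketch)&lt;br /&gt;
# Instead of marking the response as submitted right away, first send&lt;br /&gt;
# its comments to the Peer Logic web service, cache the predictions,&lt;br /&gt;
# and show the analysis page; the student confirms submission later.&lt;br /&gt;
def submit&lt;br /&gt;
  @response = Response.find(params[:id])&lt;br /&gt;
  comments = @response.scores.map(&amp;amp;:comments)              # the review comments&lt;br /&gt;
  ReviewMetricsQuery.cache_predictions(@response, comments)  # hypothetical helper&lt;br /&gt;
  redirect_to action: 'analysis', id: @response.id&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;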
[[File:control_flow_diagram_2.png]]&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish between normal tags and grayed-out tags, and focus their attention on the normal tags. This is the essence of active learning: query manual input only when it adds to the knowledge of the algorithm.&lt;br /&gt;
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is the reason the data must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews. Querying the web service in real time is therefore impractical. We keep the number of requests to the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag stay up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is to add a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can think of the results from the web service as a stack of tags assigned by the outside tool, with the confidence level indicating how confident the outside tool is in each tag it assigns. The answer_tags table will therefore hold two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system can determine the type of a tag by checking which of these two fields is present.&lt;br /&gt;
&lt;br /&gt;
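Under this scheme, no extra type column is needed; checking which field is present is enough. A minimal sketch of the two predicates (the method names are our own illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# app/models/answer_tag.rb (sketch)&lt;br /&gt;
class AnswerTag &amp;lt; ActiveRecord::Base&lt;br /&gt;
  # a student-assigned tag has a user_id but no confidence_level;&lt;br /&gt;
  # a machine-inferred tag has a confidence_level but no user_id&lt;br /&gt;
  def assigned_by_student?&lt;br /&gt;
    user_id.present?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def inferred_by_ml?&lt;br /&gt;
    confidence_level.present?&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;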
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
[[File:metrics_analysis_page.png]]&lt;br /&gt;
&lt;br /&gt;
When students click the 'Submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. The consequence of submitting the request multiple times would be that the same set of comments is sent to the external web service for processing repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently to avoid overloading the system.&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'Submit' button, they are redirected to a page that shows the analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set up by the instructor on a per-questionnaire basis. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
[[File:review_tagging_page.png]]&lt;br /&gt;
From the above image, one can see that the slider now has three forms:&lt;br /&gt;
&lt;br /&gt;
[[File:original_tag.png]]&lt;br /&gt;
The original form, meaning the tag needs input from the user.&lt;br /&gt;
&lt;br /&gt;
[[File:gray_out_tag.png]]&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. Tags in this form are still editable, meaning students can override any of the inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
[[File:overridden_tag.png]]&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
[[File:review_report_page.png]]&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders get useful information by looking at these two charts in combination and are able to assign more accurate review grades to students. Due to space limitations, each metric name cannot be fully expanded; graders can hover the cursor over each bar to see its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
[[File:answer-tagging_report_page.png]]&lt;br /&gt;
Changes made to this page include renaming columns and adding an additional column for the number of inferred tags. Below we explain how each column is calculated, with a code sketch after the list.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many were tagged by the author&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by the author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
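&lt;br /&gt;
Expressed as code, the arithmetic for one row could look like the following (a sketch only: the variable names and the confidence threshold constant are our own, while the real logic lives in tag_prompt_deployment.rb and vm_user_answer_tagging.rb):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sketch: answer-tagging report columns for one student, where 'tags'&lt;br /&gt;
# holds that student's answer tags whose comments exceed the length&lt;br /&gt;
# threshold (both student-assigned and ML-inferred rows)&lt;br /&gt;
inferred_by_ml    = tags.count { |t| t.confidence_level.to_f &amp;gt;= THRESHOLD }&lt;br /&gt;
applicable        = tags.size - inferred_by_ml&lt;br /&gt;
applied_by_author = tags.count { |t| t.user_id.present? }&lt;br /&gt;
not_applied       = applicable - applied_by_author&lt;br /&gt;
percent_applied   = applied_by_author.to_f / applicable&lt;br /&gt;
&amp;lt;/pre&amp;gt;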
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that checks for the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; (sketched below)&lt;br /&gt;
&lt;br /&gt;
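A guess at the shape of that validation clause (a sketch; the method name and error message are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sketch: a tag must come from either a student or the web service&lt;br /&gt;
validate :from_student_or_web_service&lt;br /&gt;
&lt;br /&gt;
def from_student_or_web_service&lt;br /&gt;
  return if user_id.present? || confidence_level.present?&lt;br /&gt;
  errors.add(:base, 'either user_id or confidence_level must be present')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;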
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between the &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method which returns either the average for one reviewer or the average for the whole class, depending on whether the reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code to set up the style of the slider (none, grayed-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so that the tests pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so that students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted' (sketched below)&lt;br /&gt;
&lt;br /&gt;
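A minimal sketch of what &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; could look like (the &amp;lt;code&amp;gt;is_submitted&amp;lt;/code&amp;gt; column and the redirect target are assumptions on our part):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sketch: mark the intercepted review as submitted once the student&lt;br /&gt;
# confirms on the analysis page&lt;br /&gt;
def confirm_submit&lt;br /&gt;
  response = Response.find(params[:id])&lt;br /&gt;
  response.update(is_submitted: true)&lt;br /&gt;
  # back to the list of reviews (exact redirect params omitted here)&lt;br /&gt;
  redirect_to controller: 'student_review', action: 'list'&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;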
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table (sketched below)&lt;br /&gt;
&lt;br /&gt;
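The migration itself is small; its shape is roughly the following (the float type is an assumption; any numeric type that can hold the web service's confidence score would do):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb (sketch)&lt;br /&gt;
class AddConfidenceLevelToAnswerTagsTable &amp;lt; ActiveRecord::Migration&lt;br /&gt;
  def change&lt;br /&gt;
    # holds the web service's confidence for inferred tags;&lt;br /&gt;
    # NULL for tags assigned manually by students&lt;br /&gt;
    add_column :answer_tags, :confidence_level, :float&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;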
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==Test Plan==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136621</id>
		<title>CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136621"/>
		<updated>2020-11-01T20:54:14Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Active Learning for Review Tagging draft&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give comments and suggestions on it. Students are later asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments in order to get full participation credit. Researchers are concerned that this amount of work would lead inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level for each characteristic's presence in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by using machine-learning algorithms to evaluate which tags, if given manual input, can help the model learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being carried out simultaneously with another project named 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results returned by external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their reviews before submission, and&lt;br /&gt;
*# Selectively query manual tags that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks their reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of these reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish between normal tags and grayed-out tags, and focus their attention on the normal tags. This is the essence of active learning: query manual input only when it adds to the knowledge of the algorithm.&lt;br /&gt;
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is the reason the data must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews. Querying the web service in real time is therefore impractical. We keep the number of requests to the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag stay up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is to add a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can think of the results from the web service as a stack of tags assigned by the outside tool, with the confidence level indicating how confident the outside tool is in each tag it assigns. The answer_tags table will therefore hold two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system can determine the type of a tag by checking which of these two fields is present.&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
When students click the 'Submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. The consequence of submitting the request multiple times would be that the same set of comments is sent to the external web service for processing repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently to avoid overloading the system.&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'Submit' button, they are redirected to a page that shows the analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set up by the instructor on a per-questionnaire basis. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
From the above image, one can see that the slider now has three forms:&lt;br /&gt;
&lt;br /&gt;
The original form, meaning the tag needs input from the user.&lt;br /&gt;
&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. Tags in this form are still editable, meaning students can override any of the inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders get useful information by looking at these two charts in combination and are able to assign more accurate review grades to students. Due to space limitations, each metric name cannot be fully expanded; graders can hover the cursor over each bar to see its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
Changes made to this page include renaming columns and adding an additional column for the number of inferred tags. Below we explain how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many were tagged by the author&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by the author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that checks for the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between the &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method which returns either the average for one reviewer or the average for the whole class, depending on whether the reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code to set up the style of the slider (none, grayed-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so that the tests pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so that students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted'&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020&amp;diff=136620</id>
		<title>CSC/ECE 517 Summer 2020</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020&amp;diff=136620"/>
		<updated>2020-11-01T20:54:00Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[CSC/ECE 517 Summer 2020 - Active Learning for Review Tagging]]&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136619</id>
		<title>CSC/ECE 517 Fall 2020 - Active Learning for Review Tagging</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_Active_Learning_for_Review_Tagging&amp;diff=136619"/>
		<updated>2020-11-01T20:52:13Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Active Learning for Review Tagging Draft&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based independent development project.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
The web application Expertiza, used by students in CSC 517 and other courses, allows students to peer-review each other's work and give comments and suggestions on it. Students are later asked to voluntarily participate in an extra-credit review-tagging assignment in which they tag the comments they received for helpfulness, positive tone, and other characteristics of interest to researchers. Currently, students have to tag hundreds of comments in order to get full participation credit. Researchers are concerned that this amount of work would lead inattentive participants to submit responses that deviate from what they should be, thus corrupting the established model. Therefore, by having the machine-learning algorithm pre-determine the confidence level for each characteristic's presence in a comment, one can ask students to assign only the tags that the algorithm is unsure of, so students can focus on fewer tags with more attention and accuracy.&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The goal of this project is to construct a workable infrastructure for active learning by using machine-learning algorithms to evaluate which tags, if given manual input, can help the model learn most effectively. In particular, the following requirements are fulfilled:&lt;br /&gt;
&lt;br /&gt;
*Incorporate metrics analysis into the review-giving process&lt;br /&gt;
*Reduce the number of tags students have to assign&lt;br /&gt;
*Reveal gathered information to report pages&lt;br /&gt;
*Update the web service to include paths to the confidence level of each prediction&lt;br /&gt;
*Decide a proper tag certainty threshold that says how certain the ML algorithm must be of a tag value before it will ask the author to tag it manually&lt;br /&gt;
&lt;br /&gt;
===Notes===&lt;br /&gt;
This project is being carried out simultaneously with another project named 'Integrate Suggestion Detection Algorithm'. Whereas that project focuses on forming a central outlet to external web services, this project focuses more on interpreting the results returned by external web services.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
===Control Flow Diagram===&lt;br /&gt;
&lt;br /&gt;
Peer Logic is an NSF-funded research project that provides services for educational peer-review systems. It has a set of mature machine learning algorithms and models that compute metrics on the reviews. It would be helpful for Expertiza to integrate these algorithms into the peer-review process. Specifically, we want to&lt;br /&gt;
&lt;br /&gt;
*# Let students see the quality of their reviews before submission, and&lt;br /&gt;
*# Selectively query manual tags that are used to further train the models (active learning)&lt;br /&gt;
&lt;br /&gt;
In order to integrate these algorithms into the Expertiza system, we have to build a translator-like model, which we named &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt;, that converts outputs from external sources into a form that our system can understand and use. &lt;br /&gt;
&lt;br /&gt;
Below we show the control flow diagram to help illustrate the usage of the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; model.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class is closely tied to the peer-review process. It is first called when students finish and are about to submit their reviews of other students' work. Our plan is that, before the system marks their reviews as submitted, the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class intercepts the content of these reviews and sends it to the Peer Logic web service for predictions. After it receives the results, it caches them in the local Expertiza database and then releases the intercept. Students, instead of being redirected to the list of reviews, are presented with an analysis report on the quality of their reviews. They may go back and edit their review comments, or confirm the submission, depending on whether they are satisfied with the results displayed to them.&lt;br /&gt;
&lt;br /&gt;
Every prediction from the web service comes with a confidence level, indicating how confident the algorithm is in that prediction. Whenever a student goes to the review-tagging page, before rendering any tags, the system consults the &amp;lt;code&amp;gt;ReviewMetricsQuery&amp;lt;/code&amp;gt; class to see whether the value of the tag has previously been determined by the web service and whether its confidence level exceeds the pre-set threshold. If so, meaning the algorithm is confident in its prediction, the system applies a lightening effect to the tag to make it less noticeable. Students who do the tagging can easily distinguish between normal tags and grayed-out tags, and focus their attention on the normal tags. This is the essence of active learning: query manual input only when it adds to the knowledge of the algorithm.&lt;br /&gt;
&lt;br /&gt;
These cached data are also used in the instructor's report views, which is the reason the data must be cached locally. One review consists of about 10 to 20 comments and takes minutes to process, and a report comprises thousands of such reviews. Querying the web service in real time is therefore impractical. We keep the number of requests to the web service to a minimum by sending requests only when students decide to submit their reviews. In this way, the predicted values of each tag stay up to date with the stored reviews.&lt;br /&gt;
&lt;br /&gt;
===Database Design===&lt;br /&gt;
The only change to the database is to add a &amp;quot;confidence_level&amp;quot; column to the existing answer_tags table, which was originally used to store tags assigned by students. One can think of the results from the web service as a stack of tags assigned by the outside tool, with the confidence level indicating how confident the outside tool is in each tag it assigns. The answer_tags table will therefore hold two types of tags: those from students, which have a user_id but no confidence_level, and those inferred by the web service, which have a confidence_level but no user_id. The system can determine the type of a tag by checking which of these two fields is present.&lt;br /&gt;
&lt;br /&gt;
===UI Design===&lt;br /&gt;
Four pages need to be modified to reflect the new functionality.&lt;br /&gt;
&lt;br /&gt;
====Metrics Analysis Page====&lt;br /&gt;
When students click the 'Submit' button on the review-giving page, the button is disabled to prevent students from submitting the request multiple times. The consequence of submitting the request multiple times would be that the same set of comments is sent to the external web service for processing repeatedly, wasting resources on both sides. To reassure students that the request has been made, we add a loader and a message below the 'Submit' button, asking them to wait patiently to avoid overloading the system.&lt;br /&gt;
&lt;br /&gt;
About half a minute after students click the 'Submit' button, they are redirected to a page that shows the analysis of their submitted reviews. On that page, students can see each of their submitted comments along with the analyzed result for each metric. These metrics come from tag prompt deployments set up by the instructor on a per-questionnaire basis. Predictions with confidence levels under the predefined threshold are not rendered in the report, so students do not see predictions that are uncertain or inaccurate. When students confirm the submission, they are returned to the list of reviews to perform other actions.&lt;br /&gt;
&lt;br /&gt;
====Review Tagging Page====&lt;br /&gt;
From the above image, one can see that the slider now has three forms:&lt;br /&gt;
&lt;br /&gt;
The original form, meaning the tag needs input from the user.&lt;br /&gt;
&lt;br /&gt;
The grayed-out form, presenting a tag inferred by the web service. Tags in this form are still editable, meaning students can override any of the inferred tags if they wish.&lt;br /&gt;
&lt;br /&gt;
The overridden form, which represents a tag that originally had a value assigned by the web service but was overridden by the user.&lt;br /&gt;
&lt;br /&gt;
====Review Report Page====&lt;br /&gt;
In each row, which represents one student, a metrics chart is added below the existing volume chart. Graders get useful information by looking at these two charts in combination and are able to assign more accurate review grades to students. Due to space limitations, each metric name cannot be fully expanded; graders can hover the cursor over each bar to see its corresponding metric name.&lt;br /&gt;
&lt;br /&gt;
====Answer-Tagging Report Page====&lt;br /&gt;
Changes made to this page include renaming columns and adding an additional column for the number of inferred tags. Below we explain how each column is calculated.&lt;br /&gt;
&lt;br /&gt;
*% tags applied by author = # tags applied by author / # applicable tags&lt;br /&gt;
* # tags applied by author = of the # applicable tags, how many were tagged by the author&lt;br /&gt;
* # tags not applied by author = # applicable tags - # tags applied by the author&lt;br /&gt;
* # applicable tags = # tags whose comment is longer than the length threshold - # tags inferred by ML&lt;br /&gt;
* # tags inferred by ML = # tags whose comment is predicted by the machine-learning algorithm with high confidence&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
app/models/answer.rb&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;de_tag_comments&amp;lt;/code&amp;gt; method, which removes HTML tags from the submitted review comments&lt;br /&gt;
&lt;br /&gt;
app/models/answer_tag.rb&lt;br /&gt;
*Corrected typo (tag_prompt_deployment instead of tag_prompts_deployment)&lt;br /&gt;
*Added a validation clause that checks for the presence of either the &amp;lt;code&amp;gt;user_id&amp;lt;/code&amp;gt; or the &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
app/models/review_metrics_query.rb&lt;br /&gt;
*The only model class responsible for communication between the &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; and the rest of the Expertiza system where tags are used&lt;br /&gt;
*Added &amp;lt;code&amp;gt;average_number_of_qualifying_comments&amp;lt;/code&amp;gt; method which returns either the average for one reviewer or the average for the whole class, depending on whether the reviewer is supplied&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt.rb&lt;br /&gt;
*Added code to set up the style of the slider (none, grayed-out, or overridden) when it is about to be rendered&lt;br /&gt;
&lt;br /&gt;
app/models/tag_prompt_deployment.rb&lt;br /&gt;
*Slightly changed how each column in the answer_tagging report is calculated&lt;br /&gt;
&lt;br /&gt;
app/models/vm_user_answer_tagging.rb &amp;amp; app/helpers/report_formatter_helper.rb&lt;br /&gt;
*Added one variable that stores the number of tags inferred by ML&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
app/controllers/metrics_controller.rb&lt;br /&gt;
*Created an empty &amp;lt;code&amp;gt;MetricsController&amp;lt;/code&amp;gt; class so that the tests pass&lt;br /&gt;
&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Altered the redirection so that students are redirected to the analysis page after they click to submit their reviews&lt;br /&gt;
*Added the &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; method, which marks the review given in the parameters as 'submitted'&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
app/views/popup/view_review_scores_popup.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_answer_tagging_report.html.erb&lt;br /&gt;
*Renamed columns in the answer_tagging report&lt;br /&gt;
*Added a new column to the table named &amp;quot;# tags inferred by ML&amp;quot;&lt;br /&gt;
&lt;br /&gt;
app/views/reports/_review_report.html.erb &amp;amp; app/helpers/review_mapping_helper.rb&lt;br /&gt;
*Added an additional bar chart in each row of the review report for metrics&lt;br /&gt;
*Fixed redirection bug&lt;br /&gt;
&lt;br /&gt;
app/views/response/analysis.html.erb&lt;br /&gt;
*Drafted the analysis page, which shows the web service's prediction for each comment on each metric&lt;br /&gt;
&lt;br /&gt;
app/views/response/response.html.erb&lt;br /&gt;
*Added code that disables the &amp;quot;Submit&amp;quot; button after it is clicked&lt;br /&gt;
&lt;br /&gt;
app/views/versions/search.html.erb&lt;br /&gt;
*Fixed syntax error&lt;br /&gt;
&lt;br /&gt;
===Peripheral Changes===&lt;br /&gt;
app/assets/javascripts/answer_tags.js&lt;br /&gt;
*Controlled the dynamic effect of overriding an inferred tag&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/response.scss&lt;br /&gt;
*Styles for disabled button and spinning loader&lt;br /&gt;
&lt;br /&gt;
app/assets/stylesheets/three_state_toogle.scss&lt;br /&gt;
*Styles for different forms of tags&lt;br /&gt;
&lt;br /&gt;
config/routes.rb&lt;br /&gt;
*Added a &amp;lt;code&amp;gt;confirm_submit&amp;lt;/code&amp;gt; route&lt;br /&gt;
&lt;br /&gt;
db/migrate/20200825210644_add_confidence_level_to_answer_tags_table.rb&lt;br /&gt;
*Added &amp;lt;code&amp;gt;confidence_level&amp;lt;/code&amp;gt; column to the &amp;lt;code&amp;gt;answer_tags&amp;lt;/code&amp;gt; table&lt;br /&gt;
&lt;br /&gt;
db/schema.rb&lt;br /&gt;
*Updated schema&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Yulin Zhang (yzhan114@ncsu.edu)&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020&amp;diff=136618</id>
		<title>CSC/ECE 517 Summer 2020</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Summer_2020&amp;diff=136618"/>
		<updated>2020-11-01T20:35:45Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: Created page with &amp;quot;* CSC/ECE 517 Fall 2020 - Active Learning for Review Tagging&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[CSC/ECE 517 Fall 2020 - Active Learning for Review Tagging]]&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=MainPage&amp;diff=136617</id>
		<title>MainPage</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=MainPage&amp;diff=136617"/>
		<updated>2020-11-01T20:34:23Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Expertiza */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Expertiza==&lt;br /&gt;
* [[Expertiza documentation]]&lt;br /&gt;
&lt;br /&gt;
* [[CSC/ECE 517 Summer 2008]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2010]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2011]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2012]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2013]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2014]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2015]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2016]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2014]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2015]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2016]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2017]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2017]]&lt;br /&gt;
* [[CSC/Independent Study Spring 2018]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2018]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2018]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2019]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2020]]&lt;br /&gt;
* [[CSC/ECE 517 Summer 2020]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2020]]&lt;br /&gt;
* [[CSC 456 Spring 2011|CSC 456 Spring 2012]]&lt;br /&gt;
* [[ECE 633]]&lt;br /&gt;
* [[KCU]]&lt;br /&gt;
* [[Progress reports]]&lt;br /&gt;
* [[ReactJs Frontend]]&lt;br /&gt;
* [[Front-End/Back-End]]&lt;br /&gt;
&lt;br /&gt;
==Application Behavior==&lt;br /&gt;
* [[Grading]]&lt;br /&gt;
&lt;br /&gt;
==Metaprogramming==&lt;br /&gt;
* [[CSC/ECE_517_Spring_2013/ch1b_1k_hf|Lecture on Metaprogramming]]&lt;br /&gt;
&lt;br /&gt;
==Development==&lt;br /&gt;
&lt;br /&gt;
''Expertiza now has a Java dependency, so the machine you are using to develop Expertiza on should have the JVM installed.''&lt;br /&gt;
&lt;br /&gt;
* [[Setting Up a Development Machine]]&lt;br /&gt;
* [[Creating a Linux Development Environment for Expertiza - Installation Guide]]&lt;br /&gt;
* [[Using git and github for projects]]&lt;br /&gt;
* [[Using heroku to deploy your projects]]&lt;br /&gt;
* [[How to Begin a Project from the Current Expertiza Repository]]&lt;br /&gt;
* [[Git]]&lt;br /&gt;
* [[How to Change a User's Password on a Development Machine]]&lt;br /&gt;
* [[Debugging Rails]]&lt;br /&gt;
* [http://rajanalwan.com/ui_guidelines/ Design Template]&lt;br /&gt;
&lt;br /&gt;
==Production==&lt;br /&gt;
* [[Deploying to Production]]&lt;br /&gt;
* [[Downloading Production Data]]&lt;br /&gt;
* [[Accessing the Production Server]]&lt;br /&gt;
&lt;br /&gt;
==Testing==&lt;br /&gt;
* [[Using Cucumber with Expertiza]]&lt;br /&gt;
* [[Rails Testing Overview]]&lt;br /&gt;
* [[Expertiza Continuous Integration]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [[Object-Oriented Design and Programming]]&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133597</id>
		<title>CSC/ECE 517 Spring 2020 E2016 Revision planning tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133597"/>
		<updated>2020-04-14T03:14:11Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Revision planning page */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
==About our team==&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
*Tianji Gao (tgao5@ncsu.edu)&lt;br /&gt;
*Guoyi Wang (gwang25@ncsu.edu)&lt;br /&gt;
*Yulin Zhang (yzhan114@ncsu.edu)&lt;br /&gt;
*Boxuan Zhong (bzhong2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
Project mentor: Edward Gehringer (efg@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
==What is Revision Planning?==&lt;br /&gt;
&lt;br /&gt;
In the first round of Expertiza reviews, reviewers are asked to give authors some guidance on how to improve their work. Then, in the second round, reviewers rate how well the authors have followed their suggestions. Revision planning is a mechanism that carries the interaction one step further by having authors supply a revision plan based on the previous round's reviews. That is, the authors derive a plan for improving their code from the previous round's reviews, and reviewers later assess how well they carried it out.&lt;br /&gt;
&lt;br /&gt;
Revision planning is helpful because it makes the author think about what's necessary to improve the work before putting forth the effort to improve it.  This leads to a more reflective work process and is likely to produce a better-finished product.  When reviewers have an opportunity to give feedback to the author, they too will learn what a good revision plan looks like.&lt;br /&gt;
&lt;br /&gt;
According to the given instructions, a revision plan consists of a description of the plan, followed by any number of questions that will later be appended to the future review questionnaire. The revision plan is per AssignmentTeam, which means the authors' questions are used only to evaluate their own submission and no one else's. Adding the revision-planning functionality helps researchers study the effect of reviewers' suggestions on code improvement.&lt;br /&gt;
&lt;br /&gt;
==Previous Implementation==&lt;br /&gt;
&lt;br /&gt;
This functionality has previously been done by a team of students from the Fall semester of 2018. Their implementation was merged into the master branch but was reverted due to the following design concerns:&lt;br /&gt;
*The relationship between &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;SubmissionRecord&amp;lt;/code&amp;gt; is unclear.&lt;br /&gt;
*Uses a lot of '''special-purpose''' code when existing code could fulfill the same job.&lt;br /&gt;
*Revision planning cannot be enabled or disabled for an assignment.&lt;br /&gt;
*Numeric labels for the revision plan questions start from 1 again, instead of continuing after the original rubric questions.&lt;br /&gt;
*The codebase contains commented-out code that is no longer wanted.&lt;br /&gt;
Check out the wiki page and the pull request on GitHub if you would like to learn more about the previous implementation of this project.&lt;br /&gt;
*http://wiki.expertiza.ncsu.edu/index.php/E1875_Revision_Planning_Tool&lt;br /&gt;
*https://github.com/expertiza/expertiza/pull/1302&lt;br /&gt;
Please note that, unlike the other teams we have reviewed, this project is a complete redo rather than a modification built upon the previous team's code, because our approach to this problem is different from theirs. Therefore, we will not refer to the previous implementation in the rest of this page.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
For this project, we identified 4 major work items that together fulfill the stated requirements.&lt;br /&gt;
&lt;br /&gt;
'''Sort out the relationship among classes and introduce the new abstraction of the revision plan to the system in a way that does not interfere with the majority of the code'''&lt;br /&gt;
&lt;br /&gt;
We decided to relate each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to a team_id. A &amp;lt;code&amp;gt;ReviewQuestionnaire&amp;lt;/code&amp;gt; will have questions either with or without a &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A question with no team_id does not belong to any assignment team, so it is a question set up by the instructor. A question with a team_id, in contrast, belongs to a particular team, so it is a revision plan question. Both types of questions are saved under the same questionnaire used for a given round. In this way, we can maximize the use of existing code, and the only major change is contained within the &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; class.&lt;br /&gt;
&lt;br /&gt;
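As a rough sketch, retrieving the two kinds of questions together could then look like this (the scope names are our own illustration, not the project's final code):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sketch: rubric questions vs. revision plan questions&lt;br /&gt;
class Question &amp;lt; ActiveRecord::Base&lt;br /&gt;
  scope :rubric,        -&amp;gt; { where(team_id: nil) }&lt;br /&gt;
  scope :revision_plan, -&amp;gt;(team) { where(team_id: team.id) }&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# everything shown on the review page for a given team:&lt;br /&gt;
questionnaire.questions.rubric + questionnaire.questions.revision_plan(team)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;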
'''Modify the existing views and controllers to accommodate the new functionality which includes'''&lt;br /&gt;
*Allowing teaching staff to enable/disable revision planning for an assignment.&lt;br /&gt;
*Allowing team members to create/edit their revision plan during each submission period after the first round.&lt;br /&gt;
*Allowing both rubric questions and revision plan questions to appear on the same page and be serialized correctly.&lt;br /&gt;
*Allowing feedback on the revision plan to be viewed only by the team that created the plan and that team's instructor.&lt;br /&gt;
&lt;br /&gt;
This will involve some minor changes, such as adding an optional trailing parameter to some method signatures, adding interactive elements to the views, and slightly adjusting the structure of certain view templates.&lt;br /&gt;
&lt;br /&gt;
In addition, we planned to:&lt;br /&gt;
*Provide an adequate amount of tests to improve code coverage.&lt;br /&gt;
*Do necessary refactoring and resolve any CodeClimate issues.&lt;br /&gt;
&lt;br /&gt;
After communicating with our mentor, Dr. Gehringer, we clarified the following two problem statements.&lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the second-round questionnaire.'''&lt;br /&gt;
&lt;br /&gt;
This means both questions from the team’s revision plan and questions from the review rubric should be displayed together in the frontend. Since we decided to add revision plan questions to the review rubric of the round, we automatically linked every new question to the questionnaire of that round. &lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the author’s submission (this will probably involve a DB migration)'''&lt;br /&gt;
&lt;br /&gt;
Saying that every new question must be linked to the author's submission means that the relationship between a team and the team's revision plan questions should be represented in the database. We addressed this problem by associating them through a team_id field. See the Database Design section for more details.&lt;br /&gt;
&lt;br /&gt;
=Design=&lt;br /&gt;
&lt;br /&gt;
==Control Flow Diagram==&lt;br /&gt;
&lt;br /&gt;
The below image shows the control flow of the revision planning functionality.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_control_flow_diagram.png]]&lt;br /&gt;
&lt;br /&gt;
The diagram involves three types of actors: the student (reviewee), the student (reviewer), and the instructor/TA who manages the assignment and review processes. To understand each actor's responsibility, trace each colored line arising from that actor in the direction indicated by the arrows. A diamond shape represents a decision or precondition; only after the condition is met can the next action proceed.&lt;br /&gt;
&lt;br /&gt;
Summary of actions&lt;br /&gt;
*A TA/Instructor can&lt;br /&gt;
*#Enable revision planning&lt;br /&gt;
*#Impersonate students to perform their responsibility&lt;br /&gt;
*#View feedback report of all teams&lt;br /&gt;
*A student(reviewee) can&lt;br /&gt;
*#Make revisions during the second-round submission period, which includes reading first-round feedback and adding revision plan questions according to that feedback.&lt;br /&gt;
*#View the feedback report of the team they belong to&lt;br /&gt;
*A student(reviewer) can&lt;br /&gt;
*#Give feedback on a team's revised work by answering each question (including the team's revision plan questions) that appears on the review page.&lt;br /&gt;
*#View the feedback they wrote to the team.&lt;br /&gt;
&lt;br /&gt;
==UI Design==&lt;br /&gt;
&lt;br /&gt;
A revision plan is similar to other review questionnaires. Since the functionality around review questionnaires is already mature, we expect to make the fewest interface changes possible by reusing existing view templates wherever we can. The subsections below list the changes we planned to make.&lt;br /&gt;
&lt;br /&gt;
===Enabling revision planning===&lt;br /&gt;
&lt;br /&gt;
Enabling/disabling revision planning for each assignment is rather straightforward to implement. We add an additional checkbox under the &amp;quot;Review strategy&amp;quot; tab of the assignment's edit page. This checkbox is labeled &amp;quot;Enable Revision Planning?&amp;quot; to indicate whether the instructor wants to include this functionality in the newly created assignment. It is most reasonable to place the checkbox here because it is review-related, and other similar functionalities, such as Self Reviews, are implemented in this manner.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_enabling_revision_planning.png]]&lt;br /&gt;
&lt;br /&gt;
===Link to the revision planning page===&lt;br /&gt;
&lt;br /&gt;
If the instructor decides to include revision planning in an assignment, a “Revision Planning” link appears on the student's assignment page but stays disabled during the first round. After that, it becomes clickable during every submission period and is grayed out again during every review period. By clicking it, students are redirected to a new page explained under the 'Revision planning page' subsection.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_link_to_the_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Revision planning page===&lt;br /&gt;
&lt;br /&gt;
The revision plan is just like other questionnaires in that it contains a set of questions for reviewers to answer. The only difference is that the revision plan comes with an additional description to help reviewers understand what changes have been made so far. Therefore, it reuses most existing view templates and controller code with minimal changes. As the image shows, the only modification to the existing questionnaire-creation template is a link that redirects students to the submission page, where the uploading of the revision plan is handled by the existing implementation. The advantage of uploading an external link, rather than typing everything into the textbox element, is that the description can be well formatted when displayed outside the form and does not distract reviewers during the review. We also decided to leave out (or hide) the place where instructors set configuration options such as the range of scores and the questionnaire's visibility. These configurations should use the system's default values rather than having students come up with their own.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Review page===&lt;br /&gt;
&lt;br /&gt;
The format of the review page remains almost exactly the same. To distinguish between rubric questions set up by the instructor and the revision plan questions created by the team under review, all the revision plan questions are placed after the rubric questions, split by an enlarged “Revision Planning” subheader. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page_2.png]]&lt;br /&gt;
&lt;br /&gt;
===Feedback report===&lt;br /&gt;
&lt;br /&gt;
Teaching staff and students have different windows to access the feedback report. &lt;br /&gt;
*'''Teaching staff''': Manage-&amp;gt;Assignments-&amp;gt;Edit Assignment-&amp;gt;Other stuff-&amp;gt;View scores&lt;br /&gt;
*'''Students''': Assignments-&amp;gt;View Assignment-&amp;gt;Alternative View&lt;br /&gt;
In addition, both the instructor and the TA can impersonate students to access the feedback report from the students' view. We consider both cases and illustrate each of them separately.&lt;br /&gt;
&lt;br /&gt;
====TA/Instructor====&lt;br /&gt;
&lt;br /&gt;
Scores for the second-round review rubric and the author’s revision plan questions will be displayed in the same table and serialized correctly. See the figure below for an example: say the second-round rubric has only 5 questions; then the remaining questions (6-10) are revision planning questions written by a particular team.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_instructor.png]]&lt;br /&gt;
&lt;br /&gt;
For reviewers: if you reviewed our first draft design, you may have noticed that we originally chose to place revision plan scores in a separate table. After our mentor clarified the requirements, we realized that an assignment can have more than 2 rounds of submission and review periods, so scores for revision planning questions can vary from round to round. Our previous solution therefore will not work, since it leaves the user unsure which round the revision plan scores refer to. &lt;br /&gt;
&lt;br /&gt;
====Student====&lt;br /&gt;
&lt;br /&gt;
The revision planning section will be added to the students’ view as shown in the snapshot below. It displays the questions in the same order as the review page does. A “Revision Planning” subheader is also used here to indicate the start of the revision planning section.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_student.png]]&lt;br /&gt;
&lt;br /&gt;
==Database Design==&lt;br /&gt;
&lt;br /&gt;
Here we present the diagram of our database design. As the yellow borders show, we only plan to modify the structure of the Question table and the Assignment table. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_database_design.png]]&lt;br /&gt;
&lt;br /&gt;
In the Assignment table, the column ''is_revision_planning_enabled?'' will be needed to indicate whether the instructor would like to incorporate the revision planning feature. &lt;br /&gt;
&lt;br /&gt;
Additionally, we add &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to indicate which team each question belongs to. A &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with an empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; value is a question under the original rubric, while an object with a non-empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; field is a question created by the team associated with that &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. That is, instead of creating a whole new &amp;lt;code&amp;gt;RevisionPlanQuestionnaire&amp;lt;/code&amp;gt; class, we decided to store all the revision plan questions created in a given round under the rubric used for that round. In this way, we minimize changes to the system and make the original rubric questions and the revision planning questions easier to retrieve together.&lt;br /&gt;
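&lt;br /&gt;
For concreteness, here is a minimal sketch of what such a migration for the Question table might look like; the migration version and the index are our assumptions rather than final code.&lt;br /&gt;
    class AddTeamIdToQuestions &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
      def change&lt;br /&gt;
        # nil team_id: original rubric question; non-nil: a team's revision plan question&lt;br /&gt;
        add_column :questions, :team_id, :integer, default: nil&lt;br /&gt;
        add_index :questions, :team_id&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;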
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
&lt;br /&gt;
app/controllers/questionnaires_controller.rb&lt;br /&gt;
*edit_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that prepares the view template and supplies the revision planning questions that belong only to the current team&lt;br /&gt;
*update_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that saves revision plan questions under the current round's rubric&lt;br /&gt;
*Some refactoring is required so that the two new methods described above can share existing functionality (see the sketch after this list)&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;@questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;@questionnaire.questions(@map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
app/controllers/grades_controller.rb&lt;br /&gt;
*Call the retrieve_questions method every time with an extra parameter “&amp;lt;code&amp;gt;@team_id&amp;lt;/code&amp;gt;”.&lt;br /&gt;
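&lt;br /&gt;
Below is a minimal sketch of the two new actions. The parameter names, the lookup logic, and the default question type are illustrative assumptions, not final code.&lt;br /&gt;
    def edit_revision_plan&lt;br /&gt;
      @questionnaire = Questionnaire.find(params[:id])&lt;br /&gt;
      # only the current team's revision plan questions are editable here&lt;br /&gt;
      @questions = Question.where(questionnaire_id: @questionnaire.id, team_id: params[:team_id])&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    def update_revision_plan&lt;br /&gt;
      # save each newly submitted question under the current round's rubric,&lt;br /&gt;
      # tagged with the submitting team's id&lt;br /&gt;
      (params[:new_question] || {}).each_pair do |seq, txt|&lt;br /&gt;
        Question.create(questionnaire_id: params[:id], team_id: params[:team_id],&lt;br /&gt;
                        seq: seq.to_f, txt: txt, type: 'Criterion')&lt;br /&gt;
      end&lt;br /&gt;
      redirect_to action: 'edit_revision_plan', id: params[:id], team_id: params[:team_id]&lt;br /&gt;
    end&lt;br /&gt;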
&lt;br /&gt;
===Models ===&lt;br /&gt;
&lt;br /&gt;
app/models/question.rb&lt;br /&gt;
*Form association relationship with &amp;lt;code&amp;gt;Team&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;belongs_to :team, class_name: 'AssignmentTeam', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
*questions: change the method signature to &amp;lt;code&amp;gt;questions(team_id=nil)&amp;lt;/code&amp;gt;, which uses nil as the default value of the &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; parameter, so callers can choose whether to supply a team_id argument. When team_id is supplied, it returns both questions with no team_id and questions with this team_id. In addition, it adds to the returned list an unsaved “Revision Planning” QuestionnaireHeader so the list displays nicely in the browser with each section separated (see the sketch after this list).&lt;br /&gt;
app/models/assignment_team.rb&lt;br /&gt;
*Form aggregation relationship with &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;has_many :revision_plan_questions, class_name: 'Question', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
app/models/response.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;questionnaire.questions(self.response_map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
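&lt;br /&gt;
A minimal sketch of the extended &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method follows; it assumes &amp;lt;code&amp;gt;QuestionnaireHeader&amp;lt;/code&amp;gt; can be instantiated without saving and that &amp;lt;code&amp;gt;txt&amp;lt;/code&amp;gt; holds the header text (attribute names are assumptions).&lt;br /&gt;
    def questions(team_id = nil)&lt;br /&gt;
      rubric = Question.where(questionnaire_id: id, team_id: nil).to_a&lt;br /&gt;
      return rubric if team_id.nil?&lt;br /&gt;
      plan = Question.where(questionnaire_id: id, team_id: team_id).to_a&lt;br /&gt;
      return rubric if plan.empty?&lt;br /&gt;
      # the unsaved header visually separates the rubric section from the revision plan section&lt;br /&gt;
      header = QuestionnaireHeader.new(txt: 'Revision Planning', questionnaire_id: id)&lt;br /&gt;
      rubric + [header] + plan&lt;br /&gt;
    end&lt;br /&gt;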
&lt;br /&gt;
===Views===&lt;br /&gt;
&lt;br /&gt;
app/views/questionnaires/edit_revision_plan.html.erb &lt;br /&gt;
*This is a newly added view file for students to create and edit their revision plan. It reuses existing code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to reduce redundancy.&lt;br /&gt;
app/views/questionnaires/_questions.html.erb&lt;br /&gt;
*Extract codes from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to make it a standalone partial template that will later be loaded by the ''app/views/questionnaires/edit_revision_plan.html.erb'' view file described above.&lt;br /&gt;
app/views/student_task/view.html.erb&lt;br /&gt;
*Add a “Revision Planning” link for students to edit their revision plan. The link will lead students to the “Edit Revision Plan” page. If the revision planning feature is enabled, this link will appear disabled at first and only become clickable during each submission period after round 1.&lt;br /&gt;
app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*Add an “Enable Revision Planning?” checkbox for each assignment, because not every assignment needs this feature; the TA/instructor can thus include it flexibly. The “Revision Planning” link will disappear from the assignment page if the checkbox is not checked (a sketch of the checkbox markup follows this list).&lt;br /&gt;
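&lt;br /&gt;
A minimal ERB sketch of the checkbox; the form field name and the attribute spelling are illustrative assumptions.&lt;br /&gt;
    &amp;lt;%# app/views/assignments/edit/_review_strategy.html.erb (sketch) %&amp;gt;&lt;br /&gt;
    &amp;lt;label&amp;gt;&lt;br /&gt;
      &amp;lt;%= check_box_tag 'assignment_form[assignment][is_revision_planning_enabled]',&lt;br /&gt;
                        true, @assignment_form.assignment.is_revision_planning_enabled %&amp;gt;&lt;br /&gt;
      Enable Revision Planning?&lt;br /&gt;
    &amp;lt;/label&amp;gt;&lt;br /&gt;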
&lt;br /&gt;
===Helpers===&lt;br /&gt;
&lt;br /&gt;
app/helpers/grades_helper.rb&lt;br /&gt;
*retrieve_questions: add an extra parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; whose default value is nil. It then invokes the &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model’s &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method with this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; and retrieves the proper question set for each team (see the sketch below).&lt;br /&gt;
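&lt;br /&gt;
A sketch of the extended helper, assuming it collects each round's questions into a hash keyed per questionnaire; the hash layout and the &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; accessor are assumptions.&lt;br /&gt;
    def retrieve_questions(questionnaires, assignment_id, team_id = nil)&lt;br /&gt;
      # assignment_id is kept for signature compatibility with existing callers&lt;br /&gt;
      questions = {}&lt;br /&gt;
      questionnaires.each do |questionnaire|&lt;br /&gt;
        # passing team_id also pulls in that team's revision plan questions&lt;br /&gt;
        questions[questionnaire.symbol] = questionnaire.questions(team_id)&lt;br /&gt;
      end&lt;br /&gt;
      questions&lt;br /&gt;
    end&lt;br /&gt;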
&lt;br /&gt;
===Schema===&lt;br /&gt;
&lt;br /&gt;
*Assignment table: add one column named &amp;lt;code&amp;gt;is_revision_planning_enabled?&amp;lt;/code&amp;gt; to indicate whether this feature has been activated.&lt;br /&gt;
*Question table: add one column named &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to distinguish whether the question is from the official review rubric or from a particular team's revision plan.&lt;br /&gt;
&lt;br /&gt;
=Testing=&lt;br /&gt;
&lt;br /&gt;
==RSpec Test Plan==&lt;br /&gt;
&lt;br /&gt;
'''Controllers'''&lt;br /&gt;
*spec/controllers/questionnaires_controller_spec.rb&lt;br /&gt;
**Describe ‘#edit_revision_plan’&lt;br /&gt;
***Context ‘when params[:id] is valid’&lt;br /&gt;
***Context ‘when params[:id] is not valid'&lt;br /&gt;
**Describe ‘#update_revision_plan’&lt;br /&gt;
***Context 'when params[:add_new_questions] is not nil'&lt;br /&gt;
***Context 'when params[:view_advice] is not nil'&lt;br /&gt;
***Context 'when both params[:add_new_questions] and params[:view_advice] are nil'&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
**Describe ‘#view’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
**Describe ‘#view_my_scores’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
*spec/controllers/response_controller_spec.rb&lt;br /&gt;
**Fix all failing tests to accommodate the modified code.&lt;br /&gt;
**Add more tests regarding revision planning.&lt;br /&gt;
&lt;br /&gt;
'''Models'''&lt;br /&gt;
*spec/models/question_spec.rb&lt;br /&gt;
**Describe ‘#questions’&lt;br /&gt;
***Context ‘when team_id is supplied’&lt;br /&gt;
***Context ‘when team_id is not supplied’&lt;br /&gt;
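&lt;br /&gt;
A minimal sketch of the model spec, assuming FactoryBot factories for questionnaires, teams, and questions exist (factory and attribute names are assumptions).&lt;br /&gt;
    describe '#questions' do&lt;br /&gt;
      let(:questionnaire) { create(:questionnaire) }&lt;br /&gt;
      let(:team) { create(:assignment_team) }&lt;br /&gt;
      let!(:rubric_q) { create(:question, questionnaire: questionnaire, team_id: nil) }&lt;br /&gt;
      let!(:plan_q) { create(:question, questionnaire: questionnaire, team_id: team.id) }&lt;br /&gt;
      context 'when team_id is supplied' do&lt;br /&gt;
        it 'returns rubric questions plus the team revision plan questions' do&lt;br /&gt;
          expect(questionnaire.questions(team.id)).to include(rubric_q, plan_q)&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
      context 'when team_id is not supplied' do&lt;br /&gt;
        it 'returns only rubric questions' do&lt;br /&gt;
          expect(questionnaire.questions).to eq([rubric_q])&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;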
&lt;br /&gt;
==UI Testing Instructions (For Reviewers)==&lt;br /&gt;
&lt;br /&gt;
'''Setup'''&lt;br /&gt;
&lt;br /&gt;
Login information&lt;br /&gt;
*Visit xxx [expertiza deployment link]&lt;br /&gt;
    User name: instructor6/student8030/student8031&lt;br /&gt;
    Password: password&lt;br /&gt;
Enable/Disable Revision Planning&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Review strategy'''” tab, check the checkbox labeled “'''Enable Revision Planning?'''” to enable the revision planning feature. &lt;br /&gt;
Enable/Disable the “Revision Planning” link&lt;br /&gt;
*After the instructor configures the assignment to include the revision planning feature, the “'''Revision Planning'''” link will appear on the student's assignment page but will remain disabled and only be enabled during each submission period after round 1. Therefore, to enable the link:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the future.&lt;br /&gt;
*To disable the link after round 2 submission period:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the past and change the round 2 review date to any date in the future.&lt;br /&gt;
Create teams for the assignment&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Add participant'''”.&lt;br /&gt;
*On the new page, add two students, student8030 and student8031, so one student can create revision plan questions during the submission period while the other responds to these questions during the review period.&lt;br /&gt;
*Go back to the assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Create teams'''”.&lt;br /&gt;
*On the new page, group the two added students into separate teams.&lt;br /&gt;
&lt;br /&gt;
'''Functionalities'''&lt;br /&gt;
&lt;br /&gt;
Edit a Revision Plan&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link, which redirects the user to a new page used to create a revision plan. Fill in the blanks and click on the “'''Save review questionnaire'''” button, and the revision plan should be saved.&lt;br /&gt;
Test retrieval of revision plan questions for a specific team&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link after steps under '''Edit a Revision Plan''' have been done. The “'''Revision Planning'''” link should redirect the user to the Revision Planning edit page that is populated with previously saved questions.&lt;br /&gt;
Check the Revision Plan questions in the questionnaire.&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Adjust the time frame to the second round review period.&lt;br /&gt;
*Log out and log in again as student8031.&lt;br /&gt;
*In the assignment page, click on the “'''Others’ work'''” link, which takes the user to the review page where one requests a new team’s submission to review. Go to the only other team’s review page and check if the questions are properly displayed under the “Revision Planning” subheader.&lt;br /&gt;
Check responses to the Revision Plan questions&lt;br /&gt;
*Login again as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Alternative View'''” link, and see if student8030 gets responses for both the original rubric questions as well as its revision plan questions.&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133595</id>
		<title>CSC/ECE 517 Spring 2020 E2016 Revision planning tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133595"/>
		<updated>2020-04-14T03:13:00Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Revision planning page */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
==About our team==&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
*Tianji Gao (tgao5@ncsu.edu)&lt;br /&gt;
*Guoyi Wang (gwang25@ncsu.edu)&lt;br /&gt;
*Yulin Zhang (yzhan114@ncsu.edu)&lt;br /&gt;
*Boxuan Zhong (bzhong2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
Project mentor: Edward Gehringer (efg@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
==What is Revision Planning?==&lt;br /&gt;
&lt;br /&gt;
In the first round of Expertiza reviews, reviewers are asked to give authors some guidance on how to improve their work. Then in the second round, reviewers rate how well authors have followed their suggestions. Revision planning is a mechanism that carries this interaction one step further by having authors supply a revision plan based on the previous round's reviews. That is, the authors derive a plan for improving their code from the previous round's reviews, and reviewers later assess how well they carried it out. &lt;br /&gt;
&lt;br /&gt;
Revision planning is helpful because it makes the author think about what's necessary to improve the work before putting forth the effort to improve it.  This leads to a more reflective work process and is likely to produce a better-finished product.  When reviewers have an opportunity to give feedback to the author, they too will learn what a good revision plan looks like.&lt;br /&gt;
&lt;br /&gt;
According to the given instructions, a revision plan consists of a description of the plan, followed by any number of questions that would later be appended to the future review questionnaire. The revision plan is per AssignmentTeam, which means a team's questions are used only to evaluate that team's submission and no one else's. Adding the revision planning functionality also helps researchers study the effect of reviewers' suggestions on code improvement.&lt;br /&gt;
&lt;br /&gt;
==Previous Implementation==&lt;br /&gt;
&lt;br /&gt;
This functionality was previously implemented by a team of students in the Fall 2018 semester. Their implementation was merged into the master branch but was reverted due to the following design concerns:&lt;br /&gt;
*The relationship between &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;SubmissionRecord&amp;lt;/code&amp;gt; is unclear.&lt;br /&gt;
*It uses a lot of '''special-purpose''' code where existing code could fulfill the same job.&lt;br /&gt;
*Revision planning cannot be enabled or disabled per assignment.&lt;br /&gt;
*Numeric labels for the revision plan questions restart from 1, instead of continuing after the original rubric questions.&lt;br /&gt;
*The codebase contains commented-out code that is no longer wanted.&lt;br /&gt;
Check out the wiki page and the pull request on GitHub if you would like to learn more about the previous implementation of this project.&lt;br /&gt;
*http://wiki.expertiza.ncsu.edu/index.php/E1875_Revision_Planning_Tool&lt;br /&gt;
*https://github.com/expertiza/expertiza/pull/1302&lt;br /&gt;
Please note that unlike the other projects we have reviewed, this project is a complete redo rather than a modification built upon the previous team’s code, because our approach to the problem differs from theirs. Therefore, we will not mention the previous implementation in the remaining content.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
For this project, we identified 4 major work items that together fulfill the stated requirements.&lt;br /&gt;
&lt;br /&gt;
'''Sort out the relationship among classes and introduce the new abstraction of the revision plan to the system in a way that it doesn’t interfere with the majority of codes'''&lt;br /&gt;
&lt;br /&gt;
We decided to relate each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to a team_id. A &amp;lt;code&amp;gt;ReviewQuestionnaire&amp;lt;/code&amp;gt; can then hold questions both with and without a &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A question with no team_id does not belong to any assignment team, so it is a question set up by the instructor. A question with a team_id, in contrast, belongs to a particular team, so it is a revision plan question. Both types of questions are saved under the same questionnaire used for a given round. In this way, we maximize the reuse of existing code, and the only major change should be contained within the &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; class.&lt;br /&gt;
&lt;br /&gt;
'''Modify the existing views and controllers to accommodate the new functionality which includes'''&lt;br /&gt;
*Allowing teaching staff to enable/disable revision planning for an assignment.&lt;br /&gt;
*Allowing team members to create/edit their revision plan during each submission period after the first round.&lt;br /&gt;
*Allowing both rubric questions and revision plan questions to appear on the same page and be serialized correctly.&lt;br /&gt;
*Allowing feedback on the revision plan only to be viewed by the team that creates the plan and that team's instructor.&lt;br /&gt;
&lt;br /&gt;
This will involve some minor changes such as appending some method signatures with an optional trailing parameter, adding interactive elements to the views, and slightly adjusting the structure of certain view templates.&lt;br /&gt;
&lt;br /&gt;
In addition, we planned to:&lt;br /&gt;
*Provide an adequate amount of tests to improve code coverage.&lt;br /&gt;
*Do necessary refactoring and resolve any CodeClimate issues.&lt;br /&gt;
&lt;br /&gt;
After communicating with our mentor, Dr. Gehringer, we clarified the following two problem statements.&lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the second-round questionnaire.'''&lt;br /&gt;
&lt;br /&gt;
This means both questions from the team’s revision plan and questions from the review rubric should be displayed together in the frontend. Since we decided to add revision plan questions to the review rubric of the round, we automatically linked every new question to the questionnaire of that round. &lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the author’s submission (this will probably involve a DB migration)'''&lt;br /&gt;
&lt;br /&gt;
Linking every new question to the author’s submission means that the relationship between a team and its revision plan questions must be represented in the database. We addressed this by associating them through a team_id field. See the Database Design section for more details.&lt;br /&gt;
&lt;br /&gt;
=Design=&lt;br /&gt;
&lt;br /&gt;
==Control Flow Diagram==&lt;br /&gt;
&lt;br /&gt;
The image below shows the control flow of the revision planning functionality.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_control_flow_diagram.png]]&lt;br /&gt;
&lt;br /&gt;
The diagram involves 3 types of actors: the student (reviewee), the student (reviewer), and the instructor/TA, who manages the assignment and review processes. To understand each actor’s responsibilities, trace the colored line originating from that actor in the direction of the arrows. A diamond shape represents a decision or precondition; that is, only after the condition is met can the next action proceed.&lt;br /&gt;
&lt;br /&gt;
Summary of actions&lt;br /&gt;
*A TA/Instructor can&lt;br /&gt;
*#Enable revision planning&lt;br /&gt;
*#Impersonate students to perform their responsibilities&lt;br /&gt;
*#View the feedback reports of all teams&lt;br /&gt;
*A student (reviewee) can&lt;br /&gt;
*#Make revisions during the second-round submission period, which includes reading first-round feedback and adding revision plan questions according to that feedback.&lt;br /&gt;
*#View the feedback report of the team they belong to&lt;br /&gt;
*A student (reviewer) can&lt;br /&gt;
*#Give feedback on the team’s revised work by answering each question (including the team's revision plan questions) that appears on the review page.&lt;br /&gt;
*#View the feedback they wrote for the team.&lt;br /&gt;
&lt;br /&gt;
==UI Design==&lt;br /&gt;
&lt;br /&gt;
A revision plan should be similar to other review questionnaires. Since the functionality of review questionnaires is already mature, we aimed to minimize interface changes by reusing the existing view templates whenever possible. The following subsections list the changes we planned to make.&lt;br /&gt;
&lt;br /&gt;
===Enabling revision planning===&lt;br /&gt;
&lt;br /&gt;
Implementing enabling/disabling of revision planning for each assignment is rather straightforward. We planned to add an additional checkbox under the &amp;quot;Review strategy&amp;quot; tab of the assignment’s edit page. The checkbox is labeled &amp;quot;Enable Revision Planning?&amp;quot; and indicates whether the instructor wants to include this functionality in the newly created assignment. This is the most reasonable place for the checkbox because it is review-related, and other similar functionalities such as Self Reviews are implemented in the same manner.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_enabling_revision_planning.png]]&lt;br /&gt;
&lt;br /&gt;
===Link to the revision planning page===&lt;br /&gt;
&lt;br /&gt;
If the instructor decides to include revision planning in an assignment, the “Revision Planning” link will appear on the student’s assignment page but stays disabled during the first round. After that, it becomes clickable during every submission period and is greyed out again during every review period. Clicking it redirects students to a new page described in the ‘Revision planning page’ subsection.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_link_to_the_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Revision planning page===&lt;br /&gt;
&lt;br /&gt;
The revision plan is just like other questionnaires in that it contains a set of questions for reviewers to answer. The only difference is that the revision plan comes with an additional description to help reviewers understand what changes have been made so far. Therefore, it should reuse most existing view templates and controller code with minimal changes. As the image below shows, the only modification to the existing questionnaire creation template is a link that redirects students to the submission page, where uploading the revision plan is handled by the existing implementation. The advantage of uploading an external link rather than typing everything into the textbox element is that the description can be well formatted when displayed outside the form, without distracting reviewers during the review. We also decided to leave out (or hide) the controls where instructors set configuration options such as the score range and the questionnaire's visibility. These configurations should use the default values defined in the system rather than having students come up with their own.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Review page===&lt;br /&gt;
&lt;br /&gt;
The format of the review page remains almost exactly the same. To distinguish between rubric questions set up by the instructor and the revision plan questions created by the team under review, all the revision plan questions are placed after the rubric questions, split by an enlarged “Revision Planning” subheader. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page_2.png]]&lt;br /&gt;
&lt;br /&gt;
===Feedback report===&lt;br /&gt;
&lt;br /&gt;
Teaching staff and students have different windows to access the feedback report. &lt;br /&gt;
*'''Teaching staff''': Manage-&amp;gt;Assignments-&amp;gt;Edit Assignment-&amp;gt;Other stuff-&amp;gt;View scores&lt;br /&gt;
*'''Students''': Assignments-&amp;gt;View Assignment-&amp;gt;Alternative View&lt;br /&gt;
In addition, both the instructor and the TA can impersonate students to access the feedback report from the students' view. We consider both cases and illustrate each of them separately.&lt;br /&gt;
&lt;br /&gt;
====TA/Instructor====&lt;br /&gt;
&lt;br /&gt;
Scores for the second-round review rubric and the author’s revision plan questions will be displayed in the same table and serialized correctly. See the figure below for an example: say the second-round rubric has only 5 questions; then the remaining questions (6-10) are revision planning questions written by a particular team.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_instructor.png]]&lt;br /&gt;
&lt;br /&gt;
For reviewers: if you reviewed our first draft design, you may have noticed that we originally chose to place revision plan scores in a separate table. After our mentor clarified the requirements, we realized that an assignment can have more than 2 rounds of submission and review periods, so scores for revision planning questions can vary from round to round. Our previous solution therefore will not work, since it leaves the user unsure which round the revision plan scores refer to. &lt;br /&gt;
&lt;br /&gt;
====Student====&lt;br /&gt;
&lt;br /&gt;
The revision planning section will be added to the students’ view as shown in the snapshot below. It displays the questions in the same order as the review page does. A “Revision Planning” subheader is also used here to indicate the start of the revision planning section.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_student.png]]&lt;br /&gt;
&lt;br /&gt;
==Database Design==&lt;br /&gt;
&lt;br /&gt;
Here we present the diagram of our database design. As the yellow borders show, we only plan to modify the structure of the Question table and the Assignment table. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_database_design.png]]&lt;br /&gt;
&lt;br /&gt;
In the Assignment table, the column ''is_revision_planning_enabled?'' will be needed to indicate whether the instructor would like to incorporate the revision planning feature. &lt;br /&gt;
&lt;br /&gt;
Additionally, we add &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to indicate which team each question belongs to. A &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with an empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; value is a question under the original rubric, while an object with a non-empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; field is a question created by the team associated with that &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. That is, instead of creating a whole new &amp;lt;code&amp;gt;RevisionPlanQuestionnaire&amp;lt;/code&amp;gt; class, we decided to store all the revision plan questions created in a given round under the rubric used for that round. In this way, we minimize changes to the system and make the original rubric questions and the revision planning questions easier to retrieve together.&lt;br /&gt;
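&lt;br /&gt;
For concreteness, here is a minimal sketch of what such a migration for the Question table might look like; the migration version and the index are our assumptions rather than final code.&lt;br /&gt;
    class AddTeamIdToQuestions &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
      def change&lt;br /&gt;
        # nil team_id: original rubric question; non-nil: a team's revision plan question&lt;br /&gt;
        add_column :questions, :team_id, :integer, default: nil&lt;br /&gt;
        add_index :questions, :team_id&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;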
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
&lt;br /&gt;
app/controllers/questionnaires_controller.rb&lt;br /&gt;
*edit_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that prepares the view template and supplies the revision planning questions that belong only to the current team&lt;br /&gt;
*update_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that saves revision plan questions under the current round's rubric&lt;br /&gt;
*Some refactoring is required so that the two new methods described above can share existing functionality (see the sketch after this list)&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;@questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;@questionnaire.questions(@map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
app/controllers/grades_controller.rb&lt;br /&gt;
*Call the retrieve_questions method every time with an extra parameter “&amp;lt;code&amp;gt;@team_id&amp;lt;/code&amp;gt;”.&lt;br /&gt;
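&lt;br /&gt;
Below is a minimal sketch of the two new actions. The parameter names, the lookup logic, and the default question type are illustrative assumptions, not final code.&lt;br /&gt;
    def edit_revision_plan&lt;br /&gt;
      @questionnaire = Questionnaire.find(params[:id])&lt;br /&gt;
      # only the current team's revision plan questions are editable here&lt;br /&gt;
      @questions = Question.where(questionnaire_id: @questionnaire.id, team_id: params[:team_id])&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    def update_revision_plan&lt;br /&gt;
      # save each newly submitted question under the current round's rubric,&lt;br /&gt;
      # tagged with the submitting team's id&lt;br /&gt;
      (params[:new_question] || {}).each_pair do |seq, txt|&lt;br /&gt;
        Question.create(questionnaire_id: params[:id], team_id: params[:team_id],&lt;br /&gt;
                        seq: seq.to_f, txt: txt, type: 'Criterion')&lt;br /&gt;
      end&lt;br /&gt;
      redirect_to action: 'edit_revision_plan', id: params[:id], team_id: params[:team_id]&lt;br /&gt;
    end&lt;br /&gt;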
&lt;br /&gt;
===Models ===&lt;br /&gt;
&lt;br /&gt;
app/models/question.rb&lt;br /&gt;
*Form association relationship with &amp;lt;code&amp;gt;Team&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;belongs_to :team, class_name: 'AssignmentTeam', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
*questions: change the method signature to &amp;lt;code&amp;gt;questions(team_id=nil)&amp;lt;/code&amp;gt;, which uses nil as the default value of the &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; parameter, so callers can choose whether to supply a team_id argument. When team_id is supplied, it returns both questions with no team_id and questions with this team_id. In addition, it adds to the returned list an unsaved “Revision Planning” QuestionnaireHeader so the list displays nicely in the browser with each section separated (see the sketch after this list).&lt;br /&gt;
app/models/assignment_team.rb&lt;br /&gt;
*Form aggregation relationship with &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;has_many :revision_plan_questions, class_name: 'Question', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
app/models/response.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;questionnaire.questions(self.response_map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
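&lt;br /&gt;
A minimal sketch of the extended &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method follows; it assumes &amp;lt;code&amp;gt;QuestionnaireHeader&amp;lt;/code&amp;gt; can be instantiated without saving and that &amp;lt;code&amp;gt;txt&amp;lt;/code&amp;gt; holds the header text (attribute names are assumptions).&lt;br /&gt;
    def questions(team_id = nil)&lt;br /&gt;
      rubric = Question.where(questionnaire_id: id, team_id: nil).to_a&lt;br /&gt;
      return rubric if team_id.nil?&lt;br /&gt;
      plan = Question.where(questionnaire_id: id, team_id: team_id).to_a&lt;br /&gt;
      return rubric if plan.empty?&lt;br /&gt;
      # the unsaved header visually separates the rubric section from the revision plan section&lt;br /&gt;
      header = QuestionnaireHeader.new(txt: 'Revision Planning', questionnaire_id: id)&lt;br /&gt;
      rubric + [header] + plan&lt;br /&gt;
    end&lt;br /&gt;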
&lt;br /&gt;
===Views===&lt;br /&gt;
&lt;br /&gt;
app/views/questionnaires/edit_revision_plan.html.erb &lt;br /&gt;
*This is a newly added view file for students to create and edit their revision plan. It reuses existing code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to reduce redundancy.&lt;br /&gt;
app/views/questionnaires/_questions.html.erb&lt;br /&gt;
*Extract codes from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to make it a standalone partial template that will later be loaded by the ''app/views/questionnaires/edit_revision_plan.html.erb'' view file described above.&lt;br /&gt;
app/views/student_task/view.html.erb&lt;br /&gt;
*Add a “Revision Planning” link for students to edit their revision plan. The link will lead students to the “Edit Revision Plan” page. If the revision planning feature is enabled, this link will appear disabled at first and only become clickable during each submission period after round 1.&lt;br /&gt;
app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*Add an “Enable Revision Planning?” checkbox for each assignment, because not every assignment needs this feature; the TA/instructor can thus include it flexibly. The “Revision Planning” link will disappear from the assignment page if the checkbox is not checked (a sketch of the checkbox markup follows this list).&lt;br /&gt;
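&lt;br /&gt;
A minimal ERB sketch of the checkbox; the form field name and the attribute spelling are illustrative assumptions.&lt;br /&gt;
    &amp;lt;%# app/views/assignments/edit/_review_strategy.html.erb (sketch) %&amp;gt;&lt;br /&gt;
    &amp;lt;label&amp;gt;&lt;br /&gt;
      &amp;lt;%= check_box_tag 'assignment_form[assignment][is_revision_planning_enabled]',&lt;br /&gt;
                        true, @assignment_form.assignment.is_revision_planning_enabled %&amp;gt;&lt;br /&gt;
      Enable Revision Planning?&lt;br /&gt;
    &amp;lt;/label&amp;gt;&lt;br /&gt;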
&lt;br /&gt;
===Helpers===&lt;br /&gt;
&lt;br /&gt;
app/helpers/grades_helper.rb&lt;br /&gt;
*retrieve_questions: add an extra parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; whose default value is nil. It then invokes the &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model’s &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method with this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; and retrieves the proper question set for each team (see the sketch below).&lt;br /&gt;
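&lt;br /&gt;
A sketch of the extended helper, assuming it collects each round's questions into a hash keyed per questionnaire; the hash layout and the &amp;lt;code&amp;gt;symbol&amp;lt;/code&amp;gt; accessor are assumptions.&lt;br /&gt;
    def retrieve_questions(questionnaires, assignment_id, team_id = nil)&lt;br /&gt;
      # assignment_id is kept for signature compatibility with existing callers&lt;br /&gt;
      questions = {}&lt;br /&gt;
      questionnaires.each do |questionnaire|&lt;br /&gt;
        # passing team_id also pulls in that team's revision plan questions&lt;br /&gt;
        questions[questionnaire.symbol] = questionnaire.questions(team_id)&lt;br /&gt;
      end&lt;br /&gt;
      questions&lt;br /&gt;
    end&lt;br /&gt;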
&lt;br /&gt;
===Schema===&lt;br /&gt;
&lt;br /&gt;
*Assignment table: add one column named &amp;lt;code&amp;gt;is_revision_planning_enabled?&amp;lt;/code&amp;gt; to indicate whether this feature has been activated.&lt;br /&gt;
*Question table: add one column named &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to distinguish whether the question is from the official review rubric or from a particular team's revision plan.&lt;br /&gt;
&lt;br /&gt;
=Testing=&lt;br /&gt;
&lt;br /&gt;
==RSpec Test Plan==&lt;br /&gt;
&lt;br /&gt;
'''Controllers'''&lt;br /&gt;
*spec/controllers/questionnaires_controller_spec.rb&lt;br /&gt;
**Describe ‘#edit_revision_plan’&lt;br /&gt;
***Context ‘when params[:id] is valid’&lt;br /&gt;
***Context ‘when params[:id] is not valid'&lt;br /&gt;
**Describe ‘#update_revision_plan’&lt;br /&gt;
***Context 'when params[:add_new_questions] is not nil'&lt;br /&gt;
***Context 'when params[:view_advice] is not nil'&lt;br /&gt;
***Context 'when both params[:add_new_questions] and params[:view_advice] are nil'&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
**Describe ‘#view’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
**Describe ‘#view_my_scores’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
*spec/controllers/response_controller_spec.rb&lt;br /&gt;
**Fix all failing tests to accommodate the modified code.&lt;br /&gt;
**Add more tests regarding revision planning.&lt;br /&gt;
&lt;br /&gt;
'''Models'''&lt;br /&gt;
*spec/models/question_spec.rb&lt;br /&gt;
**Describe ‘#questions’&lt;br /&gt;
***Context ‘when team_id is supplied’&lt;br /&gt;
***Context ‘when team_id is not supplied’&lt;br /&gt;
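&lt;br /&gt;
A minimal sketch of the model spec, assuming FactoryBot factories for questionnaires, teams, and questions exist (factory and attribute names are assumptions).&lt;br /&gt;
    describe '#questions' do&lt;br /&gt;
      let(:questionnaire) { create(:questionnaire) }&lt;br /&gt;
      let(:team) { create(:assignment_team) }&lt;br /&gt;
      let!(:rubric_q) { create(:question, questionnaire: questionnaire, team_id: nil) }&lt;br /&gt;
      let!(:plan_q) { create(:question, questionnaire: questionnaire, team_id: team.id) }&lt;br /&gt;
      context 'when team_id is supplied' do&lt;br /&gt;
        it 'returns rubric questions plus the team revision plan questions' do&lt;br /&gt;
          expect(questionnaire.questions(team.id)).to include(rubric_q, plan_q)&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
      context 'when team_id is not supplied' do&lt;br /&gt;
        it 'returns only rubric questions' do&lt;br /&gt;
          expect(questionnaire.questions).to eq([rubric_q])&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;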
&lt;br /&gt;
==UI Testing Instructions (For Reviewers)==&lt;br /&gt;
&lt;br /&gt;
'''Setup'''&lt;br /&gt;
&lt;br /&gt;
Login information&lt;br /&gt;
*Visit xxx [expertiza deployment link]&lt;br /&gt;
    User name: instructor6/student8030/student8031&lt;br /&gt;
    Password: password&lt;br /&gt;
Enable/Disable Revision Planning&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Review strategy'''” tab, check the checkbox labeled “'''Enable Revision Planning?'''” to enable the revision planning feature. &lt;br /&gt;
Enable/Disable the “Revision Planning” link&lt;br /&gt;
*After the instructor configures the assignment to include the revision planning feature, the “'''Revision Planning'''” link will appear on the student's assignment page but will remain disabled and only be enabled during each submission period after round 1. Therefore, to enable the link:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the future.&lt;br /&gt;
*To disable the link after round 2 submission period:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the past and change the round 2 review date to any date in the future.&lt;br /&gt;
Create teams for the assignment&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Add participant'''”.&lt;br /&gt;
*On the new page, add two students, student8030 and student8031, so one student can create revision plan questions during the submission period while the other responds to these questions during the review period.&lt;br /&gt;
*Go back to the assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Create teams'''”.&lt;br /&gt;
*On the new page, group the two added students into separate teams.&lt;br /&gt;
&lt;br /&gt;
'''Functionalities'''&lt;br /&gt;
&lt;br /&gt;
Edit a Revision Plan&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link, which redirects the user to a new page used to create a revision plan. Fill in the blanks and click on the “'''Save review questionnaire'''” button, and the revision plan should be saved.&lt;br /&gt;
Test retrieval of revision plan questions for a specific team&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link after steps under '''Edit a Revision Plan''' have been done. The “'''Revision Planning'''” link should redirect the user to the Revision Planning edit page that is populated with previously saved questions.&lt;br /&gt;
Check the Revision Plan questions in the questionnaire.&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Adjust the time frame to the second round review period.&lt;br /&gt;
*Log out and log in again as student8031.&lt;br /&gt;
*In the assignment page, click on the “'''Others’ work'''” link, which takes the user to the review page where one requests a new team’s submission to review. Go to the only other team’s review page and check if the questions are properly displayed under the “Revision Planning” subheader.&lt;br /&gt;
Check responses to the Revision Plan questions&lt;br /&gt;
*Login again as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Alternative View'''” link, and see if student8030 gets responses for both the original rubric questions as well as its revision plan questions.&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133593</id>
		<title>CSC/ECE 517 Spring 2020 E2016 Revision planning tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133593"/>
		<updated>2020-04-14T03:10:58Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* Database Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
==About our team==&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
*Tianji Gao (tgao5@ncsu.edu)&lt;br /&gt;
*Guoyi Wang (gwang25@ncsu.edu)&lt;br /&gt;
*Yulin Zhang (yzhan114@ncsu.edu)&lt;br /&gt;
*Boxuan Zhong (bzhong2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
Project mentor: Edward Gehringer (efg@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
==What is Revision Planning?==&lt;br /&gt;
&lt;br /&gt;
In the first round of Expertiza reviews, reviewers are asked to give authors some guidance on how to improve their work. Then in the second round, reviewers rate how well authors have followed their suggestions. Revision planning is a mechanism that carries this interaction one step further by having authors supply a revision plan based on the previous round's reviews. That is, the authors derive a plan for improving their code from the previous round's reviews, and reviewers later assess how well they carried it out. &lt;br /&gt;
&lt;br /&gt;
Revision planning is helpful because it makes the author think about what's necessary to improve the work before putting forth the effort to improve it.  This leads to a more reflective work process and is likely to produce a better-finished product.  When reviewers have an opportunity to give feedback to the author, they too will learn what a good revision plan looks like.&lt;br /&gt;
&lt;br /&gt;
According to the given instructions, a revision plan consists of a description of the plan, followed by any number of questions that would later be appended to the future review questionnaire. The revision plan is per AssignmentTeam, which means a team's questions are used only to evaluate that team's submission and no one else's. Adding the revision planning functionality also helps researchers study the effect of reviewers' suggestions on code improvement.&lt;br /&gt;
&lt;br /&gt;
==Previous Implementation==&lt;br /&gt;
&lt;br /&gt;
This functionality was previously implemented by a team of students in the Fall 2018 semester. Their implementation was merged into the master branch but was reverted due to the following design concerns:&lt;br /&gt;
*The relationship between &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;SubmissionRecord&amp;lt;/code&amp;gt; is unclear.&lt;br /&gt;
*It uses a lot of '''special-purpose''' code where existing code could fulfill the same job.&lt;br /&gt;
*Revision planning cannot be enabled or disabled per assignment.&lt;br /&gt;
*Numeric labels for the revision plan questions restart from 1, instead of continuing after the original rubric questions.&lt;br /&gt;
*The codebase contains commented-out code that is no longer wanted.&lt;br /&gt;
Check out the wiki page and the pull request on GitHub if you would like to learn more about the previous implementation of this project.&lt;br /&gt;
*http://wiki.expertiza.ncsu.edu/index.php/E1875_Revision_Planning_Tool&lt;br /&gt;
*https://github.com/expertiza/expertiza/pull/1302&lt;br /&gt;
Please note that unlike the other projects we have reviewed, this project is a complete redo rather than a modification built upon the previous team’s code, because our approach to the problem differs from theirs. Therefore, we will not mention the previous implementation in the remaining content.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
For this project, we identified 4 major work items that together fulfill the stated requirements.&lt;br /&gt;
&lt;br /&gt;
'''Sort out the relationship among classes and introduce the new abstraction of the revision plan to the system in a way that it doesn’t interfere with the majority of codes'''&lt;br /&gt;
&lt;br /&gt;
We decided to relate each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to a team_id. A &amp;lt;code&amp;gt;ReviewQuestionnaire&amp;lt;/code&amp;gt; can then hold questions both with and without a &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A question with no team_id does not belong to any assignment team, so it is a question set up by the instructor. A question with a team_id, in contrast, belongs to a particular team, so it is a revision plan question. Both types of questions are saved under the same questionnaire used for a given round. In this way, we maximize the reuse of existing code, and the only major change should be contained within the &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; class.&lt;br /&gt;
&lt;br /&gt;
'''Modify the existing views and controllers to accommodate the new functionality which includes'''&lt;br /&gt;
*Allowing teaching staff to enable/disable revision planning for an assignment.&lt;br /&gt;
*Allowing team members to create/edit their revision plan during each submission period after the first round.&lt;br /&gt;
*Allowing both rubric questions and revision plan questions to appear on the same page and be serialized correctly.&lt;br /&gt;
*Allowing feedback on the revision plan only to be viewed by the team that creates the plan and that team's instructor.&lt;br /&gt;
&lt;br /&gt;
This will involve some minor changes such as appending some method signatures with an optional trailing parameter, adding interactive elements to the views, and slightly adjusting the structure of certain view templates.&lt;br /&gt;
&lt;br /&gt;
In addition, we planned to:&lt;br /&gt;
*Provide an adequate amount of tests to improve code coverage.&lt;br /&gt;
*Do necessary refactoring and resolve any CodeClimate issues.&lt;br /&gt;
&lt;br /&gt;
After communicating with our mentor, Dr. Gehringer, we clarified the following two problem statements.&lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the second-round questionnaire.'''&lt;br /&gt;
&lt;br /&gt;
This means both questions from the team’s revision plan and questions from the review rubric should be displayed together in the frontend. Since we decided to add revision plan questions to the review rubric of the round, we automatically linked every new question to the questionnaire of that round. &lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the author’s submission (this will probably involve a DB migration)'''&lt;br /&gt;
&lt;br /&gt;
Linking every new question to the author’s submission means that the relationship between a team and its revision plan questions must be represented in the database. We addressed this by associating them through a team_id field. See the Database Design section for more details.&lt;br /&gt;
&lt;br /&gt;
=Design=&lt;br /&gt;
&lt;br /&gt;
==Control Flow Diagram==&lt;br /&gt;
&lt;br /&gt;
The image below shows the control flow of the revision planning functionality.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_control_flow_diagram.png]]&lt;br /&gt;
&lt;br /&gt;
The diagram involves 3 types of actors: the student (reviewee), the student (reviewer), and the instructor/TA, who manages the assignment and review processes. To understand each actor’s responsibilities, trace the colored line originating from that actor in the direction of the arrows. A diamond shape represents a decision or precondition; that is, only after the condition is met can the next action proceed.&lt;br /&gt;
&lt;br /&gt;
Summary of actions&lt;br /&gt;
*A TA/Instructor can&lt;br /&gt;
*#Enable revision planning&lt;br /&gt;
*#Impersonate students to perform their responsibilities&lt;br /&gt;
*#View the feedback reports of all teams&lt;br /&gt;
*A student (reviewee) can&lt;br /&gt;
*#Make revisions during the second-round submission period, which includes reading first-round feedback and adding revision plan questions according to that feedback.&lt;br /&gt;
*#View the feedback report of the team they belong to&lt;br /&gt;
*A student (reviewer) can&lt;br /&gt;
*#Give feedback on the team’s revised work by answering each question (including the team's revision plan questions) that appears on the review page.&lt;br /&gt;
*#View the feedback they wrote for the team.&lt;br /&gt;
&lt;br /&gt;
==UI Design==&lt;br /&gt;
&lt;br /&gt;
A revision plan should be similar to other review questionnaires. Since the functionality of review questionnaires is already mature, we aimed to minimize interface changes by reusing the existing view templates whenever possible. The following subsections list the changes we planned to make.&lt;br /&gt;
&lt;br /&gt;
===Enabling revision planning===&lt;br /&gt;
&lt;br /&gt;
Implementing enabling/disabling of revision planning for each assignment is rather straightforward. We planned to add an additional checkbox under the &amp;quot;Review strategy&amp;quot; tab of the assignment’s edit page. The checkbox is labeled &amp;quot;Enable Revision Planning?&amp;quot; and indicates whether the instructor wants to include this functionality in the newly created assignment. This is the most reasonable place for the checkbox because it is review-related, and other similar functionalities such as Self Reviews are implemented in the same manner.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_enabling_revision_planning.png]]&lt;br /&gt;
&lt;br /&gt;
===Link to the revision planning page===&lt;br /&gt;
&lt;br /&gt;
If the instructor decides to include revision planning in an assignment, the “Revision Planning” link will appear on the student’s assignment page but stays disabled during the first round. After that, it becomes clickable during every submission period and is greyed out again during every review period. Clicking it redirects students to a new page described in the ‘Revision planning page’ subsection.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_link_to_the_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Revision planning page===&lt;br /&gt;
&lt;br /&gt;
The revision plan is just like other questionnaires in that it contains a set of questions for reviewers to answer. The only difference is that the revision plan comes with an additional description to help reviewers understand what changes have been made so far. &lt;br /&gt;
&lt;br /&gt;
Therefore, it should reuse most existing view templates and controller code with minimal changes. As the image below shows, the only modification to the existing questionnaire creation template is a link that redirects students to the submission page, where uploading the revision plan is handled by the existing implementation. The advantage of uploading an external link rather than typing everything into the textbox element is that the description can be well formatted when displayed outside the form, without distracting reviewers during the review. We also decided to leave out (or hide) the controls where instructors set configuration options such as the score range and the questionnaire's visibility. These configurations should use the default values defined in the system rather than having students come up with their own.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Review page===&lt;br /&gt;
&lt;br /&gt;
The format of the review page remains almost exactly the same. To distinguish between rubric questions set up by the instructor and the revision plan questions created by the team under review, all the revision plan questions are placed after the rubric questions, split by an enlarged “Revision Planning” subheader. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page_2.png]]&lt;br /&gt;
&lt;br /&gt;
===Feedback report===&lt;br /&gt;
&lt;br /&gt;
Teaching staff and students have different windows to access the feedback report. &lt;br /&gt;
*'''Teaching staff''': Manage-&amp;gt;Assignments-&amp;gt;Edit Assignment-&amp;gt;Other stuff-&amp;gt;View scores&lt;br /&gt;
*'''Students''': Assignments-&amp;gt;View Assignment-&amp;gt;Alternative View&lt;br /&gt;
In addition, both the instructor and the TA can impersonate students to access the feedback report from the students' view. We consider both cases and illustrate each of them separately.&lt;br /&gt;
&lt;br /&gt;
====TA/Instructor====&lt;br /&gt;
&lt;br /&gt;
Scores for the second-round review rubric and the author’s revision plan questions will be displayed in the same table and serialized correctly. See the figure below for an example: say the second-round rubric has only 5 questions; then the remaining questions (6-10) are revision planning questions written by a particular team.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_instructor.png]]&lt;br /&gt;
&lt;br /&gt;
For reviewers: if you reviewed our first draft design, you may have noticed that we originally chose to place revision plan scores in a separate table. After our mentor clarified the requirements, we realized that an assignment can have more than 2 rounds of submission and review periods, so scores for revision planning questions can vary from round to round. Our previous solution therefore will not work, since it leaves the user unsure which round the revision plan scores refer to. &lt;br /&gt;
&lt;br /&gt;
====Student====&lt;br /&gt;
&lt;br /&gt;
The revision planning section will be added to the students’ view as shown in the snapshot below. It displays the questions in the same order as the review page does. A “Revision Planning” subheader is also used here to indicate the start of the revision planning section.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_student.png]]&lt;br /&gt;
&lt;br /&gt;
==Database Design==&lt;br /&gt;
&lt;br /&gt;
Here we present the diagram of our database design. As the yellow borders show, we only plan to modify the structure of the Question table and the Assignment table. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_database_design.png]]&lt;br /&gt;
&lt;br /&gt;
In the Assignment table, the column ''is_revision_planning_enabled?'' will be needed to indicate whether the instructor would like to incorporate the revision planning feature. &lt;br /&gt;
&lt;br /&gt;
Additionally, we add &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to indicate which team each question belongs to. A &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with an empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; value is a question under the original rubric, while an object with a non-empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; field is a question created by the team associated with that &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. That is, instead of creating a whole new &amp;lt;code&amp;gt;RevisionPlanQuestionnaire&amp;lt;/code&amp;gt; class, we decided to store all the revision plan questions created in a given round under the rubric used for that round. In this way, we minimize changes to the system and make the original rubric questions and the revision planning questions easier to retrieve together.&lt;br /&gt;
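&lt;br /&gt;
For concreteness, here is a minimal sketch of what such a migration for the Question table might look like; the migration version and the index are our assumptions rather than final code.&lt;br /&gt;
    class AddTeamIdToQuestions &amp;lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
      def change&lt;br /&gt;
        # nil team_id: original rubric question; non-nil: a team's revision plan question&lt;br /&gt;
        add_column :questions, :team_id, :integer, default: nil&lt;br /&gt;
        add_index :questions, :team_id&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;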
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
&lt;br /&gt;
app/controllers/questionnaires_controller.rb&lt;br /&gt;
*edit_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that prepares the view template and supplies the revision planning questions that belong only to the current team&lt;br /&gt;
*update_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that saves revision plan questions under the current round's rubric&lt;br /&gt;
*Some refactoring is required so that the two new methods described above can share existing functionality (see the sketch after this list)&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;@questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;@questionnaire.questions(@map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
app/controllers/grades_controller.rb&lt;br /&gt;
*Call the retrieve_questions method every time with an extra parameter “&amp;lt;code&amp;gt;@team_id&amp;lt;/code&amp;gt;”.&lt;br /&gt;
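&lt;br /&gt;
Below is a minimal sketch of the two new actions. The parameter names, the lookup logic, and the default question type are illustrative assumptions, not final code.&lt;br /&gt;
    def edit_revision_plan&lt;br /&gt;
      @questionnaire = Questionnaire.find(params[:id])&lt;br /&gt;
      # only the current team's revision plan questions are editable here&lt;br /&gt;
      @questions = Question.where(questionnaire_id: @questionnaire.id, team_id: params[:team_id])&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    def update_revision_plan&lt;br /&gt;
      # save each newly submitted question under the current round's rubric,&lt;br /&gt;
      # tagged with the submitting team's id&lt;br /&gt;
      (params[:new_question] || {}).each_pair do |seq, txt|&lt;br /&gt;
        Question.create(questionnaire_id: params[:id], team_id: params[:team_id],&lt;br /&gt;
                        seq: seq.to_f, txt: txt, type: 'Criterion')&lt;br /&gt;
      end&lt;br /&gt;
      redirect_to action: 'edit_revision_plan', id: params[:id], team_id: params[:team_id]&lt;br /&gt;
    end&lt;br /&gt;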
&lt;br /&gt;
===Models ===&lt;br /&gt;
&lt;br /&gt;
app/models/question.rb&lt;br /&gt;
*Form association relationship with &amp;lt;code&amp;gt;Team&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;belongs_to :team, class_name: 'AssignmentTeam', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
*questions: change the method signature to &amp;lt;code&amp;gt;questions(team_id=nil)&amp;lt;/code&amp;gt;, which uses nil as the default value of the &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; parameter, so callers can choose whether to supply a team_id argument. When team_id is supplied, it returns both questions with no team_id and questions with this team_id. In addition, it adds to the returned list an unsaved “Revision Planning” QuestionnaireHeader so the list displays nicely in the browser with each section separated (see the sketch after this list).&lt;br /&gt;
app/models/assignment_team.rb&lt;br /&gt;
*Form aggregation relationship with &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;has_many :revision_plan_questions, class_name: 'Question', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
app/models/response.rb&lt;br /&gt;
*Replace all occurrences of &amp;lt;code&amp;gt;questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;questionnaire.questions(self.response_map.reviewee_id)&amp;lt;/code&amp;gt; so that it retrieves not only the questions from the original rubric but also those from the revision plan proposed by the team identified by &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
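&lt;br /&gt;
Below is a minimal, hypothetical sketch of the model changes above. It assumes that the &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; override lives on the &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model (as the &amp;lt;code&amp;gt;@questionnaire.questions(...)&amp;lt;/code&amp;gt; call sites suggest), that questions carry &amp;lt;code&amp;gt;questionnaire_id&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;seq&amp;lt;/code&amp;gt; attributes, and that a &amp;lt;code&amp;gt;QuestionnaireHeader&amp;lt;/code&amp;gt; with a &amp;lt;code&amp;gt;txt&amp;lt;/code&amp;gt; attribute can be instantiated without saving. It is a sketch, not the final implementation.&lt;br /&gt;
    # app/models/question.rb (sketch)&lt;br /&gt;
    class Question &lt; ActiveRecord::Base&lt;br /&gt;
      # nil team_id = rubric question; non-nil = revision plan question&lt;br /&gt;
      belongs_to :team, class_name: 'AssignmentTeam', foreign_key: 'team_id'&lt;br /&gt;
    end&lt;br /&gt;
    &lt;br /&gt;
    # app/models/assignment_team.rb (sketch)&lt;br /&gt;
    class AssignmentTeam &lt; Team&lt;br /&gt;
      has_many :revision_plan_questions, class_name: 'Question', foreign_key: 'team_id'&lt;br /&gt;
    end&lt;br /&gt;
    &lt;br /&gt;
    # app/models/questionnaire.rb (sketch)&lt;br /&gt;
    class Questionnaire &lt; ActiveRecord::Base&lt;br /&gt;
      # Returns the rubric questions; when team_id is given, also appends an&lt;br /&gt;
      # unsaved "Revision Planning" header followed by that team's questions.&lt;br /&gt;
      def questions(team_id = nil)&lt;br /&gt;
        rubric = Question.where(questionnaire_id: id, team_id: nil).order(:seq).to_a&lt;br /&gt;
        return rubric if team_id.nil?&lt;br /&gt;
        plan = Question.where(questionnaire_id: id, team_id: team_id).order(:seq).to_a&lt;br /&gt;
        rubric + [QuestionnaireHeader.new(txt: 'Revision Planning')] + plan&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;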
&lt;br /&gt;
===Views===&lt;br /&gt;
&lt;br /&gt;
app/views/questionnaires/edit_revision_plan.html.erb &lt;br /&gt;
*This is a newly added view file for students to create and edit their revision plan. It reuses some existing code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to reduce redundancy.&lt;br /&gt;
app/views/questionnaires/_questions.html.erb&lt;br /&gt;
*Extract code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file into a standalone partial template that is later loaded by the ''app/views/questionnaires/edit_revision_plan.html.erb'' view file described above.&lt;br /&gt;
app/views/student_task/view.html.erb&lt;br /&gt;
*Add a “Revision Planning” link for students to edit their revision plan. The link will lead students to the “Edit Revision Plan” page. If the revision planning feature is enabled, this link will appear disabled at first and only become clickable during each submission period after round 1.&lt;br /&gt;
app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*Add an “Enable Revision Planning?” checkbox for each assignment, because not every assignment needs this feature; this gives the TA/instructor the flexibility to include it per assignment. The “Revision Planning” link will disappear from the assignment page if the checkbox is not checked.&lt;br /&gt;
&lt;br /&gt;
===Helpers===&lt;br /&gt;
&lt;br /&gt;
app/helpers/grades_helper.rb&lt;br /&gt;
*retrieve_questions: add an extra parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; whose default value is nil. It then invokes the &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model’s &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method with this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; and retrieves the proper question set for each team (see the sketch below).&lt;br /&gt;
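&lt;br /&gt;
A hypothetical sketch of the helper change follows; the surrounding parameter list and return shape of &amp;lt;code&amp;gt;retrieve_questions&amp;lt;/code&amp;gt; are assumptions here, and only the trailing &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; parameter is the change being described.&lt;br /&gt;
    # app/helpers/grades_helper.rb (sketch; existing parameters are assumed)&lt;br /&gt;
    def retrieve_questions(questionnaires, team_id = nil)&lt;br /&gt;
      questionnaires.each_with_object({}) do |questionnaire, questions|&lt;br /&gt;
        # team_id defaults to nil, so existing callers keep their behavior;&lt;br /&gt;
        # callers that pass a team_id also get that team's revision plan questions.&lt;br /&gt;
        questions[questionnaire.id] = questionnaire.questions(team_id)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;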
&lt;br /&gt;
===Schema===&lt;br /&gt;
&lt;br /&gt;
*Assignment table: add one column named &amp;lt;code&amp;gt;is_revision_planning_enabled?&amp;lt;/code&amp;gt; to indicate whether this feature has been activated.&lt;br /&gt;
*Question table: add one column named &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to distinguish whether the question is from the official review rubric or from a particular team's revision plan (a migration sketch follows).&lt;br /&gt;
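&lt;br /&gt;
A minimal migration sketch for these two schema changes is shown below; the column types and migration version are assumptions. Note that Rails reserves a trailing &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; for generated predicate methods, so the stored column would typically omit it.&lt;br /&gt;
    # db/migrate/20200401000000_add_revision_planning_columns.rb (sketch)&lt;br /&gt;
    class AddRevisionPlanningColumns &lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
      def change&lt;br /&gt;
        # Flag on assignments: has the instructor enabled revision planning?&lt;br /&gt;
        add_column :assignments, :is_revision_planning_enabled, :boolean, default: false&lt;br /&gt;
        # Nullable owner on questions: nil means a rubric question; otherwise&lt;br /&gt;
        # it is the AssignmentTeam whose revision plan the question belongs to.&lt;br /&gt;
        add_column :questions, :team_id, :integer&lt;br /&gt;
        add_index :questions, :team_id&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;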
&lt;br /&gt;
=Testing=&lt;br /&gt;
&lt;br /&gt;
==RSpec Test Plan==&lt;br /&gt;
&lt;br /&gt;
'''Controllers'''&lt;br /&gt;
*spec/controllers/questionnaires_controller_spec.rb&lt;br /&gt;
**Describe ‘#edit_revision_plan’&lt;br /&gt;
***Context ‘when params[:id] is valid’&lt;br /&gt;
***Context ‘when params[:id] is not valid'&lt;br /&gt;
**Describe ‘#update_revision_plan’&lt;br /&gt;
***Context 'when params[:add_new_questions] is not nil'&lt;br /&gt;
***Context 'when params[:view_advice] is not nil'&lt;br /&gt;
***Context 'when both params[:add_new_questions] and params[:view_advice] are nil'&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
**Describe ‘#view’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
**Describe ‘#view_my_scores’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
*spec/controllers/response_controller_spec.rb&lt;br /&gt;
**Fix all failing tests to accommodate the modified code.&lt;br /&gt;
**Add more tests regarding revision planning.&lt;br /&gt;
&lt;br /&gt;
'''Models'''&lt;br /&gt;
*spec/models/question_spec.rb (see the outline sketch after this list)&lt;br /&gt;
**Describe ‘#questions’&lt;br /&gt;
***Context ‘when team_id is supplied’&lt;br /&gt;
***Context ‘when team_id is not supplied’&lt;br /&gt;
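&lt;br /&gt;
Below is a hypothetical outline of the model spec for the two &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; contexts above; the factory names and attributes are assumptions.&lt;br /&gt;
    # spec/models/question_spec.rb (hypothetical outline; factories are assumed)&lt;br /&gt;
    describe '#questions' do&lt;br /&gt;
      let(:questionnaire) { create(:questionnaire) }&lt;br /&gt;
      let(:team) { create(:assignment_team) }&lt;br /&gt;
      let!(:rubric_q) { create(:question, questionnaire: questionnaire, team_id: nil) }&lt;br /&gt;
      let!(:plan_q) { create(:question, questionnaire: questionnaire, team_id: team.id) }&lt;br /&gt;
    &lt;br /&gt;
      context 'when team_id is supplied' do&lt;br /&gt;
        it "returns the rubric questions plus that team's revision plan questions" do&lt;br /&gt;
          expect(questionnaire.questions(team.id)).to include(rubric_q, plan_q)&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    &lt;br /&gt;
      context 'when team_id is not supplied' do&lt;br /&gt;
        it 'returns only the rubric questions' do&lt;br /&gt;
          expect(questionnaire.questions).to eq([rubric_q])&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;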
&lt;br /&gt;
==UI Testing Instructions (For Reviewers)==&lt;br /&gt;
&lt;br /&gt;
'''Setup'''&lt;br /&gt;
&lt;br /&gt;
Login information&lt;br /&gt;
*Visit xxx [expertiza deployment link]&lt;br /&gt;
    User name: instructor6/student8030/student8031&lt;br /&gt;
    Password: password&lt;br /&gt;
Enable/Disable Revision Planning&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Review strategy'''” tab, check the checkbox labeled “'''Enable Revision Planning?'''” to enable the revision planning feature. &lt;br /&gt;
Enable/Disable the “Revision Planning” link&lt;br /&gt;
*After the instructor configures the assignment to include the revision planning feature, the “'''Revision Planning'''” link will appear on the student's assignment page but will remain disabled and only be enabled during each submission period after round 1. Therefore, to enable the link:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the future.&lt;br /&gt;
*To disable the link after round 2 submission period:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to a date in the past and change the round 2 review date to a date in the future.&lt;br /&gt;
Create teams for the assignment&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Add participant'''”.&lt;br /&gt;
*In the new page, add two students, student8030 and student8031, so one student can create revision plan questions during the submission period while the other can respond to these questions during the review period.&lt;br /&gt;
*Go back to the assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Create teams'''”.&lt;br /&gt;
*In the new page, assign the two added students to separate teams.&lt;br /&gt;
&lt;br /&gt;
'''Functionalities'''&lt;br /&gt;
&lt;br /&gt;
Edit a Revision Plan&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*On the assignment page, click on the “'''Revision Planning'''” link, which redirects the user to a new page used to create a revision plan. Fill in the blanks and click on the “'''Save review questionnaire'''” button, and the revision plan should be saved.&lt;br /&gt;
Test retrieval of revision plan questions for a specific team&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*On the assignment page, click on the “'''Revision Planning'''” link after the steps under '''Edit a Revision Plan''' have been completed. The link should redirect the user to the Revision Planning edit page, populated with the previously saved questions.&lt;br /&gt;
Check the Revision Plan questions in the questionnaire&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Adjust the time frame to the second round review period.&lt;br /&gt;
*Log out and log in again as student8031.&lt;br /&gt;
*On the assignment page, click on the “'''Others’ work'''” link, which takes the user to the review page where one requests a new team’s submission to review. Go to the only other team’s review page and check whether the questions are properly displayed under the “Revision Planning” subheader.&lt;br /&gt;
Check responses to the Revision Plan questions&lt;br /&gt;
*Login again as student8030.&lt;br /&gt;
*On the assignment page, click on the “'''Alternative View'''” link, and see whether student8030 gets responses both for the original rubric questions and for the team’s revision plan questions.&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133590</id>
		<title>CSC/ECE 517 Spring 2020 E2016 Revision planning tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133590"/>
		<updated>2020-04-14T03:08:52Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* RSpec Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
==About our team==&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
*Tianji Gao (tgao5@ncsu.edu)&lt;br /&gt;
*Guoyi Wang (gwang25@ncsu.edu)&lt;br /&gt;
*Yulin Zhang (yzhan114@ncsu.edu)&lt;br /&gt;
*Boxuan Zhong (bzhong2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
Project mentor: Edward Gehringer (efg@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
==What is Revision Planning?==&lt;br /&gt;
&lt;br /&gt;
In the first round of Expertiza reviews, reviewers are asked to give authors some guidance on how to improve their work. Then in the second round, reviewers rate how well the authors have followed their suggestions. Revision planning is a mechanism that carries the interaction one step further by having authors supply a revision plan based on the previous round’s reviews. That is, the authors derive their plan for code improvement from the previous round’s reviews, and reviewers later assess how well they carried it out. &lt;br /&gt;
&lt;br /&gt;
Revision planning is helpful because it makes the author think about what's necessary to improve the work before putting forth the effort to improve it.  This leads to a more reflective work process and is likely to produce a better-finished product.  When reviewers have an opportunity to give feedback to the author, they too will learn what a good revision plan looks like.&lt;br /&gt;
&lt;br /&gt;
According to the given instructions, a revision plan consists of a description of the plan, followed by any number of questions that are later appended to the future review questionnaire. The revision plan is per-AssignmentTeam, which means the authors’ questions are used only to evaluate their own submission and no one else’s. Adding revision planning also helps researchers study the effect of reviewers’ suggestions on code improvement.&lt;br /&gt;
&lt;br /&gt;
==Previous Implementation==&lt;br /&gt;
&lt;br /&gt;
This functionality was previously implemented by a team of students in the Fall 2018 semester. Their implementation was merged into the master branch but was reverted due to the following design concerns:&lt;br /&gt;
*The relationship between &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;SubmissionRecord&amp;lt;/code&amp;gt; is unclear.&lt;br /&gt;
*It uses a lot of '''special-purpose''' code where existing code could do the same job.&lt;br /&gt;
*Revision planning cannot be enabled or disabled for an assignment.&lt;br /&gt;
*Numeric labels for the revision plan questions restart from 1 instead of continuing after the original rubric questions.&lt;br /&gt;
*The codebase contains commented-out code that is no longer wanted.&lt;br /&gt;
Check out the wiki page and the pull request on GitHub if you would like to learn more about the previous implementation of this project.&lt;br /&gt;
*http://wiki.expertiza.ncsu.edu/index.php/E1875_Revision_Planning_Tool&lt;br /&gt;
*https://github.com/expertiza/expertiza/pull/1302&lt;br /&gt;
Please note that, unlike the other projects we have reviewed, this project is a complete redo rather than a modification built upon the previous team’s code, because our approach to the problem differs from theirs. Therefore, we will not refer to the previous implementation in the remaining content.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
For this project, we identified 4 major work items that together fulfill the stated requirements.&lt;br /&gt;
&lt;br /&gt;
'''Sort out the relationships among classes and introduce the new abstraction of the revision plan to the system in a way that does not interfere with the majority of the code'''&lt;br /&gt;
&lt;br /&gt;
We decided to relate each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to a team via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A &amp;lt;code&amp;gt;ReviewQuestionnaire&amp;lt;/code&amp;gt; will hold questions both with and without a &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A question with no team_id does not belong to any assignment team, so it is a question set up by the instructor. A question with a team_id, in contrast, belongs to a particular team, so it is a revision plan question. Both types of questions are saved under the same questionnaire used for a given round. In this way, we maximize the use of existing code, and the only major change should be contained within the &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; class.&lt;br /&gt;
&lt;br /&gt;
'''Modify the existing views and controllers to accommodate the new functionality which includes'''&lt;br /&gt;
*Allowing teaching staff to enable/disable revision planning for an assignment.&lt;br /&gt;
*Allowing team members to create/edit their revision plan during each submission period after the first round.&lt;br /&gt;
*Allowing both rubric questions and revision plan questions to appear on the same page and be numbered consecutively.&lt;br /&gt;
*Allowing feedback on the revision plan to be viewed only by the team that created the plan and that team's instructor.&lt;br /&gt;
&lt;br /&gt;
This will involve some minor changes such as appending some method signatures with an optional trailing parameter, adding interactive elements to the views, and slightly adjusting the structure of certain view templates.&lt;br /&gt;
&lt;br /&gt;
In addition, we planned to:&lt;br /&gt;
*Provide an adequate amount of tests to improve code coverage.&lt;br /&gt;
*Do necessary refactoring and resolve any CodeClimate issues.&lt;br /&gt;
&lt;br /&gt;
After communicating with our mentor, Dr. Gehringer, we clarified the following two problem statements.&lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the second-round questionnaire.'''&lt;br /&gt;
&lt;br /&gt;
This means both questions from the team’s revision plan and questions from the review rubric should be displayed together in the frontend. Since we decided to add revision plan questions to the review rubric of the round, we automatically linked every new question to the questionnaire of that round. &lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the author’s submission (this will probably involve a DB migration)'''&lt;br /&gt;
&lt;br /&gt;
Saying that every new question must be linked to the author’s submission means that the relationship between a team and the team’s revision plan questions must be represented in the database. We addressed this by associating them through a team_id field. See the Database Design section for more details.&lt;br /&gt;
&lt;br /&gt;
=Design=&lt;br /&gt;
&lt;br /&gt;
==Control Flow Diagram==&lt;br /&gt;
&lt;br /&gt;
The below image shows the control flow of the revision planning functionality.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_control_flow_diagram.png]]&lt;br /&gt;
&lt;br /&gt;
The diagram involves three types of actors: the student (reviewee), the student (reviewer), and the instructor/TA who manages the assignment and review processes. To understand each actor’s responsibility, trace the colored line originating from that actor in the direction of the arrows. A diamond represents a decision or precondition; only after the condition is met can the next action proceed.&lt;br /&gt;
&lt;br /&gt;
Summary of actions&lt;br /&gt;
*A TA/Instructor can&lt;br /&gt;
*#Enable revision planning&lt;br /&gt;
*#Impersonate students to perform their responsibilities&lt;br /&gt;
*#View the feedback report of all teams&lt;br /&gt;
*A student (reviewee) can&lt;br /&gt;
*#Make revisions during the second-round submission period, which includes reading first-round feedback and adding revision plan questions according to that feedback.&lt;br /&gt;
*#View the feedback report of the team they belong to&lt;br /&gt;
*A student (reviewer) can&lt;br /&gt;
*#Give feedback on the team’s revised work by answering each question (including the team's revision plan questions) that appears on the review page.&lt;br /&gt;
*#View the feedback they wrote to the team.&lt;br /&gt;
&lt;br /&gt;
==UI Design==&lt;br /&gt;
&lt;br /&gt;
A revision plan should be similar to other review questionnaires. Since the review questionnaire functionality is mature, we aimed to make the fewest interface changes possible by reusing the existing view templates wherever we could. The subsections below list the changes we planned to make.&lt;br /&gt;
&lt;br /&gt;
===Enabling revision planning===&lt;br /&gt;
&lt;br /&gt;
Implementing the enabling/disabling of revision planning for each assignment is rather straightforward. We planned to add an additional checkbox under the &amp;quot;Review strategy&amp;quot; tab of the assignment’s edit page. This checkbox is labeled &amp;quot;Enable Revision Planning?&amp;quot; to indicate whether the instructor wants to include this functionality in the newly created assignment. It is most reasonable to place the checkbox here because it is review-related, and other similar functionality, such as Self Review, is implemented in the same manner.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_enabling_revision_planning.png]]&lt;br /&gt;
&lt;br /&gt;
===Link to the revision planning page===&lt;br /&gt;
&lt;br /&gt;
If the instructor decided to include revision planning in this assignment, then the “Revision Planning” link would appear on the student’s assignment page but would stay disabled during the first round. After that, it would become clickable during every submission period and greyed out again during every review period. By clicking it, students are redirected to a new page explained under the ‘Revision planning page’ subsection.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_link_to_the_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Revision planning page===&lt;br /&gt;
&lt;br /&gt;
The revision plan is just like other questionnaires in that it contains a set of questions for reviewers to answer. The only difference is that the revision plan comes with an additional description to help reviewers understand what changes have been made so far. &lt;br /&gt;
&lt;br /&gt;
Therefore, it should reuse most existing view templates and controller code with minimal changes. As the image shows, the only modification to the existing questionnaire creation template is a link that redirects students to the submission page, where uploading of the revision plan is handled by the existing implementation. The advantage of uploading an external link rather than typing everything into the textbox element is that the description can be well formatted when displayed outside the form, without distracting reviewers. We also decided to leave out (or hide) the controls where instructors set configuration such as the range of scores and the questionnaire's visibility; these settings should use the default values defined in the system rather than having students come up with their own.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Review page===&lt;br /&gt;
&lt;br /&gt;
The format of the review page remains almost exactly the same. To distinguish between rubric questions set up by the instructor and the revision plan questions created by the team under review, all the revision plan questions are placed after the rubric questions, split by an enlarged “Revision Planning” subheader. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page_2.png]]&lt;br /&gt;
&lt;br /&gt;
===Feedback report===&lt;br /&gt;
&lt;br /&gt;
Teaching staff and students have different windows to access the feedback report. &lt;br /&gt;
*'''Teaching staff''': Manage-&amp;gt;Assignments-&amp;gt;Edit Assignment-&amp;gt;Other stuff-&amp;gt;View scores&lt;br /&gt;
*'''Students''': Assignments-&amp;gt;View Assignment-&amp;gt;Alternative View&lt;br /&gt;
In addition, either the instructor or a TA can impersonate students to access the feedback report from the students’ view. We consider both cases and illustrate each of them separately.&lt;br /&gt;
&lt;br /&gt;
====TA/Instructor====&lt;br /&gt;
&lt;br /&gt;
Scores for the second-round review rubric and the author’s revision plan questions are displayed in the same table and numbered consecutively. See the figure below for an example: say the second-round rubric has only 5 questions; then the remaining questions (6-10) are revision planning questions written by a particular team.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_instructor.png]]&lt;br /&gt;
&lt;br /&gt;
For reviewers: if you reviewed our first design draft, you may have noticed that we originally chose to place revision plan scores in a separate table. After our mentor’s clarification, we realized that there can be more than two rounds of submission and review for a given assignment, so scores for revision planning questions can vary round by round. Our previous solution therefore would not work, since it leaves the user unsure which round the revision plan scores refer to. &lt;br /&gt;
&lt;br /&gt;
====Student====&lt;br /&gt;
&lt;br /&gt;
The revision planning section is added to the students’ view as shown in the snapshot below. It displays the questions in the same order as the review page does. A “Revision Planning” subheader is also used here to mark the start of the revision planning section.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_student.png]]&lt;br /&gt;
&lt;br /&gt;
==Database Design==&lt;br /&gt;
&lt;br /&gt;
Here we present the diagram of our database design. As the yellow borders show, we only plan to modify the structure of the Question table and the Assignment table. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_database_design.png]]&lt;br /&gt;
&lt;br /&gt;
In the Assignment table, the new column ''is_revision_planning_enabled?'' indicates whether the instructor has chosen to enable the revision-planning feature for the assignment. &lt;br /&gt;
&lt;br /&gt;
Additionally, we add &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to distinguish whether the question is on the original rubric or was added by students as part of their revision plan. A &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with an empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; value is a question from the original rubric, while an object with a non-empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; field is regarded as a question created by the team that the &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; refers to. That is, instead of creating a whole new &amp;lt;code&amp;gt;RevisionPlanQuestionnaire&amp;lt;/code&amp;gt; class, we decided to store all the revision plan questions created in a given round under the rubric used for that round. In this way, we minimize changes to the system and make it easier to retrieve the original rubric questions and the revision planning questions together.&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
&lt;br /&gt;
app/controllers/questionnaires_controller.rb&lt;br /&gt;
*edit_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that prepares the view template and supplies only the revision planning questions that belong to the current team&lt;br /&gt;
*update_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that saves revision plan questions under the current round’s rubric&lt;br /&gt;
*Some refactoring is required so that existing functionality can be shared with the two new methods described above&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Replace all occurrences of &amp;lt;code&amp;gt;@questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;@questionnaire.questions(@map.reviewee_id)&amp;lt;/code&amp;gt; so that it retrieves not only the questions from the original rubric but also those from the revision plan proposed by the team identified by &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
app/controllers/grades_controller.rb&lt;br /&gt;
*Call the &amp;lt;code&amp;gt;retrieve_questions&amp;lt;/code&amp;gt; method with an extra argument, &amp;lt;code&amp;gt;@team_id&amp;lt;/code&amp;gt;, on every invocation.&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
&lt;br /&gt;
app/models/question.rb&lt;br /&gt;
*Form association relationship with &amp;lt;code&amp;gt;Team&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;belongs_to :team, class_name: 'AssignmentTeam', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
*questions: change the method signature to &amp;lt;code&amp;gt;questions(team_id = nil)&amp;lt;/code&amp;gt;, which uses nil as the default value of &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;, so callers may omit the argument. When &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; is supplied, it returns the questions with no &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; together with the questions that carry this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. In addition, it appends an unsaved “Revision Planning” QuestionnaireHeader to the returned list so that each section is displayed separately in the browser.&lt;br /&gt;
app/models/assignment_team.rb&lt;br /&gt;
*Form aggregation relationship with &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;has_many :revision_plan_questions, class_name: 'Question', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
app/models/response.rb&lt;br /&gt;
*Replace all occurrences of &amp;lt;code&amp;gt;questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;questionnaire.questions(self.response_map.reviewee_id)&amp;lt;/code&amp;gt; so that it retrieves not only the questions from the original rubric but also those from the revision plan proposed by the team identified by &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
&lt;br /&gt;
app/views/questionnaires/edit_revision_plan.html.erb &lt;br /&gt;
*This is a newly added view file for students to create and edit their revision plan. It reuses some existing code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to reduce redundancy.&lt;br /&gt;
app/views/questionnaires/_questions.html.erb&lt;br /&gt;
*Extract code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file into a standalone partial template that is later loaded by the ''app/views/questionnaires/edit_revision_plan.html.erb'' view file described above.&lt;br /&gt;
app/views/student_task/view.html.erb&lt;br /&gt;
*Add a “Revision Planning” link for students to edit their revision plan. The link will lead students to the “Edit Revision Plan” page. If the revision planning feature is enabled, this link will appear disabled at first and only become clickable during each submission period after round 1.&lt;br /&gt;
app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*Add an “Enable Revision Planning?” checkbox for each assignment, because not every assignment needs this feature; this gives the TA/instructor the flexibility to include it per assignment. The “Revision Planning” link will disappear from the assignment page if the checkbox is not checked.&lt;br /&gt;
&lt;br /&gt;
===Helpers===&lt;br /&gt;
&lt;br /&gt;
app/helpers/grades_helper.rb&lt;br /&gt;
*retrieve_questions: add an extra parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; whose default value is nil. It then invokes the &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model’s &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method with this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; and retrieves the proper question set for each team.&lt;br /&gt;
&lt;br /&gt;
===Schema===&lt;br /&gt;
&lt;br /&gt;
*Assignment table: add one column named &amp;lt;code&amp;gt;is_revision_planning_enabled?&amp;lt;/code&amp;gt; to indicate whether this feature has been activated.&lt;br /&gt;
*Question table: add one column named &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to distinguish whether the question is from the official review rubric or from a particular team's revision plan.&lt;br /&gt;
&lt;br /&gt;
=Testing=&lt;br /&gt;
&lt;br /&gt;
==RSpec Test Plan==&lt;br /&gt;
&lt;br /&gt;
'''Controllers'''&lt;br /&gt;
*spec/controllers/questionnaires_controller_spec.rb&lt;br /&gt;
**Describe ‘#edit_revision_plan’&lt;br /&gt;
***Context ‘when params[:id] is valid’&lt;br /&gt;
***Context ‘when params[:id] is not valid'&lt;br /&gt;
**Describe ‘#update_revision_plan’&lt;br /&gt;
***Context 'when params[:add_new_questions] is not nil'&lt;br /&gt;
***Context 'when params[:view_advice] is not nil'&lt;br /&gt;
***Context 'when both params[:add_new_questions] and params[:view_advice] are nil'&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
**Describe ‘#view’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
**Describe ‘#view_my_scores’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
*spec/controllers/response_controller_spec.rb&lt;br /&gt;
**Fix all failing tests to accommodate the modified code.&lt;br /&gt;
**Add more tests regarding revision planning.&lt;br /&gt;
&lt;br /&gt;
'''Models'''&lt;br /&gt;
*spec/models/question_spec.rb&lt;br /&gt;
**Describe ‘#questions’&lt;br /&gt;
***Context ‘when team_id is supplied’&lt;br /&gt;
***Context ‘when team_id is not supplied’&lt;br /&gt;
&lt;br /&gt;
==UI Testing Instructions (For Reviewers)==&lt;br /&gt;
&lt;br /&gt;
'''Setup'''&lt;br /&gt;
&lt;br /&gt;
Login information&lt;br /&gt;
*Visit xxx [expertiza deployment link]&lt;br /&gt;
    User name: instructor6/student8030/student8031&lt;br /&gt;
    Password: password&lt;br /&gt;
Enable/Disable Revision Planning&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Review strategy'''” tab, check the checkbox labeled “'''Enable Revision Planning?'''” to enable the revision planning feature. &lt;br /&gt;
Enable/Disable the “Revision Planning” link&lt;br /&gt;
*After the instructor configures the assignment to include the revision planning feature, the “'''Revision Planning'''” link will appear on the student's assignment page but will remain disabled and only be enabled during each submission period after round 1. Therefore, to enable the link:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the future.&lt;br /&gt;
*To disable the link after round 2 submission period:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to a date in the past and change the round 2 review date to a date in the future.&lt;br /&gt;
Create teams for the assignment&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Add participant'''”.&lt;br /&gt;
*In the new page, add two students, student8030 and student8031, so one student can create revision plan questions during the submission period while the other can respond to these questions during the review period.&lt;br /&gt;
*Go back to the assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Create teams'''”.&lt;br /&gt;
*In the new page, assign the two added students to separate teams.&lt;br /&gt;
&lt;br /&gt;
'''Functionalities'''&lt;br /&gt;
&lt;br /&gt;
Edit a Revision Plan&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*On the assignment page, click on the “'''Revision Planning'''” link, which redirects the user to a new page used to create a revision plan. Fill in the blanks and click on the “'''Save review questionnaire'''” button, and the revision plan should be saved.&lt;br /&gt;
Test retrieval of revision plan questions for a specific team&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*On the assignment page, click on the “'''Revision Planning'''” link after the steps under '''Edit a Revision Plan''' have been completed. The link should redirect the user to the Revision Planning edit page, populated with the previously saved questions.&lt;br /&gt;
Check the Revision Plan questions in the questionnaire&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Adjust the time frame to the second round review period.&lt;br /&gt;
*Log out and log in again as student8031.&lt;br /&gt;
*On the assignment page, click on the “'''Others’ work'''” link, which takes the user to the review page where one requests a new team’s submission to review. Go to the only other team’s review page and check whether the questions are properly displayed under the “Revision Planning” subheader.&lt;br /&gt;
Check responses to the Revision Plan questions&lt;br /&gt;
*Login again as student8030.&lt;br /&gt;
*On the assignment page, click on the “'''Alternative View'''” link, and see whether student8030 gets responses both for the original rubric questions and for the team’s revision plan questions.&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133582</id>
		<title>CSC/ECE 517 Spring 2020 E2016 Revision planning tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133582"/>
		<updated>2020-04-14T03:00:19Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* UI Testing Instructions (For Reviewers) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
==About our team==&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
*Tianji Gao (tgao5@ncsu.edu)&lt;br /&gt;
*Guoyi Wang (gwang25@ncsu.edu)&lt;br /&gt;
*Yulin Zhang (yzhan114@ncsu.edu)&lt;br /&gt;
*Boxuan Zhong (bzhong2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
Project mentor: Edward Gehringer (efg@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
==What is Revision Planning?==&lt;br /&gt;
&lt;br /&gt;
In the first round of the Expertiza reviews, reviewers are asked to give authors some guidance on how to improve their work. Then in the second round, reviewers rate how well authors have followed their suggestions. Revision planning is a mechanism used to carry the interaction one step further by having authors to supply a revision plan based on the previous round reviews. That is, the authors would derive their plan for code improvement from the previous round reviews and reviewers would later assess how well they did it. &lt;br /&gt;
&lt;br /&gt;
Revision planning is helpful because it makes the author think about what's necessary to improve the work before putting forth the effort to improve it.  This leads to a more reflective work process and is likely to produce a better-finished product.  When reviewers have an opportunity to give feedback to the author, they too will learn what a good revision plan looks like.&lt;br /&gt;
&lt;br /&gt;
According to the given instructions, a revision plan consists of a description of the plan, followed by any number of questions that would later be appended to the future review questionnaire. The revision plan is per AssignmentTeam-based, which means the authors’ questions would only be used to evaluate their submission and not anyone else. By adding the functionality of revision planning, it helps researchers study the effect of the reviewer’s suggestions on the code improvement.&lt;br /&gt;
&lt;br /&gt;
==Previous Implementation==&lt;br /&gt;
&lt;br /&gt;
This functionality has previously been done by a team of students from the Fall semester of 2018. Their implementation was merged into the master branch but was reverted due to the following design concerns:&lt;br /&gt;
*The relationship between `Questionnaire` and `SubmissionRecord` is unclear.&lt;br /&gt;
*Uses a lot of '''special-purpose''' code when existing codes may fulfill the same job.&lt;br /&gt;
*Revision planning cannot be enabled or disabled for an assignment.&lt;br /&gt;
*Numeric labelings for the revision plan questions begin from 1 again, instead of continuing after the original rubric questions.&lt;br /&gt;
*Codebase contains commented codes that are no longer wanted.&lt;br /&gt;
Check out the wiki page and the pull request on GitHub if you would like to learn more about the previous implementation of this project.&lt;br /&gt;
*http://wiki.expertiza.ncsu.edu/index.php/E1875_Revision_Planning_Tool&lt;br /&gt;
*https://github.com/expertiza/expertiza/pull/1302&lt;br /&gt;
Please note that unlike the other teams we have reviewed, this project is a complete redo rather than modifications built upon the previous team’s codes because our approach to this problem would be different than theirs. Therefore, we will not mention the previous implementation in the later content.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
For this project, we identified 4 major work items that together fulfill the stated requirements.&lt;br /&gt;
&lt;br /&gt;
'''Sort out the relationship among classes and introduce the new abstraction of the revision plan to the system in a way that it doesn’t interfere with the majority of codes'''&lt;br /&gt;
&lt;br /&gt;
We decided to relate each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with team_id. A &amp;lt;code&amp;gt;ReviewQuestionnaire&amp;lt;/code&amp;gt; will have either questions with no &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; or with a &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A question with no team_id indicates that it does not belong to any assignment teams so it is a question set up by the instructor. A question with a team_id, in contrast, indicates that it belongs to a particular team so it is a revision plan question. Both types of questions will be saved under the same questionnaire used for a given round. In this way, we can maximize the usage of existing codes and the only major change should be contained within the &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; class.&lt;br /&gt;
&lt;br /&gt;
'''Modify the existing views and controllers to accommodate the new functionality which includes'''&lt;br /&gt;
*Allowing teaching staff to enable/disable revision planning for an assignment.&lt;br /&gt;
*Allowing team members to create/edit their revision plan during each submission period after the first round.&lt;br /&gt;
*Allowing both rubric questions and revision plan questions to appear on the same page and be serialized correctly.&lt;br /&gt;
*Allowing feedback on the revision plan only to be viewed by the team that creates the plan and that team's instructor.&lt;br /&gt;
&lt;br /&gt;
This will involve some minor changes such as appending some method signatures with an optional trailing parameter, adding interactive elements to the views, and slightly adjusting the structure of certain view templates.&lt;br /&gt;
&lt;br /&gt;
In addition, we planned to:&lt;br /&gt;
*Provide an adequate amount of tests to improve code coverage.&lt;br /&gt;
*Do necessary refactoring and resolve any CodeClimate issues.&lt;br /&gt;
&lt;br /&gt;
After communicated with our mentor Dr. Gehringer, we have been clarified with the following two problem statements.&lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the second-round questionnaire.'''&lt;br /&gt;
&lt;br /&gt;
This means both questions from the team’s revision plan and questions from the review rubric should be displayed together in the frontend. Since we decided to add revision plan questions to the review rubric of the round, we automatically linked every new question to the questionnaire of that round. &lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the author’s submission (this will probably involve a DB migration)'''&lt;br /&gt;
&lt;br /&gt;
By saying every new question must be linked to the author’s submission, it means that there should be some relationships between the team and the team’s revision plan questions presented in the database. We addressed this problem by associating them with a team_id field. See Database Design section for more details.&lt;br /&gt;
&lt;br /&gt;
=Design=&lt;br /&gt;
&lt;br /&gt;
==Control Flow Diagram==&lt;br /&gt;
&lt;br /&gt;
The below image shows the control flow of the revision planning functionality.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_control_flow_diagram.png]]&lt;br /&gt;
&lt;br /&gt;
The below image shows the control flow of the revision planning functionality. It involves 3 types of actors, student(reviewee), student(reviewer) and instructor/TA who manages the assignment and review processes. To understand each actor’s responsibility, trace each colored line that arose from each actor in the direction specified by the arrows. The diamond shape represents a decision or precondition, that is, only after the condition meets can the next action proceeds.&lt;br /&gt;
&lt;br /&gt;
Summary of actions&lt;br /&gt;
*A TA/Instructor can&lt;br /&gt;
*#Enable revision planning&lt;br /&gt;
*#Impersonate students to perform their responsibility&lt;br /&gt;
*#View feedback report of all teams&lt;br /&gt;
*A student(reviewee) can&lt;br /&gt;
*#Make revision during the second round submission period, which includes reading first-round feedback and adding revision plan questions according to that feedback.&lt;br /&gt;
*#View feedback report of the team it belongs to&lt;br /&gt;
*A student(reviewer) can&lt;br /&gt;
*#Give feedback on the team’s revised work by answering each question (including the team's revision plan questions) appeared on the review page.&lt;br /&gt;
*#View the feedback it wrote to the team.&lt;br /&gt;
&lt;br /&gt;
==UI Design==&lt;br /&gt;
&lt;br /&gt;
A revision plan should be similar to other review questionnaires. Since functionalities on the review questionnaire have been maturely implemented, we expected to make the least amount of interface changes by utilizing the existing view templates whenever possible. The subsections listed the changes we planned to make.&lt;br /&gt;
&lt;br /&gt;
===Enabling revision planning===&lt;br /&gt;
&lt;br /&gt;
Implementation of enabling/disabling revision planning for each assignment can be rather straightforward. We looked to add an additional checkbox under the &amp;quot;Review strategy&amp;quot; tab of the assignment’s edit page. This checkbox is labeled as &amp;quot;Enable Revision Planning?&amp;quot; to indicate whether the instructor wants to include this functionality in the newly-created assignment. It is most reasonable to place the checkbox here because it is review related and other similar functionalities like Self Reviews are also implemented in this manner.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_enabling_revision_planning.png]]&lt;br /&gt;
&lt;br /&gt;
===Link to the revision planning page===&lt;br /&gt;
&lt;br /&gt;
If the instructor decided to include revision planning in this assignment, then the link to “Revision Planning” would appear on the student’s assignment page but would stay disabled during the first round. After that, It would become clickable during every submission period and greyed again during every review period. By clicking it, students would be redirected to a whole new page explained under the ‘Revision planning page’ subsection.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_link_to_the_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Revision planning page===&lt;br /&gt;
&lt;br /&gt;
The revision plan is just like other questionnaires that it contains a set of questions for reviewers to answer. The only difference is that the revision plan comes with an additional description to help reviewers understand what changes have been made so far. &lt;br /&gt;
&lt;br /&gt;
Therefore, it should make use of most existing view templates and controller codes with minimized changes. As the image is shown, the only modification made from the existing questionnaire creation template would be to include a link that redirects students to the submission page, where the uploading of the revision plan will be handled by the existing implementation. The advantage to upload an external link rather than typing everything to the textbox element is that the description can be well-formatted if it displays outside the form and not causing a distraction effect for reviewers. We also decided to leave out (or hide) the place where instructors set the configuration stuff like the range of scores and the questionnaire's visibility. These configurations should use default values defined in the system rather than having students come up with their own.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Review page===&lt;br /&gt;
&lt;br /&gt;
The format of the review page remains almost exactly the same. To distinguish between rubric questions set up by the instructor and the revision plan questions created by the team under review, all the revision plan questions are placed after the rubric questions, split by an enlarged “Revision Planning” subheader. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page_2.png]]&lt;br /&gt;
&lt;br /&gt;
===Feedback report===&lt;br /&gt;
&lt;br /&gt;
Teaching staff and students have different windows to access the feedback report. &lt;br /&gt;
*'''Teaching staff''': Manage-&amp;gt;Assignments-&amp;gt;Edit Assignment-&amp;gt;Other stuff-&amp;gt;View scores&lt;br /&gt;
*'''Students''': Assignments-&amp;gt;View Assignment-&amp;gt;Alternative View&lt;br /&gt;
In addition, either instructor and TA can impersonate students to access the feedback report from their views. We would like to consider both cases and illustrate each of them separately.&lt;br /&gt;
&lt;br /&gt;
====TA/Instructor====&lt;br /&gt;
&lt;br /&gt;
Scores for the second round review rubric and the author’s revision plan questions will be displayed on the same table and are serialized correctly. See the figure below for an example. Let say the second round rubric has only 5 questions, the remaining questions (6-10) will be revision planning questions written by a particular team.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_instructor.png]]&lt;br /&gt;
&lt;br /&gt;
For reviewer: if you reviewed our first draft design, you should notice that we originally chose to place revision plan scores on a distinct table. After our mentor clarified to us, we realized that there can be possibly more than 2 rounds of submission and review periods for a given assignment. Therefore, scores for revision planning questions can vary round by round. Therefore, our previous solution will not work since it confuses the user of which round the revision plan scores refer to. &lt;br /&gt;
&lt;br /&gt;
====Student====&lt;br /&gt;
&lt;br /&gt;
The revision planning section will be added to the students’ view as shown in the snapshot below. It displays in the same order as how the review page does. A “Revision Planning” subheader is also used here to indicate the starting of the revision planning section.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_student.png]]&lt;br /&gt;
&lt;br /&gt;
==Database Design==&lt;br /&gt;
&lt;br /&gt;
Here we present the diagram of our database design. As the yellow borders show, we only plan to modify the structure of the Question table and the Assignment table. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_database_design.png]]&lt;br /&gt;
&lt;br /&gt;
In the Assignment table, the column ''is_revision_planning_enabled?'' will be needed to indicate whether the instructor would like to incorporate the revision planning feature. &lt;br /&gt;
&lt;br /&gt;
Additionally, we add &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to distinguish whether the question is on the original rubric or is added by students as part of their revision plan. A &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with an empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; value will be the question under the original rubric, while the object with a non-empty team_id field will be regarded as the question created by the team that the team_id refers to. That is, instead of creating a whole new &amp;lt;code&amp;gt;RevisionPlanQuestionnaire&amp;lt;/code&amp;gt; class, we decided to dump all the revision plan questions that are created in a given round to the rubric that is used for that round. In this way, we minimize the change to the system to make the original rubric questions and the revision planning questions retrieved together more easily.&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
&lt;br /&gt;
app/controllers/questionnaires_controller.rb&lt;br /&gt;
*edit_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that prepares view template and supplies revision planning questions that belong only to the current team&lt;br /&gt;
*update_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnaireController&amp;lt;/code&amp;gt; that saves revision plan questions under the current round rubric&lt;br /&gt;
*Require some refactoring to share some existing functionalities with the two new methods we described above&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;@questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;@questionnaire.questions(@map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
app/controllers/grades_controller.rb&lt;br /&gt;
*Call the retrieve_questions method every time with an extra parameter “&amp;lt;code&amp;gt;@team_id&amp;lt;/code&amp;gt;”.&lt;br /&gt;
&lt;br /&gt;
===Models ===&lt;br /&gt;
&lt;br /&gt;
app/models/question.rb&lt;br /&gt;
*Form association relationship with &amp;lt;code&amp;gt;Team&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;belongs_to :team, class_name: ‘AssignmentTeam’, foreign_key: ‘team_id’&amp;lt;/code&amp;gt;&lt;br /&gt;
*questions: change the method signature to &amp;lt;code&amp;gt;questions(team_id=nil)&amp;lt;/code&amp;gt; which uses nil as the parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;’s default value, so users can choose to supply a team_id argument or not. When team_id is supplied, it returns both questions with no team_id as well as questions that have this team_id. In addition, it will add to the return list an unsaved “Revision Planning” QuestionnaireHeader so the list can be displayed nicely on the browser with each section separated.&lt;br /&gt;
app/models/assignment_team.rb&lt;br /&gt;
*Form aggregation relationship with &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;has_many :revision_plan_questions, class_name: ‘Question’, foreign_key: ‘team_id’&amp;lt;/code&amp;gt;&lt;br /&gt;
app/models/response.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;questionnaire.questions(self.response_map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Views===&lt;br /&gt;
&lt;br /&gt;
app/views/questionnaires/edit_revision_plan.html.erb &lt;br /&gt;
*It is a newly-added view file for students to create and edit their revision plan. It will utilize some existing codes from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to reduce code redundancy.&lt;br /&gt;
app/views/questionnaires/_questions.html.erb&lt;br /&gt;
*Extract codes from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to make it a standalone partial template that will later be loaded by the ''app/views/questionnaires/edit_revision_plan.html.erb'' view file described above.&lt;br /&gt;
app/views/student_task/view.html.erb&lt;br /&gt;
*Add a “Revision Planning” link for students to edit their revision plan. The link will lead students to the “Edit Revision Plan” page. If the revision planning feature is enabled, this link will appear disabled at first and only become clickable during each submission period after round 1.&lt;br /&gt;
app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*Add “Enable Revision Planning?” checkbox for each assignment because not every assignment needs this feature. TA/instructor have the option to include this feature more flexibly. The “Revision Planning” link will disappear from the assignment page if the checkbox is not checked.&lt;br /&gt;
&lt;br /&gt;
===Helpers===&lt;br /&gt;
&lt;br /&gt;
app/helpers/grades_helper.rb&lt;br /&gt;
*retrieve_questions: add an extra parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; whose default value is set to be nil. It then invoke &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model’s &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method with this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; and retrieve a proper question set for each team.&lt;br /&gt;
&lt;br /&gt;
===Schema===&lt;br /&gt;
&lt;br /&gt;
*Assignment table: add one column named &amp;lt;code&amp;gt;is_revision_planning_enabled?&amp;lt;/code&amp;gt; to indicate whether this feature has been activated.&lt;br /&gt;
*Question table: add one column named &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to distinguish whether the question is from the official review rubric or from a particular team's revision plan.&lt;br /&gt;
&lt;br /&gt;
=Testing=&lt;br /&gt;
&lt;br /&gt;
==RSpec Test Plan==&lt;br /&gt;
&lt;br /&gt;
'''Controllers'''&lt;br /&gt;
*spec/controllers/questionnaires_controller_spec.rb&lt;br /&gt;
**Describe ‘#edit_revision_plan’&lt;br /&gt;
***Context ‘when params[:id] is valid’&lt;br /&gt;
***Context ‘when params[:id] is not valid'&lt;br /&gt;
**Describe ‘#update_revision_plan’&lt;br /&gt;
***Context 'when params[:add_new_questions] is not nil'&lt;br /&gt;
***Context 'when params[:view_advice] is not nil'&lt;br /&gt;
***Context 'when both params[:add_new_questions] and params[:view_advice] are nil'&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
**Describe ‘#view’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
**Describe ‘#view_my_scores’&lt;br /&gt;
***Context ‘When the revision plan is included in one round’&lt;br /&gt;
*spec/controllers/response_controller_spec.rb&lt;br /&gt;
**Describe&lt;br /&gt;
&lt;br /&gt;
'''Models'''&lt;br /&gt;
*spec/models/question_spec.rb&lt;br /&gt;
**Describe ‘#questions’&lt;br /&gt;
***Context ‘when team_id is supplied’&lt;br /&gt;
***Context ‘when team_id is not supplied’&lt;br /&gt;
&lt;br /&gt;
==UI Testing Instructions (For Reviewers)==&lt;br /&gt;
&lt;br /&gt;
'''Setup'''&lt;br /&gt;
&lt;br /&gt;
Login information&lt;br /&gt;
*Visit xxx [expertiza deployment link]&lt;br /&gt;
    User name: instructor6/student8030/student8031&lt;br /&gt;
    Password: password&lt;br /&gt;
Enable/Disable Revision Planning&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Review strategy'''” tab, check the checkbox labeled “'''Enable Revision Planning?'''” to enable the revision planning feature. &lt;br /&gt;
Enable/Disable the “Revision Planning” link&lt;br /&gt;
*After the instructor configures the assignment to include the revision planning feature, the “'''Revision Planning'''” link will appear on the student's assignment page but will remain disabled and only be enabled during each submission period after round 1. Therefore, to enable the link:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the future.&lt;br /&gt;
*To disable the link after the round 2 submission period:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the past and change the round 2 review date to any date in the future.&lt;br /&gt;
Create teams for the assignment&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Add participant'''”.&lt;br /&gt;
*In the new page, add two students, student8030 and student8031, so one student can create revision plan questions during the submission period while the other can respond to these questions during the review period.&lt;br /&gt;
*Go back to the assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Create teams'''”.&lt;br /&gt;
*In the new page, assign the two added students to separate teams.&lt;br /&gt;
&lt;br /&gt;
'''Functionalities'''&lt;br /&gt;
&lt;br /&gt;
Edit a Revision Plan&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link, which redirects the user to a new page used to create a revision plan. Fill in the blanks and click the “'''Save review questionnaire'''” button; the revision plan should be saved.&lt;br /&gt;
Test retrieval of revision plan questions for a specific team&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link after steps under '''Edit a Revision Plan''' have been done. The “'''Revision Planning'''” link should redirect the user to the Revision Planning edit page that is populated with previously saved questions.&lt;br /&gt;
Check the Revision Plan questions in the questionnaire&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Adjust the time frame to the second round review period.&lt;br /&gt;
*Log out and log in again as student8031.&lt;br /&gt;
*In the assignment page, click on the “'''Others’ work'''” link, which takes the user to the review page where one requests a new team’s submission to review. Go to the only other team’s review page and check if the questions are properly displayed under the “Revision Planning” subheader.&lt;br /&gt;
Check responses to the Revision Plan questions&lt;br /&gt;
*Login again as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Alternative View'''” link, and see if student8030 gets responses to both the original rubric questions and the team's revision plan questions.&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133579</id>
		<title>CSC/ECE 517 Spring 2020 E2016 Revision planning tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2020_E2016_Revision_planning_tool&amp;diff=133579"/>
		<updated>2020-04-14T02:59:46Z</updated>

		<summary type="html">&lt;p&gt;Yzhan114: /* RSpec Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
==About our team==&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
*Tianji Gao (tgao5@ncsu.edu)&lt;br /&gt;
*Guoyi Wang (gwang25@ncsu.edu)&lt;br /&gt;
*Yulin Zhang (yzhan114@ncsu.edu)&lt;br /&gt;
*Boxuan Zhong (bzhong2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
Project mentor: Edward Gehringer (efg@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
==What is Revision Planning?==&lt;br /&gt;
&lt;br /&gt;
In the first round of the Expertiza reviews, reviewers are asked to give authors some guidance on how to improve their work. Then in the second round, reviewers rate how well authors have followed their suggestions. Revision planning is a mechanism that carries this interaction one step further by having authors supply a revision plan based on the previous round's reviews. That is, the authors derive their plan for improving the work from the previous round's reviews, and reviewers later assess how well they carried it out.&lt;br /&gt;
&lt;br /&gt;
Revision planning is helpful because it makes the author think about what's necessary to improve the work before putting forth the effort to improve it.  This leads to a more reflective work process and is likely to produce a better-finished product.  When reviewers have an opportunity to give feedback to the author, they too will learn what a good revision plan looks like.&lt;br /&gt;
&lt;br /&gt;
According to the given instructions, a revision plan consists of a description of the plan, followed by any number of questions that are later appended to the future review questionnaire. The revision plan is per AssignmentTeam, which means a team’s questions are used only to evaluate that team’s submission and no one else’s. Adding revision planning also helps researchers study the effect of reviewers’ suggestions on code improvement.&lt;br /&gt;
&lt;br /&gt;
==Previous Implementation==&lt;br /&gt;
&lt;br /&gt;
This functionality was previously implemented by a team of students in the Fall 2018 semester. Their implementation was merged into the master branch but was reverted due to the following design concerns:&lt;br /&gt;
*The relationship between &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;SubmissionRecord&amp;lt;/code&amp;gt; is unclear.&lt;br /&gt;
*It uses a lot of '''special-purpose''' code where existing code could fulfill the same job.&lt;br /&gt;
*Revision planning cannot be enabled or disabled for an assignment.&lt;br /&gt;
*Numeric labels for the revision plan questions restart from 1, instead of continuing after the original rubric questions.&lt;br /&gt;
*The codebase contains commented-out code that is no longer wanted.&lt;br /&gt;
Check out the wiki page and the pull request on GitHub if you would like to learn more about the previous implementation of this project.&lt;br /&gt;
*http://wiki.expertiza.ncsu.edu/index.php/E1875_Revision_Planning_Tool&lt;br /&gt;
*https://github.com/expertiza/expertiza/pull/1302&lt;br /&gt;
Please note that unlike the other teams we have reviewed, this project is a complete redo rather than a modification built upon the previous team’s code, because our approach to this problem differs from theirs. Therefore, we will not refer to the previous implementation in the remaining content.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
For this project, we identified 4 major work items that together fulfill the stated requirements.&lt;br /&gt;
&lt;br /&gt;
'''Sort out the relationship among classes and introduce the new abstraction of the revision plan to the system in a way that does not interfere with the majority of the code'''&lt;br /&gt;
&lt;br /&gt;
We decided to associate each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with a team_id. A &amp;lt;code&amp;gt;ReviewQuestionnaire&amp;lt;/code&amp;gt; will contain questions either with or without a &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;. A question with no team_id does not belong to any assignment team, so it is a question set up by the instructor. A question with a team_id, in contrast, belongs to a particular team, so it is a revision plan question. Both types of questions are saved under the same questionnaire used for a given round. In this way, we maximize the reuse of existing code, and the only major change is contained within the &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; class.&lt;br /&gt;
&lt;br /&gt;
'''Modify the existing views and controllers to accommodate the new functionality which includes'''&lt;br /&gt;
*Allowing teaching staff to enable/disable revision planning for an assignment.&lt;br /&gt;
*Allowing team members to create/edit their revision plan during each submission period after the first round.&lt;br /&gt;
*Allowing both rubric questions and revision plan questions to appear on the same page and be serialized correctly.&lt;br /&gt;
*Allowing feedback on the revision plan to be viewed only by the team that created the plan and that team's instructor.&lt;br /&gt;
&lt;br /&gt;
This will involve some minor changes, such as adding an optional trailing parameter to some method signatures, adding interactive elements to the views, and slightly adjusting the structure of certain view templates.&lt;br /&gt;
&lt;br /&gt;
In addition, we planned to:&lt;br /&gt;
*Provide an adequate amount of tests to improve code coverage.&lt;br /&gt;
*Do necessary refactoring and resolve any CodeClimate issues.&lt;br /&gt;
&lt;br /&gt;
After communicating with our mentor, Dr. Gehringer, we clarified the following two problem statements.&lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the second-round questionnaire.'''&lt;br /&gt;
&lt;br /&gt;
This means both questions from the team’s revision plan and questions from the review rubric should be displayed together in the frontend. Since we decided to add revision plan questions to the review rubric of the round, we automatically linked every new question to the questionnaire of that round. &lt;br /&gt;
&lt;br /&gt;
'''Every new question must be linked to the author’s submission (this will probably involve a DB migration)'''&lt;br /&gt;
&lt;br /&gt;
Linking every new question to the author’s submission means that the database should represent the relationship between a team and that team’s revision plan questions. We addressed this by associating each question with a team_id field. See the Database Design section for more details.&lt;br /&gt;
&lt;br /&gt;
=Design=&lt;br /&gt;
&lt;br /&gt;
==Control Flow Diagram==&lt;br /&gt;
&lt;br /&gt;
The below image shows the control flow of the revision planning functionality.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_control_flow_diagram.png]]&lt;br /&gt;
&lt;br /&gt;
The diagram involves 3 types of actors: the student (reviewee), the student (reviewer), and the instructor/TA who manages the assignment and review processes. To understand each actor’s responsibility, trace the colored line that arises from that actor in the direction specified by the arrows. A diamond shape represents a decision or precondition; only after the condition is met can the next action proceed.&lt;br /&gt;
&lt;br /&gt;
Summary of actions&lt;br /&gt;
*A TA/Instructor can&lt;br /&gt;
*#Enable revision planning&lt;br /&gt;
*#Impersonate students to perform their responsibilities&lt;br /&gt;
*#View the feedback reports of all teams&lt;br /&gt;
*A student (reviewee) can&lt;br /&gt;
*#Make revisions during the second-round submission period, which includes reading first-round feedback and adding revision plan questions according to that feedback.&lt;br /&gt;
*#View the feedback report of the team they belong to&lt;br /&gt;
*A student (reviewer) can&lt;br /&gt;
*#Give feedback on a team’s revised work by answering each question (including the team's revision plan questions) that appears on the review page.&lt;br /&gt;
*#View the feedback they wrote for the team.&lt;br /&gt;
&lt;br /&gt;
==UI Design==&lt;br /&gt;
&lt;br /&gt;
A revision plan should be similar to other review questionnaires. Since the review questionnaire functionality is already mature, we expected to make the fewest possible interface changes by utilizing the existing view templates whenever possible. The subsections below list the changes we planned to make.&lt;br /&gt;
&lt;br /&gt;
===Enabling revision planning===&lt;br /&gt;
&lt;br /&gt;
Enabling/disabling revision planning for each assignment is rather straightforward to implement. We add a checkbox under the &amp;quot;Review strategy&amp;quot; tab of the assignment’s edit page, labeled &amp;quot;Enable Revision Planning?&amp;quot;, to indicate whether the instructor wants to include this functionality in the newly created assignment. This is the most reasonable place for the checkbox because it is review-related, and other similar functionality, such as Self Reviews, is implemented in the same manner.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_enabling_revision_planning.png]]&lt;br /&gt;
&lt;br /&gt;
===Link to the revision planning page===&lt;br /&gt;
&lt;br /&gt;
If the instructor decides to include revision planning in an assignment, the “Revision Planning” link appears on the student’s assignment page but stays disabled during the first round. After that, it becomes clickable during every submission period and is greyed out again during every review period. Clicking it redirects students to a new page explained under the ‘Revision planning page’ subsection.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_link_to_the_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Revision planning page===&lt;br /&gt;
&lt;br /&gt;
The revision plan is just like other questionnaires in that it contains a set of questions for reviewers to answer. The only difference is that the revision plan comes with an additional description to help reviewers understand what changes have been made so far.&lt;br /&gt;
&lt;br /&gt;
Therefore, it should make use of most existing view templates and controller code with minimal changes. As the image shows, the only modification to the existing questionnaire-creation template is a link that redirects students to the submission page, where the upload of the revision plan is handled by the existing implementation. The advantage of uploading an external link rather than typing everything into the textbox element is that the description can be well formatted when it is displayed outside the form, without distracting reviewers. We also decided to leave out (or hide) the place where instructors set configuration options such as the range of scores and the questionnaire’s visibility. These configurations should use the default values defined in the system rather than values students come up with on their own.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_revision_planning_page.png]]&lt;br /&gt;
&lt;br /&gt;
===Review page===&lt;br /&gt;
&lt;br /&gt;
The format of the review page remains almost exactly the same. To distinguish between rubric questions set up by the instructor and the revision plan questions created by the team under review, all the revision plan questions are placed after the rubric questions, split by an enlarged “Revision Planning” subheader. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_review_page_2.png]]&lt;br /&gt;
&lt;br /&gt;
===Feedback report===&lt;br /&gt;
&lt;br /&gt;
Teaching staff and students have different windows to access the feedback report. &lt;br /&gt;
*'''Teaching staff''': Manage-&amp;gt;Assignments-&amp;gt;Edit Assignment-&amp;gt;Other stuff-&amp;gt;View scores&lt;br /&gt;
*'''Students''': Assignments-&amp;gt;View Assignment-&amp;gt;Alternative View&lt;br /&gt;
In addition, both instructors and TAs can impersonate students to access the feedback report from the students’ view. We consider both cases and illustrate each of them separately.&lt;br /&gt;
&lt;br /&gt;
====TA/Instructor====&lt;br /&gt;
&lt;br /&gt;
Scores for the second-round review rubric and the author’s revision plan questions are displayed in the same table and serialized correctly. See the figure below for an example: if the second-round rubric has only 5 questions, the remaining questions (6–10) are revision planning questions written by a particular team.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_instructor.png]]&lt;br /&gt;
&lt;br /&gt;
For reviewers: if you reviewed our first draft design, you may notice that we originally chose to place revision plan scores in a separate table. After our mentor’s clarification, we realized that there can be more than 2 rounds of submission and review periods for a given assignment, so scores for revision planning questions can vary round by round. Our previous solution therefore will not work, since it leaves the user unsure which round the revision plan scores refer to.&lt;br /&gt;
&lt;br /&gt;
====Student====&lt;br /&gt;
&lt;br /&gt;
The revision planning section will be added to the students’ view as shown in the snapshot below. It displays questions in the same order as the review page does. A “Revision Planning” subheader is also used here to indicate the start of the revision planning section.&lt;br /&gt;
&lt;br /&gt;
[[File:E2016_feedback_report_student.png]]&lt;br /&gt;
&lt;br /&gt;
==Database Design==&lt;br /&gt;
&lt;br /&gt;
Here we present the diagram of our database design. As the yellow borders show, we only plan to modify the structure of the Question table and the Assignment table. &lt;br /&gt;
&lt;br /&gt;
[[File:E2016_database_design.png]]&lt;br /&gt;
&lt;br /&gt;
In the Assignment table, the column ''is_revision_planning_enabled?'' will be needed to indicate whether the instructor would like to incorporate the revision planning feature. &lt;br /&gt;
&lt;br /&gt;
Additionally, we add &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to each &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object to distinguish whether the question is on the original rubric or was added by students as part of their revision plan. A &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; object with an empty &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; value is a question from the original rubric, while an object with a non-empty team_id field is regarded as a question created by the team that the team_id refers to. That is, instead of creating a whole new &amp;lt;code&amp;gt;RevisionPlanQuestionnaire&amp;lt;/code&amp;gt; class, we decided to add all the revision plan questions created in a given round to the rubric used for that round. In this way, we minimize changes to the system and make it easier to retrieve the original rubric questions and the revision planning questions together.&lt;br /&gt;
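&lt;br /&gt;
A sketch of the corresponding migration is shown below. The design names the assignment column with a trailing “?”, which the sketch omits from the column itself since ActiveRecord already generates a question-mark predicate for boolean columns; the migration class name, file name, and version tag are illustrative.&lt;br /&gt;
&lt;br /&gt;
    # db/migrate/xxx_add_revision_planning_columns.rb (illustrative sketch)&lt;br /&gt;
    class AddRevisionPlanningColumns &lt; ActiveRecord::Migration[5.1]&lt;br /&gt;
      def change&lt;br /&gt;
        # Per-assignment switch for the revision planning feature.&lt;br /&gt;
        add_column :assignments, :is_revision_planning_enabled, :boolean, default: false&lt;br /&gt;
        # Owning team of a revision plan question; nil marks a question&lt;br /&gt;
        # from the instructor's original rubric.&lt;br /&gt;
        add_column :questions, :team_id, :integer&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;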
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&lt;br /&gt;
===Controllers===&lt;br /&gt;
&lt;br /&gt;
app/controllers/questionnaires_controller.rb&lt;br /&gt;
*edit_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnairesController&amp;lt;/code&amp;gt; that prepares the view template and supplies the revision planning questions that belong only to the current team&lt;br /&gt;
*update_revision_plan: a new method added to the &amp;lt;code&amp;gt;QuestionnairesController&amp;lt;/code&amp;gt; that saves revision plan questions under the current round's rubric&lt;br /&gt;
*Some refactoring is required so that existing functionality can be shared with the two new methods described above (see the sketch at the end of this subsection)&lt;br /&gt;
app/controllers/response_controller.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;@questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;@questionnaire.questions(@map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
app/controllers/grades_controller.rb&lt;br /&gt;
*Call the retrieve_questions method every time with an extra parameter “&amp;lt;code&amp;gt;@team_id&amp;lt;/code&amp;gt;”.&lt;br /&gt;
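&lt;br /&gt;
A sketch of the two new controller actions follows. The bodies are illustrative rather than the actual implementation: the params keys follow the RSpec plan later on this page, and &amp;lt;code&amp;gt;save_new_questions&amp;lt;/code&amp;gt; is a hypothetical stand-in for whatever shared question-saving logic the refactoring extracts.&lt;br /&gt;
&lt;br /&gt;
    # app/controllers/questionnaires_controller.rb (illustrative sketch)&lt;br /&gt;
    def edit_revision_plan&lt;br /&gt;
      @questionnaire = Questionnaire.find(params[:id])&lt;br /&gt;
      @team_id = params[:team_id]&lt;br /&gt;
      # Supply only the current team's revision plan questions to the view.&lt;br /&gt;
      @questions = Question.where(questionnaire_id: @questionnaire.id, team_id: @team_id)&lt;br /&gt;
    end&lt;br /&gt;
    &lt;br /&gt;
    def update_revision_plan&lt;br /&gt;
      @questionnaire = Questionnaire.find(params[:id])&lt;br /&gt;
      # The real action also branches on params[:view_advice] (see the&lt;br /&gt;
      # RSpec plan); this sketch shows only the question-saving path.&lt;br /&gt;
      if params[:add_new_questions]&lt;br /&gt;
        # Tag each new question with the team's id so it is used only&lt;br /&gt;
        # when this team's submission is reviewed.&lt;br /&gt;
        save_new_questions(@questionnaire.id, params[:team_id])&lt;br /&gt;
      end&lt;br /&gt;
      redirect_to action: 'edit_revision_plan', id: @questionnaire.id, team_id: params[:team_id]&lt;br /&gt;
    end&lt;br /&gt;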
&lt;br /&gt;
===Models===&lt;br /&gt;
&lt;br /&gt;
app/models/question.rb&lt;br /&gt;
*Form association relationship with &amp;lt;code&amp;gt;Team&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;belongs_to :team, class_name: 'AssignmentTeam', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
*questions: change the method signature to &amp;lt;code&amp;gt;questions(team_id=nil)&amp;lt;/code&amp;gt;, which uses nil as the default value of the &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; parameter, so callers can choose whether to supply a team_id argument. When team_id is supplied, it returns both the questions with no team_id and the questions that have this team_id. In addition, it adds an unsaved “Revision Planning” QuestionnaireHeader to the returned list so the list displays nicely in the browser with each section separated (see the sketch after this list).&lt;br /&gt;
app/models/assignment_team.rb&lt;br /&gt;
*Form aggregation relationship with &amp;lt;code&amp;gt;Question&amp;lt;/code&amp;gt; via &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt;&lt;br /&gt;
**e.g. &amp;lt;code&amp;gt;has_many :revision_plan_questions, class_name: 'Question', foreign_key: 'team_id'&amp;lt;/code&amp;gt;&lt;br /&gt;
app/models/response.rb&lt;br /&gt;
*Replace all the occurrences of &amp;lt;code&amp;gt;questionnaire.questions&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;questionnaire.questions(self.response_map.reviewee_id)&amp;lt;/code&amp;gt; so it not only gets questions from the original rubric but also from the revision plan proposed by the team with the corresponding &amp;lt;code&amp;gt;reviewee_id&amp;lt;/code&amp;gt;.&lt;br /&gt;
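&lt;br /&gt;
A sketch of the &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method described above follows. It is written on the questionnaire side, since the call sites above invoke it as &amp;lt;code&amp;gt;questionnaire.questions(...)&amp;lt;/code&amp;gt;; the ordering by &amp;lt;code&amp;gt;seq&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;txt&amp;lt;/code&amp;gt; attribute on the header are assumptions about the surrounding codebase.&lt;br /&gt;
&lt;br /&gt;
    # Sketch of the questions method with the optional team_id parameter.&lt;br /&gt;
    def questions(team_id = nil)&lt;br /&gt;
      # Rubric questions have no team_id; they come from the instructor.&lt;br /&gt;
      rubric = Question.where(questionnaire_id: id, team_id: nil).order(:seq)&lt;br /&gt;
      return rubric.to_a if team_id.nil?&lt;br /&gt;
      plan = Question.where(questionnaire_id: id, team_id: team_id).order(:seq)&lt;br /&gt;
      return rubric.to_a if plan.empty?&lt;br /&gt;
      # Unsaved header that separates the revision plan section on the page.&lt;br /&gt;
      header = QuestionnaireHeader.new(txt: 'Revision Planning')&lt;br /&gt;
      rubric.to_a + [header] + plan.to_a&lt;br /&gt;
    end&lt;br /&gt;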
&lt;br /&gt;
===Views===&lt;br /&gt;
&lt;br /&gt;
app/views/questionnaires/edit_revision_plan.html.erb &lt;br /&gt;
*A newly added view file for students to create and edit their revision plan. It utilizes some existing code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file to reduce redundancy.&lt;br /&gt;
app/views/questionnaires/_questions.html.erb&lt;br /&gt;
*Extract code from the ''app/views/questionnaires/_questionnaire.html.erb'' view file into a standalone partial template that is later loaded by the ''app/views/questionnaires/edit_revision_plan.html.erb'' view file described above.&lt;br /&gt;
app/views/student_task/view.html.erb&lt;br /&gt;
*Add a “Revision Planning” link for students to edit their revision plan. The link will lead students to the “Edit Revision Plan” page. If the revision planning feature is enabled, this link will appear disabled at first and only become clickable during each submission period after round 1.&lt;br /&gt;
app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*Add an “Enable Revision Planning?” checkbox for each assignment, because not every assignment needs this feature; it gives the TA/instructor the flexibility to include it per assignment. The “Revision Planning” link disappears from the assignment page if the checkbox is not checked.&lt;br /&gt;
&lt;br /&gt;
===Helpers===&lt;br /&gt;
&lt;br /&gt;
app/helpers/grades_helper.rb&lt;br /&gt;
*retrieve_questions: add an extra parameter &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; whose default value is nil. The helper then invokes the &amp;lt;code&amp;gt;Questionnaire&amp;lt;/code&amp;gt; model’s &amp;lt;code&amp;gt;questions&amp;lt;/code&amp;gt; method with this &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; and retrieves the proper question set for each team.&lt;br /&gt;
&lt;br /&gt;
===Schema===&lt;br /&gt;
&lt;br /&gt;
*Assignment table: add one column named &amp;lt;code&amp;gt;is_revision_planning_enabled?&amp;lt;/code&amp;gt; to indicate whether this feature has been activated.&lt;br /&gt;
*Question table: add one column named &amp;lt;code&amp;gt;team_id&amp;lt;/code&amp;gt; to distinguish whether the question is from the official review rubric or from a particular team's revision plan.&lt;br /&gt;
&lt;br /&gt;
=Testing=&lt;br /&gt;
&lt;br /&gt;
==RSpec Test Plan==&lt;br /&gt;
&lt;br /&gt;
'''Controllers'''&lt;br /&gt;
*spec/controllers/questionnaires_controller_spec.rb&lt;br /&gt;
**Describe '#edit_revision_plan'&lt;br /&gt;
***Context 'when params[:id] is valid'&lt;br /&gt;
***Context 'when params[:id] is not valid'&lt;br /&gt;
**Describe '#update_revision_plan'&lt;br /&gt;
***Context 'when params[:add_new_questions] is not nil'&lt;br /&gt;
***Context 'when params[:view_advice] is not nil'&lt;br /&gt;
***Context 'when both params[:add_new_questions] and params[:view_advice] are nil'&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
**Describe '#view'&lt;br /&gt;
***Context 'when the revision plan is included in one round'&lt;br /&gt;
**Describe '#view_my_scores'&lt;br /&gt;
***Context 'when the revision plan is included in one round'&lt;br /&gt;
*spec/controllers/response_controller_spec.rb&lt;br /&gt;
**Describe&lt;br /&gt;
&lt;br /&gt;
'''Models'''&lt;br /&gt;
*spec/models/question_spec.rb&lt;br /&gt;
**Describe '#questions'&lt;br /&gt;
***Context 'when team_id is supplied'&lt;br /&gt;
***Context 'when team_id is not supplied'&lt;br /&gt;
&lt;br /&gt;
==UI Testing Instructions (For Reviewers)==&lt;br /&gt;
&lt;br /&gt;
'''Setup'''&lt;br /&gt;
&lt;br /&gt;
Login information&lt;br /&gt;
*Visit xxx [expertiza deployment link]&lt;br /&gt;
    User name: instructor6/student8030/student8031&lt;br /&gt;
    Password: password&lt;br /&gt;
Enable/Disable Revision Planning&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Review strategy'''” tab, check the checkbox labeled “'''Enable Revision Planning?'''” to enable the revision planning feature. &lt;br /&gt;
Enable/Disable the “Revision Planning” link&lt;br /&gt;
*After the instructor configures the assignment to include the revision planning feature, the “'''Revision Planning'''” link will appear on the student's assignment page but will remain disabled and only be enabled during each submission period after round 1. Therefore, to enable the link:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the future.&lt;br /&gt;
*To disable the link after the round 2 submission period:&lt;br /&gt;
**Login as instructor6.&lt;br /&gt;
**Go to an assignment’s edit page. Under the “'''Due dates'''” tab, change the round 2 submission date to any date in the past and change the round 2 review date to any date in the future.&lt;br /&gt;
Create teams for the assignment&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Go to an assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Add participant'''”.&lt;br /&gt;
*In the new page, add two students, student8030 and student8031, so one student can create revision plan questions during the submission period while the other can respond to these questions during the review period.&lt;br /&gt;
*Go back to the assignment’s edit page. Under the “'''Other stuff'''” tab, click “'''Create teams'''”.&lt;br /&gt;
*In the new page, assign the two added students to separate teams.&lt;br /&gt;
&lt;br /&gt;
Edit a Revision Plan&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link, which redirects the user to a new page used to create a revision plan. Fill in the blanks and click the “'''Save review questionnaire'''” button; the revision plan should be saved.&lt;br /&gt;
Test retrieval of revision plan questions for a specific team&lt;br /&gt;
*Login as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Revision Planning'''” link after steps under '''Edit a Revision Plan''' have been done. The “'''Revision Planning'''” link should redirect the user to the Revision Planning edit page that is populated with previously saved questions.&lt;br /&gt;
Check the Revision Plan questions in the questionnaire&lt;br /&gt;
*Login as instructor6.&lt;br /&gt;
*Adjust the time frame to the second round review period.&lt;br /&gt;
*Log out and log in again as student8031.&lt;br /&gt;
*In the assignment page, click on the “'''Others’ work'''” link, which takes the user to the review page where one requests a new team’s submission to review. Go to the only other team’s review page and check if the questions are properly displayed under the “Revision Planning” subheader.&lt;br /&gt;
Check responses to the Revision Plan questions&lt;br /&gt;
*Login again as student8030.&lt;br /&gt;
*In the assignment page, click on the “'''Alternative View'''” link, and see if student8030 gets responses to both the original rubric questions and the team's revision plan questions.&lt;/div&gt;</summary>
		<author><name>Yzhan114</name></author>
	</entry>
</feed>