<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Msaikia</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Msaikia"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Msaikia"/>
	<updated>2026-05-16T16:28:56Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100344</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100344"/>
		<updated>2015-12-05T04:55:59Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Database Table Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review]-based web application that supports incremental learning. Students submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed with the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation].&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of Expertiza has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and word count, whenever a review is submitted. The purpose of this project is to give students metrics on the content of their reviews even when automated meta-reviews are disabled. It also adds new, relevant metrics that help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor must manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics for the assignments related to them. The scope does not include any automated reporting of the metrics. Since the project mainly reports on existing data, we do not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the peer-review process itself, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics.&lt;br /&gt;
# Create code to calculate the values of the metrics, and ensure that it runs fast enough (results within 5 seconds), as the current auto text metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create views for both students and instructors that show, for each assignment:&lt;br /&gt;
#* the total number of words&lt;br /&gt;
#* the average number of words across all the reviews for the assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* the number of distinct words in a particular reviewer's review&lt;br /&gt;
#* the number of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Pattern ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we iterate over the comments in each response (the answers table) for a particular round to calculate the individual metrics. While calculating the aggregate metrics, we iterate through the review_metrics table to read the metric values for a particular user, using all the responses for the response map corresponding to the submission of the review.&lt;br /&gt;
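&lt;br /&gt;
The aggregate iteration described above can be sketched in plain Ruby. This is a minimal illustration using hypothetical in-memory rows and field names taken from this page, not actual Expertiza code:&lt;br /&gt;

```ruby
# Hypothetical rows from the review_metrics table for one reviewer.
metrics = [
  { total_word_count: 120, suggestions_count: 2 },
  { total_word_count: 80,  suggestions_count: 0 }
]

# Iterate over every stored metric row and average the word counts.
average_words = metrics.sum { |m| m[:total_word_count] }.to_f / metrics.size

# Percentage of reviews that contain at least one suggestion.
with_suggestions = metrics.count { |m| m[:suggestions_count] > 0 }
suggestion_pct = 100.0 * with_suggestions / metrics.size
```

In the real implementation the rows would come from the review_metrics table rather than an in-memory array, but the traversal is the same.&lt;br /&gt;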
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Review_table.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is captured when a reviewer submits a review and is passed to the ReviewMetric model. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''total_word_count:''' The total number of words in a particular review (response_id).&lt;br /&gt;
*'''diff_word_count:''' The number of distinct words in a particular review (response_id).&lt;br /&gt;
*'''suggestions_count:''' The number of suggestions given in the review.&lt;br /&gt;
*'''error_count:''' The number of comments that point out errors.&lt;br /&gt;
*'''offensive_count:''' The number of comments containing offensive words.&lt;br /&gt;
*'''complete_count:''' The number of comments that contain complete sentences.&lt;br /&gt;
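&lt;br /&gt;
The attributes above can be pictured as a single row. A plain-Ruby sketch (field names come from this page; the integer types are assumed):&lt;br /&gt;

```ruby
# Plain-Ruby sketch of one review_metrics row; types are assumed integers.
ReviewMetric = Struct.new(
  :response_id,       # links the row to a reviewer-response map
  :total_word_count,  # total words in the review
  :diff_word_count,   # distinct words in the review
  :suggestions_count, # suggestions found in the review
  :error_count,       # comments pointing out errors
  :offensive_count,   # comments containing offensive words
  :complete_count     # comments with complete sentences
)

row = ReviewMetric.new(42, 120, 75, 2, 1, 0, 6)
```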
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View review text metrics as a reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews they have done for an assignment/project.&lt;br /&gt;
# '''View review text metrics as an instructor:''' An instructor can see the text metrics of the reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics database has entries populated for each type of metric (number of words, number of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the instructor can see the text metrics of the reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Tasks Implemented ===&lt;br /&gt;
&lt;br /&gt;
The following tasks were implemented:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics&lt;br /&gt;
#* A new model, ''ReviewMetric'', has been added to the project.&lt;br /&gt;
#* A table, ''review_metrics'', backs this model. It has the following columns to record data for each response, i.e. each submitted review.&lt;br /&gt;
#* A new row is inserted when a new review is saved or submitted, and the corresponding row is updated when an existing review is edited or resubmitted.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the values of the metrics, and ensure that it runs fast enough (results within 5 seconds), as the current auto text metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review resides in the ''calculate_metric'' method of the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the ''saving'' method in ''response_controller.rb''. The ''saving'' method is invoked, after the necessary processing, when a review is saved or submitted. ''calculate_metric'' is called at the end of ''saving'' to analyse the review and store the required information.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to pull out the saved review content via the ''response_id'' of the review. It maintains separate sets of offensive words, suggestive words, and problem-pointing words. The method then calculates the following:&lt;br /&gt;
#** the total number of words in the review&lt;br /&gt;
#** the number of distinct words in the review&lt;br /&gt;
#** the number of offensive words in the review - the method uses the set of offensive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** the number of words that signal a suggestion in the review - the method uses the set of suggestive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** the number of words that signal a problem being pointed out in the review - the method uses the set of problem-pointing words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** the number of questions responded to with complete sentences - each sentence with more than seven words qualifies as a complete sentence[[File:Model.png]]&lt;br /&gt;
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links on the student report page and the instructor ''_review_report.html.erb'' page.&lt;br /&gt;
#* The following screenshots show where the links appear on the above-mentioned pages&amp;lt;br&amp;gt;[[File:ViewCode1.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode2.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for assignments with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
#* For each review submission, ''calculate_metric'' uses the response_id of the saved/submitted review to find the review text stored in the ''Answer'' table. The entire review text is then checked to calculate the required metrics. Hence, any variation in the review rubrics does not affect the metric calculation.&lt;br /&gt;
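&lt;br /&gt;
The calculation steps above can be sketched as follows. The word lists here are tiny hypothetical dictionaries and the method takes its comments as an argument; the real ''calculate_metric'' in ''review_metric.rb'' reads the comments from the ''Answer'' table and uses larger word sets:&lt;br /&gt;

```ruby
# Hedged sketch of the metric calculation; dictionaries are hypothetical.
OFFENSIVE  = %w[stupid awful].freeze
SUGGESTIVE = %w[could should maybe].freeze
PROBLEM    = %w[error bug missing].freeze

def calculate_metric(comments)
  # Split all comments into lowercase word tokens.
  words = comments.join(" ").downcase.scan(/[a-z']+/)
  {
    total_word_count:  words.size,
    diff_word_count:   words.uniq.size,
    offensive_count:   words.count { |w| OFFENSIVE.include?(w) },
    suggestions_count: words.count { |w| SUGGESTIVE.include?(w) },
    error_count:       words.count { |w| PROBLEM.include?(w) },
    # A comment with more than seven words counts as a complete sentence.
    complete_count:    comments.count { |c| c.split.size > 7 }
  }
end
```

Because the sketch only needs the flat list of comments for a response_id, it is unaffected by whether the rubric varies by round, which mirrors the argument made above.&lt;br /&gt;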
&lt;br /&gt;
=== Unit Tests ===&lt;br /&gt;
# Tested for a valid object, i.e. a valid ReviewMetric entry.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model calculates the metrics only for a ReviewMetric entry with a valid response_id.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model does not calculate the metrics for a ReviewMetric entry with an invalid response_id.&lt;br /&gt;
[[File: Testing1580.png]]&lt;br /&gt;
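&lt;br /&gt;
The response_id guard these tests exercise can be sketched as follows; the method and field names are assumed for illustration:&lt;br /&gt;

```ruby
# Sketch of the validity guard: metrics are calculated only when the
# entry carries a valid response_id. Names here are hypothetical.
def calculate_if_valid(entry)
  return nil if entry[:response_id].nil?  # invalid entry: skip calculation
  { response_id: entry[:response_id] }    # valid entry: proceed to calculate
end
```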
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/ Expertiza GitHub Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_table.png&amp;diff=100341</id>
		<title>File:Review table.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_table.png&amp;diff=100341"/>
		<updated>2015-12-05T04:55:27Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: review_metrics&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;review_metrics&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Table.PNG&amp;diff=100339</id>
		<title>File:Table.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Table.PNG&amp;diff=100339"/>
		<updated>2015-12-05T04:54:57Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: uploaded a new version of &amp;amp;quot;File:Table.PNG&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;metrics_table&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Table.PNG&amp;diff=100335</id>
		<title>File:Table.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Table.PNG&amp;diff=100335"/>
		<updated>2015-12-05T04:53:47Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: uploaded a new version of &amp;amp;quot;File:Table.PNG&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;metrics_table&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100325</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100325"/>
		<updated>2015-12-05T04:48:04Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Design Patterns */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review] based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation]&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of The Expertiza Project has an automated meta-review system wherein the reviewer gets an e-mail containing various metrics of his review like relevance, plagiarism, number of words etc. , whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors to gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited creating a system where the reviewer and the instructors can view metrics for the submitted reviews.The user or the instructor need to manually visit the link to view the metrics.The instructor can view the metrics for every assignment available whereas the user can view only the metrics of the assignments relating to himself.The scope doesn't include any automated viewing of reports for the metrics.Since the project is mainly related to giving reports about the existing data, we will not be modifying the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism.The scope also excludes any change of the actual peer review process i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no.of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* if there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* if problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of the peer-reviews which point out problems in this assignment in this round&lt;br /&gt;
#* if any offensive language is used&lt;br /&gt;
#* the percentage of peer-reviews containing offensive language&lt;br /&gt;
#* No.of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Pattern ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we iterate over the comments in each response (the answers table) relating to a particular round to calculate the individual metrics. While calculating the aggregate metrics, we also iterate through the review_metrics table to read the metric values for a particular user, using all the responses for the response map corresponding to the submission of the review.&lt;br /&gt;
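As an illustration of this iteration, the aggregate metrics could be computed roughly as follows. This is a plain-Ruby sketch, not Expertiza code: the `rows` array is a hypothetical stand-in for the per-review rows of the review_metrics table.

```ruby
# Illustrative sketch: aggregating per-review metric rows via Ruby's
# Enumerable methods, which are built on the iterator pattern
# (each, sum, and count all traverse the collection internally).
# Each hash stands in for one hypothetical review_metrics row.
rows = [
  { total_word_count: 120, suggestions_count: 2, offensive_count: 0 },
  { total_word_count: 80,  suggestions_count: 0, offensive_count: 1 },
  { total_word_count: 100, suggestions_count: 1, offensive_count: 0 }
]

average_words       = rows.sum { |r| r[:total_word_count] }.to_f / rows.size
pct_with_suggestion = 100.0 * rows.count { |r| r[:suggestions_count] > 0 } / rows.size
pct_offensive       = 100.0 * rows.count { |r| r[:offensive_count] > 0 } / rows.size

puts format('avg words: %.1f', average_words)          # avg words: 100.0
puts format('suggesting: %.1f%%', pct_with_suggestion) # suggesting: 66.7%
puts format('offensive: %.1f%%', pct_offensive)        # offensive: 33.3%
```

The same traversal works whether the rows come from an in-memory array or an ActiveRecord relation, which is the point of the pattern.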
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is captured when a reviewer submits a review and is passed on to the ReviewMetric model. It is used to link the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''total_word_count:''' This attribute contains the total number of words for a particular review(response_id).&lt;br /&gt;
*'''diff_word_count:''' This attribute contains the total number of different words for a particular review(response_id).&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review&lt;br /&gt;
*'''error_count:''' Field containing the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''complete_count:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
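Under these assumptions, the table could be declared with a Rails migration along the following lines. This is a sketch based on the schema image; the class name and exact column types are assumptions, not the project's actual migration file.

```ruby
# Hypothetical Rails migration sketch for the review_metrics table
# described above; column names follow the schema image.
class CreateReviewMetrics < ActiveRecord::Migration
  def change
    create_table :review_metrics do |t|
      t.integer :response_id        # links the metrics to a response
      t.integer :total_word_count   # total words in the review
      t.integer :diff_word_count    # distinct words in the review
      t.integer :suggestions_count  # comments containing suggestions
      t.integer :error_count        # comments pointing out errors
      t.integer :offensive_count    # comments with offensive words
      t.integer :complete_count     # comments with complete sentences
      t.timestamps
    end
  end
end
```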
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as it is used in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Tasks Implemented ===&lt;br /&gt;
&lt;br /&gt;
The following tasks have been implemented:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics&lt;br /&gt;
#* A new MVC component named ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table named ''review_metrics'' is used for this purpose. It has the following columns to record data for each response, i.e., each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is saved or submitted, and the respective row is updated when an existing review is edited or resubmitted.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the values of the metrics, ensuring that it runs fast enough (returns results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the function ''calculate_metric'' in the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the function ''saving'' in ''response_controller.rb''. The ''saving'' function is called, after the necessary processing, when a review is saved or submitted; ''calculate_metric'' is invoked at the end of ''saving'' to analyse the review and store the corresponding information.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to pull out the saved review content using the ''response_id'' of the review. It incorporates three word lists: offensive words, suggestive words, and problem-pointing words. The function then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Number of different words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method uses a set of offensive words as a dictionary and compares it with each word in the review&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method uses a set of suggestive words as a dictionary and compares it with each word in the review&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method uses a set of problem-pointing words as a dictionary and compares it with each word in the review&lt;br /&gt;
#** Number of questions responded to with complete sentences - each sentence with more than seven words qualifies as a complete sentence[[File:Model.png]]&lt;br /&gt;
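The calculation described above can be sketched in plain Ruby as follows. This is a simplified, hypothetical stand-in for ''calculate_metric'': the word lists are tiny illustrative samples, not the actual dictionaries, and the function takes an array of comment strings rather than reading the ''Answer'' table.

```ruby
# Simplified sketch of the per-review metric calculation described above.
# The dictionaries are tiny illustrative samples, not the real word lists.
SUGGESTIVE = %w[suggest could should consider maybe].freeze
PROBLEM    = %w[error bug wrong broken missing].freeze
OFFENSIVE  = %w[stupid dumb].freeze

def text_metrics(comments)
  # normalize to lowercase words, stripping punctuation
  words = comments.flat_map { |c| c.downcase.scan(/[a-z']+/) }
  {
    total_word_count:  words.size,
    diff_word_count:   words.uniq.size,
    suggestions_count: words.count { |w| SUGGESTIVE.include?(w) },
    error_count:       words.count { |w| PROBLEM.include?(w) },
    offensive_count:   words.count { |w| OFFENSIVE.include?(w) },
    # heuristic from the write-up: a sentence with more than seven
    # words counts as "complete"
    complete_count:    comments.count { |c| c.split.size > 7 }
  }
end

m = text_metrics([
  "You should consider renaming this method for clarity and readability.",
  "There is a bug here."
])
# m[:total_word_count] => 15, m[:suggestions_count] => 2, m[:error_count] => 1
```

A hash like `m` maps directly onto one row of the review_metrics table, so saving it is a single record insert or update keyed by response_id.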
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links in the student report page and instructor ''_review_report.html.erb'' page&lt;br /&gt;
#* The following are screenshots where the links are included in the above mentioned pages&amp;lt;br&amp;gt;[[File:ViewCode1.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode2.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
#* For each review submission, the ''calculate_metric'' code uses the response_id of the saved/submitted review to find the review text stored in the ''Answer'' table. The entire review text is then checked to calculate the required metrics, so any variation in the review rubrics does not affect the metric calculation.&lt;br /&gt;
&lt;br /&gt;
=== Unit Tests ===&lt;br /&gt;
# Tested for a valid object, i.e., a valid ReviewMetric entry.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model calculates the metrics only for a ReviewMetric entry with a valid response_id.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model does not calculate the metrics for a ReviewMetric entry with an invalid response_id.&lt;br /&gt;
[[File: Testing1580.png]]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100324</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100324"/>
		<updated>2015-12-05T04:47:42Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Design Patterns */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review] based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation]&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of The Expertiza Project has an automated meta-review system wherein the reviewer gets an e-mail containing various metrics of his review like relevance, plagiarism, number of words etc. , whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors to gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor must manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics for his or her own assignments. The scope does not include any automated viewing of metric reports. Since the project mainly reports on existing data, we will not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# create a database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics, ensuring that it runs fast enough (returns results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* if there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* if problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of the peer-reviews which point out problems in this assignment in this round&lt;br /&gt;
#* if any offensive language is used&lt;br /&gt;
#* the percentage of peer-reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we iterate over the comments in each response (the answers table) relating to a particular round to calculate the individual metrics. While calculating the aggregate metrics, we also iterate through the review_metrics table to read the metric values for a particular user, using all the responses for the response map corresponding to the submission of the review.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is captured when a reviewer submits a review and is passed on to the ReviewMetric model. It is used to link the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''total_word_count:''' This attribute contains the total number of words for a particular review(response_id).&lt;br /&gt;
*'''diff_word_count:''' This attribute contains the total number of different words for a particular review(response_id).&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review&lt;br /&gt;
*'''error_count:''' Field containing the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''complete_count:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as it is used in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Tasks Implemented ===&lt;br /&gt;
&lt;br /&gt;
The following tasks have been implemented:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics&lt;br /&gt;
#* A new MVC component named ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table named ''review_metrics'' is used for this purpose. It has the following columns to record data for each response, i.e., each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is saved or submitted, and the respective row is updated when an existing review is edited or resubmitted.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the values of the metrics, ensuring that it runs fast enough (returns results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the function ''calculate_metric'' in the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the function ''saving'' in ''response_controller.rb''. The ''saving'' function is called, after the necessary processing, when a review is saved or submitted; ''calculate_metric'' is invoked at the end of ''saving'' to analyse the review and store the corresponding information.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to pull out the saved review content using the ''response_id'' of the review. It incorporates three word lists: offensive words, suggestive words, and problem-pointing words. The function then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Number of different words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method uses a set of offensive words as a dictionary and compares it with each word in the review&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method uses a set of suggestive words as a dictionary and compares it with each word in the review&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method uses a set of problem-pointing words as a dictionary and compares it with each word in the review&lt;br /&gt;
#** Number of questions responded to with complete sentences - each sentence with more than seven words qualifies as a complete sentence[[File:Model.png]]&lt;br /&gt;
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links in the student report page and instructor ''_review_report.html.erb'' page&lt;br /&gt;
#* The following are screenshots where the links are included in the above mentioned pages&amp;lt;br&amp;gt;[[File:ViewCode1.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode2.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
#* For each review submission, the ''calculate_metric'' code uses the response_id of the saved/submitted review to find the review text stored in the ''Answer'' table. The entire review text is then checked to calculate the required metrics, so any variation in the review rubrics does not affect the metric calculation.&lt;br /&gt;
&lt;br /&gt;
=== Unit Tests ===&lt;br /&gt;
# Tested for a valid object, i.e., a valid ReviewMetric entry.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model calculates the metrics only for a ReviewMetric entry with a valid response_id.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model does not calculate the metrics for a ReviewMetric entry with an invalid response_id.&lt;br /&gt;
[[File: Testing1580.png]]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100315</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100315"/>
		<updated>2015-12-05T04:43:05Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review] based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation]&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of The Expertiza Project has an automated meta-review system wherein the reviewer gets an e-mail containing various metrics of his review like relevance, plagiarism, number of words etc. , whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors to gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor must manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics for his or her own assignments. The scope does not include any automated viewing of metric reports. Since the project mainly reports on existing data, we will not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# create a database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics, ensuring that it runs fast enough (returns results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* if there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* if problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of the peer-reviews which point out problems in this assignment in this round&lt;br /&gt;
#* if any offensive language is used&lt;br /&gt;
#* the percentage of peer-reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces the memory footprint by sharing already-existing similar objects and creating a new object only when no match exists. In our case, we can reuse the same partials for the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is captured when a reviewer submits a review and is passed on to the ReviewMetric model. It is used to link the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''total_word_count:''' This attribute contains the total number of words for a particular review(response_id).&lt;br /&gt;
*'''diff_word_count:''' This attribute contains the total number of different words for a particular review(response_id).&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review&lt;br /&gt;
*'''error_count:''' Field containing the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''complete_count:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as it is used in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Tasks Implemented ===&lt;br /&gt;
&lt;br /&gt;
The following tasks have been implemented:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics&lt;br /&gt;
#* A new MVC component named ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table named ''review_metrics'' is used for this purpose. It has the following columns to record data for each response, i.e., each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is saved or submitted, and the respective row is updated when an existing review is edited or resubmitted.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the values of the metrics, ensuring that it runs fast enough (returns results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the function ''calculate_metric'' in the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the function ''saving'' in ''response_controller.rb''. The ''saving'' function is called, after the necessary processing, when a review is saved or submitted; ''calculate_metric'' is invoked at the end of ''saving'' to analyse the review and store the corresponding information.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to pull out the saved review content using the ''response_id'' of the review. It incorporates three word lists: offensive words, suggestive words, and problem-pointing words. The function then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Different number of words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method uses a set of offensive words as a dictionary and compares this with each word in the review&lt;br /&gt;
#** Number of words which signals a suggestion in the review - the method uses a set of suggestive words as a dictionary and compares this with each word in the review&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method uses a set of problem pointing words as a dictionary and compares this with each word in the review&lt;br /&gt;
#** Number of questions responded to with complete sentences - each sentence which has more than seven words qualify for a complete sentence[[File:Model.png]]&lt;br /&gt;
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links on the student report page and the instructor ''_review_report.html.erb'' page.&lt;br /&gt;
#* The following are screenshots of the pages where these links are included&amp;lt;br&amp;gt;[[File:ViewCode1.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode2.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
#* For each review submission, ''calculate_metric'' uses the response_id of the saved/submitted review to find the review text stored in the ''Answer'' table, and the entire review text is then analysed to calculate the required metrics. Hence, any variation in the review rubrics does not affect the metric calculation.&lt;br /&gt;
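The per-review counting described above can be sketched in plain Ruby. This is an illustrative stand-in, not the actual ''calculate_metric'' code: the word lists are placeholders, and the real method reads the review text from the ''Answer'' table rather than taking it as an argument.

```ruby
require 'set'

# Placeholder dictionaries; the real lists live in review_metric.rb.
OFFENSIVE  = Set.new(%w[stupid dumb useless])
SUGGESTIVE = Set.new(%w[should could consider suggest])
PROBLEM    = Set.new(%w[error bug problem missing])

# Compute the metrics for one review's text.
def text_metrics(review_text)
  words     = review_text.downcase.scan(/[a-z']+/)
  sentences = review_text.split(/[.!?]+/).map { |s| s.strip }.reject { |s| s.empty? }
  {
    total_word_count:  words.size,
    diff_word_count:   words.uniq.size,
    offensive_count:   words.count { |w| OFFENSIVE.include?(w) },
    suggestions_count: words.count { |w| SUGGESTIVE.include?(w) },
    error_count:       words.count { |w| PROBLEM.include?(w) },
    # a sentence with more than seven words counts as "complete"
    complete_count:    sentences.count { |s| s.split.size > 7 }
  }
end
```

Because each lookup is a constant-time Set membership test, the whole pass is linear in the length of the review, which is what keeps the calculation within the 5-second budget.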
&lt;br /&gt;
=== Unit Tests ===&lt;br /&gt;
# Tested for a valid object, i.e. a valid ReviewMetric entry.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model calculates the metrics only for a ReviewMetric entry with a valid response_id.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model does not calculate the metrics for a ReviewMetric entry with an invalid response_id.&lt;br /&gt;
[[File: Testing1580.png]]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100314</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100314"/>
		<updated>2015-12-05T04:42:52Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review] based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation]&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and word count, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics for the assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics, and ensure that it runs fast enough to return results within 5 seconds, since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* if there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* if problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of the peer-reviews which point out problems in this assignment in this round&lt;br /&gt;
#* if any offensive language is used&lt;br /&gt;
#* the percentage of peer-reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
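The per-assignment aggregates in the list above (averages and percentages) are straightforward to derive once one metrics record exists per review. A hypothetical sketch, assuming each record is a hash whose keys mirror the counts stored per review:

```ruby
# Derive assignment-level aggregates from one metrics hash per review.
def aggregate(metrics_rows)
  n = metrics_rows.size.to_f
  {
    avg_word_count: metrics_rows.sum { |m| m[:total_word_count] } / n,
    # percentage of reviews containing at least one suggestion / problem / offensive word
    pct_suggestive: 100.0 * metrics_rows.count { |m| m[:suggestions_count] > 0 } / n,
    pct_problem:    100.0 * metrics_rows.count { |m| m[:error_count] > 0 } / n,
    pct_offensive:  100.0 * metrics_rows.count { |m| m[:offensive_count] > 0 } / n
  }
end
```

The percentages count reviews with a nonzero count rather than summing the counts themselves, matching the "percentage of peer reviews that offer any suggestions" phrasing above.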
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator design pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces the memory footprint by reusing existing similar objects, creating a new object only when no match exists. In our case, we can use the same partials for the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is captured when a reviewer submits a review and is passed on to the ReviewMetric model. It is used to link the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''total_word_count:''' This attribute contains the total number of words for a particular review(response_id).&lt;br /&gt;
*'''diff_word_count:''' This attribute contains the total number of different words for a particular review(response_id).&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' Field containing the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''complete_count:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
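The columns above can be expressed as an ActiveRecord schema fragment. This is a sketch only: the project's actual migration may differ, and all column types are assumed to be integers.

```ruby
# Hypothetical schema fragment for the review_metrics table shown above.
# Assumes ActiveRecord is loaded (e.g. inside a Rails app).
ActiveRecord::Schema.define do
  create_table :review_metrics do |t|
    t.integer :response_id       # links the metrics to a reviewer-response map
    t.integer :total_word_count
    t.integer :diff_word_count
    t.integer :suggestions_count
    t.integer :error_count
    t.integer :offensive_count
    t.integer :complete_count
    t.timestamps
  end
  # one metrics row per response, looked up on every save/submit
  add_index :review_metrics, :response_id
end
```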
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of their individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics database has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Tasks Implemented ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# Create database table to record all the metrics&lt;br /&gt;
#* A new MVC component named ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table ''review_metrics'' backs this model. It has the following columns to record data for each response, i.e. each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is first saved or submitted, and the corresponding row is updated when an existing review is re-edited or resubmitted.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the values of the metrics, and ensure that it runs fast enough to return results within 5 seconds, since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the ''calculate_metric'' method of the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the ''saving'' method in ''response_controller.rb'', which runs after the necessary processing whenever a review is saved or submitted. ''calculate_metric'' is invoked at the end of ''saving'' to analyse the review and store the corresponding metrics.&lt;br /&gt;
#* The calculation code uses the ''response_id'' of the review to pull the saved review content from the ''Answer'' table. It maintains one set each of offensive words, suggestive words, and problem-pointing words. The method then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Number of distinct words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method uses a set of offensive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method uses a set of suggestive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method uses a set of problem-pointing words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of questions responded to with complete sentences - each sentence with more than seven words qualifies as a complete sentence[[File:Model.png]]&lt;br /&gt;
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links on the student report page and the instructor ''_review_report.html.erb'' page.&lt;br /&gt;
#* The following are screenshots of the pages where these links are included&amp;lt;br&amp;gt;[[File:ViewCode1.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode2.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
#* For each review submission, ''calculate_metric'' uses the response_id of the saved/submitted review to find the review text stored in the ''Answer'' table, and the entire review text is then analysed to calculate the required metrics. Hence, any variation in the review rubrics does not affect the metric calculation.&lt;br /&gt;
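The insert-or-update behaviour from task 1 above (one metrics row per response_id, created on first save and refreshed on every resubmission) can be modelled in a few lines. This sketch stands in for the ActiveRecord code; the table is modelled as a Hash keyed by response_id:

```ruby
# One metrics record per response_id: created on the first save,
# updated in place on every re-edit or resubmission.
REVIEW_METRICS = {}

def save_metrics(response_id, metrics)
  row = REVIEW_METRICS[response_id] ||= {}  # insert on first save
  row.merge!(metrics)                       # update on subsequent saves
  row
end
```

Keying on response_id is what makes resubmission idempotent: re-editing a review never produces a second metrics row.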
&lt;br /&gt;
=== Unit Tests ===&lt;br /&gt;
# Tested for a valid object, i.e. a valid ReviewMetric entry.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model calculates the metrics only for a ReviewMetric entry with a valid response_id.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model does not calculate the metrics for a ReviewMetric entry with an invalid response_id.&lt;br /&gt;
[[File: Testing1580.png]]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100313</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100313"/>
		<updated>2015-12-05T04:42:07Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Database Table Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review] based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation]&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and word count, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics for the assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics, and ensure that it runs fast enough to return results within 5 seconds, since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* if there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* if problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of the peer-reviews which point out problems in this assignment in this round&lt;br /&gt;
#* if any offensive language is used&lt;br /&gt;
#* the percentage of peer-reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator design pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces the memory footprint by reusing existing similar objects, creating a new object only when no match exists. In our case, we can use the same partials for the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is captured when a reviewer submits a review and is passed on to the ReviewMetric model. It is used to link the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''total_word_count:''' This attribute contains the total number of words for a particular review(response_id).&lt;br /&gt;
*'''diff_word_count:''' This attribute contains the total number of different words for a particular review(response_id).&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' Field containing the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''complete_count:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of their individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics database has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#Also for '''use case 1''', test if the reviewer can see the text metrics of each of their individual reviews.&lt;br /&gt;
#For '''use case 2''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Tasks Implemented ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# Create database table to record all the metrics&lt;br /&gt;
#* A new MVC component named ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table ''review_metrics'' backs this model. It has the following columns to record data for each response, i.e. each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is first saved or submitted, and the corresponding row is updated when an existing review is re-edited or resubmitted.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the values of the metrics, and ensure that it runs fast enough to return results within 5 seconds, since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the ''calculate_metric'' method of the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the ''saving'' method in ''response_controller.rb'', which runs after the necessary processing whenever a review is saved or submitted. ''calculate_metric'' is invoked at the end of ''saving'' to analyse the review and store the corresponding metrics.&lt;br /&gt;
#* The calculation code uses the ''response_id'' of the review to pull the saved review content from the ''Answer'' table. It maintains one set each of offensive words, suggestive words, and problem-pointing words. The method then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Number of distinct words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method uses a set of offensive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method uses a set of suggestive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method uses a set of problem-pointing words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of questions responded to with complete sentences - each sentence with more than seven words qualifies as a complete sentence[[File:Model.png]]&lt;br /&gt;
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links on the student report page and the instructor ''_review_report.html.erb'' page.&lt;br /&gt;
#* The following are screenshots of the pages where these links are included&amp;lt;br&amp;gt;[[File:ViewCode1.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode2.png]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
#* For each review submission, ''calculate_metric'' uses the response_id of the saved/submitted review to find the review text stored in the ''Answer'' table, and the entire review text is then analysed to calculate the required metrics. Hence, any variation in the review rubrics does not affect the metric calculation.&lt;br /&gt;
&lt;br /&gt;
=== Unit Tests ===&lt;br /&gt;
# Tested for a valid object, i.e. a valid ReviewMetric entry.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model calculates the metrics only for a ReviewMetric entry with a valid response_id.&lt;br /&gt;
# Tested that the ''ReviewMetric'' model does not calculate the metrics for a ReviewMetric entry with an invalid response_id.&lt;br /&gt;
[[File: Testing1580.png]]&lt;br /&gt;
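The guard exercised by tests 2 and 3 above reduces to a membership check before any calculation happens. A minimal sketch, in which ''KNOWN_RESPONSE_IDS'' stands in for a lookup against the responses table and the returned hash stands in for the real metric calculation:

```ruby
require 'set'

# Stand-in for "does this response_id exist in the responses table?"
KNOWN_RESPONSE_IDS = Set.new([1, 2, 3])

# Calculate metrics only when the response_id is valid; otherwise do nothing.
def calculate_if_valid(response_id)
  return nil unless KNOWN_RESPONSE_IDS.include?(response_id)
  { response_id: response_id }  # placeholder for the real metric calculation
end
```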
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100260</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100260"/>
		<updated>2015-12-05T03:04:46Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* UML Diagram */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review] based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation]&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and word count, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics for the assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics.&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), as the current auto-text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create views for both students and instructors that show, for each assignment:&lt;br /&gt;
#* the total number of words&lt;br /&gt;
#* the average number of words across all reviews for the assignment in a particular round&lt;br /&gt;
#* whether each reviewer's review contains suggestions&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* the number of distinct words in a particular reviewer's review&lt;br /&gt;
#* the number of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
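The per-assignment aggregates listed above (the average word count and the three percentages) could be derived from stored per-review rows along these lines; the row fields mirror the proposed table and are assumptions, not the final code.&lt;br /&gt;

```ruby
# Sketch of the per-assignment aggregates listed above. Each row stands for
# one review's stored metrics; the field names mirror the proposed
# review_metrics table (an assumption, not the final schema).
def aggregate(rows)
  return {} if rows.empty?
  n = rows.size.to_f
  {
    avg_words:            rows.sum { |r| r[:word_count] } / n,
    pct_with_suggestions: 100.0 * rows.count { |r| r[:suggestions_count] > 0 } / n,
    pct_with_problems:    100.0 * rows.count { |r| r[:error_count] > 0 } / n,
    pct_offensive:        100.0 * rows.count { |r| r[:offensive_count] > 0 } / n
  }
end
```

For example, two reviews of 100 and 50 words, one offering suggestions and the other pointing out a problem, yield an average of 75 words and 50% on the suggestion and problem percentages.&lt;br /&gt;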
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces memory footprint by reusing existing similar objects, creating a new object only when no match is found. In our case, we can reuse the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
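As a small illustration of the iterator usage described above, Ruby's built-in ''each''/''map'' iterators can walk the calculated metric values without exposing the underlying container; the metric names and values here are illustrative only.&lt;br /&gt;

```ruby
# Ruby hashes are Enumerable, so the metric values can be traversed with
# the built-in iterators; the names and values here are illustrative only.
metrics = { word_count: 120, suggestions_count: 3, offensive_count: 0 }
lines = metrics.map { |name, value| "#{name}: #{value}" }
# lines is ["word_count: 120", "suggestions_count: 3", "offensive_count: 0"]
```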
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is generated automatically when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute holds the number of distinct words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field holds the number of comments that point out errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute holds the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This column holds the number of comments that contain complete sentences.&lt;br /&gt;
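A minimal sketch of the DDL implied by the schema image follows; the column types and defaults are assumptions, not the final migration.&lt;br /&gt;

```ruby
# Hypothetical DDL for the review_metrics table described above.
# Column names follow the schema image; types and defaults are assumptions.
CREATE_REVIEW_METRICS = %q{
  CREATE TABLE review_metrics (
    id                INTEGER PRIMARY KEY,
    response_id       INTEGER NOT NULL,
    word_count        INTEGER DEFAULT 0,
    suggestions_count INTEGER DEFAULT 0,
    error_count       INTEGER DEFAULT 0,
    offensive_count   INTEGER DEFAULT 0,
    comp_reviews      INTEGER DEFAULT 0
  )
}
```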
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of their individual reviews, as well as aggregate metrics for all the reviews they have done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of the reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#Test whether the text-metrics table has entries populated for each metric (no. of words, no. of offensive words, etc.) once a reviewer submits a review.&lt;br /&gt;
#For '''use case 1''', test whether the reviewer can see the text metrics of the individual reviews they have done, as well as the aggregate metrics.&lt;br /&gt;
#For '''use case 2''', test whether the instructor can see the text metrics of the reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
[[File:UML_Review_Metrics.png]]&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics.&lt;br /&gt;
#* A new model, ''ReviewMetric'', has been added to the project.&lt;br /&gt;
#* A table, ''review_metrics'', backs this model. It has the following columns to record data for each response, i.e., each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a review is first saved or submitted, and the corresponding row is updated when an existing review is edited and resubmitted.&lt;br /&gt;
#* [[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), as the current auto-text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the method ''calulate_metric'' in the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the ''saving'' method in ''response_controller.rb''. The ''saving'' method is invoked, after the necessary processing, when a review is saved or submitted. ''calulate_metric'' is called at the end of ''saving'' to analyse the review and store the corresponding metric values.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to retrieve the saved review content via the ''response_id'' of the review. It maintains one set each of offensive words, suggestive words, and problem-pointing words. The method then calculates the following:&lt;br /&gt;
#** the total number of words in the review&lt;br /&gt;
#** the number of distinct words in the review&lt;br /&gt;
#** the number of offensive words in the review, by comparing each word against the set of offensive words&lt;br /&gt;
#** the number of words signalling a suggestion, by comparing each word against the set of suggestive words&lt;br /&gt;
#** the number of words signalling that a problem is being pointed out, by comparing each word against the set of problem-pointing words&lt;br /&gt;
#** the number of questions responded to with complete sentences; each sentence with more than seven words qualifies as a complete sentence&lt;br /&gt;
#* [[File:Model.png]]&lt;br /&gt;
# Create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links on the student report page and the instructor ''_review_report.html.erb'' page.&lt;br /&gt;
#* The following screenshots show where the links are added on the pages mentioned above.&lt;br /&gt;
#* [[File:ViewCode1.png]]&lt;br /&gt;
#* [[File:ViewCode2.png]]&lt;br /&gt;
#* [[File:ViewCode3.png]]&lt;br /&gt;
#* The following screenshot shows the page a student sees after saving or submitting a review.&lt;br /&gt;
#* [[File:StuReviewHome.png]]&lt;br /&gt;
#* The following screenshot shows the page after the student clicks the ''View Text Metrics'' link.&lt;br /&gt;
#* [[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following screenshot shows the page an instructor sees after using the ''View Review Report'' link on the assignments page for a given assignment.&lt;br /&gt;
#* [[File: InstReviewReport.png]]&lt;br /&gt;
#* The following screenshot shows the page after the instructor clicks the ''text metrics summary'' link.&lt;br /&gt;
#* [[File: InstAggMetric.png]]&lt;br /&gt;
#* The following screenshot shows the page after the instructor clicks an individual ''text metrics'' link.&lt;br /&gt;
#* [[File: InstTextMetric.png]]&lt;br /&gt;
# Make the code work for assignments with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
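The per-review calculation described above can be sketched as follows. The word lists are tiny illustrative stand-ins for the real dictionaries, and ''calculate_metrics'' is a hypothetical name, not the actual ''calulate_metric'' implementation.&lt;br /&gt;

```ruby
require 'set'

# Tiny illustrative stand-ins for the real offensive/suggestive/problem
# dictionaries (assumptions, not the lists used by Expertiza).
SUGGESTIVE = Set.new(%w[suggest recommend consider maybe could])
PROBLEM    = Set.new(%w[error bug wrong missing fails])
OFFENSIVE  = Set.new(%w[stupid useless])

# Compute the metrics described above for one review, given its free-text
# comments (one string per rubric question).
def calculate_metrics(comments)
  words = comments.join(' ').downcase.scan(/[a-z']+/)
  {
    word_count:        words.size,
    distinct_words:    words.uniq.size,
    suggestions_count: words.count { |w| SUGGESTIVE.include?(w) },
    error_count:       words.count { |w| PROBLEM.include?(w) },
    offensive_count:   words.count { |w| OFFENSIVE.include?(w) },
    # A question counts as answered with a complete sentence when its
    # comment has a sentence of more than seven words, per the heuristic.
    comp_reviews:      comments.count { |c| complete_sentence?(c) }
  }
end

def complete_sentence?(comment)
  comment.split(/[.!?]+/).any? { |s| s.split.size > 7 }
end
```

Because the calculation runs on every save, keeping the dictionaries as sets makes each word lookup constant-time, which helps meet the 5-second budget mentioned above.&lt;br /&gt;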
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:UML_Review_Metrics.png&amp;diff=100259</id>
		<title>File:UML Review Metrics.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:UML_Review_Metrics.png&amp;diff=100259"/>
		<updated>2015-12-05T03:04:05Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100257</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100257"/>
		<updated>2015-12-05T03:01:48Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review]-based web application that supports incremental learning. Students submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation].&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of Expertiza has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review (relevance, plagiarism, number of words, etc.) whenever a review is submitted. The purpose of this project is to give students metrics on the content of their reviews even when automated meta-reviews are disabled. This also includes adding new, relevant metrics that help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to them. The scope does not include any automated viewing of metric reports. Since the project mainly reports on existing data, we will not modify the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the peer-review process itself, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics.&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), as the current auto-text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create views for both students and instructors that show, for each assignment:&lt;br /&gt;
#* the total number of words&lt;br /&gt;
#* the average number of words across all reviews for the assignment in a particular round&lt;br /&gt;
#* whether each reviewer's review contains suggestions&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* the number of distinct words in a particular reviewer's review&lt;br /&gt;
#* the number of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces memory footprint by reusing existing similar objects, creating a new object only when no match is found. In our case, we can reuse the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is generated automatically when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute holds the number of distinct words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field holds the number of comments that point out errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute holds the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This column holds the number of comments that contain complete sentences.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of their individual reviews, as well as aggregate metrics for all the reviews they have done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of the reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#Test whether the text-metrics table has entries populated for each metric (no. of words, no. of offensive words, etc.) once a reviewer submits a review.&lt;br /&gt;
#For '''use case 1''', test whether the reviewer can see the text metrics of the individual reviews they have done, as well as the aggregate metrics.&lt;br /&gt;
#For '''use case 2''', test whether the instructor can see the text metrics of the reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table to record all the metrics.&lt;br /&gt;
#* A new model, ''ReviewMetric'', has been added to the project.&lt;br /&gt;
#* A table, ''review_metrics'', backs this model. It has the following columns to record data for each response, i.e., each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a review is first saved or submitted, and the corresponding row is updated when an existing review is edited and resubmitted.&lt;br /&gt;
#* [[File:Dbtable.png]]&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), as the current auto-text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the method ''calulate_metric'' in the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the ''saving'' method in ''response_controller.rb''. The ''saving'' method is invoked, after the necessary processing, when a review is saved or submitted. ''calulate_metric'' is called at the end of ''saving'' to analyse the review and store the corresponding metric values.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to retrieve the saved review content via the ''response_id'' of the review. It maintains one set each of offensive words, suggestive words, and problem-pointing words. The method then calculates the following:&lt;br /&gt;
#** the total number of words in the review&lt;br /&gt;
#** the number of distinct words in the review&lt;br /&gt;
#** the number of offensive words in the review, by comparing each word against the set of offensive words&lt;br /&gt;
#** the number of words signalling a suggestion, by comparing each word against the set of suggestive words&lt;br /&gt;
#** the number of words signalling that a problem is being pointed out, by comparing each word against the set of problem-pointing words&lt;br /&gt;
#** the number of questions responded to with complete sentences; each sentence with more than seven words qualifies as a complete sentence&lt;br /&gt;
#* [[File:Model.png]]&lt;br /&gt;
# create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links in the student report page and instructor ''_review_report.html.erb'' page&lt;br /&gt;
#* The following are screenshots where the links are included in the above mentioned pages&lt;br /&gt;
#* [[File:ViewCode1.png]]&lt;br /&gt;
#* [[File:ViewCode2.png]]&lt;br /&gt;
#* [[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&lt;br /&gt;
#* [[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&lt;br /&gt;
#* [[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&lt;br /&gt;
#* [[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&lt;br /&gt;
#* [[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&lt;br /&gt;
#* [[File: InstTextMetric.png]]&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
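The per-review calculation described above can be sketched in plain Ruby. This is an illustrative sketch only: the word lists and the ''text_metrics'' helper name below are assumptions, not the actual Expertiza dictionaries or method.&lt;br /&gt;

```ruby
# Illustrative word lists; the real Expertiza dictionaries differ.
SUGGESTIVE_WORDS = %w[suggest recommend consider could should maybe].freeze
OFFENSIVE_WORDS  = %w[stupid dumb terrible awful].freeze
PROBLEM_WORDS    = %w[error bug broken missing wrong incorrect].freeze

# Hypothetical helper mirroring the metrics described above.
def text_metrics(review_text)
  words     = review_text.downcase.scan(/[a-z']+/)
  sentences = review_text.split(/[.!?]+/).map(&:strip).reject(&:empty?)
  {
    word_count:         words.size,
    distinct_words:     words.uniq.size,
    suggestions_count:  words.count { |w| SUGGESTIVE_WORDS.include?(w) },
    offensive_count:    words.count { |w| OFFENSIVE_WORDS.include?(w) },
    error_count:        words.count { |w| PROBLEM_WORDS.include?(w) },
    # A sentence with more than seven words counts as "complete".
    complete_sentences: sentences.count { |s| s.split.size > 7 }
  }
end
```

Because each count is a single pass over the word list against a fixed set, this kind of calculation easily meets the five-second requirement for typical review lengths.&lt;br /&gt;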
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100256</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100256"/>
		<updated>2015-12-05T03:01:35Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review]-based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation].&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or the instructor must manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope does not include any automated reporting of the metrics. Since the project mainly reports on existing data, we will not be modifying the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* total number of words&lt;br /&gt;
#* average number of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* number of different words in a particular reviewer's review&lt;br /&gt;
#* number of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
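As a rough sketch, the per-assignment aggregates listed above reduce to simple counting over the stored per-review rows. The field names below are illustrative stand-ins; in Expertiza these values would come from the metrics table for one assignment and round.&lt;br /&gt;

```ruby
# Illustrative per-review rows for a single assignment round.
metrics = [
  { word_count: 120, suggestions_count: 2, error_count: 1, offensive_count: 0 },
  { word_count: 80,  suggestions_count: 0, error_count: 0, offensive_count: 1 }
]

# Average word count, plus the percentage of reviews that contain
# suggestions, point out problems, or use offensive language.
average_words   = metrics.sum { |m| m[:word_count] }.to_f / metrics.size
pct_suggestions = 100.0 * metrics.count { |m| m[:suggestions_count] > 0 } / metrics.size
pct_problems    = 100.0 * metrics.count { |m| m[:error_count] > 0 } / metrics.size
pct_offensive   = 100.0 * metrics.count { |m| m[:offensive_count] > 0 } / metrics.size
```

Since the per-review counts are precomputed at save time, these view-level aggregates are cheap to recompute on every page load.&lt;br /&gt;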
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
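In Ruby, the iterator pattern is most naturally realized through the ''Enumerable'' mixin. The class and field names in this minimal sketch are hypothetical, not the actual Expertiza classes.&lt;br /&gt;

```ruby
# Hypothetical container of per-review metrics; including Enumerable
# gives us Ruby's idiomatic iterator over the collection.
ReviewMetric = Struct.new(:response_id, :word_count, :suggestions_count)

class MetricsReport
  include Enumerable

  def initialize(metrics)
    @metrics = metrics
  end

  # Callers traverse the report without knowing its internal storage.
  def each(&block)
    @metrics.each(&block)
  end
end

report = MetricsReport.new([
  ReviewMetric.new(1, 120, 3),
  ReviewMetric.new(2, 86, 1)
])
total_words = report.sum(&:word_count)
```

Defining ''each'' is all that is required; ''Enumerable'' then supplies ''map'', ''count'', ''sum'' and the rest for free, which is how the view can iterate over the calculated metric values.&lt;br /&gt;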
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces memory footprint by reusing existing similar objects and creating a new object only when no match exists. In our case, we can use the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is automatically generated when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute contains the total number of words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field contains the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
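A Rails migration matching this schema might look as follows. This is a sketch under assumed column types, not the actual Expertiza migration.&lt;br /&gt;

```ruby
# Assumed migration for the review_metrics table sketched above;
# the real Expertiza migration may differ in types and indexes.
class CreateReviewMetrics < ActiveRecord::Migration
  def change
    create_table :review_metrics do |t|
      t.integer :response_id       # links the row to a submitted response
      t.integer :word_count
      t.integer :suggestions_count
      t.integer :error_count
      t.integer :offensive_count
      t.integer :comp_reviews
      t.timestamps
    end
    # Metrics are looked up by response, so index that column.
    add_index :review_metrics, :response_id
  end
end
```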
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics table has entries populated for each metric (number of words, number of offensive words, etc.) once the reviewer submits a review, and that the reviewer can then view the metrics for his or her reviews.&lt;br /&gt;
#For '''use case 2''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows/UNIX/OS X based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
#* A new MVC by the name ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table ''review_metrics'' stores these metrics. It has the following columns to record data for each response, i.e. each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is saved or submitted, and the corresponding row is updated when an existing review is re-edited or resubmitted.&lt;br /&gt;
#* [[File:Dbtable.png]]&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review resides in the ''calulate_metric'' method of the ''review_metric.rb'' model.&lt;br /&gt;
#* This code is invoked from the ''saving'' method in ''response_controller.rb'', which runs after the necessary processing whenever a review is saved or submitted. The ''calulate_metric'' method is called at the end of ''saving'' to analyse the review and store the resulting metrics.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to retrieve the saved review content via the ''response_id'' of the review. It maintains three word sets used as dictionaries: an offensive words list, a suggestive words list, and a problem-pointing words list. The method then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Number of different words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method compares each word in the review against the offensive words dictionary&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method compares each word in the review against the suggestive words dictionary&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method compares each word in the review against the problem-pointing words dictionary&lt;br /&gt;
#** Number of questions responded to with complete sentences - any sentence with more than seven words qualifies as a complete sentence&lt;br /&gt;
#* [[File:Model.png]]&lt;br /&gt;
# create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links in the student report page and instructor ''_review_report.html.erb'' page&lt;br /&gt;
#* The following are screenshots where the links are included in the above mentioned pages&lt;br /&gt;
#* [[File:ViewCode1.png]]&lt;br /&gt;
#* [[File:ViewCode2.png]]&lt;br /&gt;
#* [[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&lt;br /&gt;
#* [[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&lt;br /&gt;
#* [[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&lt;br /&gt;
#* [[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&lt;br /&gt;
#* [[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&lt;br /&gt;
#* [[File: InstTextMetric.png]]&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100255</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100255"/>
		<updated>2015-12-05T02:59:25Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Mockups */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review]-based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation].&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or the instructor must manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope does not include any automated reporting of the metrics. Since the project mainly reports on existing data, we will not be modifying the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* total number of words&lt;br /&gt;
#* average number of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* number of different words in a particular reviewer's review&lt;br /&gt;
#* number of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces memory footprint by reusing existing similar objects and creating a new object only when no match exists. In our case, we can use the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is automatically generated when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute contains the total number of words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field contains the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This contains the number of comments which have complete sentences in them.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics table has entries populated for each metric (number of words, number of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test if the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows/UNIX/OS X based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
#* A new MVC by the name ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table ''review_metrics'' stores these metrics. It has the following columns to record data for each response, i.e. each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is saved or submitted, and the corresponding row is updated when an existing review is re-edited or resubmitted.&lt;br /&gt;
#* [[File:Dbtable.png]]&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review resides in the ''calulate_metric'' method of the ''review_metric.rb'' model.&lt;br /&gt;
#* This code is invoked from the ''saving'' method in ''response_controller.rb'', which runs after the necessary processing whenever a review is saved or submitted. The ''calulate_metric'' method is called at the end of ''saving'' to analyse the review and store the resulting metrics.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to retrieve the saved review content via the ''response_id'' of the review. It maintains three word sets used as dictionaries: an offensive words list, a suggestive words list, and a problem-pointing words list. The method then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Number of different words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method compares each word in the review against the offensive words dictionary&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method compares each word in the review against the suggestive words dictionary&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method compares each word in the review against the problem-pointing words dictionary&lt;br /&gt;
#** Number of questions responded to with complete sentences - any sentence with more than seven words qualifies as a complete sentence&lt;br /&gt;
#* [[File:Model.png]]&lt;br /&gt;
# create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment. These views are accessible through links in the student report page and instructor ''_review_report.html.erb'' page&lt;br /&gt;
#* The following are screenshots where the links are included in the above mentioned pages&lt;br /&gt;
#* [[File:ViewCode1.png]]&lt;br /&gt;
#* [[File:ViewCode2.png]]&lt;br /&gt;
#* [[File:ViewCode3.png]]&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&lt;br /&gt;
#* [[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&lt;br /&gt;
#* [[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&lt;br /&gt;
#* [[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&lt;br /&gt;
#* [[File: InstAggMetric.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&lt;br /&gt;
#* [[File: InstTextMetric.png]]&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100217</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=100217"/>
		<updated>2015-12-05T01:31:20Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Task Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
[https://expertiza.ncsu.edu/ Expertiza] is an open-source [https://en.wikipedia.org/wiki/Peer_review peer-review]-based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the [https://en.wikipedia.org/wiki/Ruby_on_Rails Ruby on Rails] framework and is supported by the [http://www.nsf.gov National Science Foundation].&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or the instructor must manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope does not include any automated reporting of the metrics. Since the project mainly reports on existing data, we will not be modifying the results saved by the actual [https://en.wikipedia.org/wiki/Peer_review peer-review] mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create database table to record all the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* total number of words&lt;br /&gt;
#* average number of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* number of different words in a particular reviewer's review&lt;br /&gt;
#* number of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
====Instructor Panel====&lt;br /&gt;
&lt;br /&gt;
The image represents the instructor's view of the Review report after our modifications. The instructor can click the '''Show aggregate metrics''' link to view the aggregate metrics for each team for a given assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:Inst_aggregate_metrics.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Iterator_pattern&amp;lt;/ref&amp;gt;:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Flyweight_pattern&amp;lt;/ref&amp;gt;:''' The flyweight design pattern reduces the memory footprint by reusing existing similar objects, creating a new object only when no match exists. In our case, we can use the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
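In Ruby, the iterator pattern is expressed idiomatically through ''each''; a minimal sketch of iterating over calculated metric values per response (the data here is hypothetical, not Expertiza code):&lt;br /&gt;

```ruby
# Hypothetical calculated metrics keyed by response_id.
metrics_by_response = {
  101 => { word_count: 120, suggestions_count: 2 },
  102 => { word_count: 45,  suggestions_count: 0 }
}

# Ruby's each is the built-in iterator: the container controls traversal
# and yields each element to the block, so callers never index manually.
metrics_by_response.each do |response_id, metrics|
  puts "Response #{response_id}: #{metrics[:word_count]} words, " \
       "#{metrics[:suggestions_count]} suggestions"
end
```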
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is automatically generated when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute contains the total number of different words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field contains the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This attribute contains the number of comments which consist of complete sentences.&lt;br /&gt;
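A Rails migration for such a table could look like the following sketch (the column names follow the attribute list above; the types and the index are assumptions, not the project's actual migration):&lt;br /&gt;

```ruby
# Hypothetical migration; assumes Rails/ActiveRecord. Types are guesses
# based on the attribute descriptions above.
class CreateReviewMetrics < ActiveRecord::Migration
  def change
    create_table :review_metrics do |t|
      t.integer :response_id        # links to the reviewer-response map
      t.integer :word_count         # total number of different words
      t.integer :suggestions_count  # suggestions given in the review
      t.integer :error_count        # comments pointing to errors
      t.integer :offensive_count    # comments containing offensive words
      t.integer :comp_reviews       # comments with complete sentences
      t.timestamps
    end
    add_index :review_metrics, :response_id
  end
end
```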
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews he/she has done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test if the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as it is used in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: [https://en.wikipedia.org/wiki/Git_(software) Git], Interactive Ruby&lt;br /&gt;
== Implementation ==&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# create a database table to record all the metrics&lt;br /&gt;
#* A new MVC by the name ''ReviewMetric'' has been added to the project.&lt;br /&gt;
#* A table ''review_metrics'' stores these values. It has the following columns to record data for each response, i.e. each submitted review.&lt;br /&gt;
#* A new row is inserted into the table when a new review is saved or submitted, and the respective row is updated when an existing review is edited and resubmitted.&lt;br /&gt;
#* [[File:Dbtable.png]]&lt;br /&gt;
# create code to calculate the values of the metrics and ensure that the code runs fast enough (returns results within 5 seconds), as the current auto text metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
#* The code that evaluates the metric values for each review lives in the function ''calulate_metric'' in the model ''review_metric.rb''.&lt;br /&gt;
#* This code is called from the function ''saving'' in ''response_controller.rb''. The ''saving'' function is invoked, after the necessary processing, whenever a review is saved or submitted. The ''calulate_metric'' method is called at the end of ''saving'' to analyse the review and store the corresponding information.&lt;br /&gt;
#* The calculation code uses the ''Answer'' table to retrieve the saved review content via the ''response_id'' of the review. It incorporates a set each of offensive words, suggestive words, and problem-pointing words. The function then calculates the following:&lt;br /&gt;
#** Total number of words in the review&lt;br /&gt;
#** Different number of words in the review&lt;br /&gt;
#** Number of offensive words in the review - the method uses a set of offensive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of words which signal a suggestion in the review - the method uses a set of suggestive words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of words which signal a problem being pointed out in the review - the method uses a set of problem-pointing words as a dictionary and compares each word in the review against it&lt;br /&gt;
#** Number of questions responded to with complete sentences - each sentence with more than seven words qualifies as a complete sentence&lt;br /&gt;
#* [[File:Model.png]]&lt;br /&gt;
# create partials for both students and instructors:&lt;br /&gt;
#* Views are created for both students and instructors to display the ''text metrics'' calculated for each review and assignment&lt;br /&gt;
#* The following is a screenshot when a student saves or submits a review&lt;br /&gt;
#* [[File:StuReviewHome.png]]&lt;br /&gt;
#* The following is a screenshot when the student clicks the ''View Text Metrics'' link&lt;br /&gt;
#* [[File: StuReviewDetail.png]]&lt;br /&gt;
#* The following is a screenshot when an instructor uses the ''View Review Report'' link at the assignments page for a given assignment&lt;br /&gt;
#* [[File: InstReviewReport.png]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the ''text metrics summary'' link&lt;br /&gt;
#* [[File: ]]&lt;br /&gt;
#* The following is a screenshot when the instructor clicks the individual ''text metrics'' link&lt;br /&gt;
#* [[File: InstTextMetric.png]]&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
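The dictionary-comparison and complete-sentence heuristics described above can be sketched in plain Ruby (the word lists and method name here are illustrative stand-ins, not the actual Expertiza dictionaries or code):&lt;br /&gt;

```ruby
require 'set'

# Illustrative word lists; the project's actual dictionaries are larger.
OFFENSIVE  = Set['stupid', 'useless']
SUGGESTIVE = Set['should', 'could', 'suggest', 'recommend']
PROBLEM    = Set['error', 'bug', 'fails', 'missing']

def calculate_metrics(review_text)
  words = review_text.downcase.scan(/[a-z']+/)
  # Per the heuristic above, a sentence with more than seven words
  # counts as a complete sentence.
  sentences = review_text.split(/[.!?]+/)
  {
    word_count:         words.size,
    distinct_words:     words.uniq.size,
    offensive_count:    words.count { |w| OFFENSIVE.include?(w) },
    suggestion_count:   words.count { |w| SUGGESTIVE.include?(w) },
    problem_count:      words.count { |w| PROBLEM.include?(w) },
    complete_sentences: sentences.count { |s| s.split.size > 7 }
  }
end

metrics = calculate_metrics(
  "The code fails on empty input. You should add a check for nil values before calling save."
)
p metrics
```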
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[https://github.com/moharnab123saikia/expertiza/  Expertiza Github Repository]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99823</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99823"/>
		<updated>2015-11-14T02:22:33Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Database Table Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer gets an e-mail containing various metrics of his review, like relevance, plagiarism, number of words, etc., whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas the user can view only the metrics of the assignments relating to himself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to record the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for a particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
====Instructor Panel====&lt;br /&gt;
&lt;br /&gt;
The image below represents the instructor's view of the Review report after our modifications. The instructor can click the '''Show aggregate metrics''' link to view the aggregate metrics for each team for a given assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:Inst_aggregate_metrics.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern:''' The flyweight design pattern reduces the memory footprint by reusing existing similar objects, creating a new object only when no match exists. In our case, we can use the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is automatically generated when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute contains the total number of different words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field contains the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This attribute contains the number of comments which consist of complete sentences.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews he/she has done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test if the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as it is used in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99822</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99822"/>
		<updated>2015-11-14T02:22:17Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Database Table Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer gets an e-mail containing various metrics of his review, like relevance, plagiarism, number of words, etc., whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas the user can view only the metrics of the assignments relating to himself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to record the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for a particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
====Instructor Panel====&lt;br /&gt;
&lt;br /&gt;
The image below represents the instructor's view of the Review report after our modifications. The instructor can click the '''Show aggregate metrics''' link to view the aggregate metrics for each team for a given assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:Inst_aggregate_metrics.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern:''' The flyweight design pattern reduces the memory footprint by reusing existing similar objects, creating a new object only when no match exists. In our case, we can use the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shows the schema for the new table which will be created to store the calculated values of the metrics. Its attributes are explained below:&lt;br /&gt;
&lt;br /&gt;
*'''response_id:''' This attribute is automatically generated when a reviewer submits a review. It links the metrics to a particular reviewer-response map.&lt;br /&gt;
*'''word_count:''' This attribute contains the total number of different words in a particular review.&lt;br /&gt;
*'''suggestions_count:''' This column holds the number of suggestions given per review.&lt;br /&gt;
*'''error_count:''' This field contains the number of comments which point to errors in the code.&lt;br /&gt;
*'''offensive_count:''' This attribute contains the number of comments containing offensive words.&lt;br /&gt;
*'''comp_reviews:''' This attribute contains the number of comments which consist of complete sentences.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews he/she has done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test if the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as it is used in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99817</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99817"/>
		<updated>2015-11-14T02:10:27Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review based web application which allows for incremental learning. Students can submit learning objects such as articles, wiki pages, repository links and with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer gets an e-mail containing various metrics of his review, like relevance, plagiarism, number of words, etc., whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas the user can view only the metrics of the assignments relating to himself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to record the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and also ensure that the code runs fast enough (can give results within 5 seconds) as the current auto text metrics functionality is very slow and diminishes user experience.&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for a particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
====Instructor Panel====&lt;br /&gt;
&lt;br /&gt;
The image below represents the instructor's view of the Review report after our modifications. The instructor can click the '''Show aggregate metrics''' link to view the aggregate metrics for each team for a given assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:Inst_aggregate_metrics.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern:''' The flyweight design pattern reduces the memory footprint by reusing existing similar objects, creating a new object only when no match exists. In our case, we can use the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
=== Database Table Design ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Table.PNG]]&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews he/she has done for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Reviews Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment, as well as the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text metrics Db has entries populated for each type of metrics (no. of words, no. of offensive words, etc), once the reviewer submits any reviews.&lt;br /&gt;
#For '''use case 2''', test if the reviewer can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test if the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test if the instructor can see the text metrics done by any reviewer.&lt;br /&gt;
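A minimal plain-Ruby sketch of the use case 1 check (the metric names and the in-memory store below are stand-ins for the real text-metrics table, for illustration only):&lt;br /&gt;

```ruby
# Stand-in sketch for use case 1: after a review is submitted, every
# metric type should have a populated entry. The metric names and the
# in-memory store are illustrative, not the real schema.
METRIC_TYPES = %i[word_count offensive_words suggestions].freeze

metrics_store = {}  # stand-in for the text-metrics DB table

# Simulate submission populating one entry per metric type.
def submit_review(store, review_id, text)
  store[review_id] = {
    word_count: text.split.size,
    offensive_words: 0,
    suggestions: 0
  }
end

submit_review(metrics_store, 42, "clear and helpful review")
missing = METRIC_TYPES.reject { |t| metrics_store[42].key?(t) }
```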
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Table.PNG&amp;diff=99816</id>
		<title>File:Table.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Table.PNG&amp;diff=99816"/>
		<updated>2015-11-14T02:09:54Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: metrics_table&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;metrics_table&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99810</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99810"/>
		<updated>2015-11-14T01:50:22Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of his or her review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not be modifying the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table for the metrics.&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create partials for both students and instructors that show, for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer's review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments both with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
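As a rough sketch of how such per-review metrics might be computed (the suggestion keyword list and the method name are illustrative assumptions, not the project's actual detection logic):&lt;br /&gt;

```ruby
# Illustrative sketch only: the keyword list below is a placeholder,
# not Expertiza's actual suggestion-detection logic.
SUGGESTION_WORDS = %w[suggest recommend consider could should].freeze

def review_metrics(text)
  words = text.downcase.scan(/[a-z']+/)
  {
    word_count:     words.size,       # total no. of words
    distinct_words: words.uniq.size,  # no. of different words
    has_suggestion: words.any? { |w| SUGGESTION_WORDS.include?(w) }
  }
end

m = review_metrics("You should consider refactoring. Good work overall.")
```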
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
====Instructor Panel====&lt;br /&gt;
&lt;br /&gt;
The image below shows the instructor's view of the Review report after our modifications. The instructor can click the '''Show aggregate metrics''' link to view the aggregate metrics for each team for a given assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:Inst_aggregate_metrics.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Design Patterns ===&lt;br /&gt;
&lt;br /&gt;
'''Iterator Pattern:''' The iterator design pattern uses an iterator to traverse a container and access its elements. In our implementation, we need to show the review metrics for each response, so we can use the iterator pattern to iterate over the calculated metric values.&lt;br /&gt;
&lt;br /&gt;
'''Flyweight Pattern:''' The flyweight design pattern reduces memory footprint by reusing existing similar objects, creating a new object only when no matching one exists. In our case, we can reuse the same partials of the metrics pages for both students and instructors.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB table has entries populated for each metric type (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99800</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99800"/>
		<updated>2015-11-14T01:34:33Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Mockups */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of his or her review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not be modifying the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table for the metrics.&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create partials for both students and instructors that show, for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer's review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments both with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
====Instructor Panel====&lt;br /&gt;
&lt;br /&gt;
The image below shows the instructor's view of the Review report after our modifications. The instructor can click the '''Show aggregate metrics''' link to view the aggregate metrics for each team for a given assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:Inst_aggregate_metrics.png]]&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB table has entries populated for each metric type (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Inst_aggregate_metrics.png&amp;diff=99798</id>
		<title>File:Inst aggregate metrics.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Inst_aggregate_metrics.png&amp;diff=99798"/>
		<updated>2015-11-14T01:30:57Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99780</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99780"/>
		<updated>2015-11-14T01:05:18Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Reviewer Panel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of his or her review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not be modifying the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table for the metrics.&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create partials for both students and instructors that show, for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer's review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments both with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reviewer can also click the '''Show metrics''' link for each individual review to view the specific metrics for that review.&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB table has entries populated for each metric type (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99773</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99773"/>
		<updated>2015-11-14T00:58:52Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of his or her review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of the assignments relating to himself or herself. The scope doesn't include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not be modifying the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# Create a database table for the metrics.&lt;br /&gt;
# Create code to calculate the metric values, and ensure that it runs fast enough (results within 5 seconds), since the current automated text-metrics functionality is very slow and diminishes the user experience.&lt;br /&gt;
# Create partials for both students and instructors that show, for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* Average no. of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* Whether there are suggestions in each reviewer's review&lt;br /&gt;
#* The percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* Whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* The percentage of peer reviews which point out problems in this assignment in this round&lt;br /&gt;
#* Whether any offensive language is used&lt;br /&gt;
#* The percentage of peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer's review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# Make the code work for assignments both with and without the &amp;quot;vary rubric by rounds&amp;quot; feature.&lt;br /&gt;
# Create tests to increase test coverage.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
====Reviewer Panel====&lt;br /&gt;
The diagrams below show the proposed changes from the perspective of a reviewer. The reviewer can click the '''Show aggregate metrics''' link to view the aggregate metrics.&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews, as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers, as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB table has entries populated for each metric type (no. of words, no. of offensive words, etc.) once the reviewer submits a review.&lt;br /&gt;
#For '''use case 2''', test whether the reviewee can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also, test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows, UNIX, or OS X&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as in the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_metrics.PNG&amp;diff=99766</id>
		<title>File:Review metrics.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_metrics.PNG&amp;diff=99766"/>
		<updated>2015-11-14T00:55:57Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: uploaded a new version of &amp;amp;quot;File:Review metrics.PNG&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;aggregate metrics for reviewer in expertiza&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:User_metrics.PNG&amp;diff=99765</id>
		<title>File:User metrics.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:User_metrics.PNG&amp;diff=99765"/>
		<updated>2015-11-14T00:54:21Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: uploaded a new version of &amp;amp;quot;File:User metrics.PNG&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;metrics in reviewer page for particular review&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:User_metrics.PNG&amp;diff=99763</id>
		<title>File:User metrics.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:User_metrics.PNG&amp;diff=99763"/>
		<updated>2015-11-14T00:53:31Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: uploaded a new version of &amp;amp;quot;File:User metrics.PNG&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;metrics in reviewer page for particular review&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99756</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99756"/>
		<updated>2015-11-14T00:50:33Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /*design*/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project is developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein, whenever a review is submitted, the reviewer receives an e-mail containing various metrics of the review, such as relevance, plagiarism, and number of words. The purpose of this project is to give students metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics that can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project mainly reports on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the metric values and ensure that it runs fast enough (returns results within 5 seconds), as the current auto text-metrics functionality is very slow and diminishes the user experience&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* no. of different words in a particular reviewer’s review&lt;br /&gt;
#* no. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
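The per-review calculations listed above are mostly simple string operations. A minimal Ruby sketch of how such metrics might be computed is shown below; the class name, method names, and the placeholder offensive-word list are illustrative assumptions, not the actual Expertiza implementation.

```ruby
# Hypothetical sketch of the text-metric calculations described in the task
# list; names and the word list are assumptions, not the real Expertiza API.
class ReviewTextMetrics
  OFFENSIVE_WORDS = %w[stupid useless terrible].freeze # placeholder list

  def initialize(review_texts)
    @review_texts = review_texts # array of review comment strings
  end

  # Total number of words across all reviews for an assignment round.
  def total_words
    @review_texts.sum { |text| text.split.size }
  end

  # Average words per review, guarded against an empty review set.
  def average_words
    return 0 if @review_texts.empty?
    total_words.to_f / @review_texts.size
  end

  # Number of distinct words in a single reviewer's text.
  def unique_words(text)
    text.downcase.scan(/[a-z']+/).uniq.size
  end

  # Percentage of reviews containing at least one offensive word.
  def offensive_percentage
    return 0 if @review_texts.empty?
    flagged = @review_texts.count do |text|
      words = text.downcase.split
      OFFENSIVE_WORDS.any? { |w| words.include?(w) }
    end
    100.0 * flagged / @review_texts.size
  end
end
```

Precomputing these values once at review-submission time and storing them in the metrics table, rather than rescanning all review text on every page load, is one way to meet the 5-second target above.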
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
&lt;br /&gt;
[[File:User_metrics.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Review_metrics.PNG]]&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits any reviews.&lt;br /&gt;
#For '''use case 2''', test whether the reviewer can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99754</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99754"/>
		<updated>2015-11-14T00:49:05Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: added mockup images&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source, peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links and, with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein, whenever a review is submitted, the reviewer receives an e-mail containing various metrics of the review, such as relevance, plagiarism, and number of words. The purpose of this project is to give students metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics that can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project mainly reports on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the metric values and ensure that it runs fast enough (returns results within 5 seconds), as the current auto text-metrics functionality is very slow and diminishes the user experience&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* no. of different words in a particular reviewer’s review&lt;br /&gt;
#* no. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
=== Mockups ===&lt;br /&gt;
&lt;br /&gt;
[[File:review_metrics.png]]&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits any reviews.&lt;br /&gt;
#For '''use case 2''', test whether the reviewer can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:User_metrics.PNG&amp;diff=99750</id>
		<title>File:User metrics.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:User_metrics.PNG&amp;diff=99750"/>
		<updated>2015-11-14T00:47:05Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: metrics in reviewer page for particular review&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;metrics in reviewer page for particular review&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_metrics.PNG&amp;diff=99747</id>
		<title>File:Review metrics.PNG</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_metrics.PNG&amp;diff=99747"/>
		<updated>2015-11-14T00:45:13Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: aggregate metrics for reviewer in expertiza&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;aggregate metrics for reviewer in expertiza&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99744</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99744"/>
		<updated>2015-11-14T00:43:12Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: added test plan&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source, peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links and, with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein, whenever a review is submitted, the reviewer receives an e-mail containing various metrics of the review, such as relevance, plagiarism, and number of words. The purpose of this project is to give students metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics that can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project mainly reports on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the metric values and ensure that it runs fast enough (returns results within 5 seconds), as the current auto text-metrics functionality is very slow and diminishes the user experience&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* no. of different words in a particular reviewer’s review&lt;br /&gt;
#* no. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
# '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
# '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
#For '''use case 1''', test whether the text-metrics DB has entries populated for each type of metric (no. of words, no. of offensive words, etc.) once the reviewer submits any reviews.&lt;br /&gt;
#For '''use case 2''', test whether the reviewer can see the text metrics of individual reviews received from all reviewers.&lt;br /&gt;
#For '''use case 3''', test whether the instructor can see the text metrics of reviews received by each team for a project/assignment. Also test whether the instructor can see the text metrics of reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99740</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99740"/>
		<updated>2015-11-14T00:41:14Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: added use cases&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source, peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links and, with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein, whenever a review is submitted, the reviewer receives an e-mail containing various metrics of the review, such as relevance, plagiarism, and number of words. The purpose of this project is to give students metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics that can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project mainly reports on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the metric values and ensure that it runs fast enough (returns results within 5 seconds), as the current auto text-metrics functionality is very slow and diminishes the user experience&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* no. of different words in a particular reviewer’s review&lt;br /&gt;
#* no. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
* '''View Review Text Metrics as Reviewer:''' A reviewer can see the text metrics of individual reviews as well as aggregate metrics for all the reviews done for an assignment/project.&lt;br /&gt;
* '''View Review Text Metrics as Reviewee:''' A reviewee can see the text metrics of individual reviews received from reviewers as well as aggregate metrics for all the reviews received for an assignment/project.&lt;br /&gt;
* '''View Review Text Metrics as Instructor:''' An instructor can see the text metrics of reviews received by any team for a particular project/assignment. The instructor can also see the text metrics of the reviews done by any reviewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99736</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99736"/>
		<updated>2015-11-14T00:35:44Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Task Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source, peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links and, with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein, whenever a review is submitted, the reviewer receives an e-mail containing various metrics of the review, such as relevance, plagiarism, and number of words. The purpose of this project is to give students metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics that can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project mainly reports on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the metric values and ensure that it runs fast enough (returns results within 5 seconds), as the current auto text-metrics functionality is very slow and diminishes the user experience&lt;br /&gt;
# create partials for both students and instructors that show for each assignment:&lt;br /&gt;
#* total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether there are suggestions in each reviewer's review&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out in the reviews&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* no. of different words in a particular reviewer’s review&lt;br /&gt;
#* no. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: Windows-, UNIX-, or OS X-based OS&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99704</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99704"/>
		<updated>2015-11-13T23:07:28Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Task Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source, peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links and, with the help of peer reviews, improve them. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein, whenever a review is submitted, the reviewer receives an e-mail containing various metrics of the review, such as relevance, plagiarism, and number of words. The purpose of this project is to give students metrics on the content of the review when the automated meta-reviews are disabled. It also includes the addition of new relevant metrics that can help reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where reviewers and instructors can view metrics for submitted reviews. The user or instructor needs to visit the link manually to view the metrics. The instructor can view the metrics for every available assignment, whereas a user can view only the metrics of assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project mainly reports on existing data, we will not modify the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e., submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Task Description ===&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the metric values and ensure that it runs fast enough (returns results within 5 seconds)&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* total no. of words&lt;br /&gt;
#* average no. of words for all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* whether suggestions are present&lt;br /&gt;
#* the percentage of peer reviews that offer any suggestions&lt;br /&gt;
#* whether problems or errors are pointed out&lt;br /&gt;
#* the percentage of peer reviews that point out problems in this assignment in this round&lt;br /&gt;
#* whether any offensive language is used&lt;br /&gt;
#* the percentage of peer reviews containing offensive language&lt;br /&gt;
#* no. of different words in a particular reviewer’s review&lt;br /&gt;
#* no. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
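The word-count and percentage metrics above could be sketched as plain Ruby helpers. This is a minimal, hedged sketch assuming review texts are available as plain strings; the module and method names (and the keyword-based suggestion check) are illustrative, not Expertiza's actual API:&lt;br /&gt;

```ruby
# Hypothetical helpers for the review metrics listed above.
# Assumes each review's text is available as a plain String.
module ReviewMetrics
  # Crude, illustrative keyword list for detecting suggestions.
  SUGGESTION_HINTS = %w[suggest should could consider recommend].freeze

  def self.word_count(text)
    text.scan(/[[:alpha:]']+/).size
  end

  # No. of different words in a single reviewer's review.
  def self.unique_word_count(text)
    text.downcase.scan(/[[:alpha:]']+/).uniq.size
  end

  # Average no. of words over all reviews for one assignment/round.
  def self.average_word_count(reviews)
    return 0 if reviews.empty?
    reviews.sum { |r| word_count(r) }.fdiv(reviews.size)
  end

  # Percentage of reviews whose text matches any suggestion keyword.
  def self.suggestion_percentage(reviews)
    return 0.0 if reviews.empty?
    hits = reviews.count { |r| SUGGESTION_HINTS.any? { |w| r.downcase.include?(w) } }
    100.0 * hits / reviews.size
  end
end
```

Precomputing these values into the database table (rather than re-scanning review text on every page view) is one way to meet the 5-second response requirement.&lt;br /&gt;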
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;br /&gt;
&lt;br /&gt;
==Details of Requirements==&lt;br /&gt;
===Hardware requirements===&lt;br /&gt;
* Computing Power: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Memory: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Disk Storage: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Peripherals: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Network: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
===Software requirements===&lt;br /&gt;
* Operating system environment: a Windows-, UNIX-, or OS X-based operating system&lt;br /&gt;
&lt;br /&gt;
* Networking environment: Same as the current Expertiza system.&lt;br /&gt;
&lt;br /&gt;
* Tools: Git, Interactive Ruby&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99414</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99414"/>
		<updated>2015-11-10T23:21:28Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: task desc&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every assignment available, whereas a user can view only the metrics of the assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not be modifying the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
== Task Description ==&lt;br /&gt;
&lt;br /&gt;
The project requires completion of the following tasks:&lt;br /&gt;
&lt;br /&gt;
# create a database table to store the metrics&lt;br /&gt;
# create code to calculate the values of the metrics and ensure that it runs fast enough (can return results within 5 seconds)&lt;br /&gt;
# create views for both students and instructors that show for each assignment:&lt;br /&gt;
#* Total no. of words&lt;br /&gt;
#* average no. of words across all the reviews for the particular assignment in a particular round&lt;br /&gt;
#* if suggestions are present&lt;br /&gt;
#* the % of peer reviews that offer any suggestions&lt;br /&gt;
#* if problems or errors are pointed out&lt;br /&gt;
#* the percentage of the peer-reviews which point out problems in this assignment in this round&lt;br /&gt;
#* if any offensive language is used&lt;br /&gt;
#* the percentage of the peer reviews containing offensive language&lt;br /&gt;
#* No. of different words in a particular reviewer’s review&lt;br /&gt;
#* No. of questions responded to with complete sentences&lt;br /&gt;
# make the code work for an assignment with and without the &amp;quot;vary rubric by rounds&amp;quot; feature&lt;br /&gt;
# create tests to make sure the test coverage increases&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:e1580flow.JPG]]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99410</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99410"/>
		<updated>2015-11-10T22:01:19Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Scope */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every assignment available, whereas a user can view only the metrics of the assignments relating to them. The scope does not include any automated viewing of reports for the metrics. Since the project is mainly about reporting on existing data, we will not be modifying the results saved by the actual peer-review mechanism. The scope also excludes any change to the actual peer-review process, i.e. submitting a review of an assignment or adding an assignment to a user profile.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:Workflow_1580.JPG|centre|550x550px|Workflow of Final Project E-1580]]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99409</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99409"/>
		<updated>2015-11-10T21:54:03Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Scope */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The instructor can view the metrics for every assignment available, whereas a user can view only the metrics of the assignments relating to them. The scope does not include any automated viewing of reports for the metrics.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:Workflow_1580.JPG|centre|550x550px|Workflow of Final Project E-1580]]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99408</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99408"/>
		<updated>2015-11-10T21:49:09Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Scope */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== Project ==&lt;br /&gt;
&lt;br /&gt;
=== Purpose ===&lt;br /&gt;
The current version of the Expertiza project has an automated meta-review system wherein the reviewer receives an e-mail containing various metrics of their review, such as relevance, plagiarism, and number of words, whenever a review is submitted. The purpose of this project is to give students some metrics on the content of the review when the automated meta-reviews are disabled. This also includes the addition of new relevant metrics which can help the reviewers and instructors gain insight into the reviews.&lt;br /&gt;
&lt;br /&gt;
=== Scope ===&lt;br /&gt;
The scope of the project is limited to creating a system where the reviewer and the instructors can view metrics for each of the submitted reviews. The user or the instructor needs to manually visit the link to view the metrics. The scope does not include any automated viewing of reports for the metrics.&lt;br /&gt;
&lt;br /&gt;
=== Workflow ===&lt;br /&gt;
[[File:Workflow_1580.JPG|centre|550x550px|Workflow of Final Project E-1580]]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99402</id>
		<title>CSC/ECE 517 Fall 2015 E1580 Text metrics</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015_E1580_Text_metrics&amp;diff=99402"/>
		<updated>2015-11-10T20:31:40Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: Introduction to Expertiza&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Expertiza is an open-source peer-review-based web application that allows for incremental learning. Students can submit learning objects such as articles, wiki pages, and repository links, and improve them with the help of peer reviews. The project has been developed using the Ruby on Rails framework and is supported by the National Science Foundation.&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015&amp;diff=99401</id>
		<title>CSC/ECE 517 Fall 2015</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015&amp;diff=99401"/>
		<updated>2015-11-10T20:08:52Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Final Project Design Document */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Writing Assignment 2==&lt;br /&gt;
*[[CSC/ECE_517_Fall_2015/sample_page]]&lt;br /&gt;
*[[CSC/ECE_517_Fall_2015/ossE1558BGJ]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss/M1502/AAAASS]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss/M1503/IntegrateXMLParser]]&lt;br /&gt;
*[[CSC/ECE_517_Fall_2015/ossE1568BZHXJS]]&lt;br /&gt;
*[[CSC/ECE_517_Fall_2015/ossE1572VGA]]&lt;br /&gt;
*[[CSC/ECE_517_Fall_2015/oss_E1573_sap]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1559 rrz]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1570 avr]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1556 CHM]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss M1504 JJD]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1562 APS]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss M1501 GSN]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1550 KMM]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1551 RGS]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1555 GMR]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1552 NFR]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1565 AAJ]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1561 WZL]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1553 AAJ]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1554 AAR]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1569 JNR]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1560 PSV]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss M1505 MSV]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1557 GXM]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1566 ARB]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1567 APT]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1574 BKS]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/ossA1550RAN]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015/oss E1571]]&lt;br /&gt;
&lt;br /&gt;
==Final Project Design Document==&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1577 MayYellowRoverJump]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1586 AnonymousChatBetweenAuthorAndReviewer]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1580 Text metrics]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1582 Create integration tests for the instructor interface using capybara and rspec]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1576 Refactoring submitted content (hyperlinks and files)]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1590 Integration testing for Team creation]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1585 Use Ajax for Add Participants, Add TA ,Edit Questionnaires Screens]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1581 Integration testing for student interface]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1583 Fix the CSS used for Menu Item]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1591 Integration testing for peer review]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1589 Automating production setup and deployment]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 M1502 Improve HTTP monitoring devtool support]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 M1503 Integrate xml5ever XML parser]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 M1504 Implement support for missing XMLHttpRequest APIs]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 M1505 Add conformance tests to unicode-bidi and fix conformance bugs]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 M1501 Report CSS errors to the devtools, both stored and live]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1579 Instructor account creation over the web]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 A1550 Web Socket Implementation in Apache Ambari]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2015 E1584 Send Feedback to Support]]&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97839</id>
		<title>CSC/ECE 517 Fall 2015/oss M1505 MSV</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97839"/>
		<updated>2015-10-31T23:26:10Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Testing in Rust */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=='''M1505: Add conformance tests to unicode-bidi and fix conformance bugs'''==&lt;br /&gt;
This project involved adding conformance tests to the Servo implementation of the Unicode Bidirectional algorithm (unicode-bidi).&lt;br /&gt;
&lt;br /&gt;
=='''Problem Statement'''==&lt;br /&gt;
&lt;br /&gt;
Web browsers are expected to support international text, and Servo is no exception. The unicode-bidi library built into Servo implements the [[Unicode Bidirectional Algorithm]] for display of mixed right-to-left and left-to-right text. This library's conformance with the Unicode Bidirectional Algorithm specification has yet to be comprehensively tested.&lt;br /&gt;
&lt;br /&gt;
The primary objectives of this project involved:&lt;br /&gt;
&lt;br /&gt;
* Adding code to tools/generate.py to download the two specification files that make up the conformance test suite: http://www.unicode.org/Public/UNIDATA/BidiTest.txt and http://www.unicode.org/Public/UNIDATA/BidiCharacterTest.txt&lt;br /&gt;
&lt;br /&gt;
* Conversion and extension of one or more test cases from the specification files into Rust test cases that can be run automatically.&lt;br /&gt;
&lt;br /&gt;
=='''Changes and Implementation'''==&lt;br /&gt;
&lt;br /&gt;
===Initial Steps===&lt;br /&gt;
&lt;br /&gt;
The following steps were performed in more or less serial order:&lt;br /&gt;
&lt;br /&gt;
* The working directory of the running Python instance was changed. By default it points to the directory containing the source file, i.e. the /tools/ directory; it was instead made to point to the /src/ directory, where the script could modify or check for the existence of existing files and download new files.&lt;br /&gt;
&lt;br /&gt;
* After changing the current directory, the predefined fetch() function was used to download and save the two files that make up the conformance test suite.&lt;br /&gt;
&lt;br /&gt;
* Once the files were fetched, several test cases were inserted to test the conformance of the unicode-bidi implementation. The test cases that were added included:&lt;br /&gt;
** Several cases of line reordering&lt;br /&gt;
** Several cases where the RTL recognition was checked&lt;br /&gt;
** Several cases where the LTR recognition was checked&lt;br /&gt;
** All cases where the removal of characters as per step X9 was tested&lt;br /&gt;
** All cases where characters that were not supposed to be removed as per step X9 were tested&lt;br /&gt;
&lt;br /&gt;
===Forked Branch===&lt;br /&gt;
&lt;br /&gt;
The forked branch for our project can be found [https://github.com/moharnab123saikia/unicode-bidi here].&lt;br /&gt;
&lt;br /&gt;
===Pull Request===&lt;br /&gt;
&lt;br /&gt;
The pull request can be found [https://github.com/servo/unicode-bidi/pull/19 here].&lt;br /&gt;
&lt;br /&gt;
===Project Walk-through Video===&lt;br /&gt;
&lt;br /&gt;
The demonstration video of our project can be found [https://www.youtube.com/watch?v=f_Gw-usX0z0 here].&lt;br /&gt;
&lt;br /&gt;
=='''Mozilla Servo'''==&lt;br /&gt;
&lt;br /&gt;
Servo&amp;lt;ref&amp;gt;https://github.com/servo/servo&amp;lt;/ref&amp;gt; is a web browser engine written in the Rust&amp;lt;ref&amp;gt;https://www.rust-lang.org/&amp;lt;/ref&amp;gt; programming language. Servo is an experimental project that is optimized for new generations of hardware, particularly mobile devices, devices with multi-core processors, and those with high-performance GPUs. Its core design principles are focused on optimizing power efficiency along with maximizing parallelism.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Servo_(layout_engine)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=='''Rust'''==&lt;br /&gt;
Rust is a multi-paradigm, compiled programming language developed by Mozilla Research. The syntax of Rust is similar to C and C++.&lt;br /&gt;
Rust has a self-hosting compiler, rustc.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Rust_(programming_language)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
New projects can be created in Rust using [https://doc.rust-lang.org/book/hello-cargo.html Cargo]. Cargo is the package manager for Rust. It also builds the Rust code and manages its dependencies.&amp;lt;ref&amp;gt;http://siciarz.net/24-days-rust-cargo-and-cratesio/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A new Rust project can be created using the command:&lt;br /&gt;
&lt;br /&gt;
$ cargo new project_name&lt;br /&gt;
&lt;br /&gt;
===Testing in Rust===&lt;br /&gt;
&lt;br /&gt;
Cargo will automatically generate a simple test when you create a new project. This test function can be found in src/lib.rs. The #[test] attribute indicates that a given function is a test function.&amp;lt;ref&amp;gt;https://doc.rust-lang.org/book/testing.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard format for test functions is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#[test]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
fn test_method() {&lt;br /&gt;
    // test body goes here&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The tests can be run using the following command:&lt;br /&gt;
&lt;br /&gt;
$ cargo test&lt;br /&gt;
&lt;br /&gt;
=='''Unicode Bidirectional Algorithm'''==&lt;br /&gt;
The Unicode Standard prescribes a memory representation order for text, known as the logical order. The order in which browsers display text, however, is different and is called the visual order.&lt;br /&gt;
&lt;br /&gt;
When text is displayed horizontally, most scripts display the characters from left to right. However, in languages like Arabic and Hebrew the ordering is from right to left, while their digits are displayed from left to right, so the text is bidirectional in nature. In addition, these languages may also have embedded in them letters from scripts that are displayed from left to right.&lt;br /&gt;
&lt;br /&gt;
To remove any ambiguities that may arise, the Unicode Bidirectional Algorithm provides a set of rules which are used by a web browser to produce the correct order at the time of display.&lt;br /&gt;
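As a toy illustration of the logical-to-visual reordering idea (not the real algorithm, which involves many more rules for embedding levels, neutrals, and digits), one can pretend uppercase letters stand for right-to-left characters and reverse each such run:&lt;br /&gt;

```ruby
# Toy illustration only: uppercase letters stand in for RTL characters.
# The real Unicode Bidirectional Algorithm has many more rules than this.
def toy_visual_order(logical)
  # Split into maximal runs of "RTL" (uppercase) and "LTR" (everything else),
  # then reverse the characters inside each RTL run.
  logical.scan(/[A-Z]+|[^A-Z]+/).map do |run|
    run.match?(/[A-Z]/) ? run.reverse : run
  end.join
end
```

For example, the logical-order string "abcDEFghi" would be displayed as "abcFEDghi" under this toy rule.&lt;br /&gt;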
&lt;br /&gt;
=='''External Links'''==&lt;br /&gt;
[https://github.com/servo/servo/wiki/Design Servo Design]&lt;br /&gt;
&lt;br /&gt;
=='''References'''==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97834</id>
		<title>CSC/ECE 517 Fall 2015/oss M1505 MSV</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97834"/>
		<updated>2015-10-31T23:19:44Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: /* Unicode Bidirectional Algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=='''M1505: Add conformance tests to unicode-bidi and fix conformance bugs'''==&lt;br /&gt;
This project involved adding conformance tests to the Servo implementation of the Unicode Bidirectional algorithm (unicode-bidi).&lt;br /&gt;
&lt;br /&gt;
=='''Problem Statement'''==&lt;br /&gt;
&lt;br /&gt;
Web browsers are expected to support international text, and Servo is no exception. The unicode-bidi library built into Servo implements the [[Unicode Bidirectional Algorithm]] for display of mixed right-to-left and left-to-right text. This library's conformance with the Unicode Bidirectional Algorithm specification has yet to be comprehensively tested.&lt;br /&gt;
&lt;br /&gt;
The primary objectives of this project involved:&lt;br /&gt;
&lt;br /&gt;
* Adding code to tools/generate.py to download the two specification files that make up the conformance test suite: http://www.unicode.org/Public/UNIDATA/BidiTest.txt and http://www.unicode.org/Public/UNIDATA/BidiCharacterTest.txt&lt;br /&gt;
&lt;br /&gt;
* Conversion and extension of one or more test cases from the specification files into Rust test cases that can be run automatically.&lt;br /&gt;
&lt;br /&gt;
=='''Changes and Implementation'''==&lt;br /&gt;
&lt;br /&gt;
===Initial Steps===&lt;br /&gt;
&lt;br /&gt;
The following steps were performed in more or less serial order:&lt;br /&gt;
&lt;br /&gt;
* The working directory of the running Python instance was changed. By default it points to the directory containing the source file, i.e. the /tools/ directory; it was instead made to point to the /src/ directory, where the script could modify or check for the existence of existing files and download new files.&lt;br /&gt;
&lt;br /&gt;
* After changing the current directory, the predefined fetch() function was used to download and save the two files that make up the conformance test suite.&lt;br /&gt;
&lt;br /&gt;
* Once the files were fetched, several test cases were inserted to test the conformance of the unicode-bidi implementation. The test cases that were added included:&lt;br /&gt;
** Several cases of line reordering&lt;br /&gt;
** Several cases where the RTL recognition was checked&lt;br /&gt;
** Several cases where the LTR recognition was checked&lt;br /&gt;
** All cases where the removal of characters as per step X9 was tested&lt;br /&gt;
** All cases where characters that were not supposed to be removed as per step X9 were tested&lt;br /&gt;
&lt;br /&gt;
===Forked Branch===&lt;br /&gt;
&lt;br /&gt;
The forked branch for our project can be found [https://github.com/moharnab123saikia/unicode-bidi here].&lt;br /&gt;
&lt;br /&gt;
===Pull Request===&lt;br /&gt;
&lt;br /&gt;
The pull request can be found [https://github.com/servo/unicode-bidi/pull/19 here].&lt;br /&gt;
&lt;br /&gt;
===Project Walk-through Video===&lt;br /&gt;
&lt;br /&gt;
The demonstration video of our project can be found [https://www.youtube.com/watch?v=f_Gw-usX0z0 here].&lt;br /&gt;
&lt;br /&gt;
=='''Mozilla Servo'''==&lt;br /&gt;
&lt;br /&gt;
Servo&amp;lt;ref&amp;gt;https://github.com/servo/servo&amp;lt;/ref&amp;gt; is a web browser engine written in the Rust&amp;lt;ref&amp;gt;https://www.rust-lang.org/&amp;lt;/ref&amp;gt; programming language. Servo is an experimental project that is optimized for new generations of hardware, particularly mobile devices, devices with multi-core processors, and those with high-performance GPUs. Its core design principles are focused on optimizing power efficiency along with maximizing parallelism.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Servo_(layout_engine)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=='''Rust'''==&lt;br /&gt;
Rust is a multi-paradigm, compiled programming language developed by Mozilla Research. The syntax of Rust is similar to C and C++.&lt;br /&gt;
Rust has a self-hosting compiler, rustc.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Rust_(programming_language)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
New projects can be created in Rust using [https://doc.rust-lang.org/book/hello-cargo.html Cargo]. Cargo is the package manager for Rust. It also builds the Rust code and manages its dependencies.&amp;lt;ref&amp;gt;http://siciarz.net/24-days-rust-cargo-and-cratesio/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A new Rust project can be created using the command:&lt;br /&gt;
&lt;br /&gt;
$ cargo new project_name&lt;br /&gt;
&lt;br /&gt;
===Testing in Rust===&lt;br /&gt;
&lt;br /&gt;
Cargo will automatically generate a simple test when you create a new project. This test function can be found in src/lib.rs. The #[test] attribute indicates that a given function is a test function.&amp;lt;ref&amp;gt;https://doc.rust-lang.org/book/testing.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard format for test functions is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#[test]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
fn test_method() {&lt;br /&gt;
    // test body goes here&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
The tests can be run using the following command:&lt;br /&gt;
&lt;br /&gt;
$ cargo test&lt;br /&gt;
&lt;br /&gt;
=='''Unicode Bidirectional Algorithm'''==&lt;br /&gt;
The Unicode Standard prescribes a memory representation order for text, known as the logical order. The order in which browsers display text, however, is different and is called the visual order.&lt;br /&gt;
&lt;br /&gt;
When text is displayed horizontally, most scripts display the characters from left to right. However, in languages like Arabic and Hebrew the ordering is from right to left, while their digits are displayed from left to right, so the text is bidirectional in nature. In addition, these languages may also have embedded in them letters from scripts that are displayed from left to right.&lt;br /&gt;
&lt;br /&gt;
To remove any ambiguities that may arise, the Unicode Bidirectional Algorithm provides a set of rules which are used by a web browser to produce the correct order at the time of display.&lt;br /&gt;
&lt;br /&gt;
=='''External Links'''==&lt;br /&gt;
[https://github.com/servo/servo/wiki/Design Servo Design]&lt;br /&gt;
&lt;br /&gt;
=='''References'''==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97832</id>
		<title>CSC/ECE 517 Fall 2015/oss M1505 MSV</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97832"/>
		<updated>2015-10-31T23:19:24Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=='''M1505: Add conformance tests to unicode-bidi and fix conformance bugs'''==&lt;br /&gt;
This project involved adding conformance tests to the Servo implementation of the Unicode Bidirectional algorithm (unicode-bidi).&lt;br /&gt;
&lt;br /&gt;
=='''Problem Statement'''==&lt;br /&gt;
&lt;br /&gt;
Web browsers are expected to support international text, and Servo is no exception. The unicode-bidi library built into Servo implements the [[Unicode Bidirectional Algorithm]] for display of mixed right-to-left and left-to-right text. This library's conformance with the Unicode Bidirectional Algorithm specification has yet to be comprehensively tested.&lt;br /&gt;
&lt;br /&gt;
The primary objectives of this project involved:&lt;br /&gt;
&lt;br /&gt;
* Adding code to tools/generate.py to download the two specification files that make up the conformance test suite: http://www.unicode.org/Public/UNIDATA/BidiTest.txt and http://www.unicode.org/Public/UNIDATA/BidiCharacterTest.txt&lt;br /&gt;
&lt;br /&gt;
* Conversion and extension of one or more test cases from the specification files into Rust test cases that can be run automatically.&lt;br /&gt;
&lt;br /&gt;
=='''Changes and Implementation'''==&lt;br /&gt;
&lt;br /&gt;
===Initial Steps===&lt;br /&gt;
&lt;br /&gt;
The following steps were performed in more or less serial order:&lt;br /&gt;
&lt;br /&gt;
* The working directory of the running Python instance was changed. By default it points to the directory containing the source file, i.e. the /tools/ directory; it was instead made to point to the /src/ directory, where the script could modify or check for the existence of existing files and download new files.&lt;br /&gt;
&lt;br /&gt;
* After changing the working directory, the predefined fetch() function was used to download and save the two files that make up the conformance test suite.&lt;br /&gt;
&lt;br /&gt;
* Once the files were fetched, several test cases were added to check the conformance of the unicode-bidi implementation. These included:&lt;br /&gt;
** Several cases of line reordering&lt;br /&gt;
** Several cases where the RTL recognition was checked&lt;br /&gt;
** Several cases where the LTR recognition was checked&lt;br /&gt;
** All cases where characters are removed as per rule X9&lt;br /&gt;
** All cases where characters must not be removed as per rule X9&lt;br /&gt;
&lt;br /&gt;
===Forked Branch===&lt;br /&gt;
&lt;br /&gt;
The forked branch for our project can be found [https://github.com/moharnab123saikia/unicode-bidi here].&lt;br /&gt;
&lt;br /&gt;
===Pull Request===&lt;br /&gt;
&lt;br /&gt;
The pull request can be found [https://github.com/servo/unicode-bidi/pull/19 here].&lt;br /&gt;
&lt;br /&gt;
===Project Walk-through Video===&lt;br /&gt;
&lt;br /&gt;
The demonstration video of our project can be found [https://www.youtube.com/watch?v=f_Gw-usX0z0 here].&lt;br /&gt;
&lt;br /&gt;
=='''Mozilla Servo'''==&lt;br /&gt;
&lt;br /&gt;
Servo&amp;lt;ref&amp;gt;https://github.com/servo/servo&amp;lt;/ref&amp;gt; is a web browser engine written in the Rust&amp;lt;ref&amp;gt;https://www.rust-lang.org/&amp;lt;/ref&amp;gt; programming language. Servo is an experimental project optimized for new generations of hardware, particularly mobile devices, devices with multi-core processors, and devices with high-performance GPUs. Its core design principles focus on optimizing power efficiency and maximizing parallelism.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Servo_(layout_engine)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=='''Rust'''==&lt;br /&gt;
Rust is a multi-paradigm, compiled programming language developed by Mozilla Research. Its syntax is similar to that of C and C++.&lt;br /&gt;
Rust has a self-hosting compiler, rustc.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Rust_(programming_language)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
New Rust projects can be created using [https://doc.rust-lang.org/book/hello-cargo.html Cargo], the package manager for Rust. Cargo also builds Rust code and manages its dependencies.&amp;lt;ref&amp;gt;http://siciarz.net/24-days-rust-cargo-and-cratesio/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A new Rust project can be created using the command:&lt;br /&gt;
&lt;br /&gt;
''$ cargo new project_name''&lt;br /&gt;
&lt;br /&gt;
===Testing in Rust===&lt;br /&gt;
&lt;br /&gt;
Cargo automatically generates a simple test when a new project is created; this test function can be found in src/lib.rs. The #[test] attribute indicates that a given function is a test function.&amp;lt;ref&amp;gt;https://doc.rust-lang.org/book/testing.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard format for test functions is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#[test]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
fn test_method() {&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
The tests can be run using the following command:&lt;br /&gt;
&lt;br /&gt;
''$ cargo test''&lt;br /&gt;
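For illustration, a filled-in test in the standard #[test] format might look like the sketch below; the function names and values are hypothetical and not taken from the unicode-bidi project:&lt;br /&gt;

```rust
// Minimal, self-contained example of the #[test] format described above.
// The names `add` and `test_add` are illustrative only.
fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[test]
fn test_add() {
    // assert_eq! fails the test if the two values differ
    assert_eq!(add(2, 3), 5);
}

fn main() {
    println!("add(2, 3) = {}", add(2, 3));
}
```

Running ''$ cargo test'' compiles the crate and executes every function marked with #[test], reporting each one as passed or failed.&lt;br /&gt;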
&lt;br /&gt;
=='''Unicode Bidirectional Algorithm'''==&lt;br /&gt;
The Unicode Standard prescribes a memory-representation order for text, known as the logical order. The order in which browsers display text, however, is different and is called the visual order.&lt;br /&gt;
&lt;br /&gt;
When text is displayed horizontally, most scripts display characters from left to right. However, in languages such as Arabic and Hebrew, the ordering is from right to left, while digits are displayed from left to right, so the text is bidirectional in nature. In addition, such text may have embedded in it letters from scripts that are displayed from left to right.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove any ambiguities that may arise, the Unicode Bidirectional Algorithm provides a set of rules that a web browser uses to produce the correct order at display time.&lt;br /&gt;
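To illustrate the distinction between logical and visual order (without implementing the actual algorithm), the following toy Rust sketch reverses each maximal run of stand-in RTL characters; uppercase letters here are a hypothetical stand-in for a right-to-left script:&lt;br /&gt;

```rust
// Toy illustration of logical vs. visual order -- NOT the real Unicode
// Bidirectional Algorithm. Uppercase letters stand in for RTL characters:
// each maximal uppercase run is reversed for display, LTR text is kept
// in logical order.
fn to_visual(logical: String) -> String {
    let mut out = String::new();
    let mut rtl_run = String::new();
    for c in logical.chars() {
        if c.is_uppercase() {
            // collect RTL characters in logical order
            rtl_run.push(c);
        } else {
            // flush the pending RTL run in reversed (visual) order
            out.extend(rtl_run.chars().rev());
            rtl_run.clear();
            out.push(c);
        }
    }
    // flush a trailing RTL run, if any
    out.extend(rtl_run.chars().rev());
    out
}

fn main() {
    // "abc"/"ghi" are LTR; "DEF" stands for an embedded RTL run
    println!("{}", to_visual(String::from("abcDEFghi"))); // abcFEDghi
}
```

The real algorithm is considerably more involved (it handles embedding levels, neutral characters, digits, and explicit directional controls), but the sketch captures the core idea that storage order and display order differ.&lt;br /&gt;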
&lt;br /&gt;
=='''External Links'''==&lt;br /&gt;
[https://github.com/servo/servo/wiki/Design Servo Design]&lt;br /&gt;
&lt;br /&gt;
=='''References'''==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97826</id>
		<title>CSC/ECE 517 Fall 2015/oss M1505 MSV</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2015/oss_M1505_MSV&amp;diff=97826"/>
		<updated>2015-10-31T23:18:10Z</updated>

		<summary type="html">&lt;p&gt;Msaikia: Added content for &amp;quot;BIDI algo&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=='''M1505: Add conformance tests to unicode-bidi and fix conformance bugs'''==&lt;br /&gt;
This project involved adding conformance tests to the Servo implementation of the Unicode Bidirectional algorithm (unicode-bidi).&lt;br /&gt;
&lt;br /&gt;
=='''Problem Statement'''==&lt;br /&gt;
&lt;br /&gt;
Web browsers are expected to support international text, and Servo is no exception. The unicode-bidi library built into Servo implements the [[Unicode Bidirectional Algorithm]] for displaying mixed right-to-left and left-to-right text. This library's conformance with the Unicode Bidirectional Algorithm specification had yet to be comprehensively tested.&lt;br /&gt;
&lt;br /&gt;
The primary objectives of this project involved:&lt;br /&gt;
&lt;br /&gt;
* Adding code to tools/generate.py to download the two specification files that make up the conformance test suite: http://www.unicode.org/Public/UNIDATA/BidiTest.txt and http://www.unicode.org/Public/UNIDATA/BidiCharacterTest.txt&lt;br /&gt;
&lt;br /&gt;
* Converting and extending one or more test cases from the specification files into Rust test cases that can be run automatically.&lt;br /&gt;
&lt;br /&gt;
=='''Changes and Implementation'''==&lt;br /&gt;
&lt;br /&gt;
===Initial Steps===&lt;br /&gt;
&lt;br /&gt;
The following steps were performed in roughly serial order:&lt;br /&gt;
&lt;br /&gt;
* The working directory of the running Python instance was changed. By default it points to the directory containing the source file, i.e. the /tools/ directory; it was instead pointed to the /src/ directory, where the script could check for and modify existing files and download new ones.&lt;br /&gt;
&lt;br /&gt;
* After changing the working directory, the predefined fetch() function was used to download and save the two files that make up the conformance test suite.&lt;br /&gt;
&lt;br /&gt;
* Once the files were fetched, several test cases were added to check the conformance of the unicode-bidi implementation. These included:&lt;br /&gt;
** Several cases of line reordering&lt;br /&gt;
** Several cases where the RTL recognition was checked&lt;br /&gt;
** Several cases where the LTR recognition was checked&lt;br /&gt;
** All cases where characters are removed as per rule X9&lt;br /&gt;
** All cases where characters must not be removed as per rule X9&lt;br /&gt;
&lt;br /&gt;
===Forked Branch===&lt;br /&gt;
&lt;br /&gt;
The forked branch for our project can be found [https://github.com/moharnab123saikia/unicode-bidi here].&lt;br /&gt;
&lt;br /&gt;
===Pull Request===&lt;br /&gt;
&lt;br /&gt;
The pull request can be found [https://github.com/servo/unicode-bidi/pull/19 here].&lt;br /&gt;
&lt;br /&gt;
===Project Walk-through Video===&lt;br /&gt;
&lt;br /&gt;
The demonstration video of our project can be found [https://www.youtube.com/watch?v=f_Gw-usX0z0 here].&lt;br /&gt;
&lt;br /&gt;
=='''Mozilla Servo'''==&lt;br /&gt;
&lt;br /&gt;
Servo&amp;lt;ref&amp;gt;https://github.com/servo/servo&amp;lt;/ref&amp;gt; is a web browser engine written in the Rust&amp;lt;ref&amp;gt;https://www.rust-lang.org/&amp;lt;/ref&amp;gt; programming language. Servo is an experimental project optimized for new generations of hardware, particularly mobile devices, devices with multi-core processors, and devices with high-performance GPUs. Its core design principles focus on optimizing power efficiency and maximizing parallelism.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Servo_(layout_engine)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=='''Rust'''==&lt;br /&gt;
Rust is a multi-paradigm, compiled programming language developed by Mozilla Research. Its syntax is similar to that of C and C++.&lt;br /&gt;
Rust has a self-hosting compiler, rustc.&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Rust_(programming_language)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
New Rust projects can be created using [https://doc.rust-lang.org/book/hello-cargo.html Cargo], the package manager for Rust. Cargo also builds Rust code and manages its dependencies.&amp;lt;ref&amp;gt;http://siciarz.net/24-days-rust-cargo-and-cratesio/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A new Rust project can be created using the command:&lt;br /&gt;
&lt;br /&gt;
''$ cargo new project_name''&lt;br /&gt;
&lt;br /&gt;
===Testing in Rust===&lt;br /&gt;
&lt;br /&gt;
Cargo automatically generates a simple test when a new project is created; this test function can be found in src/lib.rs. The #[test] attribute indicates that a given function is a test function.&amp;lt;ref&amp;gt;https://doc.rust-lang.org/book/testing.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard format for test functions is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#[test]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
fn test_method() {&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
The tests can be run using the following command:&lt;br /&gt;
&lt;br /&gt;
''$ cargo test''&lt;br /&gt;
&lt;br /&gt;
=='''Unicode Bidirectional Algorithm'''==&lt;br /&gt;
The Unicode Standard prescribes a memory-representation order for text, known as the logical order. The order in which browsers display text, however, is different and is called the visual order.&lt;br /&gt;
When text is displayed horizontally, most scripts display characters from left to right. However, in languages such as Arabic and Hebrew, the ordering is from right to left, while digits are displayed from left to right, so the text is bidirectional in nature. In addition, such text may have embedded in it letters from scripts that are displayed from left to right.&lt;br /&gt;
To remove any ambiguities that may arise, the Unicode Bidirectional Algorithm provides a set of rules that a web browser uses to produce the correct order at display time.&lt;br /&gt;
&lt;br /&gt;
=='''External Links'''==&lt;br /&gt;
[https://github.com/servo/servo/wiki/Design Servo Design]&lt;br /&gt;
&lt;br /&gt;
=='''References'''==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Msaikia</name></author>
	</entry>
</feed>