<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mdunlap</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mdunlap"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Mdunlap"/>
	<updated>2026-05-13T06:09:19Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108707</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108707"/>
		<updated>2017-04-29T02:00:03Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature that provides Expertiza with Github metrics (for example, the number of committers, the number of commits, and the number of lines of code added, deleted, and modified) from each group’s submitted repo link.  This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second, to be built at a later time, is a classifier (e.g., Bayesian) that predicts early which projects are likely to fail. The prediction is based on more than 200 past projects (Fall 2012 - Fall 2016).  Based on the metadata from students' repos and pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, and it will give each student better visibility into the work he or she has committed. When an instructor goes to the submission records page for a particular team on a project, a link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink to request the metrics from Github on demand.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission Record.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid Github page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, the controller will pull data from Github using the API described below and show the lines added, updated, and deleted.&lt;br /&gt;
[[File:Github metrics.png|frame|center|30px|30px]]&lt;br /&gt;
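As a sketch of that valid/invalid decision (the helper name and placement are our illustration, not the project's actual code), the controller could parse the submitted hyperlink and proceed only when it resolves to a Github owner/repo pair:&lt;br /&gt;

```ruby
require 'uri'

# Hypothetical helper (name is an assumption): return the [owner, repo]
# pair for a Github repository link, or nil when the controller should
# render the "No Results Found" page instead.
def github_repo_from_url(link)
  uri = URI.parse(link)
  return nil unless uri.host == 'github.com'
  owner, repo = uri.path.split('/').reject { |s| s.empty? }
  return nil if owner.nil? || repo.nil?
  [owner, repo]
rescue URI::InvalidURIError
  # Uploaded files or free text are not URIs at all; treat as invalid.
  nil
end
```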
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, get an access token from Github by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps].&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'.&lt;br /&gt;
* Github data is then fetched from Github's [https://developer.github.com/v3/repos/statistics/ Statistics API].&lt;br /&gt;
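Each element of the Statistics API's contributor-stats response can be reduced to the per-committer totals we display. The sketch below assumes that response shape; the helper name is ours, not Expertiza code, and the commented request shows how the token saved above would be sent:&lt;br /&gt;

```ruby
# Hypothetical summarizer for one element of the response from
# GET /repos/:owner/:repo/stats/contributors. Each element carries the
# author, a total commit count, and weekly added/deleted/commit figures.
def summarize_contributor(stat)
  weeks = stat.fetch('weeks', [])
  {
    committer_id:  stat.dig('author', 'login'),
    total_commits: stat['total'],
    lines_added:   weeks.sum { |w| w['a'] },  # 'a' = additions per week
    lines_removed: weeks.sum { |w| w['d'] }   # 'd' = deletions per week
  }
end

# The actual request would authenticate with the saved token, e.g.:
#   uri = URI("https://api.github.com/repos/#{owner}/#{repo}/stats/contributors")
#   req = Net::HTTP::Get.new(uri)
#   req['Authorization'] = "token #{ENV['EXPERTIZA_GITHUB_TOKEN']}"
```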
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications; the client makes a request to the server whenever a user asks for the metrics. In many such designs the server is a database with application logic implemented as stored procedures; in our case, the server is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
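The schema change above could be expressed roughly as the following ActiveRecord sketch (a hypothetical rendering of the diagram, not the actual migration file); column names mirror the list above, including the commiter_* spellings:&lt;br /&gt;

```ruby
# Hypothetical sketch of the github_contributors schema change.
ActiveRecord::Schema.define do
  create_table :github_contributors do |t|
    t.string  :commiter_url          # committer email
    t.string  :commiter_id           # committer id, indexed below
    t.integer :total_commits
    t.integer :files_changed
    t.integer :lines_changed
    t.integer :lines_added
    t.integer :lines_removed
    t.integer :lines_persisted       # lines surviving to final submission
    t.integer :submission_record_id  # foreign key to submission_records
    t.timestamps
  end
  add_index :github_contributors, :commiter_id
end
```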
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use rspec to unit-test the system by exercising the github contributors controller. To run the four unit tests, execute the following command from the top-level expertiza directory: &amp;quot;rspec spec/controllers/github_contributors_controller_spec.rb&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT; it is left as future work.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108705</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108705"/>
		<updated>2017-04-29T01:43:30Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature that provides Expertiza with Github metrics (for example, the number of committers, the number of commits, and the number of lines of code added, deleted, and modified) from each group’s submitted repo link.  This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) that predicts early which projects are likely to fail. The prediction is based on more than 200 past projects (Fall 2012 - Fall 2016).  Based on the metadata from students' repos and pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, and it will give each student better visibility into the work he or she has committed. When an instructor goes to the submission records page for a particular team on a project, a link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink to request the metrics from Github on demand.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission Record.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid Github page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, the controller will pull data from Github using the API described below and show the lines added, updated, and deleted.&lt;br /&gt;
[[File:Github metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, get an access token from Github by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps].&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'.&lt;br /&gt;
* Github data is then fetched from Github's [https://developer.github.com/v3/repos/statistics/ Statistics API].&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications; the client makes a request to the server whenever a user asks for the metrics. In many such designs the server is a database with application logic implemented as stored procedures; in our case, the server is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use rspec to unit-test the system by exercising the github contributors controller. To run the four unit tests, execute the following command from the top-level expertiza directory: &amp;quot;rspec spec/controllers/github_contributors_controller_spec.rb&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT; it is left as future work.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Github_metrics.png&amp;diff=108704</id>
		<title>File:Github metrics.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Github_metrics.png&amp;diff=108704"/>
		<updated>2017-04-29T01:40:43Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Submission_Record.png&amp;diff=108703</id>
		<title>File:Submission Record.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Submission_Record.png&amp;diff=108703"/>
		<updated>2017-04-29T01:40:26Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108702</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108702"/>
		<updated>2017-04-29T01:34:53Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature that provides Expertiza with Github metrics (for example, the number of committers, the number of commits, and the number of lines of code added, deleted, and modified) from each group’s submitted repo link.  This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) that predicts early which projects are likely to fail. The prediction is based on more than 200 past projects (Fall 2012 - Fall 2016).  Based on the metadata from students' repos and pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, and it will give each student better visibility into the work he or she has committed. When an instructor goes to the submission records page for a particular team on a project, a link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink to request the metrics from Github on demand.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission records.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid Github page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, the controller will pull data from Github using the API described below and show the lines added, updated, and deleted.&lt;br /&gt;
[[File:Github Metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, get an access token from Github by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps].&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'.&lt;br /&gt;
* Github data is then fetched from Github's [https://developer.github.com/v3/repos/statistics/ Statistics API].&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications; the client makes a request to the server whenever a user asks for the metrics. In many such designs the server is a database with application logic implemented as stored procedures; in our case, the server is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use rspec to unit-test the system by exercising the github contributors controller. To run the four unit tests, execute the following command from the top-level expertiza directory: &amp;quot;rspec spec/controllers/github_contributors_controller_spec.rb&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT; it is left as future work.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108701</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108701"/>
		<updated>2017-04-29T01:31:29Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature that provides Expertiza with Github metrics (for example, the number of committers, the number of commits, and the number of lines of code added, deleted, and modified) from each group’s submitted repo link.  This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) that predicts early which projects are likely to fail. The prediction is based on more than 200 past projects (Fall 2012 - Fall 2016).  Based on the metadata from students' repos and pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, and it will give each student better visibility into the work he or she has committed. When an instructor goes to the submission records page, a link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink to request the metrics from Github on demand.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission records.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid Github page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, the controller will pull data from Github using the API described below and show the lines added, updated, and deleted.&lt;br /&gt;
[[File:Github Metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, get an access token from Github by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps].&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'.&lt;br /&gt;
* Github data is then fetched from Github's [https://developer.github.com/v3/repos/statistics/ Statistics API].&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications; the client makes a request to the server whenever a user asks for the metrics. In many such designs the server is a database with application logic implemented as stored procedures; in our case, the server is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use rspec to unit-test the system by exercising the github contributors controller. To run the four unit tests, execute the following command from the top-level expertiza directory: &amp;quot;rspec spec/controllers/github_contributors_controller_spec.rb&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT; it is left as future work.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108700</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108700"/>
		<updated>2017-04-29T01:28:26Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature that provides Expertiza with Github metrics (for example, the number of committers, the number of commits, and the number of lines of code added, deleted, and modified) from each group’s submitted repo link.  This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) that predicts early which projects are likely to fail. The prediction is based on more than 200 past projects (Fall 2012 - Fall 2016).  Based on the metadata from students' repos and pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, and it will give each student better visibility into the work he or she has committed. When an instructor goes to the submission records page, a link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink to request the metrics from Github on demand.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission records.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid Github page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, the controller will pull data from Github using the API described below and show the lines added, updated, and deleted.&lt;br /&gt;
[[File:Github Metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, get an access token from Github by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps].&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'.&lt;br /&gt;
* Github data is then fetched from Github's [https://developer.github.com/v3/repos/statistics/ Statistics API].&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications; the client makes a request to the server whenever a user asks for the metrics. In many such designs the server is a database with application logic implemented as stored procedures; in our case, the server is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
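A Rails migration for this schema might look like the following sketch; the column types and the `t.references` shorthand for the foreign key are assumptions based on the field descriptions above, and may differ from the real codebase.&lt;br /&gt;

```ruby
# Sketch of a migration for the schema described above (not the actual
# Expertiza migration). Column names follow the list above verbatim.
class CreateGithubContributors < ActiveRecord::Migration
  def change
    create_table :github_contributors do |t|
      t.string  :commiter_url            # committer email/profile URL
      t.string  :commiter_id             # committer's GitHub id
      t.integer :total_commits
      t.integer :files_changed
      t.integer :lines_changed
      t.integer :lines_added
      t.integer :lines_removed
      t.integer :lines_persisted         # lines that survived to final submission
      t.references :submission_record    # FK to submission_records
      t.timestamps
    end
    # Index to enable efficient search by committer.
    add_index :github_contributors, :commiter_id
  end
end
```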
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use RSpec for unit testing and do not exercise the GitHub API directly; instead, they test the GitHub contributors controller. To run the four unit tests, execute &amp;quot;rspec spec/controllers/github_contributors_controller_spec.rb&amp;quot; from the top-level expertiza directory.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108699</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108699"/>
		<updated>2017-04-29T01:16:05Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with GitHub metrics (for example, number of committers, number of commits, lines of code modified, lines added, and lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. One is to extract GitHub metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) for early prediction of projects that are likely to fail. The prediction is based on more than 200 past projects (Fall 2012–Fall 2016). Based on the metadata from students' repos/pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as improve a student's visibility into the work he or she has committed. When an instructor goes to the submission records page, a new link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission records.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid GitHub page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, it will pull data from GitHub using the API described below and display the lines added, lines updated, and lines deleted.&lt;br /&gt;
[[File:Github Metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, obtain a personal access token from GitHub by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps]&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'&lt;br /&gt;
* GitHub data is then fetched from GitHub's [https://developer.github.com/v3/repos/statistics/ Statistics API]&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler: it retrieves data from a remote server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications, with the client making requests to the server whenever a user asks for the metrics. In many systems the server is a database with application logic implemented as stored procedures; in our case, the server is GitHub.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, GitHub id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use RSpec for unit testing and do not exercise the GitHub API directly. They test the data in the new '''github_contributors''' table, which has a foreign key to '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108698</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108698"/>
		<updated>2017-04-29T01:15:34Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with GitHub metrics (for example, number of committers, number of commits, lines of code modified, lines added, and lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. One is to extract GitHub metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) for early prediction of projects that are likely to fail. The prediction is based on more than 200 past projects (Fall 2012–Fall 2016). Based on the metadata from students' repos/pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as improve a student's visibility into the work he or she has committed. When an instructor goes to the submission records page, a new link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission records.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid GitHub page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
If the link is valid, it will pull data from GitHub using the API described below and display the lines added, lines updated, and lines deleted.&lt;br /&gt;
[[File:Github Metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, obtain a personal access token from GitHub by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps]&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'&lt;br /&gt;
* GitHub data is then fetched from GitHub's [https://developer.github.com/v3/repos/statistics/ Statistics API]&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler: it retrieves data from a remote server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications, with the client making requests to the server whenever a user asks for the metrics. In many systems the server is a database with application logic implemented as stored procedures; in our case, the server is GitHub.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, GitHub id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use RSpec for unit testing and do not exercise the GitHub API directly. They test the data in the new '''github_contributors''' table, which has a foreign key to '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108697</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108697"/>
		<updated>2017-04-29T01:11:11Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with GitHub metrics (for example, number of committers, number of commits, lines of code modified, lines added, and lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. One is to extract GitHub metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) for early prediction of projects that are likely to fail. The prediction is based on more than 200 past projects (Fall 2012–Fall 2016). Based on the metadata from students' repos/pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as improve a student's visibility into the work he or she has committed. When an instructor goes to the submission records page, a new link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink.&lt;br /&gt;
&lt;br /&gt;
[[File:Submission records.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid GitHub page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
[[File:Invalid Github link.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
If the link is valid, it will pull data from GitHub using the API described below and display the lines added, lines updated, and lines deleted.&lt;br /&gt;
[[File:Github Metrics.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, obtain a personal access token from GitHub by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps]&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'&lt;br /&gt;
* GitHub data is then fetched from GitHub's [https://developer.github.com/v3/repos/statistics/ Statistics API]&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler: it retrieves data from a remote server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications, with the client making requests to the server whenever a user asks for the metrics. In many systems the server is a database with application logic implemented as stored procedures; in our case, the server is GitHub.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, GitHub id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use RSpec to validate the response containing the metrics from GitHub and the display of the various metrics. They test the data in the new '''github_contributors''' table, which has a foreign key to '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
# Test 1:&lt;br /&gt;
## Log in as instructor6&lt;br /&gt;
## Go to the submission page&lt;br /&gt;
## Validate that metrics are pulled in from GitHub&lt;br /&gt;
# Test 2:&lt;br /&gt;
## Log in as a student&lt;br /&gt;
## Go to the submission page&lt;br /&gt;
## Confirm the student's metric values&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Github_Metrics.png&amp;diff=108696</id>
		<title>File:Github Metrics.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Github_Metrics.png&amp;diff=108696"/>
		<updated>2017-04-29T01:08:58Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Submission_records.png&amp;diff=108695</id>
		<title>File:Submission records.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Submission_records.png&amp;diff=108695"/>
		<updated>2017-04-29T01:08:17Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Invalid_Github_link.png&amp;diff=108694</id>
		<title>File:Invalid Github link.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Invalid_Github_link.png&amp;diff=108694"/>
		<updated>2017-04-29T01:07:45Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108693</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108693"/>
		<updated>2017-04-29T01:06:18Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with GitHub metrics (for example, number of committers, number of commits, lines of code modified, lines added, and lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. One is to extract GitHub metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) for early prediction of projects that are likely to fail. The prediction is based on more than 200 past projects (Fall 2012–Fall 2016). Based on the metadata from students' repos/pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as improve a student's visibility into the work he or she has committed. When an instructor goes to the submission records page, a new link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink.&lt;br /&gt;
&lt;br /&gt;
[[File:Sumission_Record.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid GitHub page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, it will pull data from GitHub using the API described below and display the lines added, lines updated, and lines deleted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, obtain a personal access token from GitHub by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps]&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'&lt;br /&gt;
* GitHub data is then fetched from GitHub's [https://developer.github.com/v3/repos/statistics/ Statistics API]&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler: it retrieves data from a remote server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications, with the client making requests to the server whenever a user asks for the metrics. In many systems the server is a database with application logic implemented as stored procedures; in our case, the server is GitHub.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, GitHub id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests use RSpec to validate the response containing the metrics from GitHub and the display of the various metrics. They test the data in the new '''github_contributors''' table, which has a foreign key to '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
# Test 1:&lt;br /&gt;
## Log in as instructor6&lt;br /&gt;
## Go to the submission page&lt;br /&gt;
## Validate that metrics are pulled in from GitHub&lt;br /&gt;
# Test 2:&lt;br /&gt;
## Log in as a student&lt;br /&gt;
## Go to the submission page&lt;br /&gt;
## Confirm the student's metric values&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108692</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108692"/>
		<updated>2017-04-29T01:05:08Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with GitHub metrics (for example, number of committers, number of commits, lines of code modified, lines added, and lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors predict which projects are likely to be accepted or rejected (even before the final due dates).&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. One is to extract GitHub metadata from the submitted repos and pull requests. The second is to build a classifier (e.g., Bayesian) for early prediction of projects that are likely to fail. The prediction is based on more than 200 past projects (Fall 2012–Fall 2016). Based on the metadata from students' repos/pull requests, we can warn both authors and teaching staff if our model predicts that a project is likely to fail.&lt;br /&gt;
&lt;br /&gt;
The methodology of this project is to add a means of monitoring the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as improve a student's visibility into the work he or she has committed. When an instructor goes to the submission records page, a new link called &amp;quot;View Github Metrics&amp;quot; will appear below each hyperlink.&lt;br /&gt;
&lt;br /&gt;
[[File:Sumission_Record.png]]&lt;br /&gt;
&lt;br /&gt;
If the link is not a valid GitHub page, the controller will return a &amp;quot;No Results Found&amp;quot; page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the link is valid, it will pull data from GitHub using the API described below and display the lines added, lines updated, and lines deleted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====Extract Github metadata=====&lt;br /&gt;
* First, obtain a personal access token from GitHub by following these [https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ steps]&lt;br /&gt;
* Save the access token in the environment variable 'EXPERTIZA_GITHUB_TOKEN'&lt;br /&gt;
* GitHub data is then fetched from GitHub's [https://developer.github.com/v3/repos/statistics/ Statistics API]&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature works much like a web crawler: it retrieves data from a remote server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which divides the system into two applications, with the client making requests to the server whenever a user asks for the metrics. In many systems the server is a database with application logic implemented as stored procedures; in our case, the server is GitHub.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
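The schema above could be expressed as a Rails schema definition roughly like the following (a sketch only; the column types are assumptions, and the actual migration in Expertiza may differ):&lt;br /&gt;

```ruby
# Sketch of the github_contributors schema described above.
# Column names follow the list in this document; types are assumed.
ActiveRecord::Schema.define do
  create_table :github_contributors do |t|
    t.string  :commiter_url            # committer email
    t.string  :commiter_id             # committer id
    t.integer :total_commits
    t.integer :files_changed
    t.integer :lines_changed
    t.integer :lines_added
    t.integer :lines_removed
    t.integer :lines_persisted
    t.integer :submission_record_id    # FK to submission_records
    t.timestamps
  end
  add_index :github_contributors, :commiter_id
end
```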
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests will use rspec to validate the metrics response from Github and the display of the various metrics. These tests will exercise the data in the new table '''github_contributors''', which has a foreign key to '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
# Test 1: &lt;br /&gt;
## Login as instructor6&lt;br /&gt;
## Go to submission page&lt;br /&gt;
## Validate that metrics are pulled in from github&lt;br /&gt;
# Test 2:&lt;br /&gt;
## Login as student&lt;br /&gt;
## Go to submission page&lt;br /&gt;
## Confirm the student's metrics values&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with valid github hyperlink&lt;br /&gt;
| Github Metrics and status code 200&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with file uploaded&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with non github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
| show&lt;br /&gt;
| submission_id with private github hyperlink&lt;br /&gt;
| render 'github_contributors/not_found'&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108495</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108495"/>
		<updated>2017-04-13T02:19:25Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: /* Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with Github metrics (for example, number of committers, number of commits, number of lines of code modified, number of lines added, and number of lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors to predict which projects are likely to be merged.&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second part is to build a classifier (e.g., Bayesian) to make early predictions about which projects are likely to fail. This prediction is based on more than 200 past projects. The features above should be used, together with some temporal features (e.g., the temporal pattern of this team’s commits so far). Eventually, we would like to e-mail students whose metrics are bad, giving them advice on how to improve.&lt;br /&gt;
&lt;br /&gt;
The purpose of this project is to add a means to monitor the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as give each student better visibility into the work he or she has committed.&lt;br /&gt;
&lt;br /&gt;
====Extract Github metadata====&lt;br /&gt;
&lt;br /&gt;
=====Data Flow=====&lt;br /&gt;
&lt;br /&gt;
The code should sync the data with Github whenever someone (student or instructor) looks at a view that shows Github data.&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature behaves much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which segregates the system into two applications: the client makes requests to the server whenever a user looks up the metrics. In many cases the server is a database with application logic represented as stored procedures; in our case, it is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:1744_db_schema.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
A new table called '''submission_records_github_contributors''' is created, which acts as a join table between the '''submission_records''' and '''github_contributors''' tables. It has two columns:&lt;br /&gt;
* github_contributor_id - Foreign Key to '''github_contributors''' table.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
A composite unique key constraint is added on ''github_contributor_id'' and ''submission_record_id''.&lt;br /&gt;
&lt;br /&gt;
====Test Plan====&lt;br /&gt;
&lt;br /&gt;
The tests will use rspec to validate the metrics response from Github and the display of the various metrics. These tests will exercise the data in the new tables '''submission_records_github_contributors''' and '''github_contributors''' as well as expand the existing tests on '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
# Test 1: &lt;br /&gt;
## Login as instructor6&lt;br /&gt;
## Go to submission page&lt;br /&gt;
## Validate that metrics are pulled in from github&lt;br /&gt;
# Test 2:&lt;br /&gt;
## Login as student&lt;br /&gt;
## Go to submission page&lt;br /&gt;
## Confirm the student's metrics values&lt;br /&gt;
&lt;br /&gt;
=====Unit Tests=====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameter&lt;br /&gt;
! Expected result&lt;br /&gt;
|-&lt;br /&gt;
| get_metrics&lt;br /&gt;
| team_id = null&lt;br /&gt;
| Null / Exception&lt;br /&gt;
|-&lt;br /&gt;
| get_metrics&lt;br /&gt;
| team_id = Invalid team_id&lt;br /&gt;
| Null / Exception&lt;br /&gt;
|-&lt;br /&gt;
| get_metrics&lt;br /&gt;
| team_id = valid_team_id&lt;br /&gt;
| List of metrics for all the committers in the team.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108275</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108275"/>
		<updated>2017-04-10T21:20:13Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: /* Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with Github metrics (for example, number of committers, number of commits, number of lines of code modified, number of lines added, and number of lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors to predict which projects are likely to be merged.&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second part is to build a classifier (e.g., Bayesian) to make early predictions about which projects are likely to fail. This prediction is based on more than 200 past projects. The features above should be used, together with some temporal features (e.g., the temporal pattern of this team’s commits so far). Eventually, we would like to e-mail students whose metrics are bad, giving them advice on how to improve.&lt;br /&gt;
&lt;br /&gt;
The purpose of this project is to add a means to monitor the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as give each student better visibility into the work he or she has committed.&lt;br /&gt;
&lt;br /&gt;
====Extract Github metadata====&lt;br /&gt;
&lt;br /&gt;
=====Use Cases=====&lt;br /&gt;
&lt;br /&gt;
=====Data Flow=====&lt;br /&gt;
&lt;br /&gt;
The code should sync the data with Github whenever someone (student or instructor) looks at a view that shows Github data.&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
This feature behaves much like a web crawler, which fetches data from a server and stores it locally. For the architectural style of our subsystem we therefore chose the client/server style, which segregates the system into two applications: the client makes requests to the server whenever a user looks up the metrics. In many cases the server is a database with application logic represented as stored procedures; in our case, it is Github.&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
[[File:1744 design.png|frame|center|30px|30px]]&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:db_github_schema.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
A new table called '''submission_records_github_contributors''' is created, which acts as a join table between the '''submission_records''' and '''github_contributors''' tables. It has two columns:&lt;br /&gt;
* github_contributor_id - Foreign Key to '''github_contributors''' table.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
A composite unique key constraint is added on ''github_contributor_id'' and ''submission_record_id''.&lt;br /&gt;
&lt;br /&gt;
=====Test Plan=====&lt;br /&gt;
&lt;br /&gt;
The tests will use stubs to simulate the number of commits from Github and the display of the various metrics. These tests will exercise the data in the new tables '''submission_records_github_contributors''' and '''github_contributors''' as well as expand the existing tests on '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
Test 1: &lt;br /&gt;
1. Login as instructor6&lt;br /&gt;
2. Go to submission page&lt;br /&gt;
3. Validate that metrics are pulled in from github&lt;br /&gt;
&lt;br /&gt;
Test 2:&lt;br /&gt;
1. Login as student&lt;br /&gt;
2. Go to submission page&lt;br /&gt;
3. Confirm the student's metrics values&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108196</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108196"/>
		<updated>2017-04-08T02:01:19Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with Github metrics (for example, number of committers, number of commits, number of lines of code modified, number of lines added, and number of lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors to predict which projects are likely to be merged.&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second part is to build a classifier (e.g., Bayesian) to make early predictions about which projects are likely to fail. This prediction is based on more than 200 past projects. The features above should be used, together with some temporal features (e.g., the temporal pattern of this team’s commits so far). Eventually, we would like to e-mail students whose metrics are bad, giving them advice on how to improve.&lt;br /&gt;
&lt;br /&gt;
The purpose of this project is to add a means to monitor the individual contributions of team members throughout the duration of the project in order to quantitatively assess their work. This will aid the teaching staff and team members during the review process, as well as give each student better visibility into the work he or she has committed.&lt;br /&gt;
&lt;br /&gt;
====Extract Github metadata====&lt;br /&gt;
&lt;br /&gt;
=====Use Cases=====&lt;br /&gt;
&lt;br /&gt;
=====Data Flow=====&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:db_github_schema.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
A new table called '''submission_records_github_contributors''' is created, which acts as a join table between the '''submission_records''' and '''github_contributors''' tables. It has two columns:&lt;br /&gt;
* github_contributor_id - Foreign Key to '''github_contributors''' table.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
A composite unique key constraint is added on ''github_contributor_id'' and ''submission_record_id''.&lt;br /&gt;
&lt;br /&gt;
=====Test Plan=====&lt;br /&gt;
&lt;br /&gt;
The tests will use stubs to simulate the number of commits from Github and the display of the various metrics. These tests will exercise the data in the new tables '''submission_records_github_contributors''' and '''github_contributors''' as well as expand the existing tests on '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108134</id>
		<title>CSC/ECE 517 Spring 2017/finalproject E1744</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/finalproject_E1744&amp;diff=108134"/>
		<updated>2017-04-08T00:22:10Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''CSC517 Final Project - E1744 Github Metrics'''&lt;br /&gt;
&lt;br /&gt;
'''(asorgiu, george2, mdunlap, ygou14)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Proposed Design Document ==&lt;br /&gt;
&lt;br /&gt;
===Description===&lt;br /&gt;
We will add a new feature to provide Expertiza with Github metrics (for example, number of committers, number of commits, number of lines of code modified, number of lines added, and number of lines deleted) from each group’s submitted repo link. This information should prove useful for differentiating the performance of team members for grading purposes. It may also help instructors to predict which projects are likely to be merged.&lt;br /&gt;
&lt;br /&gt;
===Work to be done===&lt;br /&gt;
This project is divided into two parts. The first is to extract Github metadata from the submitted repos and pull requests. The second part is to build a classifier (e.g., Bayesian) to make early predictions about which projects are likely to fail. This prediction is based on more than 200 past projects. The features above should be used, together with some temporal features (e.g., the temporal pattern of this team’s commits so far). Eventually, we would like to e-mail students whose metrics are bad, giving them advice on how to improve.&lt;br /&gt;
&lt;br /&gt;
====Extract Github metadata====&lt;br /&gt;
&lt;br /&gt;
=====Use Cases=====&lt;br /&gt;
&lt;br /&gt;
=====Data Flow=====&lt;br /&gt;
&lt;br /&gt;
=====Architectural Design=====&lt;br /&gt;
&lt;br /&gt;
=====UML=====&lt;br /&gt;
&lt;br /&gt;
=====Database Schema Changes=====&lt;br /&gt;
[[File:db_github_schema.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
A new table called '''github_contributors''' is created to store the data for each committer. The table contains the committer's email, github_id, and all the metrics associated with a project. At the moment we handle the following metrics:&lt;br /&gt;
* Committer email - commiter_url&lt;br /&gt;
* Committer id - commiter_id&lt;br /&gt;
* Total number of commits - total_commits&lt;br /&gt;
* Number of files changed - files_changed&lt;br /&gt;
* Lines of code changed - lines_changed&lt;br /&gt;
* Lines of code added - lines_added&lt;br /&gt;
* Lines of code removed - lines_removed&lt;br /&gt;
* Lines of code added that survived until final submission - lines_persisted.&lt;br /&gt;
&lt;br /&gt;
An index on committer_id is added to enable search.&lt;br /&gt;
&lt;br /&gt;
A new table called '''submission_records_github_contributors''' is created, which acts as a join table between the '''submission_records''' and '''github_contributors''' tables. It has two columns:&lt;br /&gt;
* github_contributor_id - Foreign Key to '''github_contributors''' table.&lt;br /&gt;
* submission_record_id - Foreign Key to '''submission_records''' table.&lt;br /&gt;
&lt;br /&gt;
A composite unique key constraint is added on ''github_contributor_id'' and ''submission_record_id''.&lt;br /&gt;
&lt;br /&gt;
=====Test Plan=====&lt;br /&gt;
&lt;br /&gt;
The tests will use stubs to simulate the number of commits from Github and the display of the various metrics. These tests will exercise the data in the new tables '''submission_records_github_contributors''' and '''github_contributors''' as well as expand the existing tests on '''submission_records'''.&lt;br /&gt;
&lt;br /&gt;
====Build a classifier====&lt;br /&gt;
THIS WILL NOT BE IMPLEMENTED AS PART OF THIS PROJECT.  This is future work to be done.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107802</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107802"/>
		<updated>2017-04-01T02:08:48Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open source web application project based on Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate score. Code Climate aids in determining the DRYness and style of code; more information can be found at https://codeclimate.com/dashboard. &lt;br /&gt;
&lt;br /&gt;
'''Running Tests'''&lt;br /&gt;
&lt;br /&gt;
After building the Expertiza environment, run 'rspec spec/features/*_spec.rb' to run all of the feature spec test files. &lt;br /&gt;
&lt;br /&gt;
'''Test Plan'''&lt;br /&gt;
&lt;br /&gt;
Since this is a refactoring of rspec tests, the testing will consist of running the tests as described in the previous section and ensuring that they all pass. To fully verify that the refactoring worked, the negations of the expect statements were tested to ensure that they failed and that the tests were still exercising the code as expected. Once a test failed as expected, it was changed back to its passing form.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer Method''' &lt;br /&gt;
&lt;br /&gt;
The delayed_mailer_spec covers testing scenarios for the email reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had massive duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring, each of the test cases in delayed_mailer_spec.rb was written in fewer than five lines of code.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Scheduled_task_spec Method''' &lt;br /&gt;
&lt;br /&gt;
The scheduled_task_spec covers testing scenarios for scheduling the deadline reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had massive duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring, the majority of the test cases in scheduled_spec.rb were written in fewer than five lines, with the exception of the team formation test, which required additional logic but was still significantly compressed.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
The assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options in assignment creation.&lt;br /&gt;
Using CodeClimate, it was identified that a large portion of the code was duplicated across multiple test cases, which violates the DRY principle. The redundant code was generalized and placed in methods instead of being repeated in each test case, and methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactoring: instead of duplicating the same steps in every test case, handle_questionaire is called with a few parameters, replacing the redundant code.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
The instructor_interface_spec covers testing scenarios like creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest violation of the DRY principle (as determined by CodeClimate) was functionality that was exactly duplicated in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and then included in both of the other Ruby files as a mixin.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring questionnaire_spec'''&lt;br /&gt;
&lt;br /&gt;
The questionnaire_spec covers testing scenarios relating to creating questionnaires and filling them out. The unique thing about the questionnaire_spec was that it had many instances of similar, but not identical, code. This is because there are many different types of questions and every variation has to be tested. Aside from the specific question name, the process of testing the editing and deleting of each question type was the same. To make this more generic and repeatable, a new method was created which took the question type as an input, and an each loop was used to cycle through each type and test editing and deletion. Below is the definition which was created to test each question type for the ability to edit and delete.&lt;br /&gt;
&lt;br /&gt;
  question_type = %w(Criterion Scale Dropdown Checkbox TextArea TextField UploadFile SectionHeader TableHeader ColumnHeader)&lt;br /&gt;
&lt;br /&gt;
  def load_question question_type, verify_button&lt;br /&gt;
    load_questionnaire&lt;br /&gt;
    fill_in('question_total_num', with: '1')&lt;br /&gt;
    select(question_type, from: 'question_type')&lt;br /&gt;
    click_button &amp;quot;Add&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('Remove') if verify_button&lt;br /&gt;
  &lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!') if verify_button&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def edit_created_question&lt;br /&gt;
    first(&amp;quot;textarea[placeholder='Edit question content here']&amp;quot;).set &amp;quot;Question edit&amp;quot;&lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!')&lt;br /&gt;
    expect(page).to have_content('Question edit')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def check_deleted_question&lt;br /&gt;
    click_on('Remove')&lt;br /&gt;
    expect(page).to have_content('You have successfully deleted the question!')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def choose_check_type command_type&lt;br /&gt;
    if command_type == 'edit'&lt;br /&gt;
      edit_created_question&lt;br /&gt;
    else&lt;br /&gt;
      check_deleted_question&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe &amp;quot;Edit and delete a question&amp;quot; do&lt;br /&gt;
    question_type.each do |q_type|&lt;br /&gt;
      %w(edit delete).each do |q_command|&lt;br /&gt;
        it &amp;quot;is able to &amp;quot; + q_command + &amp;quot; &amp;quot; + q_type + &amp;quot; question&amp;quot; do&lt;br /&gt;
          load_question q_type, false&lt;br /&gt;
          choose_check_type q_command&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring quiz_spec'''&lt;br /&gt;
The quiz_spec covers the creation and use of quizzes by instructors and students.  Like the other refactored files, this spec had several areas of duplicated code that were extracted into individual definitions.  What made updating this module interesting was its high ABC (Assignment, Branch, Condition) score.  To reduce this metric, several of the definitions had to be split into smaller logical methods called by the refactored methods.  Below is an example where new definitions were created to reduce the ABC score.&lt;br /&gt;
&lt;br /&gt;
  def fill_in_choices&lt;br /&gt;
    # Fill in for all 4 choices&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_1_txt', with: 'Test Quiz 1'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_2_txt', with: 'Test Quiz 2'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_3_txt', with: 'Test Quiz 3'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_4_txt', with: 'Test Quiz 4'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_quiz&lt;br /&gt;
    # Fill in the form for Name&lt;br /&gt;
    fill_in 'questionnaire_name', with: 'Quiz for test'&lt;br /&gt;
 &lt;br /&gt;
    # Fill in the form for Question 1&lt;br /&gt;
    fill_in 'text_area', with: 'Test Question 1'&lt;br /&gt;
 &lt;br /&gt;
    # Choose the quiz to be a single choice question&lt;br /&gt;
    page.choose('question_type_1_type_multiplechoiceradio')&lt;br /&gt;
 &lt;br /&gt;
    fill_in_choices&lt;br /&gt;
 &lt;br /&gt;
    # Choose the first one to be the correct answer&lt;br /&gt;
    page.choose('new_choices_1_MultipleChoiceRadio_1_iscorrect_1')&lt;br /&gt;
 &lt;br /&gt;
    # Save quiz&lt;br /&gt;
    click_on 'Create Quiz'&lt;br /&gt;
  end&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107674</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107674"/>
		<updated>2017-03-31T21:54:29Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza'''&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source web application built on the Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate score. Code Climate aids in assessing the DRYness and style of code; more information can be found at https://codeclimate.com/dashboard. &lt;br /&gt;
&lt;br /&gt;
'''Running Tests'''&lt;br /&gt;
&lt;br /&gt;
After building the Expertiza environment, run 'rspec spec/features/*_spec.rb' to execute all of the feature spec test files. &lt;br /&gt;
&lt;br /&gt;
'''Test Plan'''&lt;br /&gt;
&lt;br /&gt;
Since this is a refactoring of rspec tests, testing consists of running the tests as described in the previous section and ensuring that they all pass. To fully verify that the refactoring worked, the negative of each expect statement was tested to ensure that it failed and that the tests were still exercising the code as expected. Once a test failed as expected, it was changed back to run to success.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer Method''' &lt;br /&gt;
&lt;br /&gt;
The delayed_mailer_spec covers testing scenarios for the email reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took timestamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had a large amount of duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), is used to encapsulate the task being performed.&lt;br /&gt;
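The time-zone pitfall above can be illustrated in plain Ruby (a hypothetical illustration, not Expertiza code): parsing a timestamp string that carries no zone information depends on the process's local time zone, so the same string can map to different instants.&lt;br /&gt;
&lt;br /&gt;
```ruby
require 'time'

# Illustration only (not Expertiza code): a zone-less timestamp string is
# interpreted in the process's local time zone, so the resulting instant
# shifts when the zone changes.
stamp = '2017-03-31 21:54:29'

ENV['TZ'] = 'UTC'
utc_instant = Time.parse(stamp).to_i

ENV['TZ'] = 'EST5'   # POSIX zone string: five hours behind UTC, no DST
est_instant = Time.parse(stamp).to_i

# The same wall-clock string now maps to an instant five hours later.
puts est_instant - utc_instant
```
&lt;br /&gt;
Zone-aware parsing avoids this by pinning the interpretation to a configured zone instead of whatever the host happens to use.&lt;br /&gt;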
&lt;br /&gt;
After refactoring in delayed_mailer_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_delayed_job(stage)&lt;br /&gt;
       #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
       it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
         enqueue_delayed_job(stage)&lt;br /&gt;
         expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
         expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
       end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Scheduled_task_spec Method''' &lt;br /&gt;
&lt;br /&gt;
The scheduled_task_spec covers testing scenarios for scheduling the deadline reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took timestamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had a large amount of duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in scheduled_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_scheduled_tasks(stage)&lt;br /&gt;
      #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
      it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
        enqueue_scheduled_tasks(stage)&lt;br /&gt;
        expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
        expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
Assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options available during assignment creation.&lt;br /&gt;
CodeClimate identified that a large portion of the code was duplicated across multiple test cases, violating the DRY principle. The redundant code was generalized into methods instead of being repeated in each test case, and methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactored code: instead of repeating the same steps, each test case calls handle_questionaire with a few parameters.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
Instructor_interface_spec covers testing scenarios such as creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest DRY violation (as determined by CodeClimate) was functionality that was duplicated verbatim in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and then included in both of the other Ruby files as a mixin.&lt;br /&gt;
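As a minimal sketch of that mixin approach (module, class, and method names here are illustrative, not the exact Expertiza helpers), the shared steps live in one module and each spec simply mixes it in with include:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Illustrative sketch of the mixin extraction; all names are hypothetical.
module InstructorInterfaceHelpers
  # Stand-in for the shared setup steps that were duplicated across specs.
  def create_default_course
    'course created'
  end
end

# Each spec file includes the module instead of duplicating the code.
class InstructorInterfaceSpec
  include InstructorInterfaceHelpers
end

class QuestionnaireSpec
  include InstructorInterfaceHelpers
end

puts InstructorInterfaceSpec.new.create_default_course   # prints: course created
puts QuestionnaireSpec.new.create_default_course         # prints: course created
```
&lt;br /&gt;
In the actual specs the module would be included inside the describe blocks (or via RSpec's configuration), but the underlying mechanism is the same Ruby include.&lt;br /&gt;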
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring questionnaire_spec'''&lt;br /&gt;
&lt;br /&gt;
The questionnaire_spec covers testing scenarios for creating questionnaires and filling them out.  What made questionnaire_spec unique was that it had many instances of similar, but not identical, code: there are many different question types, and every variation has to be tested.  Aside from the specific question name, the process of testing the editing and deleting of each question type was the same.  To make this more generic and repeatable, a new method was created that takes the question type as an input, and an each loop cycles through the types to test editing and deletion.  Below is the definition created to test each question type for the ability to edit and delete.&lt;br /&gt;
&lt;br /&gt;
  question_type = %w(Criterion Scale Dropdown Checkbox TextArea TextField UploadFile SectionHeader TableHeader ColumnHeader)&lt;br /&gt;
&lt;br /&gt;
  def load_question question_type, verify_button&lt;br /&gt;
    load_questionnaire&lt;br /&gt;
    fill_in('question_total_num', with: '1')&lt;br /&gt;
    select(question_type, from: 'question_type')&lt;br /&gt;
    click_button &amp;quot;Add&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('Remove') if verify_button&lt;br /&gt;
  &lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!') if verify_button&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def edit_created_question&lt;br /&gt;
    first(&amp;quot;textarea[placeholder='Edit question content here']&amp;quot;).set &amp;quot;Question edit&amp;quot;&lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!')&lt;br /&gt;
    expect(page).to have_content('Question edit')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def check_deleted_question&lt;br /&gt;
    click_on('Remove')&lt;br /&gt;
    expect(page).to have_content('You have successfully deleted the question!')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def choose_check_type command_type&lt;br /&gt;
    if command_type == 'edit'&lt;br /&gt;
      edit_created_question&lt;br /&gt;
    else&lt;br /&gt;
      check_deleted_question&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe &amp;quot;Edit and delete a question&amp;quot; do&lt;br /&gt;
    question_type.each do |q_type|&lt;br /&gt;
      %w(edit delete).each do |q_command|&lt;br /&gt;
        it &amp;quot;is able to &amp;quot; + q_command + &amp;quot; &amp;quot; + q_type + &amp;quot; question&amp;quot; do&lt;br /&gt;
          load_question q_type, false&lt;br /&gt;
          choose_check_type q_command&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring quiz_spec'''&lt;br /&gt;
The quiz_spec covers the creation and use of quizzes by instructors and students.  Like the other refactored files, this spec had several areas of duplicated code that were extracted into individual definitions.  What made updating this module interesting was its high ABC (Assignment, Branch, Condition) score.  To reduce this metric, several of the definitions had to be split into smaller logical methods called by the refactored methods.  Below is an example where new definitions were created to reduce the ABC score.&lt;br /&gt;
&lt;br /&gt;
  def fill_in_choices&lt;br /&gt;
    # Fill in for all 4 choices&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_1_txt', with: 'Test Quiz 1'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_2_txt', with: 'Test Quiz 2'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_3_txt', with: 'Test Quiz 3'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_4_txt', with: 'Test Quiz 4'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_quiz&lt;br /&gt;
    # Fill in the form for Name&lt;br /&gt;
    fill_in 'questionnaire_name', with: 'Quiz for test'&lt;br /&gt;
 &lt;br /&gt;
    # Fill in the form for Question 1&lt;br /&gt;
    fill_in 'text_area', with: 'Test Question 1'&lt;br /&gt;
 &lt;br /&gt;
    # Choose the quiz to be a single choice question&lt;br /&gt;
    page.choose('question_type_1_type_multiplechoiceradio')&lt;br /&gt;
 &lt;br /&gt;
    fill_in_choices&lt;br /&gt;
 &lt;br /&gt;
    # Choose the first one to be the correct answer&lt;br /&gt;
    page.choose('new_choices_1_MultipleChoiceRadio_1_iscorrect_1')&lt;br /&gt;
 &lt;br /&gt;
    # Save quiz&lt;br /&gt;
    click_on 'Create Quiz'&lt;br /&gt;
  end&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107673</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107673"/>
		<updated>2017-03-31T21:53:28Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza'''&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source web application built on the Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate score. Code Climate aids in assessing the DRYness and style of code; more information can be found at https://codeclimate.com/dashboard. &lt;br /&gt;
&lt;br /&gt;
'''Running Tests'''&lt;br /&gt;
&lt;br /&gt;
After building the Expertiza environment, run 'rspec spec/features/*_spec.rb' to execute all of the feature spec test files. To fully verify that the refactoring worked, the negative of each expect statement was tested to ensure that it failed and that the tests were still exercising the code as expected. Once a test failed as expected, it was changed back to run to success.&lt;br /&gt;
&lt;br /&gt;
'''Test Plan'''&lt;br /&gt;
&lt;br /&gt;
Since this is a refactoring of rspec tests, the testing will consist of running the tests as described in the previous section and ensuring that all of the tests pass.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer Method''' &lt;br /&gt;
&lt;br /&gt;
The delayed_mailer_spec covers testing scenarios for the email reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took timestamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had a large amount of duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in delayed_mailer_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_delayed_job(stage)&lt;br /&gt;
       #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
       it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
         enqueue_delayed_job(stage)&lt;br /&gt;
         expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
         expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
       end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Scheduled_task_spec Method''' &lt;br /&gt;
&lt;br /&gt;
The scheduled_task_spec covers testing scenarios for scheduling the deadline reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took timestamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had a large amount of duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in scheduled_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_scheduled_tasks(stage)&lt;br /&gt;
      #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
      it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
        enqueue_scheduled_tasks(stage)&lt;br /&gt;
        expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
        expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
Assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options available during assignment creation.&lt;br /&gt;
CodeClimate identified that a large portion of the code was duplicated across multiple test cases, violating the DRY principle. The redundant code was generalized into methods instead of being repeated in each test case, and methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactored code: instead of repeating the same steps, each test case calls handle_questionaire with a few parameters.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
Instructor_interface_spec covers testing scenarios such as creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest DRY violation (as determined by CodeClimate) was functionality that was duplicated verbatim in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and then included in both of the other Ruby files as a mixin.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring questionnaire_spec'''&lt;br /&gt;
&lt;br /&gt;
The questionnaire_spec covers testing scenarios for creating questionnaires and filling them out.  What made questionnaire_spec unique was that it had many instances of similar, but not identical, code: there are many different question types, and every variation has to be tested.  Aside from the specific question name, the process of testing the editing and deleting of each question type was the same.  To make this more generic and repeatable, a new method was created that takes the question type as an input, and an each loop cycles through the types to test editing and deletion.  Below is the definition created to test each question type for the ability to edit and delete.&lt;br /&gt;
&lt;br /&gt;
  question_type = %w(Criterion Scale Dropdown Checkbox TextArea TextField UploadFile SectionHeader TableHeader ColumnHeader)&lt;br /&gt;
&lt;br /&gt;
  def load_question question_type, verify_button&lt;br /&gt;
    load_questionnaire&lt;br /&gt;
    fill_in('question_total_num', with: '1')&lt;br /&gt;
    select(question_type, from: 'question_type')&lt;br /&gt;
    click_button &amp;quot;Add&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('Remove') if verify_button&lt;br /&gt;
  &lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!') if verify_button&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def edit_created_question&lt;br /&gt;
    first(&amp;quot;textarea[placeholder='Edit question content here']&amp;quot;).set &amp;quot;Question edit&amp;quot;&lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!')&lt;br /&gt;
    expect(page).to have_content('Question edit')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def check_deleted_question&lt;br /&gt;
    click_on('Remove')&lt;br /&gt;
    expect(page).to have_content('You have successfully deleted the question!')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def choose_check_type command_type&lt;br /&gt;
    if command_type == 'edit'&lt;br /&gt;
      edit_created_question&lt;br /&gt;
    else&lt;br /&gt;
      check_deleted_question&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe &amp;quot;Edit and delete a question&amp;quot; do&lt;br /&gt;
    question_type.each do |q_type|&lt;br /&gt;
      %w(edit delete).each do |q_command|&lt;br /&gt;
        it &amp;quot;is able to &amp;quot; + q_command + &amp;quot; &amp;quot; + q_type + &amp;quot; question&amp;quot; do&lt;br /&gt;
          load_question q_type, false&lt;br /&gt;
          choose_check_type q_command&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring quiz_spec'''&lt;br /&gt;
The quiz_spec covers the creation and use of quizzes by instructors and students.  Like the other refactored files, this spec had several areas of duplicated code that were extracted into individual definitions.  What made updating this module interesting was its high ABC (Assignment, Branch, Condition) score.  To reduce this metric, several of the definitions had to be split into smaller logical methods called by the refactored methods.  Below is an example where new definitions were created to reduce the ABC score.&lt;br /&gt;
&lt;br /&gt;
  def fill_in_choices&lt;br /&gt;
    # Fill in for all 4 choices&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_1_txt', with: 'Test Quiz 1'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_2_txt', with: 'Test Quiz 2'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_3_txt', with: 'Test Quiz 3'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_4_txt', with: 'Test Quiz 4'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_quiz&lt;br /&gt;
    # Fill in the form for Name&lt;br /&gt;
    fill_in 'questionnaire_name', with: 'Quiz for test'&lt;br /&gt;
 &lt;br /&gt;
    # Fill in the form for Question 1&lt;br /&gt;
    fill_in 'text_area', with: 'Test Question 1'&lt;br /&gt;
 &lt;br /&gt;
    # Choose the quiz to be a single choice question&lt;br /&gt;
    page.choose('question_type_1_type_multiplechoiceradio')&lt;br /&gt;
 &lt;br /&gt;
    fill_in_choices&lt;br /&gt;
 &lt;br /&gt;
    # Choose the first one to be the correct answer&lt;br /&gt;
    page.choose('new_choices_1_MultipleChoiceRadio_1_iscorrect_1')&lt;br /&gt;
 &lt;br /&gt;
    # Save quiz&lt;br /&gt;
    click_on 'Create Quiz'&lt;br /&gt;
  end&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107492</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107492"/>
		<updated>2017-03-27T21:50:35Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza'''&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source web application built on the Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate score. Code Climate aids in assessing the DRYness and style of code; more information can be found at https://codeclimate.com/dashboard. &lt;br /&gt;
&lt;br /&gt;
'''Running Tests'''&lt;br /&gt;
&lt;br /&gt;
After building the Expertiza environment, run 'rspec spec/features/*_spec.rb' to execute all of the feature spec test files.&lt;br /&gt;
&lt;br /&gt;
'''Test Plan'''&lt;br /&gt;
&lt;br /&gt;
Since this is a refactoring of rspec tests, the testing will consist of running the tests as described in the previous section and ensuring that all of the tests pass.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer Method''' &lt;br /&gt;
&lt;br /&gt;
The delayed_mailer_spec covers testing scenarios for the email reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took timestamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had a large amount of duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in delayed_mailer_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_delayed_job(stage)&lt;br /&gt;
       #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
       it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
         enqueue_delayed_job(stage)&lt;br /&gt;
         expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
         expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
       end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Scheduled_task_spec Method''' &lt;br /&gt;
&lt;br /&gt;
The scheduled_task_spec covers testing scenarios for scheduling the deadline reminder feature targeting various users and tasks.&lt;br /&gt;
This spec took timestamps using the time_parse function, which would create issues when users change their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also had a large amount of duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), is used to encapsulate the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in scheduled_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_scheduled_tasks(stage)&lt;br /&gt;
      #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
      it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
        enqueue_scheduled_tasks(stage)&lt;br /&gt;
        expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
        expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
Assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options available during assignment creation.&lt;br /&gt;
CodeClimate identified that a large portion of the code was duplicated across multiple test cases, violating the DRY principle. The redundant code was generalized into methods instead of being repeated in each test case, and methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactored code: instead of repeating the same steps, each test case calls handle_questionaire with a few parameters.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
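&lt;br /&gt;
A test case then collapses to a single call. The sketch below shows the call site in a self-contained form: the selector and questionnaire name are illustrative stand-ins, and the fill-in/validate steps are stubbed, since the real spec drives the browser through Capybara.&lt;br /&gt;

```ruby
# Stand-alone sketch of a handle_questionaire call site (stubbed helpers).
def fill_in_questionaire(css, name)
  # (Capybara form-filling elided)
end

def validate_attributes(name)
  "validated attributes of #{name}"
end

def validate_dropdown
  'validated dropdown'
end

def handle_questionaire(css, name, test_attributes)
  fill_in_questionaire(css, name)
  test_attributes ? validate_attributes(name) : validate_dropdown
end

# One line per test case replaces the duplicated block.
result = handle_questionaire('.review_questionnaire', 'ReviewQuestionnaire2', true)
```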
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
Instructor_interface_spec covers testing scenarios like creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest DRY violation (as determined by CodeClimate) was functionality duplicated verbatim in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and then included in both of the other Ruby files as a mixin.&lt;br /&gt;
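&lt;br /&gt;
The mixin arrangement can be sketched as follows. The module and method names here are illustrative, not the actual Expertiza helpers; the point is that the shared steps live in one module and each spec that needs them pulls them in with include.&lt;br /&gt;

```ruby
# Shared helpers live in one module...
module InstructorInterfaceHelpers
  def login_as_instructor
    # (Capybara login steps elided)
    'logged in'
  end
end

# ...and each spec mixes them in instead of duplicating the code.
class InstructorInterfaceSpec
  include InstructorInterfaceHelpers
end

class QuestionnaireSpec
  include InstructorInterfaceHelpers
end
```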
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring questionnaire_spec'''&lt;br /&gt;
&lt;br /&gt;
The questionnaire_spec covers testing scenarios relating to creating questionnaires and filling them out.  What is unique about the questionnaire_spec is that it had many instances of similar, but not identical, code, because there are many different types of questions and every variation has to be tested.  Aside from the specific question name, the process of testing the editing and deleting of each question type was the same.  To make this more generic and repeatable, a new method was created that takes the question type as an input, and an each loop cycles through the types to test editing and deletion.  Below is the definition created to test each question type for the ability to edit and delete.&lt;br /&gt;
&lt;br /&gt;
  question_type = %w(Criterion Scale Dropdown Checkbox TextArea TextField UploadFile SectionHeader TableHeader ColumnHeader)&lt;br /&gt;
&lt;br /&gt;
  def load_question question_type, verify_button&lt;br /&gt;
    load_questionnaire&lt;br /&gt;
    fill_in('question_total_num', with: '1')&lt;br /&gt;
    select(question_type, from: 'question_type')&lt;br /&gt;
    click_button &amp;quot;Add&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('Remove') if verify_button&lt;br /&gt;
  &lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!') if verify_button&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def edit_created_question&lt;br /&gt;
    first(&amp;quot;textarea[placeholder='Edit question content here']&amp;quot;).set &amp;quot;Question edit&amp;quot;&lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!')&lt;br /&gt;
    expect(page).to have_content('Question edit')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def check_deleted_question&lt;br /&gt;
    click_on('Remove')&lt;br /&gt;
    expect(page).to have_content('You have successfully deleted the question!')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def choose_check_type command_type&lt;br /&gt;
    if command_type == 'edit'&lt;br /&gt;
      edit_created_question&lt;br /&gt;
    else&lt;br /&gt;
      check_deleted_question&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe &amp;quot;Edit and delete a question&amp;quot; do&lt;br /&gt;
    question_type.each do |q_type|&lt;br /&gt;
      %w(edit delete).each do |q_command|&lt;br /&gt;
        it &amp;quot;is able to &amp;quot; + q_command + &amp;quot; &amp;quot; + q_type + &amp;quot; question&amp;quot; do&lt;br /&gt;
          load_question q_type, false&lt;br /&gt;
          choose_check_type q_command&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring quiz_spec'''&lt;br /&gt;
The quiz_spec covers the testing of the creation and use of quizzes by instructors and students.  Similar to the other refactored files, this spec had several areas of duplicated code, which were extracted into individual method definitions.  The interesting thing about updating this module was its high ABC (Assignment, Branch, Condition) score.  To reduce this metric, several definitions needed to be split into smaller logical methods called by the refactored methods.  Below is an example where new definitions were made in order to reduce the ABC score.&lt;br /&gt;
&lt;br /&gt;
  def fill_in_choices&lt;br /&gt;
    # Fill in for all 4 choices&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_1_txt', with: 'Test Quiz 1'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_2_txt', with: 'Test Quiz 2'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_3_txt', with: 'Test Quiz 3'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_4_txt', with: 'Test Quiz 4'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_quiz&lt;br /&gt;
    # Fill in the form for Name&lt;br /&gt;
    fill_in 'questionnaire_name', with: 'Quiz for test'&lt;br /&gt;
 &lt;br /&gt;
    # Fill in the form for Question 1&lt;br /&gt;
    fill_in 'text_area', with: 'Test Question 1'&lt;br /&gt;
 &lt;br /&gt;
    # Choose the quiz to be a single choice question&lt;br /&gt;
    page.choose('question_type_1_type_multiplechoiceradio')&lt;br /&gt;
 &lt;br /&gt;
    fill_in_choices&lt;br /&gt;
 &lt;br /&gt;
    # Choose the first one to be the correct answer&lt;br /&gt;
    page.choose('new_choices_1_MultipleChoiceRadio_1_iscorrect_1')&lt;br /&gt;
 &lt;br /&gt;
    # Save quiz&lt;br /&gt;
    click_on 'Create Quiz'&lt;br /&gt;
  end&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107491</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107491"/>
		<updated>2017-03-26T23:43:16Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza'''&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source web application based on the Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate score. Code Climate aids in determining the DRYness and style of code; more information can be found at https://codeclimate.com/dashboard. &lt;br /&gt;
&lt;br /&gt;
'''Running Tests'''&lt;br /&gt;
&lt;br /&gt;
After building the Expertiza environment, run 'rspec spec/features/*_spec.rb' to execute all of the feature spec files.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer_spec''' &lt;br /&gt;
&lt;br /&gt;
Delayed_mailer_spec covers testing scenarios for the email reminder feature, targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which creates issues when users change their time zones. Calling time_zone_parse instead of time_parse solves the issue. The spec also contained heavily duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), now encapsulates the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in delayed_mailer_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_delayed_job(stage)&lt;br /&gt;
       #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
       it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
         enqueue_delayed_job(stage)&lt;br /&gt;
         expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
         expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
       end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Scheduled_task_spec''' &lt;br /&gt;
&lt;br /&gt;
Scheduled_task_spec covers testing scenarios for scheduling the deadline reminder feature, targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which creates issues when users change their time zones. Calling time_zone_parse instead of time_parse solves the issue. The spec also contained heavily duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), now encapsulates the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in scheduled_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_scheduled_tasks(stage)&lt;br /&gt;
      #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
      it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
        enqueue_scheduled_tasks(stage)&lt;br /&gt;
        expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
        expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
Assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options available during assignment creation.&lt;br /&gt;
Using CodeClimate, it was identified that a large portion of the code was duplicated across multiple test cases, which violates the DRY principle. The redundant code was generalized into methods instead of being written in each of the test cases, and methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactored spec: instead of repeating the same steps in every test case, handle_questionaire is called with a few parameters, replacing the redundant code.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
Instructor_interface_spec covers testing scenarios like creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest DRY violation (as determined by CodeClimate) was functionality duplicated verbatim in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and then included in both of the other Ruby files as a mixin.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring questionnaire_spec'''&lt;br /&gt;
&lt;br /&gt;
The questionnaire_spec covers testing scenarios relating to creating questionnaires and filling them out.  What is unique about the questionnaire_spec is that it had many instances of similar, but not identical, code, because there are many different types of questions and every variation has to be tested.  Aside from the specific question name, the process of testing the editing and deleting of each question type was the same.  To make this more generic and repeatable, a new method was created that takes the question type as an input, and an each loop cycles through the types to test editing and deletion.  Below is the definition created to test each question type for the ability to edit and delete.&lt;br /&gt;
&lt;br /&gt;
  question_type = %w(Criterion Scale Dropdown Checkbox TextArea TextField UploadFile SectionHeader TableHeader ColumnHeader)&lt;br /&gt;
&lt;br /&gt;
  def load_question question_type, verify_button&lt;br /&gt;
    load_questionnaire&lt;br /&gt;
    fill_in('question_total_num', with: '1')&lt;br /&gt;
    select(question_type, from: 'question_type')&lt;br /&gt;
    click_button &amp;quot;Add&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('Remove') if verify_button&lt;br /&gt;
  &lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!') if verify_button&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def edit_created_question&lt;br /&gt;
    first(&amp;quot;textarea[placeholder='Edit question content here']&amp;quot;).set &amp;quot;Question edit&amp;quot;&lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!')&lt;br /&gt;
    expect(page).to have_content('Question edit')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def check_deleted_question&lt;br /&gt;
    click_on('Remove')&lt;br /&gt;
    expect(page).to have_content('You have successfully deleted the question!')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def choose_check_type command_type&lt;br /&gt;
    if command_type == 'edit'&lt;br /&gt;
      edit_created_question&lt;br /&gt;
    else&lt;br /&gt;
      check_deleted_question&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe &amp;quot;Edit and delete a question&amp;quot; do&lt;br /&gt;
    question_type.each do |q_type|&lt;br /&gt;
      %w(edit delete).each do |q_command|&lt;br /&gt;
        it &amp;quot;is able to &amp;quot; + q_command + &amp;quot; &amp;quot; + q_type + &amp;quot; question&amp;quot; do&lt;br /&gt;
          load_question q_type, false&lt;br /&gt;
          choose_check_type q_command&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring quiz_spec'''&lt;br /&gt;
The quiz_spec covers the testing of the creation and use of quizzes by instructors and students.  Similar to the other refactored files, this spec had several areas of duplicated code, which were extracted into individual method definitions.  The interesting thing about updating this module was its high ABC (Assignment, Branch, Condition) score.  To reduce this metric, several definitions needed to be split into smaller logical methods called by the refactored methods.  Below is an example where new definitions were made in order to reduce the ABC score.&lt;br /&gt;
&lt;br /&gt;
  def fill_in_choices&lt;br /&gt;
    # Fill in for all 4 choices&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_1_txt', with: 'Test Quiz 1'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_2_txt', with: 'Test Quiz 2'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_3_txt', with: 'Test Quiz 3'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_4_txt', with: 'Test Quiz 4'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_quiz&lt;br /&gt;
    # Fill in the form for Name&lt;br /&gt;
    fill_in 'questionnaire_name', with: 'Quiz for test'&lt;br /&gt;
 &lt;br /&gt;
    # Fill in the form for Question 1&lt;br /&gt;
    fill_in 'text_area', with: 'Test Question 1'&lt;br /&gt;
 &lt;br /&gt;
    # Choose the quiz to be a single choice question&lt;br /&gt;
    page.choose('question_type_1_type_multiplechoiceradio')&lt;br /&gt;
 &lt;br /&gt;
    fill_in_choices&lt;br /&gt;
 &lt;br /&gt;
    # Choose the first one to be the correct answer&lt;br /&gt;
    page.choose('new_choices_1_MultipleChoiceRadio_1_iscorrect_1')&lt;br /&gt;
 &lt;br /&gt;
    # Save quiz&lt;br /&gt;
    click_on 'Create Quiz'&lt;br /&gt;
  end&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107490</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107490"/>
		<updated>2017-03-26T23:43:01Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza'''&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source web application based on the Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate score. Code Climate aids in determining the DRYness and style of code; more information can be found at https://codeclimate.com/dashboard. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Running Tests'''&lt;br /&gt;
&lt;br /&gt;
After building the Expertiza environment, run 'rspec spec/features/*_spec.rb' to execute all of the feature spec files.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer_spec''' &lt;br /&gt;
&lt;br /&gt;
Delayed_mailer_spec covers testing scenarios for the email reminder feature, targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which creates issues when users change their time zones. Calling time_zone_parse instead of time_parse solves the issue. The spec also contained heavily duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), now encapsulates the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in delayed_mailer_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_delayed_job(stage)&lt;br /&gt;
       #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
       it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
         enqueue_delayed_job(stage)&lt;br /&gt;
         expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
         expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
       end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Scheduled_task_spec''' &lt;br /&gt;
&lt;br /&gt;
Scheduled_task_spec covers testing scenarios for scheduling the deadline reminder feature, targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which creates issues when users change their time zones. Calling time_zone_parse instead of time_parse solves the issue. The spec also contained heavily duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), now encapsulates the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in scheduled_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_scheduled_tasks(stage)&lt;br /&gt;
      #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
      it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
        enqueue_scheduled_tasks(stage)&lt;br /&gt;
        expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
        expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
Assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options available during assignment creation.&lt;br /&gt;
Using CodeClimate, it was identified that a large portion of the code was duplicated across multiple test cases, which violates the DRY principle. The redundant code was generalized into methods instead of being written in each of the test cases, and methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactored spec: instead of repeating the same steps in every test case, handle_questionaire is called with a few parameters, replacing the redundant code.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
Instructor_interface_spec covers testing scenarios like creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest DRY violation (as determined by CodeClimate) was functionality duplicated verbatim in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and then included in both of the other Ruby files as a mixin.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring questionnaire_spec'''&lt;br /&gt;
&lt;br /&gt;
The questionnaire_spec covers testing scenarios relating to creating questionnaires and filling them out.  What is unique about the questionnaire_spec is that it had many instances of similar, but not identical, code, because there are many different types of questions and every variation has to be tested.  Aside from the specific question name, the process of testing the editing and deleting of each question type was the same.  To make this more generic and repeatable, a new method was created that takes the question type as an input, and an each loop cycles through the types to test editing and deletion.  Below is the definition created to test each question type for the ability to edit and delete.&lt;br /&gt;
&lt;br /&gt;
  question_type = %w(Criterion Scale Dropdown Checkbox TextArea TextField UploadFile SectionHeader TableHeader ColumnHeader)&lt;br /&gt;
&lt;br /&gt;
  def load_question question_type, verify_button&lt;br /&gt;
    load_questionnaire&lt;br /&gt;
    fill_in('question_total_num', with: '1')&lt;br /&gt;
    select(question_type, from: 'question_type')&lt;br /&gt;
    click_button &amp;quot;Add&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('Remove') if verify_button&lt;br /&gt;
  &lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!') if verify_button&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def edit_created_question&lt;br /&gt;
    first(&amp;quot;textarea[placeholder='Edit question content here']&amp;quot;).set &amp;quot;Question edit&amp;quot;&lt;br /&gt;
    click_button &amp;quot;Save review questionnaire&amp;quot;&lt;br /&gt;
    expect(page).to have_content('All questions has been successfully saved!')&lt;br /&gt;
    expect(page).to have_content('Question edit')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def check_deleted_question&lt;br /&gt;
    click_on('Remove')&lt;br /&gt;
    expect(page).to have_content('You have successfully deleted the question!')&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def choose_check_type command_type&lt;br /&gt;
    if command_type == 'edit'&lt;br /&gt;
      edit_created_question&lt;br /&gt;
    else&lt;br /&gt;
      check_deleted_question&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe &amp;quot;Edit and delete a question&amp;quot; do&lt;br /&gt;
    question_type.each do |q_type|&lt;br /&gt;
      %w(edit delete).each do |q_command|&lt;br /&gt;
        it &amp;quot;is able to &amp;quot; + q_command + &amp;quot; &amp;quot; + q_type + &amp;quot; question&amp;quot; do&lt;br /&gt;
          load_question q_type, false&lt;br /&gt;
          choose_check_type q_command&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring quiz_spec'''&lt;br /&gt;
The quiz_spec covers the testing of the creation and use of quizzes by instructors and students.  Similar to the other refactored files, this spec had several areas of duplicated code, which were extracted into individual method definitions.  The interesting thing about updating this module was its high ABC (Assignment, Branch, Condition) score.  To reduce this metric, several definitions needed to be split into smaller logical methods called by the refactored methods.  Below is an example where new definitions were made in order to reduce the ABC score.&lt;br /&gt;
&lt;br /&gt;
  def fill_in_choices&lt;br /&gt;
    # Fill in for all 4 choices&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_1_txt', with: 'Test Quiz 1'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_2_txt', with: 'Test Quiz 2'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_3_txt', with: 'Test Quiz 3'&lt;br /&gt;
    fill_in 'new_choices_1_MultipleChoiceRadio_4_txt', with: 'Test Quiz 4'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_quiz&lt;br /&gt;
    # Fill in the form for Name&lt;br /&gt;
    fill_in 'questionnaire_name', with: 'Quiz for test'&lt;br /&gt;
 &lt;br /&gt;
    # Fill in the form for Question 1&lt;br /&gt;
    fill_in 'text_area', with: 'Test Question 1'&lt;br /&gt;
 &lt;br /&gt;
    # Choose the quiz to be a single choice question&lt;br /&gt;
    page.choose('question_type_1_type_multiplechoiceradio')&lt;br /&gt;
 &lt;br /&gt;
    fill_in_choices&lt;br /&gt;
 &lt;br /&gt;
    # Choose the first one to be the correct answer&lt;br /&gt;
    page.choose('new_choices_1_MultipleChoiceRadio_1_iscorrect_1')&lt;br /&gt;
 &lt;br /&gt;
    # Save quiz&lt;br /&gt;
    click_on 'Create Quiz'&lt;br /&gt;
  end&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107319</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107319"/>
		<updated>2017-03-23T23:34:21Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;br /&gt;
&lt;br /&gt;
'''About Expertiza'''&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source web application based on the Ruby on Rails framework. It provides an online interactive platform for instructors to post and grade assignments, and for students to contribute to team-based projects as well as individual assignments.&lt;br /&gt;
&lt;br /&gt;
'''Problem Statement'''&lt;br /&gt;
&lt;br /&gt;
Remove duplicated code in feature tests and improve the overall Code Climate.&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Delayed_mailer_spec''' &lt;br /&gt;
&lt;br /&gt;
Delayed_mailer_spec covers testing scenarios for the email reminder feature, targeting various users and tasks.&lt;br /&gt;
This spec took time stamps using the time_parse function, which creates issues when users change their time zones. Calling time_zone_parse instead of time_parse solves the issue. The spec also contained heavily duplicated code across its test scenarios, so a helper method, enqueue_delayed_job(stage), now encapsulates the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in delayed_mailer_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_delayed_job(stage)&lt;br /&gt;
       #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
       it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
         enqueue_delayed_job(stage)&lt;br /&gt;
         expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
         expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
       end&lt;br /&gt;
    end&lt;br /&gt;
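&lt;br /&gt;
The refactoring pattern above can be sketched outside Rails as a minimal, self-contained example: one parameterized helper plus a loop over stages replaces a hand-written copy of the assertion block per stage. Everything below (the stage names, the QUEUE array, and the helper body) is an illustrative stand-in, not Expertiza's actual mailer or Delayed::Job code.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Stand-in for the job queue; the real spec inspects Delayed::Job.
QUEUE = []

# Hypothetical helper: enqueue one reminder job for the given stage.
def enqueue_delayed_job(stage)
  QUEUE.push('deadline_type: ' + stage.to_s)
end

# Instead of duplicating the expectation block per stage, iterate.
[:submission, :review, :metareview].each do |stage|
  QUEUE.clear
  enqueue_delayed_job(stage)
  raise 'expected exactly one job' unless QUEUE.size == 1
  raise 'wrong handler' unless QUEUE.last.include?('deadline_type: ' + stage.to_s)
end
puts 'all stages passed'
```
&lt;br /&gt;
In the real spec the loop body becomes the shared `it` block, with the stage name interpolated into the example description.&lt;br /&gt;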
&lt;br /&gt;
'''Refactoring Scheduled_task_spec Method''' &lt;br /&gt;
&lt;br /&gt;
Scheduled_task_spec covers testing scenarios for scheduling the deadline reminders, targeting various users and tasks.&lt;br /&gt;
Like delayed_mailer_spec, this spec took timestamps using the time_parse function, which caused failures when users changed their time zones; calling time_zone_parse instead of time_parse solves the issue. The spec also contained heavily duplicated code across its test scenarios, so a helper method, enqueue_scheduled_tasks(stage), now encapsulates the task being performed.&lt;br /&gt;
&lt;br /&gt;
After refactoring in scheduled_spec.rb:&lt;br /&gt;
&lt;br /&gt;
    def enqueue_scheduled_tasks(stage)&lt;br /&gt;
      #enqueue a delayed job using current stage’s timestamp&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    describe '&amp;lt;stage&amp;gt; deadline reminder email' do&lt;br /&gt;
      it 'is able to send reminder email for &amp;lt;stage&amp;gt; deadline to &amp;lt;stage_users&amp;gt; ' do&lt;br /&gt;
        enqueue_scheduled_tasks(stage)&lt;br /&gt;
        expect(Delayed::Job.count).to eq(1)&lt;br /&gt;
        expect(Delayed::Job.last.handler).to include(&amp;quot;deadline_type: &amp;lt;stage&amp;gt;&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Assignment_creation_spec''' &lt;br /&gt;
&lt;br /&gt;
Assignment_creation_spec covers testing scenarios that create public and private assignments, as well as the various options available during assignment creation.&lt;br /&gt;
Using CodeClimate, it was identified that a large portion of the code was duplicated across multiple test cases, violating the DRY principle. The redundant code was generalized into methods instead of being repeated in each test case, and dead methods that were never called were removed from the class.&lt;br /&gt;
&lt;br /&gt;
The code below is a sample of the refactored code: instead of repeating the same steps in every test case, each case now calls handle_questionaire with a few parameters, replacing the redundant code.&lt;br /&gt;
&lt;br /&gt;
  def validate_attributes(questionaire_name)&lt;br /&gt;
    questionnaire = get_questionnaire(questionaire_name).first&lt;br /&gt;
    expect(questionnaire).to have_attributes(&lt;br /&gt;
      questionnaire_weight: 50,&lt;br /&gt;
      notification_limit: 50&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def validate_dropdown&lt;br /&gt;
    questionnaire = Questionnaire.where(name: &amp;quot;ReviewQuestionnaire2&amp;quot;).first&lt;br /&gt;
    assignment_questionnaire = AssignmentQuestionnaire.where(assignment_id: @assignment.id, questionnaire_id: questionnaire.id).first&lt;br /&gt;
    expect(assignment_questionnaire.dropdown).to eq(false)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    within(:css, questionaire_css) do&lt;br /&gt;
      select questionaire_name, from: 'assignment_form[assignment_questionnaire][][questionnaire_id]'&lt;br /&gt;
      uncheck('dropdown')&lt;br /&gt;
      select &amp;quot;Scale&amp;quot;, from: 'assignment_form[assignment_questionnaire][][dropdown]'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][questionnaire_weight]', with: '50'&lt;br /&gt;
      fill_in 'assignment_form[assignment_questionnaire][][notification_limit]', with: '50'&lt;br /&gt;
    end&lt;br /&gt;
    click_button 'Save'&lt;br /&gt;
    sleep 1&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def handle_questionaire(questionaire_css, questionaire_name, test_attributes)&lt;br /&gt;
    fill_in_questionaire(questionaire_css, questionaire_name)&lt;br /&gt;
    if test_attributes&lt;br /&gt;
      validate_attributes(questionaire_name)&lt;br /&gt;
    else&lt;br /&gt;
      validate_dropdown&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
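&lt;br /&gt;
As a usage sketch, the boolean third argument of handle_questionaire selects which validation runs after the form is filled in. The stand-alone illustration below mirrors only that control flow; the method bodies are stubs that record what would happen, and the CSS selector string is hypothetical, not taken from the actual spec.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Records which steps ran; the real spec drives Capybara instead.
LOG = []

def fill_in_questionaire(css, name)
  LOG.push('filled ' + name + ' via ' + css)
end

def validate_attributes(name)
  LOG.push('checked attributes of ' + name)
end

def validate_dropdown
  LOG.push('checked dropdown flag')
end

# Same dispatch as the spec helper: fill the form, then pick a validation.
def handle_questionaire(css, name, test_attributes)
  fill_in_questionaire(css, name)
  if test_attributes
    validate_attributes(name)
  else
    validate_dropdown
  end
end

# Each former multi-line test case collapses to a single call:
handle_questionaire('#review_table', 'ReviewQuestionnaire2', true)
handle_questionaire('#review_table', 'ReviewQuestionnaire2', false)
```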
&lt;br /&gt;
&lt;br /&gt;
'''Refactoring Instructor_interface_spec''' &lt;br /&gt;
&lt;br /&gt;
Instructor_interface_spec covers testing scenarios like creating a course, importing tests, and viewing publishing rights.&lt;br /&gt;
Unlike assignment_creation_spec, the largest DRY violation (as determined by CodeClimate) was functionality that was duplicated verbatim in questionnaire_spec. To fix this, /spec/helpers/instructor_interface_helper_spec was created as a module and included in both of the other spec files as a mixin.&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107182</id>
		<title>CSC/ECE 517 Spring 2017/E1724</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/E1724&amp;diff=107182"/>
		<updated>2017-03-23T01:44:45Z</updated>

		<summary type="html">&lt;p&gt;Mdunlap: Created page with &amp;quot;'''E1724 - Refactoring Feature Tests'''&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''E1724 - Refactoring Feature Tests'''&lt;/div&gt;</summary>
		<author><name>Mdunlap</name></author>
	</entry>
</feed>