<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pdscott2</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pdscott2"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Pdscott2"/>
	<updated>2026-05-17T04:39:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108691</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108691"/>
		<updated>2017-04-29T01:04:47Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Testing Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known-intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo and this project's code follow the [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern, which breaks functionality into smaller &amp;quot;services&amp;quot; and applies each service at the topmost &amp;quot;layer&amp;quot; of the project that needs it.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for one application to provide another with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
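The flow above can be sketched in plain Python. This is only an illustration of the webhook-to-handler-to-datastore chain; names such as JsonDb, handle_failure, and webhook are assumptions, not the project's actual identifiers:

```python
import datetime

class JsonDb:
    """Toy stand-in for the JSON-file datastore at the end of the flow."""
    def __init__(self):
        self.records = []

    def add(self, record):
        self.records.append(record)

def handle_failure(db, payload):
    """Handler: validate the webhook payload, stamp it, and persist it."""
    required = ('test_file', 'platform', 'builder', 'number')
    if any(not payload.get(k) for k in required):
        return None  # reject incomplete records
    now = datetime.datetime.utcnow()
    record = dict(payload,
                  fail_date=now.strftime('%Y%m%d'),
                  fail_time=now.strftime('%H:%M:%S.%f'))
    db.add(record)
    return record

def webhook(db, payload):
    """Webhook entry point; in the real service this is a Flask route."""
    return handle_failure(db, payload)
```

The handler, not the webhook, owns validation and timestamping, so the same logic could be reused by any other entry point.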
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already being populated in the database as an ISO date string when records are added, it should be possible to build a function to query this date using standard date functions and a range given from the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
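Because YYYYMMDD strings compare in the same order as the dates they encode, the new query function could be as simple as the following sketch (query_by_date_range is a hypothetical name, not an existing function in the code base):

```python
def query_by_date_range(records, start, end):
    """Return the records whose fail_date lies in [start, end], inclusive.

    fail_date values are 'YYYYMMDD' strings, which sort lexicographically
    in chronological order, so plain string comparison suffices.
    """
    return [r for r in records
            if r['fail_date'] >= start and end >= r['fail_date']]
```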
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). A sample HTML page demonstrating the ability to query by filename was written as part of phase I.  For phase II, a more complex HTML page will be required which queries by filename and date range. Records will also be populated into an HTML table instead of being returned as raw JSON.  JavaScript functions will also be added to sort columns using the [https://datatables.net DataTables jQuery plug-in]. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve fully integrating this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in the existing testing_commands.py code as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for the saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team - they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository], where the complete list of files created for this project can be examined.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO time (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
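The array-of-objects layout can be demonstrated with nothing but the standard library; the file name and record values below are purely illustrative:

```python
import json
import os
import tempfile

# Persist two illustrative failure records as an array of JSON objects.
records = [
    {'test_file': 'a.html', 'platform': 'linux', 'builder': 'linux-dev', 'number': 1},
    {'test_file': 'b.html', 'platform': 'mac', 'builder': 'mac-rel', 'number': 2},
]
path = os.path.join(tempfile.mkdtemp(), 'failures.json')
with open(path, 'w') as f:
    json.dump(records, f, indent=2)  # indent keeps the file human-readable

# Reload the file and run an equality query with a list comprehension,
# the same kind of lookup TinyDB wraps in its query helpers.
with open(path) as f:
    loaded = json.load(f)
linux_failures = [r for r in loaded if r['platform'] == 'linux']
```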
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  from flask import Flask&lt;br /&gt;
  app = Flask(__name__)&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second route accepts a URL parameter after /user/ and returns that user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records,&lt;br /&gt;
all of which are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date, :fail_time]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (4 tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
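The validation tests in the table above run along the following lines. This is a sketch only: the record helper below is a toy stand-in, not the project's actual handlers.record, and the test names are assumptions:

```python
import unittest

def record(db, test_file, platform, builder, number):
    """Toy handler: append a record only when every parameter is present."""
    if not all([test_file, platform, builder, number]):
        return False
    db.append({'test_file': test_file, 'platform': platform,
               'builder': builder, 'number': number})
    return True

class RecordTests(unittest.TestCase):
    def test_record_is_added(self):
        db = []
        self.assertTrue(record(db, 'a.html', 'linux', 'linux-dev', 1))
        self.assertEqual(len(db), 1)

    def test_blank_parameter_is_rejected(self):
        # One case per blanked-out input item, as in the table above.
        db = []
        for bad in (('', 'linux', 'b', 1), ('a.html', '', 'b', 1),
                    ('a.html', 'linux', '', 1), ('a.html', 'linux', 'b', None)):
            self.assertFalse(record(db, *bad))
        self.assertEqual(db, [])
```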
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python tests.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* Set the Flask environment variable by typing &amp;lt;code&amp;gt;export FLASK_APP=flask_server&amp;lt;/code&amp;gt; (use &amp;lt;code&amp;gt;set&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;export&amp;lt;/code&amp;gt; on Windows)&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt; or alternatively type &amp;lt;code&amp;gt;flask run&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108690</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108690"/>
		<updated>2017-04-28T23:56:54Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* To Run The App Locally */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known-intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo and this project's code follow the [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern, which breaks functionality into smaller &amp;quot;services&amp;quot; and applies each service at the topmost &amp;quot;layer&amp;quot; of the project that needs it.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for one application to provide another with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already being populated in the database as an ISO date string when records are added, it should be possible to build a function to query this date using standard date functions and a range given from the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). A sample HTML page demonstrating the ability to query by filename was written as part of phase I.  For phase II, a more complex HTML page will be required which queries by filename and date range. Records will also be populated into an HTML table instead of being returned as raw JSON.  JavaScript functions will also be added to sort columns using the [https://datatables.net DataTables jQuery plug-in]. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve fully integrating this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in the existing testing_commands.py code as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for the saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team - they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository], where the complete list of files created for this project can be examined.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO time (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  from flask import Flask&lt;br /&gt;
  app = Flask(__name__)&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second function accepts a URL parameter after /user/ and returns that user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which is usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date, :fail_time]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, test invalid values - 4 tests for blanks for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python tests.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* Set the Flask environment variable by typing &amp;lt;code&amp;gt;export FLASK_APP=flask_server&amp;lt;/code&amp;gt; (use set instead of export for Windows)&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt; or alternatively type &amp;lt;code&amp;gt;flask run&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108689</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108689"/>
		<updated>2017-04-28T23:56:01Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* To Run The App Locally */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the store results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies each service at the topmost &amp;quot;layer&amp;quot; of the project that needs it.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
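The webhook-to-handler-to-db flow above can be sketched as follows. All names here (record_failure, ListDB, insert) are illustrative assumptions rather than the project's actual code; the validation mirrors the required parameters from the data model.

```python
from datetime import datetime

# Required parameters, per the data model (names from the wiki tables).
REQUIRED = ("test_file", "platform", "builder", "number")

def record_failure(db, **params):
    # Reject the report if any required parameter is missing or blank.
    if any(not params.get(key) for key in REQUIRED):
        return {"status": 400, "error": "missing required parameter"}
    # Stamp the failure with date/time fields before persisting it.
    now = datetime.utcnow()
    entry = dict(params,
                 fail_date=now.strftime("%Y%m%d"),
                 fail_time=now.strftime("%H:%M:%S.%f"))
    db.insert(entry)
    return {"status": 200}

class ListDB(list):
    """Stand-in for the JSON-backed store; a real TinyDB table also
    exposes an insert() method."""
    def insert(self, item):
        self.append(item)

db = ListDB()
ok = record_failure(db, test_file="a.html", platform="linux",
                    builder="b1", number=7)
bad = record_failure(db, test_file="", platform="linux",
                     builder="b1", number=7)
print(ok["status"], bad["status"], len(db))  # 200 400 1
```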
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already being populated in the database as an ISO date string when records are added, it should be possible to build a function to query this date using standard date functions and a range given by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
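Because fail_date is stored as a fixed-width YYYYMMDD string, string comparison orders records the same way as date comparison, so the query described above can be sketched like this. The function and record values are illustrative; the real implementation would go through TinyDB.

```python
from collections import Counter

def query_by_date_range(records, start, end):
    # start and end are YYYYMMDD strings; fixed-width date strings
    # compare in the same order as the dates they encode.
    return [r for r in records if start <= r["fail_date"] <= end]

# Made-up records for illustration.
records = [
    {"test_file": "a.html", "fail_date": "20170301"},
    {"test_file": "b.html", "fail_date": "20170415"},
    {"test_file": "a.html", "fail_date": "20170420"},
]

hits = query_by_date_range(records, "20170401", "20170430")
# Tallying hits by test file answers "which failures occurred most often".
frequency = Counter(r["test_file"] for r in hits)
print(len(hits), frequency["a.html"], frequency["b.html"])  # 2 1 1
```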
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). A sample HTML page demonstrating the ability to query by filename was written as part of phase I.  For phase II, a more complex HTML page will be required which queries by filename and date range. Records will also be populated into an HTML table instead of being returned as raw JSON.  JavaScript functions will also be added to sort columns using the [https://datatables.net DataTables jQuery plug-in]. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub library and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing Servo to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database before being filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for the saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team: they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository].  The complete list of files created for this project can be examined in this repository.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
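As an illustration, one failure record matching this model can be constructed in Python. The file, builder, and pull-request values below are made up; only the field names and timestamp formats come from the table above.

```python
from datetime import datetime

# Hypothetical example values; only the field names and formats come
# from the data model table above.
failed_at = datetime(2017, 4, 28, 23, 56, 1, 123456)
record = {
    "test_file": "tests/example_intermittent_test.html",  # made-up name
    "platform": "linux",
    "builder": "linux-dev",                               # made-up builder
    "number": 12345,                                      # made-up PR number
    "fail_date": failed_at.strftime("%Y%m%d"),            # YYYYMMDD
    "fail_time": failed_at.strftime("%H:%M:%S.%f"),       # HH:MM:SS.MMMMMM
}
print(record["fail_date"], record["fail_time"])  # 20170428 23:56:01.123456
```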
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
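The storage idea can be sketched with nothing but the standard json module; TinyDB conceptually layers its query helpers over this kind of pattern. The record values below are illustrative, not real data.

```python
import json

# The datastore is described above as an array of JSON objects,
# one object per recorded failure (values here are made up).
records = [
    {"test_file": "a.html", "platform": "linux", "builder": "b1", "number": 1},
    {"test_file": "b.html", "platform": "mac", "builder": "b2", "number": 2},
]

serialized = json.dumps(records, indent=2)  # what the human-readable file holds
loaded = json.loads(serialized)             # reading the "database" back

# A lookup by test file name is just a filter over the array; TinyDB
# wraps this kind of search in an SQL-like query API.
matches = [r for r in loaded if r["test_file"] == "a.html"]
print(len(matches), matches[0]["number"])  # 1 1
```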
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  from flask import Flask&lt;br /&gt;
  app = Flask(__name__)&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second function accepts a URL parameter after /user/ and returns that user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which is usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date, :fail_time]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, test invalid values - 4 tests for blanks for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python tests.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* Set the Flask environment variable by typing &amp;lt;code&amp;gt;export FLASK_APP=flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt; or alternatively type &amp;lt;code&amp;gt;flask run&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108688</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108688"/>
		<updated>2017-04-28T23:53:36Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* To Run Unit Tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the store results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies each service at the topmost &amp;quot;layer&amp;quot; of the project that needs it.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
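The webhook-to-handler-to-db flow above can be sketched as follows. All names here (record_failure, ListDB, insert) are illustrative assumptions rather than the project's actual code; the validation mirrors the required parameters from the data model.

```python
from datetime import datetime

# Required parameters, per the data model (names from the wiki tables).
REQUIRED = ("test_file", "platform", "builder", "number")

def record_failure(db, **params):
    # Reject the report if any required parameter is missing or blank.
    if any(not params.get(key) for key in REQUIRED):
        return {"status": 400, "error": "missing required parameter"}
    # Stamp the failure with date/time fields before persisting it.
    now = datetime.utcnow()
    entry = dict(params,
                 fail_date=now.strftime("%Y%m%d"),
                 fail_time=now.strftime("%H:%M:%S.%f"))
    db.insert(entry)
    return {"status": 200}

class ListDB(list):
    """Stand-in for the JSON-backed store; a real TinyDB table also
    exposes an insert() method."""
    def insert(self, item):
        self.append(item)

db = ListDB()
ok = record_failure(db, test_file="a.html", platform="linux",
                    builder="b1", number=7)
bad = record_failure(db, test_file="", platform="linux",
                     builder="b1", number=7)
print(ok["status"], bad["status"], len(db))  # 200 400 1
```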
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already being populated in the database as an ISO date string when records are added, it should be possible to build a function to query this date using standard date functions and a range given by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
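Because fail_date is stored as a fixed-width YYYYMMDD string, string comparison orders records the same way as date comparison, so the query described above can be sketched like this. The function and record values are illustrative; the real implementation would go through TinyDB.

```python
from collections import Counter

def query_by_date_range(records, start, end):
    # start and end are YYYYMMDD strings; fixed-width date strings
    # compare in the same order as the dates they encode.
    return [r for r in records if start <= r["fail_date"] <= end]

# Made-up records for illustration.
records = [
    {"test_file": "a.html", "fail_date": "20170301"},
    {"test_file": "b.html", "fail_date": "20170415"},
    {"test_file": "a.html", "fail_date": "20170420"},
]

hits = query_by_date_range(records, "20170401", "20170430")
# Tallying hits by test file answers "which failures occurred most often".
frequency = Counter(r["test_file"] for r in hits)
print(len(hits), frequency["a.html"], frequency["b.html"])  # 2 1 1
```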
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). A sample HTML page demonstrating the ability to query by filename was written as part of phase I.  For phase II, a more complex HTML page will be required which queries by filename and date range. Records will also be populated into an HTML table instead of being returned as raw JSON.  JavaScript functions will also be added to sort columns using the [https://datatables.net DataTables jQuery plug-in]. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub library and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing Servo to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of each intermittent failure are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in the [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team: they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository].  The complete list of files created for this project can be examined in this repository.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the matching user from the database.&lt;br /&gt;
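Applied to this project, a query endpoint could look like the following sketch. The route name and in-memory records are illustrative stand-ins; the actual service looks records up through TinyDB rather than a Python list:&lt;br /&gt;

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the TinyDB datastore (illustrative records).
RECORDS = [
    {'test_file': 'a-test.html', 'platform': 'linux',
     'builder': 'linux-dev', 'number': 1},
    {'test_file': 'b-test.html', 'platform': 'mac',
     'builder': 'mac-rel-css', 'number': 2},
]

@app.route('/query/<test_file>')
def query(test_file):
    # Return every stored failure record for the requested test file.
    hits = [r for r in RECORDS if r['test_file'] == test_file]
    return jsonify(hits)
```

The endpoint can then be exercised with any HTTP client, e.g. &amp;lt;code&amp;gt;curl http://localhost:5000/query/a-test.html&amp;lt;/code&amp;gt;.&lt;br /&gt;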
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records; all of these are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date, :fail_time]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, testing invalid values (4 tests, one for a blank in each input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python tests.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled in to the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108687</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108687"/>
		<updated>2017-04-28T23:52:23Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo's code and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that fail_date is already populated in the database as an ISO date string when records are added, it should be possible to build a function that queries this date using standard date functions and a range given by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
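Because fail_date is stored as a YYYYMMDD string, chronological order matches lexicographic order, so the range check can be a plain string comparison. A minimal sketch of such a query function (the function name and sample records are illustrative, not the project's actual code):&lt;br /&gt;

```python
def query_by_date_range(records, start, end):
    """Return the records whose fail_date falls within [start, end].

    Dates are YYYYMMDD strings, so string comparison matches
    chronological comparison and no date parsing is needed.
    """
    return [r for r in records if start <= r.get('fail_date', '') <= end]

# Illustrative records in the data-model format.
records = [
    {'test_file': 'a-test.html', 'fail_date': '20170401'},
    {'test_file': 'b-test.html', 'fail_date': '20170415'},
    {'test_file': 'c-test.html', 'fail_date': '20170430'},
]

hits = query_by_date_range(records, '20170410', '20170420')
```

Counting the returned records per test_file then gives the frequency ranking the front-end needs.&lt;br /&gt;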
&lt;br /&gt;
The second request is to build an HTML front-end to the service that queries it using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). A sample HTML page demonstrating the ability to query by filename was written as part of phase I.  For phase II, a more complex HTML page will be required that queries by filename and date range. Records will also be populated into an HTML table instead of being returned as raw JSON.  JavaScript functions will also be added to sort columns using the [https://datatables.net DataTables jQuery plug-in]. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto-ordered list of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of each intermittent failure are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in the [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team: they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository].  The complete list of files created for this project can be examined in this repository.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the matching user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records; all of these are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date, :fail_time]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, testing invalid values (4 tests, one for a blank in each input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python tests.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled in to the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108686</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108686"/>
		<updated>2017-04-28T23:46:25Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Unit Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo's code and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
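The webhook, handler, and db layers in the diagram above can be sketched in plain Python. This is a minimal illustration of the layering only; the function names and the platform-normalization step are hypothetical, not the project's actual code.

```python
# Sketch of the three service layers in the diagram: a webhook entry
# point, a handler holding business logic, and a db layer that stands
# in for the JSON-file persistence. All names here are hypothetical.
def db_save(store, record):
    store.append(record)          # stands in for the JSON file write
    return record

def handler(store, payload):
    # Example business logic: normalize the platform name.
    record = dict(payload, platform=payload["platform"].lower())
    return db_save(store, record)

def webhook(store, request_json):
    # The build agent's request enters here and flows down the layers.
    return handler(store, request_json)

store = []
webhook(store, {"test_file": "a.html", "platform": "Linux"})
```

Each layer only talks to the one directly beneath it, which is what makes the Service Layer pattern easy to test in isolation.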
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already populated in the database as an ISO date string when records are added, it should be possible to query this field using standard date functions and a range given by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
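Because fail_date is stored as a fixed-width ISO-style string (YYYYMMDD, per the data model), lexicographic comparison matches chronological order, so the planned range query can be a simple filter. A minimal sketch, assuming the hypothetical function name query_by_date_range:

```python
# Sketch of the planned date-range query. fail_date is a YYYYMMDD
# string, so plain string comparison orders dates correctly.
# The function name and record shape are illustrative assumptions.
def query_by_date_range(records, start_date, end_date):
    """Return records whose fail_date falls within [start_date, end_date]."""
    return [r for r in records
            if start_date <= r["fail_date"] <= end_date]

records = [
    {"test_file": "a.html", "fail_date": "20170401"},
    {"test_file": "b.html", "fail_date": "20170415"},
    {"test_file": "c.html", "fail_date": "20170502"},
]
hits = query_by_date_range(records, "20170401", "20170430")
```

The same filter translates directly into a TinyDB query using its `Query` test functions.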
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent Github location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require the forking of the Servo project on Github.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification will provide the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being filtered immediately, the details of the intermittent failures are first recorded in the database.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
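The modified command would need to build one tracker record per intermittent failure before the failures are filtered out. A hedged sketch of that payload construction, using the data model fields from this wiki; the function name and its integration point are assumptions, not the actual Servo code:

```python
import json

# Hypothetical sketch: build one JSON payload per intermittent
# failure encountered, ready to POST to the tracker's webhook.
# Field names follow this wiki's data model; this is not the
# real filter-intermittents implementation.
def build_failure_payloads(failing_tests, platform, builder, pr_number):
    payloads = []
    for test_file in failing_tests:
        payloads.append(json.dumps({
            "test_file": test_file,
            "platform": platform,
            "builder": builder,
            "number": pr_number,
        }))
    return payloads

payloads = build_failure_payloads(
    ["wpt/foo.html", "wpt/bar.html"], "linux", "linux-rel", 16123)
```

Each payload would then be sent to the Flask service's record endpoint, one request per failure, before the normal filtering proceeds.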
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team: they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository].  The complete list of files created for this project can be examined in that repository.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
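The two timestamp fields can be produced with the standard library in exactly the forms shown in the table. A sketch, assuming the tracker stamps each record at save time (the function name is hypothetical):

```python
from datetime import datetime

# Sketch: stamp a record with fail_date (YYYYMMDD) and
# fail_time (HH:MM:SS.MMMMMM) as described in the data model.
# An explicit `now` parameter keeps the function testable.
def stamp_record(record, now=None):
    now = now or datetime.utcnow()
    record["fail_date"] = now.strftime("%Y%m%d")
    record["fail_time"] = now.strftime("%H:%M:%S.%f")
    return record

rec = stamp_record({"test_file": "a.html"},
                   datetime(2017, 4, 28, 23, 44, 10, 123456))
```

Keeping the date and time as strings in these fixed-width forms is what lets the later date-range query compare them directly.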
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. This pure-Python library provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The format of the JSON file is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
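TinyDB handles this persistence for the project; for illustration, here is a dependency-free sketch of the same load-append-rewrite idea, a JSON array used as a table (names hypothetical, not TinyDB's API):

```python
import json
import os
import tempfile

# Dependency-free sketch of the JSON-array datastore idea that
# TinyDB wraps: load the array of records, append, write back.
def add_record(path, record):
    rows = []
    if os.path.exists(path):
        with open(path) as f:
            rows = json.load(f)
    rows.append(record)
    with open(path, "w") as f:
        json.dump(rows, f, indent=2)   # indented, so human-readable
    return rows

path = os.path.join(tempfile.mkdtemp(), "db.json")
add_record(path, {"test_file": "a.html"})
rows = add_record(path, {"test_file": "b.html"})
```

Rewriting the whole file per insert is fine at this scale, and is the trade-off that keeps the store a plain, inspectable JSON file.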
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second route accepts a URL parameter after /user/ and returns the matching user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records.&lt;br /&gt;
All are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
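The missing-parameter assertion can be sketched as follows. The record function here is a hypothetical stand-in mirroring the validation that tests.py asserts, not the project's actual handlers.record:

```python
# Hypothetical stand-in for handlers.record: refuse to add a record
# when any required parameter is blank or missing. This mirrors the
# behavior the wiki's unit tests assert; it is not the project code.
REQUIRED = ("test_file", "platform", "builder", "number")

def record(db, **params):
    # Note: this simple truthiness check also rejects number=0,
    # which is acceptable since PR numbers start at 1.
    if any(not params.get(k) for k in REQUIRED):
        return False          # record rejected
    db.append(dict(params))
    return True

db = []
accepted = record(db, test_file="a.html", platform="linux",
                  builder="linux-rel", number=1)
rejected = record(db, test_file="a.html", platform="linux",
                  builder="", number=1)
```

Each of the four "blank input" unit tests exercises one required field at a time against logic of this shape.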
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date, :fail_time]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values: four tests, one with a blank for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no pull request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108685</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108685"/>
		<updated>2017-04-28T23:44:10Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Data model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
The Servo code and this project's code follow the [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern, which breaks functionality into smaller &amp;quot;services&amp;quot; and applies each service to the topmost &amp;quot;layer&amp;quot; of the project that needs it.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already populated in the database as an ISO date string when records are added, it should be possible to query this field using standard date functions and a range given by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
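Since the goal of the range query is to surface the most frequent failures, the filtered records can be ranked with a simple counter. A sketch, assuming fail_date is the YYYYMMDD string from the data model (names hypothetical):

```python
from collections import Counter

# Sketch: rank intermittent failures by frequency within a date
# range. fail_date is a YYYYMMDD string, so string comparison
# orders dates correctly. Names here are illustrative assumptions.
def most_frequent_failures(records, start, end):
    in_range = [r["test_file"] for r in records
                if start <= r["fail_date"] <= end]
    return Counter(in_range).most_common()

records = [
    {"test_file": "a.html", "fail_date": "20170401"},
    {"test_file": "a.html", "fail_date": "20170410"},
    {"test_file": "b.html", "fail_date": "20170415"},
    {"test_file": "a.html", "fail_date": "20170601"},  # outside range
]
ranking = most_frequent_failures(records, "20170401", "20170430")
```

The resulting (test_file, count) pairs are exactly what a Pareto-style view in the front-end would consume.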
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent Github location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
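The "sortable by various criteria" requirement can be met server-side, with the front-end passing the chosen criterion as a request parameter. A minimal sketch (parameter names are hypothetical, not the service's actual API):

```python
# Sketch: sort failure records by a client-chosen criterion, as the
# front-end's sortable table would request. The key and direction
# parameters are hypothetical illustrations of the service API.
def sort_failures(records, key="fail_date", descending=True):
    return sorted(records, key=lambda r: r[key], reverse=descending)

records = [
    {"test_file": "b.html", "fail_date": "20170415"},
    {"test_file": "a.html", "fail_date": "20170401"},
]
newest_first = sort_failures(records)                 # by date, newest first
by_name = sort_failures(records, key="test_file", descending=False)
```

The JavaScript front-end then only needs to re-request with a different key, rather than re-implementing sorting client-side.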
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require the forking of the Servo project on Github.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification will provide the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being filtered immediately, the details of the intermittent failures are first recorded in the database.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team: they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository].  The complete list of files created for this project can be examined in that repository.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure in the form YYYYMMDD&lt;br /&gt;
|-&lt;br /&gt;
| fail_time&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Time of the failure in the form HH:MM:SS.MMMMMM&lt;br /&gt;
|}&lt;br /&gt;
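The front-end's links back to GitHub can be derived from these same fields: the pull request from number, and the failing test's location from test_file. A sketch; the URL patterns below are illustrative assumptions, not taken from the project code:

```python
# Sketch: derive GitHub links from a failure record's fields.
# The /pull/<number> pattern is GitHub's standard PR URL; the
# tests/ path prefix for test_file is an assumption for illustration.
def github_links(record, repo="https://github.com/servo/servo"):
    return {
        "pull_request": "%s/pull/%d" % (repo, record["number"]),
        "test_file": "%s/blob/master/tests/%s" % (repo, record["test_file"]),
    }

links = github_links({"number": 16123, "test_file": "wpt/foo.html"})
```

Computing the links from stored fields keeps the data model small: no URLs need to be persisted.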
&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. This pure-Python library provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The format of the JSON file is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second route accepts a URL parameter after /user/ and returns the matching user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records.&lt;br /&gt;
All are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values: four tests, one with a blank for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no pull request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108427</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108427"/>
		<updated>2017-04-13T00:34:42Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies those services to the topmost &amp;quot;layer&amp;quot; of the project that needs them.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already being populated in the database as an ISO date string when records are added, it should be possible to build a function to query this date using standard date functions and a range given from the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
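As a sketch of this planned query (the helper names and sample records here are illustrative, not the project's actual code): because ISO 8601 date strings sort lexicographically, plain string comparison is enough to select a range.&lt;br /&gt;

```python
from collections import Counter

def failures_in_range(records, start, end):
    """Return records whose fail_date falls within [start, end], inclusive."""
    # ISO 8601 date strings compare correctly as plain strings,
    # so no date parsing is needed.
    return [r for r in records
            if r['fail_date'] >= start and end >= r['fail_date']]

def most_frequent(records):
    """Rank test files by failure count, most frequent first."""
    return Counter(r['test_file'] for r in records).most_common()

sample = [
    {'test_file': 'a.html', 'fail_date': '2017-04-01'},
    {'test_file': 'a.html', 'fail_date': '2017-04-03'},
    {'test_file': 'b.html', 'fail_date': '2017-05-01'},
]
april = failures_in_range(sample, '2017-04-01', '2017-04-30')
ranking = most_frequent(april)
```
A ranking like this would also back the report of which failures occurred most often.&lt;br /&gt;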
&lt;br /&gt;
The second request is to build an HTML front-end that queries the service using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto-ordered list of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve fully integrating this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing Servo to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of each intermittent failure are first recorded in the database.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
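One hedged sketch of such a per-failure report is given below; the route name, payload keys, and helper function are assumptions for illustration, since the real integration belongs in Servo's testing_commands.py.&lt;br /&gt;

```python
import json

# Hypothetical route; the real endpoint is defined by the tracker service.
TRACKER_URL = 'http://localhost:5000/intermittent'

def build_failure_payload(test_file, platform, builder, number):
    """Serialize one intermittent failure; one payload per occurrence."""
    return json.dumps({
        'test_file': test_file,
        'platform': platform,
        'builder': builder,
        'number': number,
    }, sort_keys=True)

# filter-intermittents would then POST one payload per failure
# encountered, using any HTTP client, with the
# 'Content-Type: application/json' header set.
```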
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in the [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the Servo team: they have defined what the service should do and how it should be designed. A standalone working version of the design has been implemented in a [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 GitHub repository].  The complete list of files created for this project can be examined in this repository.  Eventually, this project will be incorporated into the Servo development repository and interfaced with other Servo development software. The details of the implementation are given below.&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
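The query path can be sketched using only the standard library; in the real project, TinyDB's search() helper replaces the list comprehension, and the record values below are invented for illustration.&lt;br /&gt;

```python
import json

def query_failures(json_text, test_file):
    """Return every stored failure record matching a test file name."""
    return [r for r in json.loads(json_text) if r['test_file'] == test_file]

# Two example records following the data model above (values made up)
store = json.dumps([
    {'test_file': 'a.html', 'platform': 'linux', 'builder': 'linux-dev',
     'number': 101, 'fail_date': '2017-04-01T12:00:00Z'},
    {'test_file': 'b.html', 'platform': 'mac', 'builder': 'mac-dev',
     'number': 102, 'fail_date': '2017-04-02T08:30:00Z'},
])
matches = query_failures(store, 'a.html')
```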
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records.&lt;br /&gt;
All are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
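The blank-input tests in the last row could be driven by validation along these lines (a sketch; the function name and logic are assumptions, not the actual handlers.record implementation).&lt;br /&gt;

```python
REQUIRED = ('test_file', 'platform', 'builder', 'number')

def validate_record(params):
    """Accept a record only when every required parameter is non-blank."""
    return all(str(params.get(key, '')).strip() for key in REQUIRED)

ok = validate_record({'test_file': 'a.html', 'platform': 'linux',
                      'builder': 'linux-dev', 'number': 9})
bad = validate_record({'test_file': '', 'platform': 'linux',
                       'builder': 'linux-dev', 'number': 9})
```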
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108405</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108405"/>
		<updated>2017-04-13T00:05:11Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Flask Service */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies those services to the topmost &amp;quot;layer&amp;quot; of the project that needs them.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is already being populated in the database as an ISO date string when records are added, it should be possible to build a function to query this date using standard date functions and a range given from the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
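As a sketch of this planned query (the helper names and sample records here are illustrative, not the project's actual code): because ISO 8601 date strings sort lexicographically, plain string comparison is enough to select a range.&lt;br /&gt;

```python
from collections import Counter

def failures_in_range(records, start, end):
    """Return records whose fail_date falls within [start, end], inclusive."""
    # ISO 8601 date strings compare correctly as plain strings,
    # so no date parsing is needed.
    return [r for r in records
            if r['fail_date'] >= start and end >= r['fail_date']]

def most_frequent(records):
    """Rank test files by failure count, most frequent first."""
    return Counter(r['test_file'] for r in records).most_common()

sample = [
    {'test_file': 'a.html', 'fail_date': '2017-04-01'},
    {'test_file': 'a.html', 'fail_date': '2017-04-03'},
    {'test_file': 'b.html', 'fail_date': '2017-05-01'},
]
april = failures_in_range(sample, '2017-04-01', '2017-04-30')
ranking = most_frequent(april)
```
A ranking like this would also back the report of which failures occurred most often.&lt;br /&gt;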
&lt;br /&gt;
The second request is to build an HTML front-end that queries the service using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto-ordered list of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve fully integrating this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing Servo to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of each intermittent failure are first recorded in the database.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
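One hedged sketch of such a per-failure report is given below; the route name, payload keys, and helper function are assumptions for illustration, since the real integration belongs in Servo's testing_commands.py.&lt;br /&gt;

```python
import json

# Hypothetical route; the real endpoint is defined by the tracker service.
TRACKER_URL = 'http://localhost:5000/intermittent'

def build_failure_payload(test_file, platform, builder, number):
    """Serialize one intermittent failure; one payload per occurrence."""
    return json.dumps({
        'test_file': test_file,
        'platform': platform,
        'builder': builder,
        'number': number,
    }, sort_keys=True)

# filter-intermittents would then POST one payload per failure
# encountered, using any HTTP client, with the
# 'Content-Type: application/json' header set.
```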
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in the [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely dictated by the request: the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
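The query path can be sketched using only the standard library; in the real project, TinyDB's search() helper replaces the list comprehension, and the record values below are invented for illustration.&lt;br /&gt;

```python
import json

def query_failures(json_text, test_file):
    """Return every stored failure record matching a test file name."""
    return [r for r in json.loads(json_text) if r['test_file'] == test_file]

# Two example records following the data model above (values made up)
store = json.dumps([
    {'test_file': 'a.html', 'platform': 'linux', 'builder': 'linux-dev',
     'number': 101, 'fail_date': '2017-04-01T12:00:00Z'},
    {'test_file': 'b.html', 'platform': 'mac', 'builder': 'mac-dev',
     'number': 102, 'fail_date': '2017-04-02T08:30:00Z'},
])
matches = query_failures(store, 'a.html')
```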
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes are given below:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records.&lt;br /&gt;
All are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
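The blank-input tests in the last row could be driven by validation along these lines (a sketch; the function name and logic are assumptions, not the actual handlers.record implementation).&lt;br /&gt;

```python
REQUIRED = ('test_file', 'platform', 'builder', 'number')

def validate_record(params):
    """Accept a record only when every required parameter is non-blank."""
    return all(str(params.get(key, '')).strip() for key in REQUIRED)

ok = validate_record({'test_file': 'a.html', 'platform': 'linux',
                      'builder': 'linux-dev', 'number': 9})
bad = validate_record({'test_file': '', 'platform': 'linux',
                       'builder': 'linux-dev', 'number': 9})
```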
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108401</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108401"/>
		<updated>2017-04-12T23:59:42Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
The Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
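The handler and db layers of the flow above can be sketched in plain Python; all names here (record, db_add) are illustrative, not the project's actual identifiers. In the real service, a Flask route (the webhook) would parse the incoming request and delegate to a handler like this.&lt;br /&gt;

```python
import json
import os
from datetime import datetime, timezone

def record(db_path, test_file, platform, builder, number):
    """Handler: validate the webhook payload, stamp it, and persist it."""
    if not all([test_file, platform, builder, number]):
        # Reject the request when any required parameter is blank.
        return {"error": "missing parameter"}, 400
    entry = {
        "test_file": test_file,
        "platform": platform,
        "builder": builder,
        "number": int(number),
        "fail_date": datetime.now(timezone.utc).date().isoformat(),
    }
    db_add(db_path, entry)
    return entry, 200

def db_add(db_path, entry):
    """DB layer: the store is just an array of JSON objects in a file."""
    records = []
    if os.path.exists(db_path):
        with open(db_path) as f:
            records = json.load(f)
    records.append(entry)
    with open(db_path, "w") as f:
        json.dump(records, f)
```
&lt;br /&gt;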
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that fail_date is already populated in the database as an ISO date string when records are added, it should be possible to build a function that filters on this date using standard date functions and a range supplied by the user.  This will require a new query function that takes the range boundaries as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
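As a sketch of this first request, and assuming records carry the ISO-format fail_date described in the data model, the range filter can reduce to string comparison; the function and field names below are illustrative, not the project's code.&lt;br /&gt;

```python
from collections import Counter

def query_by_date_range(records, start_date, end_date):
    """Return records whose fail_date lies in [start_date, end_date].

    ISO 8601 date strings (YYYY-MM-DD) sort lexicographically in the same
    order as the dates they name, so plain string comparison suffices.
    """
    return [r for r in records
            if r["fail_date"] >= start_date and end_date >= r["fail_date"]]

def most_frequent(records):
    """Tally failures per test_file, most frequent first."""
    return Counter(r["test_file"] for r in records).most_common()
```
&lt;br /&gt;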
The second request is to build an HTML front-end to the service, queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). This request should allow us to repurpose the testing webpages built in the first round; slight modifications and the addition of the required JavaScript request mechanism should satisfy it.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being filtered immediately, the details of the intermittent failures are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
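An example record conforming to the model above might look like the following; every value here (file path, builder name, pull request number) is hypothetical.&lt;br /&gt;

```python
from datetime import date

# Illustrative record matching the data model; all values are hypothetical.
failure = {
    "test_file": "tests/wpt/example-test.html",  # name of the failing test file
    "platform": "linux",                         # platform the test failed on
    "builder": "linux-dev",                      # test machine (builder)
    "number": 16042,                             # GitHub pull request number
    "fail_date": date(2017, 4, 12).isoformat(),  # stored as an ISO date string
}
```
&lt;br /&gt;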
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it usable like a simple database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
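Because the store is a plain array of JSON objects, it can even be read and searched with the standard library alone; the stdlib-only sketch below mimics the load/append/query operations that TinyDB wraps, and is not the project's actual code.&lt;br /&gt;

```python
import json

# The datastore is an array of JSON objects, so it stays human-readable.
raw = '[{"test_file": "a.html", "platform": "mac"}]'

records = json.loads(raw)                                      # load the array
records.append({"test_file": "b.html", "platform": "linux"})   # add a record
matches = [r for r in records if r["platform"] == "linux"]     # query it
serialized = json.dumps(records, indent=2)                     # write-back form
```
&lt;br /&gt;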
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure with invalid values (four tests, one with each input item blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108400</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108400"/>
		<updated>2017-04-12T23:57:51Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
The Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
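The handler and db layers of the flow above can be sketched in plain Python; all names here (record, db_add) are illustrative, not the project's actual identifiers. In the real service, a Flask route (the webhook) would parse the incoming request and delegate to a handler like this.&lt;br /&gt;

```python
import json
import os
from datetime import datetime, timezone

def record(db_path, test_file, platform, builder, number):
    """Handler: validate the webhook payload, stamp it, and persist it."""
    if not all([test_file, platform, builder, number]):
        # Reject the request when any required parameter is blank.
        return {"error": "missing parameter"}, 400
    entry = {
        "test_file": test_file,
        "platform": platform,
        "builder": builder,
        "number": int(number),
        "fail_date": datetime.now(timezone.utc).date().isoformat(),
    }
    db_add(db_path, entry)
    return entry, 200

def db_add(db_path, entry):
    """DB layer: the store is just an array of JSON objects in a file."""
    records = []
    if os.path.exists(db_path):
        with open(db_path) as f:
            records = json.load(f)
    records.append(entry)
    with open(db_path, "w") as f:
        json.dump(records, f)
```
&lt;br /&gt;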
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that fail_date is already populated in the database as an ISO date string when records are added, it should be possible to build a function that filters on this date using standard date functions and a range supplied by the user.  This will require a new query function that takes the range boundaries as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
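As a sketch of this first request, and assuming records carry the ISO-format fail_date described in the data model, the range filter can reduce to string comparison; the function and field names below are illustrative, not the project's code.&lt;br /&gt;

```python
from collections import Counter

def query_by_date_range(records, start_date, end_date):
    """Return records whose fail_date lies in [start_date, end_date].

    ISO 8601 date strings (YYYY-MM-DD) sort lexicographically in the same
    order as the dates they name, so plain string comparison suffices.
    """
    return [r for r in records
            if r["fail_date"] >= start_date and end_date >= r["fail_date"]]

def most_frequent(records):
    """Tally failures per test_file, most frequent first."""
    return Counter(r["test_file"] for r in records).most_common()
```
&lt;br /&gt;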
The second request is to build an HTML front-end to the service, queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). This request should allow us to repurpose the testing webpages built in the first round; slight modifications and the addition of the required JavaScript request mechanism should satisfy it.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use the existing filter-intermittents command in testing_commands.py as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being filtered immediately, the details of the intermittent failures are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
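An example record conforming to the model above might look like the following; every value here (file path, builder name, pull request number) is hypothetical.&lt;br /&gt;

```python
from datetime import date

# Illustrative record matching the data model; all values are hypothetical.
failure = {
    "test_file": "tests/wpt/example-test.html",  # name of the failing test file
    "platform": "linux",                         # platform the test failed on
    "builder": "linux-dev",                      # test machine (builder)
    "number": 16042,                             # GitHub pull request number
    "fail_date": date(2017, 4, 12).isoformat(),  # stored as an ISO date string
}
```
&lt;br /&gt;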
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it usable like a simple database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
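Because the store is a plain array of JSON objects, it can even be read and searched with the standard library alone; the stdlib-only sketch below mimics the load/append/query operations that TinyDB wraps, and is not the project's actual code.&lt;br /&gt;

```python
import json

# The datastore is an array of JSON objects, so it stays human-readable.
raw = '[{"test_file": "a.html", "platform": "mac"}]'

records = json.loads(raw)                                      # load the array
records.append({"test_file": "b.html", "platform": "linux"})   # add a record
matches = [r for r in records if r["platform"] == "linux"]     # query it
serialized = json.dumps(records, indent=2)                     # write-back form
```
&lt;br /&gt;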
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per input item left blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no pull request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108140</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108140"/>
		<updated>2017-04-08T00:36:25Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Testing Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies those services at the topmost &amp;quot;layer&amp;quot; of the project where they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
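Stripped of Flask and TinyDB specifics, the flow above can be sketched as three plain Python functions; all names here are illustrative, not the project's actual code:&lt;br /&gt;

```python
import json

# In-memory stand-in for the JSON-file db (the real service uses TinyDB).
DB = []

def db_add(record):
    """Persistence layer: append the record to the store."""
    DB.append(record)
    return record

def handler(payload):
    """Business logic: pick out the required fields before persisting."""
    record = {k: payload[k] for k in ("test_file", "platform", "builder", "number")}
    return db_add(record)

def webhook(body):
    """Entry point the build server would POST a JSON body to."""
    return handler(json.loads(body))

webhook('{"test_file": "t.html", "platform": "linux", "builder": "b1", "number": 7}')
```

Each layer only knows about the layer directly beneath it, which is what the Service Layer pattern buys: the webhook can be swapped for a CLI, or the list for TinyDB, without touching the handler.&lt;br /&gt;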
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that fail_date is included in the addition call as an ISO date string, we should be able to build a function that filters on this date using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the queried time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
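Because ISO-8601 date strings sort lexicographically in the same order as the dates they represent, the date-range query could be as simple as the following sketch (record contents and the function name are illustrative):&lt;br /&gt;

```python
# Hypothetical records in the shape stored by the tracker (fail_date is ISO).
records = [
    {"test_file": "tests/a.html", "fail_date": "2017-03-01"},
    {"test_file": "tests/b.html", "fail_date": "2017-03-15"},
    {"test_file": "tests/a.html", "fail_date": "2017-04-02"},
]

def query_by_date_range(records, start, end):
    """Return records whose fail_date falls within [start, end] inclusive.

    ISO date strings compare lexicographically the same way the underlying
    dates compare, so no date parsing is strictly required.
    """
    return [r for r in records if start <= r["fail_date"] <= end]

march = query_by_date_range(records, "2017-03-01", "2017-03-31")
print(len(march))  # → 2
```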
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent Github location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require the forking of the Servo project on Github.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in the existing testing_commands.py code as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for the saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defined what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request, with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file so it can more easily be used like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
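The array-of-objects file format described here can be read and written with nothing but the standard library; a minimal sketch (the file name is illustrative, and this shows the stored format rather than TinyDB's own API):&lt;br /&gt;

```python
import json
import os
import tempfile

# One failure record in the shape of the data model above.
record = {"test_file": "tests/a.html", "platform": "linux",
          "builder": "linux-dev", "number": 12345, "fail_date": "2017-03-01"}

path = os.path.join(tempfile.mkdtemp(), "failures.json")

# The whole datastore is just an array of JSON objects.
with open(path, "w") as f:
    json.dump([record], f, indent=2)

with open(path) as f:
    loaded = json.load(f)

print(loaded[0]["test_file"])  # → tests/a.html
```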
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second accepts a URL parameter after /user/ and returns the matching user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records; all are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
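The missing-parameter check exercised by these tests could look like the following stdlib-only sketch; the handler and field names mirror the unit test summary but are illustrative, not the project's actual handlers:&lt;br /&gt;

```python
REQUIRED = ("test_file", "platform", "builder", "number")

def record(db, **params):
    """Append a failure record, rejecting requests with missing or blank fields.

    `db` is a plain list standing in for the JSON-file database.
    """
    missing = [k for k in REQUIRED if not params.get(k)]
    if missing:
        return {"status": 400, "missing": missing}
    db.append({k: params[k] for k in REQUIRED})
    return {"status": 200}

db = []
ok = record(db, test_file="t.html", platform="linux", builder="b1", number=7)
bad = record(db, platform="linux", builder="b1", number=7)  # test_file missing
print(ok["status"], bad["status"], len(db))  # → 200 400 1
```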
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per input item left blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Feature&lt;br /&gt;
! Testing Infrastructure Required&lt;br /&gt;
|-&lt;br /&gt;
| Date Range Query Functionality&lt;br /&gt;
| New unit tests to ensure query returns proper records&lt;br /&gt;
|-&lt;br /&gt;
| HTML Front-End&lt;br /&gt;
| Functional tests to ensure proper views are returned for each request.&lt;br /&gt;
|-&lt;br /&gt;
| Servo Integration via filter-intermittents&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|-&lt;br /&gt;
| Servo integration with Salt Stack File System&lt;br /&gt;
| Unit and functional testing via Servo &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no pull request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108137</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108137"/>
		<updated>2017-04-08T00:28:00Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Unit Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies those services at the topmost &amp;quot;layer&amp;quot; of the project where they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that fail_date is included in the addition call as an ISO date string, we should be able to build a function that filters on this date using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the queried time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent Github location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require the forking of the Servo project on Github.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in the existing testing_commands.py code as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification provides the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for the saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defined what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request, with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file so it can more easily be used like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a username parameter in the URL after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications], intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records,&lt;br /&gt;
all of which are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of the system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, each with one input item blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
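The validation behavior exercised by the last row of the table can be sketched as follows. The function name and parameters mirror the table's handlers.record entry, but the body is an illustrative guess at the behavior the tests assert, not the project's actual code:&lt;br /&gt;

```python
from datetime import datetime

def record(db, test_file, platform, builder, number):
    # Reject the record if any required parameter is blank or missing,
    # matching the "blanks for each input item" unit tests.
    fields = {"test_file": test_file, "platform": platform,
              "builder": builder, "number": number}
    for name, value in fields.items():
        if value in (None, ""):
            return {"ok": False, "error": "missing " + name}
    # Stamp the failure with an ISO date and persist it.
    entry = dict(fields, fail_date=datetime.utcnow().isoformat())
    db.append(entry)
    return {"ok": True}

db = []  # stand-in for the TinyDB table
print(record(db, "test.html", "mac", "mac-dev", 42))   # accepted
print(record(db, "", "mac", "mac-dev", 42))            # rejected: blank test_file
```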
&lt;br /&gt;
===Round 2 Testing Plan===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no pull request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108136</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108136"/>
		<updated>2017-04-08T00:24:48Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
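In code, the flow above amounts to small functions handing a request along; this sketch uses plain functions and a list in place of the real Flask webhook and JSON file, so the names are illustrative rather than the project's actual API:&lt;br /&gt;

```python
db = []  # stands in for the JSON-file datastore

def handler(payload):
    # Business logic: keep only the fields the data model defines.
    keys = ("test_file", "platform", "builder", "number", "fail_date")
    db.append({k: payload[k] for k in keys})

def webhook(payload):
    # The build server calls here; the webhook just delegates to the handler.
    handler(payload)
    return "recorded"

webhook({"test_file": "t.html", "platform": "linux", "builder": "b1",
         "number": 1, "fail_date": "2017-04-01", "extra": "ignored"})
print(db)
```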
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Since fail_date is included in the record call as an ISO date string, we should be able to query it using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
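Because ISO date strings sort the same way the dates do, the date-range query described above can be a plain string comparison plus a frequency count. A sketch under that assumption (the function name, field names, and sample records are illustrative):&lt;br /&gt;

```python
from collections import Counter

records = [
    {"test_file": "a.html", "fail_date": "2017-03-01"},
    {"test_file": "a.html", "fail_date": "2017-03-15"},
    {"test_file": "b.html", "fail_date": "2017-03-20"},
    {"test_file": "a.html", "fail_date": "2017-05-01"},  # outside the range
]

def failures_in_range(records, start, end):
    # ISO date strings compare correctly lexicographically, so no parsing is needed.
    hits = [r for r in records
            if r["fail_date"] >= start and end >= r["fail_date"]]
    # Most frequent failures first, as the Round 2 request asks.
    return Counter(r["test_file"] for r in hits).most_common()

print(failures_in_range(records, "2017-03-01", "2017-03-31"))
# The March window yields a.html twice and b.html once.
```

If stricter date handling were ever needed, real datetime parsing could replace the string comparison without changing the function's interface.&lt;br /&gt;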
&lt;br /&gt;
The second request is to build an HTML front-end that queries the service using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages built in the first round; slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in the existing testing_commands.py code as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification will provide the actual integration into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database before being filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven almost entirely by the request: the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, so the file is easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a username parameter in the URL after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications], intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the intermittent test failure records,&lt;br /&gt;
all of which are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of the system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, each with one input item blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no pull request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108135</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108135"/>
		<updated>2017-04-08T00:22:29Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow is shown in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Since fail_date is included in the record call as an ISO date string, we should be able to query it using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame &lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end that queries the service using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages built in the first round; slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in the existing testing_commands.py code as the initiator for saving records.  A separate failure record should be recorded for each intermittent failure encountered.  This modification will provide the actual integration into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Instead of being immediately filtered, the details of the intermittent failures are first recorded in the database before being filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in [https://github.com/servo/saltfs Salt Stack File System] (saltfs).  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration.  Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven almost entirely by the request: the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, so the file is easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without writing any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All of these can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of the system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record is not added if any of the required parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled in to the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108133</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108133"/>
		<updated>2017-04-08T00:17:08Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
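The webhook-to-handler step can be sketched as a minimal Flask route. The endpoint name and validation rules here are illustrative assumptions, not the project's actual API, and a plain list stands in for the JSON-file store:&lt;br /&gt;

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
failures = []  # stand-in for the JSON-file datastore

REQUIRED = ('test_file', 'platform', 'builder', 'number')

@app.route('/record', methods=['POST'])
def record():
    payload = request.get_json(force=True)
    # Reject the record when any required parameter is missing or blank.
    missing = [k for k in REQUIRED if not payload.get(k)]
    if missing:
        return jsonify({'error': 'missing: ' + ', '.join(missing)}), 400
    failures.append(payload)
    return jsonify({'status': 'recorded'}), 201
```

In the real service the handler would persist the payload via TinyDB rather than an in-memory list.&lt;br /&gt;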
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Since fail_date is included in each record as an ISO date string, we can query by date using standard date functions and a range supplied by the user. This requires a new query function that takes the range as parameters.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Filter intermittent failures by date &lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database within the specified time frame&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| A user submits a query for intermittent failures occurring within a specific date range&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| A collection of intermittent failure records is returned &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
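A minimal sketch of such a range query, assuming records shaped like the tracker's data model (the function name and sample records are hypothetical, not the project's actual API):&lt;br /&gt;

```python
# Illustrative sketch of the proposed date-range query.
def query_by_date_range(records, start, end):
    # ISO 8601 date strings sort lexicographically, so plain string
    # comparison suffices for an inclusive range check.
    return [r for r in records
            if r["fail_date"] >= start and end >= r["fail_date"]]

# Hypothetical failure records.
failures = [
    {"test_file": "a.html", "fail_date": "2017-03-15"},
    {"test_file": "b.html", "fail_date": "2017-04-02"},
]
print([f["test_file"] for f in
       query_by_date_range(failures, "2017-03-01", "2017-03-31")])
# prints ['a.html']
```

Because ISO dates compare correctly as strings, no datetime parsing is strictly required, though parsing would guard against malformed values.&lt;br /&gt;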
&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript. The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing web pages built in the first round; slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Interface with Flask service via HTML front-end&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Previously saved intermittent failure records exist in the database, Flask server is operational&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| Users select a date range via an HTML form&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| An HTML view is presented showing a Pareto chart of failing tests, each linked to the pertinent GitHub location and sortable by various criteria.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in testing_commands.py as the initiator for saving records. A separate failure record should be recorded for each intermittent failure encountered. This modification provides the actual integration point into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Modify &amp;quot;filter-intermittents&amp;quot; command to add entries to database&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration. Instead of being filtered immediately, the details of each intermittent failure are first recorded in the database and then filtered.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the database for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Use Case&lt;br /&gt;
| Propagate failure data into Salt Stack Filesystem&lt;br /&gt;
|-&lt;br /&gt;
! Primary Actor&lt;br /&gt;
| Servo Developer&lt;br /&gt;
|-&lt;br /&gt;
! Preconditions&lt;br /&gt;
| Failure database exists with functionality to add records, Flask server is operational.&lt;br /&gt;
|-&lt;br /&gt;
! Triggers&lt;br /&gt;
| The &amp;quot;filter-intermittents&amp;quot; command is issued as part of the automated failure reporting that occurs during continuous integration. Failure information is propagated to the master Salt Stack Filesystem for archival.&lt;br /&gt;
|-&lt;br /&gt;
! Postconditions&lt;br /&gt;
| Entries exist in the Salt Stack Filesystem for each occurrence of an intermittent failure.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defined what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, allowing the file to be used like a database. The format of the JSON file is simply an array of JSON objects, making it easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without writing any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All of these can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of the system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record is not added if any of the required parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled in to the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108122</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108122"/>
		<updated>2017-04-07T23:11:39Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Motivation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding frequencies of specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Since fail_date is included in each record as an ISO date string, we can query by date using standard date functions and a range supplied by the user. This requires a new query function that takes the range as parameters.&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript. The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing web pages built in the first round; slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in testing_commands.py as the initiator for saving records. A separate failure record should be recorded for each intermittent failure encountered. This modification provides the actual integration point into the Servo framework, allowing it to communicate with this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defined what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, allowing the file to be used like a database. The format of the JSON file is simply an array of JSON objects, making it easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without writing any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All of these can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of the system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record is not added if any of the required parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108121</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108121"/>
		<updated>2017-04-07T23:10:27Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding specific tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow the [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies the services at the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
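The handler step in the flow above can be sketched as a small Python function. This is only an illustrative sketch, not the project's actual code; the function name, the in-memory list standing in for the db, and the return values are all assumptions.&lt;br /&gt;

```python
def record(db, test_file, platform, builder, number, fail_date=None):
    # Hypothetical sketch of a record handler: validate the request,
    # then persist the failure record to the db (here, a plain list).
    required = {'test_file': test_file, 'platform': platform,
                'builder': builder, 'number': number}
    for name, value in required.items():
        if value in (None, ''):
            # Reject the request if any required parameter is blank.
            return {'error': 'missing parameter: ' + name}
    entry = dict(required)
    entry['fail_date'] = fail_date
    db.append(entry)
    return {'status': 'ok'}
```
&lt;br /&gt;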
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this date using standard date functions and a range given by the user. This will require a new query function that takes the range endpoints as parameters.&lt;br /&gt;
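As a sketch of how such a range query might work (the function names are hypothetical, and records are assumed to be plain dicts carrying a fail_date ISO string), note that ISO 8601 date strings compare correctly as plain strings:&lt;br /&gt;

```python
import operator

def in_range(fail_date, start, end):
    # ISO date strings sort lexicographically, so string comparison works.
    # operator.le(a, b) is an inclusive "a at most b" check.
    return operator.le(start, fail_date) and operator.le(fail_date, end)

def query_by_date_range(records, start, end):
    # Hypothetical query function: keep records whose fail_date is in range.
    return [r for r in records if in_range(r['fail_date'], start, end)]
```
&lt;br /&gt;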
&lt;br /&gt;
The second request is to build an HTML front-end to the service, queried using JavaScript. The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing web pages that were built in the first round. Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in testing_commands.py as the initiator for saving records. Separate failure records should be recorded for each intermittent failure encountered. This modification will provide the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request; the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request, with a few additions to help with querying in later steps of the project.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
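For illustration, a store with one failure record might look like the following; only the field names come from the data model above, while every field value here is hypothetical.&lt;br /&gt;

```python
import json

# Hypothetical example of the store: an array of failure records.
records = [
    {
        'test_file': 'example_test.html',  # hypothetical file name
        'platform': 'linux',
        'builder': 'linux-dev',            # hypothetical builder name
        'number': 12345,                   # hypothetical PR number
        'fail_date': '2017-04-07',
    },
]
serialized = json.dumps(records, indent=2)
```
&lt;br /&gt;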
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second route accepts a URL parameter after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications], intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All of these are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system. They cover the addition of a record to the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108120</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108120"/>
		<updated>2017-04-07T23:07:21Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding specific tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow the [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies the services at the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the db, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this date using standard date functions and a range given by the user. This will require a new query function that takes the range endpoints as parameters.&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end to the service, queried using JavaScript. The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing web pages that were built in the first round. Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
It was requested that we use an existing command (filter-intermittents) in testing_commands.py as the initiator for saving records. Separate failure records should be recorded for each intermittent failure encountered. This modification will provide the actual integration into the Servo framework, allowing communication with this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request; the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request, with a few additions to help with querying in later steps of the project.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The format of the JSON file is simply an array of JSON objects, making the file easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first route returns 'Index page' at the root URL. The second route accepts a URL parameter after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications], intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records.&lt;br /&gt;
All of these are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system. They cover the addition of a record to the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new intermittent failure with invalid values (four tests, one per blank input item)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108119</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108119"/>
		<updated>2017-04-07T23:05:31Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have  [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding specific tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow the [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality up into smaller &amp;quot;services&amp;quot; and applies the services at the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
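The flow above can be sketched in plain Python (a hypothetical illustration: the function and file names are assumptions, and in the real service a Flask webhook route would call the handler):&lt;br /&gt;

```python
# Sketch of the handler and db layers (hypothetical names; the real
# service wires a Flask webhook route to a handler like this one).
import json
import os

REQUIRED = ('test_file', 'platform', 'builder', 'number')

def record(payload, db_path='intermittents.json'):
    """Validate a failure report, then persist it to the JSON-file db."""
    missing = [f for f in REQUIRED if not payload.get(f)]
    if missing:
        # Reject the request; nothing is written to the datastore.
        return {'error': 'missing fields', 'fields': missing}
    records = []
    if os.path.exists(db_path):
        with open(db_path) as fh:
            records = json.load(fh)
    records.append(payload)
    with open(db_path, 'w') as fh:
        json.dump(records, fh)
    return {'status': 'saved'}
```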
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this field using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
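Such a date-range query can be sketched as follows (a hypothetical helper, not the service's actual API; ISO-8601 date strings sort lexicographically, so plain string comparison is enough):&lt;br /&gt;

```python
# Hypothetical date-range query helpers over stored failure records.
# ISO-8601 date strings sort lexicographically, so string comparison
# suffices to test membership in a date range.
from collections import Counter

def query_by_date_range(records, start, end):
    """Return the records whose fail_date lies in [start, end], inclusive."""
    return [r for r in records if end >= r['fail_date'] >= start]

def most_frequent(records):
    """Count failures per test_file, most frequent first."""
    return Counter(r['test_file'] for r in records).most_common()
```

Counting failures per test file within the returned range then answers which failures occurred most often.&lt;br /&gt;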
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
In the given testing_commands.py code, it has been requested that we use an existing command (filter-intermittents) as the initiator for saving records.  Separate failure records should be recorded for each intermittent failure encountered.  This modification will provide the actual integration into the Servo framework to allow communication with this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs or at least a mock setup for mimicking the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defines what the service should do and how it should be made.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The JSON file itself is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
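For illustration, the datastore contents and an equivalent hand-written lookup might look like this (the record values are invented; TinyDB's real query helpers replace the filter shown):&lt;br /&gt;

```python
import json

# Example datastore contents: an array of JSON objects (values invented).
store = json.loads('''
[
  {"test_file": "some_test.html", "platform": "linux",
   "builder": "builder-1", "number": 101, "fail_date": "2017-04-01"},
  {"test_file": "other_test.html", "platform": "mac",
   "builder": "builder-2", "number": 102, "fail_date": "2017-04-03"}
]
''')

# TinyDB layers SQL-like query helpers over such a file; the equivalent
# hand-written lookup by test file name is a simple filter:
def lookup(records, name):
    return [r for r in records if r['test_file'] == name]
```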
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
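The missing-parameter checks described above can be sketched with the standard unittest module (the validate helper below is a hypothetical stand-in for the validation inside handlers.record):&lt;br /&gt;

```python
import unittest

REQUIRED = ('test_file', 'platform', 'builder', 'number')

def validate(params):
    """Hypothetical stand-in for handlers.record's validation: no blanks."""
    return all(params.get(f) for f in REQUIRED)

class TestRecordValidation(unittest.TestCase):
    def test_accepts_complete_record(self):
        good = {'test_file': 't.html', 'platform': 'linux',
                'builder': 'b1', 'number': 7}
        self.assertTrue(validate(good))

    def test_rejects_each_blank_field(self):
        # One check per required field, mirroring the 4 blank-input tests.
        good = {'test_file': 't.html', 'platform': 'linux',
                'builder': 'b1', 'number': 7}
        for field in REQUIRED:
            self.assertFalse(validate(dict(good, **{field: ''})))
```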
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, test invalid values - 4 tests for blanks for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108118</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108118"/>
		<updated>2017-04-07T22:58:05Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Round 2 Design Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would help developers identify and resolve the most prevalent issues. The intermittent test failure tracker would store information about specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this field using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
&lt;br /&gt;
The second request is to build an HTML front-end to the service to be queried using JavaScript.  The user interface should report the results in a useful manner (linking to the pertinent GitHub location, sorting by different criteria, etc.). The requirements of this request should allow us to repurpose the testing webpages that were built in the first round.  Slight modifications and the addition of the required JavaScript request mechanism should fulfill the requirements.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
In the given testing_commands.py application, we must make the filter-intermittents command record a separate failure for each intermittent failure encountered.  This is the actual integration into the Servo framework that allows it to communicate with this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defines what the service should do and how it should be made.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The JSON file itself is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the functionality of the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, test invalid values - 4 tests for blanks for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108117</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108117"/>
		<updated>2017-04-07T22:48:02Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Subsequent Steps (Round 2) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would help developers identify and resolve the most prevalent issues. The intermittent test failure tracker would store information about specific failing tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The initial steps for the intermittent test failure tracker (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server&lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make the [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Both Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies the services to the topmost &amp;quot;layer&amp;quot; of the project for which they are needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request to the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Round 2 Design Plan===&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent.  Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this field using standard date functions and a range supplied by the user.  This will require a new query function that takes the range as parameters.&lt;br /&gt;
&lt;br /&gt;
Build an HTML front-end to the service that queries it using JavaScript and reports the results in a useful manner (linking to GitHub, sorting, etc.). For this, we should be able to repurpose the testing webpages that we built in the first round.  Polishing these up and giving them the required JavaScript request mechanism should suffice.&lt;br /&gt;
&lt;br /&gt;
The last two steps are the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
In the given testing_commands.py application, we must make the filter-intermittents command record a separate failure for each intermittent failure encountered.  This is the actual integration into the Servo framework that allows it to communicate with this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs.  This will also require a testing setup for saltfs, or at least a mock setup mimicking the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request: the Servo team clearly defines what the service should do and how it should be made.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The JSON file itself is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the corresponding user from the database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the required parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure with invalid values (four tests, one per input item left blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
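As a sketch of the validation behavior these tests cover, the structure might look like the following (the record function below is a hypothetical stand-in for handlers.record, not the project's actual code):

```python
import unittest

def record(db, test_file, platform, builder, number):
    # Hypothetical stand-in for handlers.record: reject any blank parameter.
    if not all([test_file, platform, builder, number]):
        return {"status": "error", "reason": "missing parameter"}
    entry = {"test_file": test_file, "platform": platform,
             "builder": builder, "number": number}
    db.append(entry)
    return {"status": "ok", "record": entry}

class TestRecordHandler(unittest.TestCase):
    def test_record_success(self):
        db = []
        result = record(db, "mochitest.html", "linux", "builder-3", 12345)
        self.assertEqual(result["status"], "ok")
        self.assertEqual(len(db), 1)

    def test_record_rejects_blank_platform(self):
        db = []
        result = record(db, "mochitest.html", "", "builder-3", 12345)
        self.assertEqual(result["status"], "error")
        self.assertEqual(db, [])
```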
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When the Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108116</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108116"/>
		<updated>2017-04-07T22:44:15Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Motivation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Specifically, the requestors have [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project specified] that a [http://flask.pocoo.org/docs/0.12/ Flask] service be developed using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7] to aid in tracking failures. Intermittent failures frequently occur during development but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would help developers identify and resolve the most prevalent issues. The intermittent test failure tracker would store information regarding specific tests and also provide a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies each service at the topmost &amp;quot;layer&amp;quot; of the project for which it is needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
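The webhook-to-handler-to-db flow in the diagram can be sketched as three thin layers (an illustrative stand-in that uses an in-memory list in place of the JSON file; the function names are hypothetical, not the project's actual code):

```python
import json

def db_insert(store, record):
    # "db" layer: persist the record; a list stands in for the JSON file here.
    store.append(record)
    return record

def handler(store, payload):
    # "handler" layer: business logic that validates and transforms the request.
    required = ("test_file", "platform", "builder", "number")
    if any(k not in payload for k in required):
        return {"status": "error"}
    return {"status": "ok", "record": db_insert(store, payload)}

def webhook(store, raw_body):
    # "webhook" layer: the entry point the build server POSTs to;
    # it decodes the JSON body and delegates to the handler.
    return handler(store, json.loads(raw_body))
```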
&lt;br /&gt;
==Subsequent Steps (Round 2)==&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this date using standard date functions and a range given by the user. This will require a new query function that takes the range as parameters.&lt;br /&gt;
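A minimal sketch of such a date-range query, assuming records shaped like the data model above (the helper name is hypothetical; ISO-8601 date strings compare correctly as plain strings, so no date parsing is strictly required):

```python
def query_by_date_range(records, start, end):
    # ISO date strings sort lexicographically in calendar order,
    # so a plain string range check on fail_date is sufficient.
    hits = [r for r in records if start <= r["fail_date"] <= end]
    # Count occurrences per test file to surface the most frequent failures.
    counts = {}
    for r in hits:
        counts[r["test_file"]] = counts.get(r["test_file"], 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```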
&lt;br /&gt;
Build an HTML front-end to the service that queries it using JS and reports the results in a useful manner (linking to GitHub, sorting, etc.). For this we should be able to repurpose the testing webpages that we built in the first round; polishing these up and giving them the required JS request mechanism should suffice.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve fully integrating this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
In the given test_command application, we have to make the filter-intermittents command record a separate failure for each intermittent failure encountered. This is the actual integration point where the Servo framework talks to this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs. This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request; the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
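For illustration, a record conforming to this model might look like the following (all of the field values here are made up):

```python
# An illustrative failure record matching the data model above.
failure_record = {
    "test_file": "tests/example-intermittent.html",  # hypothetical file name
    "platform": "linux",
    "builder": "linux-dev",       # hypothetical builder name
    "number": 12345,              # hypothetical GitHub PR number
    "fail_date": "2017-04-07",    # ISO date string
}
```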
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The JSON file itself is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second function accepts a URL parameter after /user/ and returns the matching user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the required parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure with invalid values (four tests, one per input item left blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When the Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108114</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108114"/>
		<updated>2017-04-07T22:37:57Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. The [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project request] is for a [http://flask.pocoo.org/docs/0.12/ Flask] service using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7]. The intermittent test failure tracker stores information about tests that fail intermittently and also provides a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
=====Round 1=====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
=====Round 2=====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This design pattern breaks up functionality into smaller &amp;quot;services&amp;quot; and applies each service at the topmost &amp;quot;layer&amp;quot; of the project for which it is needed.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Subsequent Steps (Round 2)==&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Given that the fail_date is included in the addition call as an ISO date string, we should be able to build a function that queries this date using standard date functions and a range given by the user. This will require a new query function that takes the range as parameters.&lt;br /&gt;
&lt;br /&gt;
Build an HTML front-end to the service that queries it using JS and reports the results in a useful manner (linking to GitHub, sorting, etc.). For this we should be able to repurpose the testing webpages that we built in the first round; polishing these up and giving them the required JS request mechanism should suffice.&lt;br /&gt;
&lt;br /&gt;
The last two steps involve fully integrating this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
In the given test_command application, we have to make the filter-intermittents command record a separate failure for each intermittent failure encountered. This is the actual integration point where the Servo framework talks to this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration into the Servo project for this tracker will be to propagate the required information for recording failures in saltfs. This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration into saltfs.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is driven entirely by the request; the Servo team clearly defines what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test is defined mostly by the request with a few additions to help with querying in later steps of the OSS request.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. TinyDB is a pure-Python library that provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easy to use the file like a database. The JSON file itself is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first function returns 'Index page' at the root URL. The second function accepts a URL parameter after /user/ and returns the matching user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended solely to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which can be used for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The unit tests included in the code exercise the major functions of this system: adding a record to the database, removing a record given a filename, retrieving a record, and asserting that a record will not be added if any of the required parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure with invalid values (four tests, one per input item left blank)&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When the Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108113</id>
		<title>CSC/ECE 517 Spring 2017/OSS M1706 Tracking intermittent test failures over time</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/OSS_M1706_Tracking_intermittent_test_failures_over_time&amp;diff=108113"/>
		<updated>2017-04-07T22:37:10Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This wiki provides details on new functionality programmed for the Servo OSS project.&lt;br /&gt;
&lt;br /&gt;
===Background===&lt;br /&gt;
&amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
This project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. The [https://github.com/servo/servo/wiki/Tracking-intermittent-failures-over-time-project request] is for a [http://flask.pocoo.org/docs/0.12/ Flask] service using [https://en.wikipedia.org/wiki/Python_(programming_language) Python 2.7]. The intermittent test failure tracker stores information about tests that fail intermittently and also provides a means to quickly query for tests that have failed.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
====Round 1====&lt;br /&gt;
The intermittent test failure tracker initial steps (for the OSS project) include:&lt;br /&gt;
* Build a Flask service &lt;br /&gt;
* Use a JSON file to store information&lt;br /&gt;
* Record required parameters: Test file, platform, test machine (builder), and related GitHub pull request number&lt;br /&gt;
* Query the stored results given a particular test file name&lt;br /&gt;
* Use the known intermittent issue tracker as an example of a simple Flask server &lt;br /&gt;
&lt;br /&gt;
====Round 2====&lt;br /&gt;
Subsequent steps (for the final project) include:&lt;br /&gt;
* Add the ability to query the service by a date range, to find out which failures occurred most often&lt;br /&gt;
* Build an HTML front-end to the service that queries using JS and reports the results&lt;br /&gt;
** Links to GitHub&lt;br /&gt;
** Sorting&lt;br /&gt;
* Make [https://github.com/servo/servo/blob/master/python/servo/testing_commands.py#L508-L574 filter-intermittents] command record a separate failure for each intermittent failure encountered&lt;br /&gt;
* Propagate the required information for recording failures in [https://github.com/servo/saltfs/issues/597 saltfs]&lt;br /&gt;
&lt;br /&gt;
== Design ==&lt;br /&gt;
&lt;br /&gt;
===Design Pattern===&lt;br /&gt;
&lt;br /&gt;
Servo and this project's code follow a [https://en.wikipedia.org/wiki/Service_layers_pattern Service Layer] design pattern. This pattern breaks functionality into smaller &amp;quot;services&amp;quot; and applies those services at the topmost &amp;quot;layer&amp;quot; of the project that needs them.&lt;br /&gt;
&lt;br /&gt;
===Application Flow===&lt;br /&gt;
&lt;br /&gt;
==== Saving a Test ====&lt;br /&gt;
The Servo build agent calls a webhook (a way for an app to provide other applications with real-time information) inside the test tracker. The webhook then calls a handler that contains any business logic necessary to transform the request. Finally, the handler persists the request into the database, in this case a JSON file. This flow can be seen in the diagram below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
                    |       Intermittent Test Failure Tracker     |&lt;br /&gt;
                    |                                             |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      +--------+&lt;br /&gt;
|    Servo     |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
|    Build     +------&amp;gt;  webhook  +------&amp;gt; handler +----&amp;gt;  db  +---------&amp;gt;  json  |&lt;br /&gt;
|    Server    |    | |           |      |         |    |      |  |      |  file  |&lt;br /&gt;
|              |    | |           |      |         |    |      |  |      |        |&lt;br /&gt;
+--------------+    | +-----------+      +---------+    +------+  |      +--------+&lt;br /&gt;
                    |                                             |&lt;br /&gt;
                    +---------------------------------------------+&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
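The webhook-to-handler-to-db flow pictured above can be sketched as a tiny Flask app. The route name, handler function, and file name here are illustrative assumptions, not the project's exact code.&lt;br /&gt;

```python
# Minimal sketch of the save flow (webhook -> handler -> db). Names are
# illustrative assumptions, not the project's exact code.
import json
from flask import Flask, request, jsonify

app = Flask(__name__)
DB_FILE = 'intermittents.json'

def save_record(record):
    """db layer: persist one record into the JSON file."""
    try:
        with open(DB_FILE) as f:
            records = json.load(f)
    except (IOError, ValueError):
        records = []
    records.append(record)
    with open(DB_FILE, 'w') as f:
        json.dump(records, f)

def handle_failure(payload):
    """handler layer: pull out the required fields before persisting."""
    record = dict((k, payload.get(k)) for k in
                  ('test_file', 'platform', 'builder', 'number'))
    save_record(record)
    return record

@app.route('/record', methods=['POST'])
def record_webhook():
    """webhook layer: entry point called by the Servo build server."""
    return jsonify(handle_failure(request.get_json()))
```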
&lt;br /&gt;
==Subsequent Steps (Round 2)==&lt;br /&gt;
The first request is to add the ability to query the service by a date range, to find out which failures were most frequent. Because the fail_date is included in the record call as an ISO date string, we can query on it using standard date comparisons and a range supplied by the user. This requires a new query function that takes the range as parameters.&lt;br /&gt;
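A hedged sketch of such a range query follows. Because fail_date is an ISO 8601 string, plain string comparison matches chronological order, so no date parsing is required; both function names are illustrative.&lt;br /&gt;

```python
# Hypothetical sketch of the date-range query described above. ISO 8601
# date strings sort lexicographically in chronological order.
def query_by_date_range(records, start, end):
    """Return the records whose fail_date falls within [start, end]."""
    return [r for r in records if start <= r['fail_date'] <= end]

def most_frequent_failures(records, start, end):
    """Count failures per test file in the range, most frequent first."""
    counts = {}
    for r in query_by_date_range(records, start, end):
        counts[r['test_file']] = counts.get(r['test_file'], 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```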
&lt;br /&gt;
Build an HTML front-end to the service that queries it using JS and reports the results in a useful manner (linking to GitHub, sorting, etc.). For this we should be able to repurpose the testing web pages built in the first round; polishing these up and adding the required JS request mechanism should suffice.&lt;br /&gt;
&lt;br /&gt;
The last two steps are the full integration of this product into the Servo pipeline and will require forking the Servo project on GitHub.&lt;br /&gt;
&lt;br /&gt;
In the given test_command application, we have to make the filter-intermittents command record a separate failure for each intermittent failure encountered. This is the actual integration into the Servo framework, allowing it to talk to this tracking system.&lt;br /&gt;
&lt;br /&gt;
The second integration of this tracker into the Servo project will be to propagate the required information for recording failures in saltfs. This will also require a testing setup for saltfs, or at least a mock setup that mimics the integration.&lt;br /&gt;
&lt;br /&gt;
== Implementation ==&lt;br /&gt;
The implementation is entirely driven by the request: the Servo team clearly defined what the service should do and how it should be built.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Data model ===&lt;br /&gt;
The model for an intermittent test failure is defined mostly by the request, with a few additions to help with querying in later steps of the project.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Type&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| test_file&lt;br /&gt;
| String&lt;br /&gt;
| Name of the intermittent test file &lt;br /&gt;
|-&lt;br /&gt;
| platform&lt;br /&gt;
| String&lt;br /&gt;
| Platform the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| builder&lt;br /&gt;
| String&lt;br /&gt;
| The test machine (builder) the test failed on&lt;br /&gt;
|-&lt;br /&gt;
| number&lt;br /&gt;
| Integer&lt;br /&gt;
| The GitHub pull request number&lt;br /&gt;
|-&lt;br /&gt;
| fail_date&lt;br /&gt;
| ISO date (String)&lt;br /&gt;
| Date of the failure&lt;br /&gt;
|}&lt;br /&gt;
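A stored record conforming to the model above might look like the following; all values are made up for illustration, not taken from a real failure.&lt;br /&gt;

```python
# One illustrative record following the data model (values are made up).
record = {
    'test_file': 'tests/example-intermittent.html',  # hypothetical path
    'platform': 'linux',
    'builder': 'linux-dev',
    'number': 12345,                                 # hypothetical PR number
    'fail_date': '2017-04-28T21:15:00Z',             # ISO 8601 string
}
```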
=== Datastore ===&lt;br /&gt;
To store the intermittent test failures, a library called [https://tinydb.readthedocs.io/en/latest/ TinyDB] is used. This native Python library provides convenient [https://en.wikipedia.org/wiki/SQL SQL]-like query helpers around a [https://www.w3schools.com/js/js_json_syntax.asp JSON] file, making it easier to use the file like a database. The JSON file is simply an array of JSON objects, so it remains easily human-readable.&lt;br /&gt;
&lt;br /&gt;
=== Flask Service ===&lt;br /&gt;
[http://flask.pocoo.org/ Flask] is a [https://en.wikipedia.org/wiki/Microservices microservice] framework written in Python. A Flask service is a REST (representational state transfer) API that maps URLs and HTTP verbs to Python functions. Some basic examples of Flask routes:&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/')&lt;br /&gt;
  def index():&lt;br /&gt;
    return 'Index page'&lt;br /&gt;
  &lt;br /&gt;
  @app.route('/user/&amp;lt;username&amp;gt;')&lt;br /&gt;
  def show_user(username):&lt;br /&gt;
    return db.lookup(username)&lt;br /&gt;
&lt;br /&gt;
The first method returns 'Index page' at the root URL. The second method accepts a URL parameter after /user/ and returns the corresponding user from a database.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Functional Testing ===&lt;br /&gt;
As a convenience to testers, this code base includes a set of [http://csc517oss.zachncst.com/ testing web applications] intended only to illustrate the project's functionality.&lt;br /&gt;
This simple set of forms allows a tester to exercise the [https://en.wikipedia.org/wiki/Representational_state_transfer REST] endpoints without having to write any REST code.&lt;br /&gt;
The links on the page lead to demonstrations of the query and record handlers, as well as a display of the JSON file containing all the Intermittent Test Failure records, all of which are usable for thorough integration testing.&lt;br /&gt;
&lt;br /&gt;
===Unit Testing===&lt;br /&gt;
The Unit Tests included in the code exercise the major functions of this system. The tests exercise the addition of a record into the database, the removal of a record given a filename, the retrieval of a record, and the assertion that a record will not be added if any of the record parameters (test_file, platform, builder, number) is missing. All unit tests are in tests.py.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Test Purpose&lt;br /&gt;
! Function Tested&lt;br /&gt;
! Parameters&lt;br /&gt;
|-&lt;br /&gt;
| Add a record to a database &lt;br /&gt;
| db.add&lt;br /&gt;
| params[:self, :test_file, :platform, :builder, :number, :fail_date]&lt;br /&gt;
|-&lt;br /&gt;
| Delete a record from database&lt;br /&gt;
| db.remove&lt;br /&gt;
| params[:test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
| Query the Intermittent failure records &lt;br /&gt;
| handlers.query&lt;br /&gt;
| params[:db, :test_file]&lt;br /&gt;
|-&lt;br /&gt;
| Record a new Intermittent failure, test invalid values - 4 tests for blanks for each input item&lt;br /&gt;
| handlers.record&lt;br /&gt;
| params[:db, :test_file, :platform, :builder, :number]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
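The missing-parameter checks summarized in the table above can be illustrated with a small unittest sketch. The record function here is a simplified stand-in for the project's handlers.record, not its actual implementation.&lt;br /&gt;

```python
# Illustrative unittest sketch of the missing-parameter validation
# described above; `record` is a stand-in for handlers.record.
import unittest

REQUIRED = ('test_file', 'platform', 'builder', 'number')

def record(store, **fields):
    """Store a failure record only if every required field is present."""
    if any(not fields.get(k) for k in REQUIRED):
        return False
    store.append(fields)
    return True

class RecordTests(unittest.TestCase):
    def test_valid_record_is_stored(self):
        store = []
        self.assertTrue(record(store, test_file='a.html', platform='linux',
                                builder='b1', number=1))
        self.assertEqual(len(store), 1)

    def test_each_missing_parameter_is_rejected(self):
        # One check per required field, mirroring the "4 tests for blanks".
        base = dict(test_file='a.html', platform='linux', builder='b1', number=1)
        for key in REQUIRED:
            fields = dict(base)
            fields[key] = ''
            self.assertFalse(record([], **fields))
```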
&lt;br /&gt;
====Running Unit Tests and the App====&lt;br /&gt;
&lt;br /&gt;
Before attempting either of the following, clone the [https://github.com/adamw17/csc517ossproject repo].&lt;br /&gt;
&lt;br /&gt;
=====To Run Unit Tests=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python test.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====To Run The App Locally=====&lt;br /&gt;
* In the cloned repo folder, use the command &amp;lt;code&amp;gt;python -m flask_server&amp;lt;/code&amp;gt;&lt;br /&gt;
* To launch the app, go to http://localhost:5000&lt;br /&gt;
&lt;br /&gt;
== Submission/Pull Requests ==&lt;br /&gt;
&lt;br /&gt;
There is no Pull Request because Servo manager Josh Matthews requested that we start a new (non-branched) repository for this project. The work has been started in a new GitHub repo located [https://github.com/adamw17/csc517ossproject/tree/832969c1cf01d94be340731c744854c25fdbb441 here]. When the Servo developers are ready, the project will be pulled into the Servo project on GitHub. In the interim, we shared our repo with Josh, whose reply was &amp;quot;this looks really great! Thanks for tackling it!&amp;quot;&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107932</id>
		<title>CSC/ECE 517 Spring 2017/oss M1706</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107932"/>
		<updated>2017-04-06T01:01:41Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Description===&lt;br /&gt;
The purpose of this project is to provide additional testing infrastructure for the Servo OSS project. &amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. The goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
In particular, this project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Intermittent failures frequently occur but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues.&lt;br /&gt;
&lt;br /&gt;
===Tasks to be completed===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Proposed Design===&lt;br /&gt;
&lt;br /&gt;
'''TASK 1''' - &lt;br /&gt;
&lt;br /&gt;
'''TASK 2''' - &lt;br /&gt;
&lt;br /&gt;
==== Design Pattern Used ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Features to be added====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Testing Plan===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Files Changed and Added====&lt;br /&gt;
&lt;br /&gt;
====Models====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Controllers====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Views====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Important Links===&lt;br /&gt;
&lt;br /&gt;
Link to Github repository :  https://github.com/adamw17/csc517ossproject&lt;br /&gt;
&lt;br /&gt;
Link to Pull request : The contact for this project asked us to create an entirely new repository.  A pull request is not applicable.&lt;br /&gt;
&lt;br /&gt;
===References===&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107931</id>
		<title>CSC/ECE 517 Spring 2017/oss M1706</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107931"/>
		<updated>2017-04-06T01:00:23Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Description===&lt;br /&gt;
The purpose of this project is to provide additional testing infrastructure for the [[Servo OSS project]]. &amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. The goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
In particular, this project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Intermittent failures frequently occur but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues.&lt;br /&gt;
&lt;br /&gt;
===Tasks to be completed===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Proposed Design===&lt;br /&gt;
&lt;br /&gt;
'''TASK 1''' - &lt;br /&gt;
&lt;br /&gt;
'''TASK 2''' - &lt;br /&gt;
&lt;br /&gt;
==== Design Pattern Used ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Features to be added====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Testing Plan===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Files Changed and Added====&lt;br /&gt;
&lt;br /&gt;
====Models====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Controllers====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Views====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Important Links===&lt;br /&gt;
&lt;br /&gt;
Link to Github repository :  https://github.com/adamw17/csc517ossproject&lt;br /&gt;
&lt;br /&gt;
Link to Pull request : The contact for this project asked us to create an entirely new repository.  A pull request is not applicable.&lt;br /&gt;
&lt;br /&gt;
===References===&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107928</id>
		<title>CSC/ECE 517 Spring 2017/oss M1706</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107928"/>
		<updated>2017-04-06T00:54:49Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Description===&lt;br /&gt;
The purpose of this project is to provide additional testing infrastructure for the Mozilla Servo OSS project. &amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. The goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
In particular, this project is a request from the Servo OSS project to reduce the impact intermittent test failures have on the software. Intermittent failures frequently occur but are normally ignored during continuous integration. The frequency of each intermittent failure signature, though not currently logged, would be useful in allowing developers to identify and resolve the most prevalent issues. &lt;br /&gt;
&lt;br /&gt;
===Tasks to be completed===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Proposed Design===&lt;br /&gt;
&lt;br /&gt;
'''TASK 1''' - &lt;br /&gt;
&lt;br /&gt;
'''TASK 2''' - &lt;br /&gt;
&lt;br /&gt;
==== Design Pattern Used ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Features to be added====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Gems to be used====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Metrics View===&lt;br /&gt;
&lt;br /&gt;
====Instructor View====&lt;br /&gt;
&lt;br /&gt;
====Student View====&lt;br /&gt;
&lt;br /&gt;
===Testing Plan===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Files Changed and Added====&lt;br /&gt;
&lt;br /&gt;
====Models====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Controllers====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Views====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Important Links===&lt;br /&gt;
&lt;br /&gt;
Link to Github repository :  &lt;br /&gt;
&lt;br /&gt;
Link to Pull request : &lt;br /&gt;
&lt;br /&gt;
===References===&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107921</id>
		<title>CSC/ECE 517 Spring 2017/oss M1706</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2017/oss_M1706&amp;diff=107921"/>
		<updated>2017-04-05T20:06:13Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Description===&lt;br /&gt;
This project is designed to provide additional testing infrastructure for Mozilla Servo. &amp;quot;[https://github.com/servo/servo/wiki/Design Servo] is a project to develop a new Web browser engine. The goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.&amp;quot; Servo can be used through Browser.html, embedded in a website, or natively in Mozilla Firefox. It is designed to load web pages more efficiently and more securely. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Tasks to be completed===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===UML Diagram===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Proposed Design===&lt;br /&gt;
&lt;br /&gt;
'''TASK 1''' - &lt;br /&gt;
&lt;br /&gt;
'''TASK 2''' - &lt;br /&gt;
&lt;br /&gt;
==== Design Pattern Used ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Features to be added====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Gems to be used====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Metrics View===&lt;br /&gt;
&lt;br /&gt;
====Instructor View====&lt;br /&gt;
&lt;br /&gt;
====Student View====&lt;br /&gt;
&lt;br /&gt;
===Testing Plan===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Files Changed and Added====&lt;br /&gt;
&lt;br /&gt;
====Models====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Controllers====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Views====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Important Links===&lt;br /&gt;
&lt;br /&gt;
Link to Github repository :  &lt;br /&gt;
&lt;br /&gt;
Link to Pull request : &lt;br /&gt;
&lt;br /&gt;
===References===&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107724</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107724"/>
		<updated>2017-04-01T00:23:13Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Create test file named grades_helper_spec.rb in spec/helpers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open source software project. This helps them gain exposure to the technologies used in the project, as well as much-needed experience collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks were:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
On working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team found that these files were no longer used in the application, so the overall tasks were updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* For files _participant.html.erb and view_team.html.erb, moving logic to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===For files _participant.html.erb and view_team.html.erb, moving logic to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the javascript code was moved to view_team.js&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module defined in the app/helpers/grades_helper.rb file, in order to separate controller logic from the view.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also replaced so that they use the new helper methods defined above.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb. Prior to this project, none of the existing methods in grades_helper.rb were covered by unit tests. The following unit tests were written to exercise those methods; the specific test cases are examined below in the Test Plans section. The final two describe blocks in this file are functional tests that ensure the discrete methods perform normally when combined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
require 'selenium-webdriver'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @assignment2 = create(:assignment, name: 'whatever', max_team_size: 3)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
    @new_participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment2)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @questionnaire = create(:questionnaire)&lt;br /&gt;
    @metareview_questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
    @author_feedback_questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
    @teammate_review_questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @metareview_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @author_feedback_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @teammate_review_questionnaire)&lt;br /&gt;
&lt;br /&gt;
    @questions = {}&lt;br /&gt;
    @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
    @questions[@metareview_questionnaire.symbol] = @metareview_questionnaire.questions&lt;br /&gt;
    @questions[@author_feedback_questionnaire.symbol] = @author_feedback_questionnaire.questions&lt;br /&gt;
    @questions[@teammate_review_questionnaire.symbol] = @teammate_review_questionnaire.questions&lt;br /&gt;
&lt;br /&gt;
    create(&lt;br /&gt;
      :assignment_due_date,&lt;br /&gt;
      assignment: @assignment2,&lt;br /&gt;
      deadline_type: @deadline_type,&lt;br /&gt;
      submission_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_of_review_allowed_id: @deadline_right.id,&lt;br /&gt;
      due_at: '2015-12-30 23:30:12'&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview after a view action' do&lt;br /&gt;
      @assignment2.max_team_size = 1&lt;br /&gt;
      @assignment2.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and with a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_X' do&lt;br /&gt;
    it 'should return records of each review type if available' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(rscore_review).to_not be_nil&lt;br /&gt;
      expect(rscore_metareview).to_not be_nil&lt;br /&gt;
      expect(rscore_feedback).to_not eq(nil)&lt;br /&gt;
      expect(rscore_teammate).to_not eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if review types are not available' do&lt;br /&gt;
      params[:id] = @participant.id&lt;br /&gt;
      expect(rscore_review).to be_nil&lt;br /&gt;
      expect(rscore_metareview).to be_nil&lt;br /&gt;
      expect(rscore_feedback).to be_nil&lt;br /&gt;
      expect(rscore_teammate).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      graded_participant = create(:participant, grade: 90)&lt;br /&gt;
      create(:assignment_questionnaire, user_id: graded_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = graded_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      @new_participant.grade = 90&lt;br /&gt;
      @new_participant.save&lt;br /&gt;
      expect(p_title).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_title).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_X_reputation' do&lt;br /&gt;
    hamer_input = [-0.1, 0, 0.5, 1, 1.5, 2, 2.1]&lt;br /&gt;
    lauw_input = [-0.1, 0, 0.2, 0.4, 0.6, 0.8, 0.9]&lt;br /&gt;
    output = %w(c1 c1 c2 c2 c3 c4 c5)&lt;br /&gt;
    it 'should return correct css for hamer reputations' do&lt;br /&gt;
      hamer_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_hamer_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for lauw reputations' do&lt;br /&gt;
      lauw_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_lauw_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
#########################&lt;br /&gt;
# Functional Cases&lt;br /&gt;
#########################&lt;br /&gt;
describe GradesHelper, type: :feature do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    create(:team_user, team: @assignment_team, user: User.find(@participant.user_id))&lt;br /&gt;
    login_as(@participant.name)&lt;br /&gt;
    visit '/student_task/list'&lt;br /&gt;
    expect(page).to have_content 'final2'&lt;br /&gt;
    click_link('final2')&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 1' do&lt;br /&gt;
    it &amp;quot;Javascript should work on grades Alternate View&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Alternate View'&lt;br /&gt;
      expect(page).to have_content 'Review'&lt;br /&gt;
      click_link('Alternate View')&lt;br /&gt;
      expect(page).to have_content 'Grade for submission'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 2' do&lt;br /&gt;
    it &amp;quot;Student should be able to view scores&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Your scores'&lt;br /&gt;
      click_link('Your scores')&lt;br /&gt;
      expect(page).to have_content '0.00%'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after the refactoring was completed. The following test cases exercise the functionality of the refactored code sections. Automated versions of these tests were written using RSpec, FactoryGirl, and Selenium and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
===Functional Tests===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test whether the JavaScript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student account used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, verify that the &amp;quot;Sort by total review score&amp;quot; button works without throwing any JavaScript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, verify that the &amp;quot;Sort by total review score&amp;quot; button works without throwing any JavaScript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it works after its logic was refactored into the helper&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Unit Tests===&lt;br /&gt;
The project assignment required unit tests for helper methods that previously had no associated tests. These tests verify that each discrete method returns the proper values. The tests were written using RSpec and FactoryGirl and are listed in the code above. A summary of the test cases is given below.&lt;br /&gt;
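The reputation thresholds exercised by the final two unit tests imply a simple bucketing from reputation value to CSS class. The following sketch reproduces the hamer mapping implied by the tested inputs and outputs; css_for_hamer is a hypothetical name, the real helper is get_css_style_for_hamer_reputation in grades_helper.rb, and its exact boundary handling may differ.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Sketch of the hamer reputation-to-CSS bucketing, inferred from the test
# data (-0.1 and 0 map to c1, 0.5 and 1 to c2, 1.5 to c3, 2 to c4, 2.1 to c5).
def css_for_hamer(rep)
  if    rep.between?(-Float::INFINITY, 0) then 'c1'
  elsif rep.between?(0, 1)                then 'c2'
  elsif rep.between?(1, 1.5)              then 'c3'
  elsif rep.between?(1.5, 2)              then 'c4'
  else 'c5'
  end
end

[-0.1, 0, 0.5, 1, 1.5, 2, 2.1].map { |r| css_for_hamer(r) }
# => ["c1", "c1", "c2", "c2", "c3", "c4", "c5"]
```
&lt;br /&gt;
The lauw variant would use the same structure with the tighter thresholds from its test data (0, 0.4, 0.6, and 0.8).&lt;br /&gt;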
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameters&lt;br /&gt;
! Purpose&lt;br /&gt;
! Tested Scenarios&lt;br /&gt;
|-&lt;br /&gt;
| get_accordion_title&lt;br /&gt;
| last_topic, new_topic&lt;br /&gt;
| Render a proper partial for questionnaires&lt;br /&gt;
| last_topic is nil (renders is_first: true); last_topic differs from next_topic (renders is_first: false); last_topic equals next_topic (renders nothing)&lt;br /&gt;
|-&lt;br /&gt;
| has_team_and_metareview?&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Determine whether a team and reviews exist for a given assignment&lt;br /&gt;
| find an assignment given an assignment id; find an assignment given a participant id; return 0 for a case with neither a team nor a metareview; return 1 for a case with a team but no metareview; return 1 for a case with a metareview but no team; return 2 for a case with both a team and a metareview&lt;br /&gt;
|-&lt;br /&gt;
| participant&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Returns a participant given an id&lt;br /&gt;
| find a newly created participant &lt;br /&gt;
|-&lt;br /&gt;
| rscore_review&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the reviews for a participant based on id&lt;br /&gt;
| return reviews from a participant with reviews; return nothing for a participant without reviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_metareview&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the metareviews for a participant based on id&lt;br /&gt;
| return metareviews from a participant with metareviews; return nothing for a participant without metareviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_feedback&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the author feedback for a participant based on id&lt;br /&gt;
| returns feedback for a participant with feedback; return nothing for a participant without feedback.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_teammate&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns teammate reviews for a participant based on id&lt;br /&gt;
| returns teammate reviews for a participant with teammate reviews; return nothing for a participant without teammate reviews.&lt;br /&gt;
|-&lt;br /&gt;
| p_total_score&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns the grade for a participant based on id&lt;br /&gt;
| returns the grade for a participant with a grade; returns total_score for a participant without a grade.&lt;br /&gt;
|-&lt;br /&gt;
| p_title&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns a pertinent title for the grades view&lt;br /&gt;
| returns text title for a participant with grade; returns nothing for a participant without grade.&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_hamer_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.5 yields CSS c2; rep 1 yields CSS c2; rep 1.5 yields CSS c3; rep 2 yields CSS c4; rep 2.1 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_lauw_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.2 yields CSS c2; rep 0.4 yields CSS c2; rep 0.6 yields CSS c3; rep 0.8 yields CSS c4; rep 0.9 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107723</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107723"/>
		<updated>2017-04-01T00:17:33Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Create test file named grades_helper_spec.rb in spec/helpers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, websites, etc.). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open source software project. This helps them gain exposure to the technologies used in the project as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks was:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to the helper file and assign self-explanatory method names&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code (such as L8-22 in _participant.html.erb) to the helper file and assign self-explanatory method names&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that these files were no longer used in the application, and the overall task list was updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code (such as L8-22 in _participant.html.erb) to the helper file and assign self-explanatory method names&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
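At its core, col_sort above computes a per-column review total and then runs an insertion sort over the columns, flipping direction on each click via the `lesser` toggle. The strategy can be sketched in plain Ruby (illustrative only, not project code; the DOM column swap performed by jQuery.moveColumn is modeled here as an array swap):

```ruby
# Plain-Ruby sketch of col_sort's strategy: insertion sort over column
# totals, with a direction flag mirroring the `lesser` toggle.
def sort_totals(totals, ascending)
  arr = totals.dup
  (1...arr.length).each do |i|
    j = i
    # Walk the new element left while it is out of order for the chosen direction.
    while j > 0 && (ascending ? arr[j] < arr[j - 1] : arr[j] > arr[j - 1])
      arr[j], arr[j - 1] = arr[j - 1], arr[j] # corresponds to jQuery.moveColumn
      j -= 1
    end
  end
  arr
end

p sort_totals([75, 90, 60], false) # descending: [90, 75, 60]
p sort_totals([75, 90, 60], true)  # ascending:  [60, 75, 90]
```

Each swap in the real function must also re-query the table (the repeated `tables = ...` lines above) because moveColumn mutates the DOM that the selectors were built from.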
====Move logic into the helper file under self-explanatory method names (e.g. lines 8-22 of _participant.html.erb)====&lt;br /&gt;
The following lines of code were removed from _participant.html.erb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module (app/helpers/grades_helper.rb), in order to separate controller logic from the view.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
     @participant = Participant.find(params[:id])&lt;br /&gt;
     @pscore = @participant.scores(@questions)&lt;br /&gt;
     if @pscore[:review]&lt;br /&gt;
         @rscore_review=Rscore.new(@pscore,:review)&lt;br /&gt;
         end&lt;br /&gt;
     @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
     @participant = Participant.find(params[:id])&lt;br /&gt;
     @pscore = @participant.scores(@questions)&lt;br /&gt;
     if @pscore[:metareview]&lt;br /&gt;
         @rscore_metareview=Rscore.new(@pscore,:metareview)&lt;br /&gt;
         end&lt;br /&gt;
     @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
     @participant = Participant.find(params[:id])&lt;br /&gt;
     @pscore = @participant.scores(@questions)&lt;br /&gt;
     if @pscore[:feedback]&lt;br /&gt;
         @rscore_feedback=Rscore.new(@pscore,:feedback)&lt;br /&gt;
         end&lt;br /&gt;
     @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
     @participant = Participant.find(params[:id])&lt;br /&gt;
     @pscore = @participant.scores(@questions)&lt;br /&gt;
     if @pscore[:teammate]&lt;br /&gt;
         @rscore_teammate=Rscore.new(@pscore,:teammate)&lt;br /&gt;
         end&lt;br /&gt;
     @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
     @participant = Participant.find(params[:id])&lt;br /&gt;
     @pscore = @participant.scores(@questions)&lt;br /&gt;
     if @participant.grade&lt;br /&gt;
         @total_score = participant.grade&lt;br /&gt;
         else&lt;br /&gt;
             @total_score = @pscore[:total_score]&lt;br /&gt;
             end&lt;br /&gt;
     @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
     @participant = Participant.find(params[:id])&lt;br /&gt;
     if @participant.grade&lt;br /&gt;
         @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
         else&lt;br /&gt;
             @title = nil&lt;br /&gt;
             end&lt;br /&gt;
     @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
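Each helper above repeats the Participant.find and scores lookups on every call, so a view that calls several of these methods re-runs the same queries. A possible follow-up (not part of the committed change) would be to memoize the shared lookups with Ruby's ||= idiom. A minimal plain-Ruby sketch, using illustrative stand-ins (FakeStore and GradesHelperSketch are not Expertiza classes):

```ruby
# Sketch of ||= memoization: the store is queried once, later calls reuse
# the cached value. FakeStore counts lookups so the effect is observable.
class FakeStore
  attr_reader :find_count

  def initialize(records)
    @records = records
    @find_count = 0
  end

  def find(id)
    @find_count += 1
    @records.fetch(id)
  end
end

class GradesHelperSketch
  def initialize(store, id)
    @store = store
    @id = id
  end

  # ||= performs the lookup on the first call only; subsequent calls
  # return the cached @participant.
  def participant
    @participant ||= @store.find(@id)
  end
end

store = FakeStore.new(1 => 'alice')
helper = GradesHelperSketch.new(store, 1)
3.times { helper.participant }
puts store.find_count # => 1
```

Applied to the real helpers, `@participant ||= Participant.find(params[:id])` would let rscore_review, rscore_metareview, etc. share one lookup per request instead of four.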
&lt;br /&gt;
The following lines in the _participant.html.erb file were then updated to use the new helper methods:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
require 'selenium-webdriver'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @assignment2 = create(:assignment, name: 'whatever', max_team_size: 3)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
    @new_participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment2)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @questionnaire = create(:questionnaire)&lt;br /&gt;
    @metareview_questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
    @author_feedback_questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
    @teammate_review_questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @metareview_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @author_feedback_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @teammate_review_questionnaire)&lt;br /&gt;
&lt;br /&gt;
    @questions = {}&lt;br /&gt;
    @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
    @questions[@metareview_questionnaire.symbol] = @metareview_questionnaire.questions&lt;br /&gt;
    @questions[@author_feedback_questionnaire.symbol] = @author_feedback_questionnaire.questions&lt;br /&gt;
    @questions[@teammate_review_questionnaire.symbol] = @teammate_review_questionnaire.questions&lt;br /&gt;
&lt;br /&gt;
    create(&lt;br /&gt;
      :assignment_due_date,&lt;br /&gt;
      assignment: @assignment2,&lt;br /&gt;
      deadline_type: @deadline_type,&lt;br /&gt;
      submission_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_of_review_allowed_id: @deadline_right.id,&lt;br /&gt;
      due_at: '2015-12-30 23:30:12'&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview after a view action' do&lt;br /&gt;
      @assignment2.max_team_size = 1&lt;br /&gt;
      @assignment2.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and with a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_X' do&lt;br /&gt;
    it 'should return records of each review type if available' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(rscore_review).to_not be_nil&lt;br /&gt;
      expect(rscore_metareview).to_not be_nil&lt;br /&gt;
      expect(rscore_feedback).to_not eq(nil)&lt;br /&gt;
      expect(rscore_teammate).to_not eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if review types are not available' do&lt;br /&gt;
      params[:id] = @participant.id&lt;br /&gt;
      expect(rscore_review).to be_nil&lt;br /&gt;
      expect(rscore_metareview).to be_nil&lt;br /&gt;
      expect(rscore_feedback).to be_nil&lt;br /&gt;
      expect(rscore_teammate).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      graded_participant = create(:participant, grade: 90)&lt;br /&gt;
      create(:assignment_questionnaire, user_id: graded_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = graded_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      @new_participant.grade = 90&lt;br /&gt;
      @new_participant.save&lt;br /&gt;
      expect(p_title).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_title).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_X_reputation' do&lt;br /&gt;
    hamer_input = [-0.1, 0, 0.5, 1, 1.5, 2, 2.1]&lt;br /&gt;
    lauw_input = [-0.1, 0, 0.2, 0.4, 0.6, 0.8, 0.9]&lt;br /&gt;
    output = %w(c1 c1 c2 c2 c3 c4 c5)&lt;br /&gt;
    it 'should return correct css for hamer reputations' do&lt;br /&gt;
      hamer_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_hamer_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for lauw reputations' do&lt;br /&gt;
      lauw_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_lauw_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
#########################&lt;br /&gt;
# Functional Cases&lt;br /&gt;
#########################&lt;br /&gt;
describe GradesHelper, type: :feature do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    create(:team_user, team: @assignment_team, user: User.find(@participant.user_id))&lt;br /&gt;
    login_as(@participant.name)&lt;br /&gt;
    visit '/student_task/list'&lt;br /&gt;
    expect(page).to have_content 'final2'&lt;br /&gt;
    click_link('final2')&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 1' do&lt;br /&gt;
    it &amp;quot;Javascript should work on grades Alternate View&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Alternate View'&lt;br /&gt;
      expect(page).to have_content 'Review'&lt;br /&gt;
      click_link('Alternate View')&lt;br /&gt;
      expect(page).to have_content 'Grade for submission'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 2' do&lt;br /&gt;
    it &amp;quot;Student should be able to view scores&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Your scores'&lt;br /&gt;
      click_link('Your scores')&lt;br /&gt;
      expect(page).to have_content '0.00%'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
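The get_css_style_for_hamer_reputation and get_css_style_for_lauw_reputation expectations in the spec above pin down a banded mapping from reputation value to CSS class. A plain-Ruby reconstruction is sketched below; note the cutoffs are inferred only from the tested data points, so Expertiza's actual thresholds may fall anywhere between adjacent points:

```ruby
# Banded mapping inferred from the spec's data points (the exact threshold
# values are assumptions; only the tested inputs are guaranteed).
HAMER_BANDS = [[0.0, 'c1'], [1.0, 'c2'], [1.5, 'c3'], [2.0, 'c4']].freeze
LAUW_BANDS  = [[0.0, 'c1'], [0.4, 'c2'], [0.6, 'c3'], [0.8, 'c4']].freeze

# Return the CSS class for the first band whose upper limit covers the value.
def css_for(value, bands)
  bands.each { |limit, css| return css if value <= limit }
  'c5' # anything above the last band
end

puts [-0.1, 0, 0.5, 1, 1.5, 2, 2.1].map { |v| css_for(v, HAMER_BANDS) }.join(' ')
# => c1 c1 c2 c2 c3 c4 c5
puts [-0.1, 0, 0.2, 0.4, 0.6, 0.8, 0.9].map { |v| css_for(v, LAUW_BANDS) }.join(' ')
# => c1 c1 c2 c2 c3 c4 c5
```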
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after the refactoring was completed.  The following test cases exercise the functionality of the refactored code sections.  Automated versions of these tests were written using RSpec/FactoryGirl/Selenium and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
===Functional Tests===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test to see if javascript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it works correctly after the logic was refactored into the helper&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Unit Tests===&lt;br /&gt;
Per the project assignment, unit tests were required for helper methods that previously had none. These tests verify that each discrete method returns the proper values.  The tests were written using RSpec/FactoryGirl and are listed in the code above.  A summary of the test cases is given below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameters&lt;br /&gt;
! Purpose&lt;br /&gt;
! Tested Scenarios&lt;br /&gt;
|-&lt;br /&gt;
| get_accordion_title&lt;br /&gt;
| last_topic, new_topic&lt;br /&gt;
| Render a proper partial for questionnaires&lt;br /&gt;
| last_topic is nil (renders is_first: true); last_topic differs from next_topic (renders is_first: false); last_topic equals next_topic (renders nothing)&lt;br /&gt;
|-&lt;br /&gt;
| has_team_and_metareview?&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Determine whether a team and reviews exist for a given assignment&lt;br /&gt;
| find an assignment given an assignment id; find an assignment given a participant id; return 0 for a case with neither a team nor a metareview; return 1 for a case with a team but no metareview; return 1 for a case with a metareview but no team; return 2 for a case with both a team and a metareview&lt;br /&gt;
|-&lt;br /&gt;
| participant&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Returns a participant given an id&lt;br /&gt;
| find a newly created participant &lt;br /&gt;
|-&lt;br /&gt;
| rscore_review&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the reviews for a participant based on id&lt;br /&gt;
| return reviews from a participant with reviews; return nothing for a participant without reviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_metareview&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the metareviews for a participant based on id&lt;br /&gt;
| return metareviews from a participant with metareviews; return nothing for a participant without metareviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_feedback&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the author feedback for a participant based on id&lt;br /&gt;
| returns feedback for a participant with feedback; return nothing for a participant without feedback.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_teammate&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns teammate reviews for a participant based on id&lt;br /&gt;
| returns teammate reviews for a participant with teammate reviews; return nothing for a participant without teammate reviews.&lt;br /&gt;
|-&lt;br /&gt;
| p_total_score&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns grade for a participant based on id&lt;br /&gt;
| returns the grade for a participant with a grade; returns total_score for a participant without a grade.&lt;br /&gt;
|-&lt;br /&gt;
| p_title&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns a pertinent title for the grades view&lt;br /&gt;
| returns text title for a participant with grade; returns nothing for a participant without grade.&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_hamer_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.5 yields CSS c2; rep 1 yields CSS c2; rep 1.5 yields CSS c3; rep 2 yields CSS c4; rep 2.1 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_lauw_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.2 yields CSS c2; rep 0.4 yields CSS c2; rep 0.6 yields CSS c3; rep 0.8 yields CSS c4; rep 0.9 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107722</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107722"/>
		<updated>2017-04-01T00:13:33Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Create test file named grades_helper_spec.rb in spec/helpers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, websites, etc.). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort that is part of the continuous improvement of Expertiza, students get an opportunity to work on an open-source software project. This gives them exposure to the technologies used in the project as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks were:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For file: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that they were no longer used in the application, so the overall tasks were updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* Move logic from files _participant.html.erb and view_team.html.erb to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move logic from files _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser;&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            var inner_table = jQuery(this).find('table.tbl_questlist');&lt;br /&gt;
            var hidden_table = inner_table.eq(0).find('tr');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
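The core of col_sort above is an insertion sort over the column totals, with a direction flag that flips on each call. A minimal pure-Ruby distillation of that logic follows; it is an illustration only, since the real code reorders table cells in the DOM with jQuery.moveColumn:&lt;br /&gt;

```ruby
# Pure-Ruby distillation of the compare/col_sort logic above. This is an
# illustration only: the real JavaScript reorders table columns in the
# DOM, while this sketch just sorts an array of column totals.

# Mirrors the JavaScript compare(): the direction flag picks the operator.
def compare(a, b, lesser)
  lesser ? b > a : a > b
end

# Insertion sort that swaps neighbours whenever compare() says so,
# giving ascending order when lesser is true and descending otherwise.
def sort_totals(totals, lesser)
  a = totals.dup
  (1...a.length).each do |i|
    j = i
    while j > 0 and compare(a[j], a[j - 1], lesser)
      a[j], a[j - 1] = a[j - 1], a[j]
      j -= 1
    end
  end
  a
end
```

Toggling the flag between calls, as col_sort does with the lesser variable, alternates the table between ascending and descending order by review total.&lt;br /&gt;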
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in the app/helpers/grades_helper.rb file, in order to separate controller logic from the view.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_review = Rscore.new(@pscore, :review) if @pscore[:review]&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_metareview = Rscore.new(@pscore, :metareview) if @pscore[:metareview]&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_feedback = Rscore.new(@pscore, :feedback) if @pscore[:feedback]&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_teammate = Rscore.new(@pscore, :teammate) if @pscore[:teammate]&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    # use the already-loaded @participant rather than re-querying via the participant helper&lt;br /&gt;
    @total_score = @participant.grade || @pscore[:total_score]&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
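One observation on the helpers above: each one re-runs Participant.find. A possible refinement, shown only as a sketch and not part of the committed change, is to memoize the lookup so it happens at most once per request. FakeStore below stands in for ActiveRecord to keep the sketch self-contained:&lt;br /&gt;

```ruby
# Sketch of memoizing the shared Participant lookup. FakeStore is a
# stand-in for ActiveRecord so that the example runs on its own.
class FakeStore
  @@finds = 0
  def self.find(id)
    @@finds += 1        # count how many lookups actually happen
    { id: id }
  end

  def self.finds
    @@finds
  end
end

module MemoizedLookup
  def params
    @params ||= {}
  end

  # ||= caches the record, so repeated helper calls hit the store once
  def participant
    @participant ||= FakeStore.find(params[:id])
  end
end

class DemoHelper
  include MemoizedLookup
end

helper = DemoHelper.new
helper.params[:id] = 42
helper.participant
helper.participant   # second call reuses the cached record
```

With this pattern, rscore_review, p_total_score, and the rest could call participant instead of repeating Participant.find(params[:id]).&lt;br /&gt;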
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also replaced so that they use the helper methods created above.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
require 'selenium-webdriver'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @assignment2 = create(:assignment, name: 'whatever', max_team_size: 3)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
    @new_participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment2)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @questionnaire = create(:questionnaire)&lt;br /&gt;
    @metareview_questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
    @author_feedback_questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
    @teammate_review_questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @metareview_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @author_feedback_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @teammate_review_questionnaire)&lt;br /&gt;
&lt;br /&gt;
    @questions = {}&lt;br /&gt;
    @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
    @questions[@metareview_questionnaire.symbol] = @metareview_questionnaire.questions&lt;br /&gt;
    @questions[@author_feedback_questionnaire.symbol] = @author_feedback_questionnaire.questions&lt;br /&gt;
    @questions[@teammate_review_questionnaire.symbol] = @teammate_review_questionnaire.questions&lt;br /&gt;
&lt;br /&gt;
    create(&lt;br /&gt;
      :assignment_due_date,&lt;br /&gt;
      assignment: @assignment2,&lt;br /&gt;
      deadline_type: @deadline_type,&lt;br /&gt;
      submission_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_of_review_allowed_id: @deadline_right.id,&lt;br /&gt;
      due_at: '2015-12-30 23:30:12'&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview after a view action' do&lt;br /&gt;
      @assignment2.max_team_size = 1&lt;br /&gt;
      @assignment2.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and with a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_X' do&lt;br /&gt;
    it 'should return records of each review type if available' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(rscore_review).to_not be_nil&lt;br /&gt;
      expect(rscore_metareview).to_not be_nil&lt;br /&gt;
      expect(rscore_feedback).to_not eq(nil)&lt;br /&gt;
      expect(rscore_teammate).to_not eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if review types are not available' do&lt;br /&gt;
      params[:id] = @participant.id&lt;br /&gt;
      expect(rscore_review).to be_nil&lt;br /&gt;
      expect(rscore_metareview).to be_nil&lt;br /&gt;
      expect(rscore_feedback).to be_nil&lt;br /&gt;
      expect(rscore_teammate).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      graded_participant = create(:participant, grade: 90)&lt;br /&gt;
      create(:assignment_questionnaire, user_id: graded_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = graded_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      @new_participant.grade = 90&lt;br /&gt;
      @new_participant.save&lt;br /&gt;
      expect(p_title).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_title).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_X_reputation' do&lt;br /&gt;
    hamer_input = [-0.1, 0, 0.5, 1, 1.5, 2, 2.1]&lt;br /&gt;
    lauw_input = [-0.1, 0, 0.2, 0.4, 0.6, 0.8, 0.9]&lt;br /&gt;
    output = %w(c1 c1 c2 c2 c3 c4 c5)&lt;br /&gt;
    it 'should return correct css for hamer reputations' do&lt;br /&gt;
      hamer_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_hamer_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for lauw reputations' do&lt;br /&gt;
      lauw_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_lauw_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
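The has_team_and_metareview? expectations above each assert a hash with has_team and has_metareview flags plus a true_num count. A pure-Ruby sketch of that return shape follows (hypothetical helper name; the real method derives the flags from the assignment's team size and metareview deadline):&lt;br /&gt;

```ruby
# Hypothetical sketch of the hash shape asserted by the
# has_team_and_metareview? specs: two flags plus a count of true flags.
def team_and_metareview_summary(has_team, has_metareview)
  {
    has_team: has_team,
    has_metareview: has_metareview,
    true_num: [has_team, has_metareview].count(true)
  }
end
```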
&lt;br /&gt;
&lt;br /&gt;
Functional test cases were also written:&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :feature do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    create(:team_user, team: @assignment_team, user: User.find(@participant.user_id))&lt;br /&gt;
    login_as(@participant.name)&lt;br /&gt;
    visit '/student_task/list'&lt;br /&gt;
    expect(page).to have_content 'final2'&lt;br /&gt;
    click_link('final2')&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 1' do&lt;br /&gt;
    it &amp;quot;Javascript should work on grades Alternate View&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Alternate View'&lt;br /&gt;
      expect(page).to have_content 'Review'&lt;br /&gt;
      click_link('Alternate View')&lt;br /&gt;
      expect(page).to have_content 'Grade for submission'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 2' do&lt;br /&gt;
    it &amp;quot;Student should be able to view scores&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Your scores'&lt;br /&gt;
      click_link('Your scores')&lt;br /&gt;
      expect(page).to have_content '0.00%'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after the refactoring was completed.  The following test cases exercise the functionality of the code sections that were refactored.  Automated versions of these tests were written using RSpec/FactoryGirl/Selenium and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
===Functional Tests===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test to see if javascript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure student to test with has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it works after its logic code was refactored&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Unit Tests===&lt;br /&gt;
Unit tests were required per the project assignment for helper methods which previously had no associated unit tests. These tests verified that each discrete method returned the proper values.  The tests were written using RSpec/FactoryGirl and are listed in the code above.  A summary of the test cases is given below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameters&lt;br /&gt;
! Purpose&lt;br /&gt;
! Tested Scenarios&lt;br /&gt;
|-&lt;br /&gt;
| get_accordion_title&lt;br /&gt;
| last_topic, new_topic&lt;br /&gt;
| Render a proper partial for questionnaires&lt;br /&gt;
| last_topic is nil; last_topic differs from new_topic; last_topic equals new_topic&lt;br /&gt;
|-&lt;br /&gt;
| has_team_and_metareview?&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Determine whether a team and reviews exist for a given assignment&lt;br /&gt;
| find an assignment given assignment_id; find an assignment given participant_id; return 1 for a case with a team but no metareview; return 1 for a case with a metareview but no team; return 2 for a case with a team and a metareview&lt;br /&gt;
|-&lt;br /&gt;
| participant&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Returns a participant given an id&lt;br /&gt;
| find a newly created participant &lt;br /&gt;
|-&lt;br /&gt;
| rscore_review&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the reviews for a participant based on id&lt;br /&gt;
| return reviews from a participant with reviews; return nothing for a participant without reviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_metareview&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the metareviews for a participant based on id&lt;br /&gt;
| return metareviews from a participant with metareviews; return nothing for a participant without metareviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_feedback&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the author feedback for a participant based on id&lt;br /&gt;
| returns feedback for a participant with feedback; return nothing for a participant without feedback.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_teammate&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns teammate reviews for a participant based on id&lt;br /&gt;
| returns teammate reviews for a participant with teammate reviews; return nothing for a participant without teammate reviews.&lt;br /&gt;
|-&lt;br /&gt;
| p_total_score&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns grade for a participant based on id&lt;br /&gt;
| returns the grade for a participant with a grade; returns total_score for a participant without a grade.&lt;br /&gt;
|-&lt;br /&gt;
| p_title&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns a pertinent title for the grades view&lt;br /&gt;
| returns text title for a participant with grade; returns nothing for a participant without grade.&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_hamer_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.5 yields CSS c2; rep 1 yields CSS c2; rep 1.5 yields CSS c3; rep 2 yields CSS c4; rep 2.1 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_lauw_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.2 yields CSS c2; rep 0.4 yields CSS c2; rep 0.6 yields CSS c3; rep 0.8 yields CSS c4; rep 0.9 yields CSS c5&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107721</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107721"/>
		<updated>2017-04-01T00:13:10Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Create test file named grades_helper_spec.rb in spec/helpers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc.). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open source software project. This helps them gain exposure to the technologies used in the project as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks was:&lt;br /&gt;
* For files _scores_author_feedback.html.erb, _scores_metareview.html.erb, and _scores_submitted_work.html.erb:&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For file _scores_header.html.erb:&lt;br /&gt;
# Move logical code (such as L43-96) to the helper file and assign a self-explanatory method name&lt;br /&gt;
* For files _participant.html.erb and view_team.html.erb:&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code (such as L8-22 in _participant.html.erb) to the helper file and assign a self-explanatory method name&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team found that these files were no longer used in the application, so the overall task list was updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code (such as L8-22 in _participant.html.erb) to the helper file and assign a self-explanatory method name&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the javascript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
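The col_sort function above is, at its core, an insertion sort over table columns keyed by each review's total score, with the direction toggling on successive clicks. Separated from the DOM manipulation, the idea can be sketched in Ruby (an illustration only, not the shipped code; sort_columns_by_total and its column-array input are hypothetical names):&lt;br /&gt;

```ruby
# Illustration of the idea behind col_sort: total each review column
# (counting only positive cells, as the original does) and insertion-sort
# the columns by that total. `ascending` mirrors the `lesser` toggle.
def sort_columns_by_total(columns, ascending)
  totals = columns.map { |col| col.select { |v| v.to_i > 0 }.sum }
  cmp = ascending ? ->(a, b) { a < b } : ->(a, b) { a > b }
  order = columns.zip(totals)
  (1...order.size).each do |i|
    j = i
    # Shift the column left while it is out of order for the chosen direction
    while j > 0 && cmp.call(order[j][1], order[j - 1][1])
      order[j], order[j - 1] = order[j - 1], order[j]
      j -= 1
    end
  end
  order.map(&:first)
end
```

Here each element of columns stands for one review's cell values; the real implementation instead swaps live table columns with jQuery.moveColumn after each comparison, which is why it re-reads the table inside the loop.&lt;br /&gt;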
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in the app/helpers/grades_helper.rb file, in order to separate controller logic from the view.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = @participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
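Each rscore_* helper above repeats the same Participant.find and scores lookup. One way this could be DRYed further is to memoize the shared lookup and derive each score kind from it; the sketch below is an illustration under assumed interfaces (the GradesHelperSketch module, current_pscore memoizer, and the participant/questions accessors are hypothetical), not the committed Expertiza code:&lt;br /&gt;

```ruby
# Sketch: memoize the shared participant-score lookup once, then build each
# Rscore from the cached hash. Rscore here is a minimal stand-in for the
# real Expertiza model (assumption).
class Rscore
  attr_reader :pscore, :kind
  def initialize(pscore, kind)
    @pscore = pscore
    @kind = kind
  end
end

module GradesHelperSketch
  # Hypothetical memoizer: computes participant.scores at most once
  def current_pscore
    @current_pscore ||= participant.scores(questions)
  end

  # Returns an Rscore for the given kind, or nil when that score is absent
  def rscore_for(kind)
    current_pscore[kind] ? Rscore.new(current_pscore, kind) : nil
  end

  def rscore_review;     rscore_for(:review);     end
  def rscore_metareview; rscore_for(:metareview); end
  def rscore_feedback;   rscore_for(:feedback);   end
  def rscore_teammate;   rscore_for(:teammate);   end
end
```

A host that supplies participant and questions (in the real helper these come from params[:id] and the controller) then gets all four readers from a single lookup.&lt;br /&gt;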
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also updated to use the helper methods created above.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
require 'selenium-webdriver'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @assignment2 = create(:assignment, name: 'whatever', max_team_size: 3)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
    @new_participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment2)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @questionnaire = create(:questionnaire)&lt;br /&gt;
    @metareview_questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
    @author_feedback_questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
    @teammate_review_questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @metareview_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @author_feedback_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @teammate_review_questionnaire)&lt;br /&gt;
&lt;br /&gt;
    @questions = {}&lt;br /&gt;
    @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
    @questions[@metareview_questionnaire.symbol] = @metareview_questionnaire.questions&lt;br /&gt;
    @questions[@author_feedback_questionnaire.symbol] = @author_feedback_questionnaire.questions&lt;br /&gt;
    @questions[@teammate_review_questionnaire.symbol] = @teammate_review_questionnaire.questions&lt;br /&gt;
&lt;br /&gt;
    create(&lt;br /&gt;
      :assignment_due_date,&lt;br /&gt;
      assignment: @assignment2,&lt;br /&gt;
      deadline_type: @deadline_type,&lt;br /&gt;
      submission_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_of_review_allowed_id: @deadline_right.id,&lt;br /&gt;
      due_at: '2015-12-30 23:30:12'&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview after a view action' do&lt;br /&gt;
      @assignment2.max_team_size = 1&lt;br /&gt;
      @assignment2.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and with a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_X' do&lt;br /&gt;
    it 'should return records of each review type if available' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(rscore_review).to_not be_nil&lt;br /&gt;
      expect(rscore_metareview).to_not be_nil&lt;br /&gt;
      expect(rscore_feedback).to_not eq(nil)&lt;br /&gt;
      expect(rscore_teammate).to_not eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if review types are not available' do&lt;br /&gt;
      params[:id] = @participant.id&lt;br /&gt;
      expect(rscore_review).to be_nil&lt;br /&gt;
      expect(rscore_metareview).to be_nil&lt;br /&gt;
      expect(rscore_feedback).to be_nil&lt;br /&gt;
      expect(rscore_teammate).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      graded_participant = create(:participant, grade: 90)&lt;br /&gt;
      create(:assignment_questionnaire, user_id: graded_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = graded_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      @new_participant.grade = 90&lt;br /&gt;
      @new_participant.save&lt;br /&gt;
      expect(p_title).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_title).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_X_reputation' do&lt;br /&gt;
    hamer_input = [-0.1, 0, 0.5, 1, 1.5, 2, 2.1]&lt;br /&gt;
    lauw_input = [-0.1, 0, 0.2, 0.4, 0.6, 0.8, 0.9]&lt;br /&gt;
    output = %w(c1 c1 c2 c2 c3 c4 c5)&lt;br /&gt;
    it 'should return correct css for hamer reputations' do&lt;br /&gt;
      hamer_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_hamer_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for lauw reputations' do&lt;br /&gt;
      lauw_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_lauw_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Functional Cases&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :feature do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    create(:team_user, team: @assignment_team, user: User.find(@participant.user_id))&lt;br /&gt;
    login_as(@participant.name)&lt;br /&gt;
    visit '/student_task/list'&lt;br /&gt;
    expect(page).to have_content 'final2'&lt;br /&gt;
    click_link('final2')&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 1' do&lt;br /&gt;
    it &amp;quot;Javascript should work on grades Alternate View&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Alternate View'&lt;br /&gt;
      expect(page).to have_content 'Review'&lt;br /&gt;
      click_link('Alternate View')&lt;br /&gt;
      expect(page).to have_content 'Grade for submission'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 2' do&lt;br /&gt;
    it &amp;quot;Student should be able to view scores&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Your scores'&lt;br /&gt;
      click_link('Your scores')&lt;br /&gt;
      expect(page).to have_content '0.00%'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after the refactoring was completed.  The following test cases exercise the functionality of the code sections that were refactored.  Automated versions of these tests were written using RSpec/FactoryGirl/Selenium and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
===Functional Tests===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test to see if javascript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it still works after the logic code was refactored&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Unit Tests===&lt;br /&gt;
Unit tests were required, per the project assignment, for helper methods that previously had no associated unit tests. These tests verify that each discrete method returns the proper values.  The tests were written using RSpec/FactoryGirl and are listed in the code above.  A summary of the test cases is given below.&lt;br /&gt;
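As an illustration of the threshold logic exercised by the two reputation-CSS tests, the mapping can be sketched as a framework-free Ruby method. The cutoff values below are inferred from the test inputs and expected outputs in the spec above; they are an assumption for illustration, not the actual Expertiza implementation.&lt;br /&gt;

```ruby
# Hypothetical sketch of the reputation-to-CSS mapping exercised by the
# get_css_style_for_hamer_reputation / get_css_style_for_lauw_reputation
# unit tests. The cutoffs are inferred from the spec's test vectors,
# not taken from the Expertiza source.
def css_style_for_reputation(value, cutoffs)
  # cutoffs maps an upper bound (inclusive) to a CSS class; anything
  # above the last bound falls through to 'c5'.
  cutoffs.each { |bound, css| return css if value <= bound }
  'c5'
end

HAMER_CUTOFFS = { 0 => 'c1', 1 => 'c2', 1.5 => 'c3', 2 => 'c4' }
LAUW_CUTOFFS  = { 0 => 'c1', 0.4 => 'c2', 0.6 => 'c3', 0.8 => 'c4' }
```

Running the spec's hamer input [-0.1, 0, 0.5, 1, 1.5, 2, 2.1] through this sketch reproduces the expected %w(c1 c1 c2 c2 c3 c4 c5), and likewise for the lauw input.&lt;br /&gt;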
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameters&lt;br /&gt;
! Purpose&lt;br /&gt;
! Tested Scenarios&lt;br /&gt;
|-&lt;br /&gt;
| get_accordion_title&lt;br /&gt;
| last_topic, new_topic&lt;br /&gt;
| Render a proper partial for questionnaires&lt;br /&gt;
| last_topic is nil (render with is_first true); last_topic differs from new_topic (render with is_first false); last_topic equals new_topic (render nothing)&lt;br /&gt;
|-&lt;br /&gt;
| has_team_and_metareview?&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Determine whether a team and reviews exist for a given assignment&lt;br /&gt;
| find an assignment given assignment_id; find an assignment given participant_id; return 0 for a case with neither a team nor a metareview; return 1 for a case with a team but no metareview; return 1 for a case with a metareview but no team; return 2 for a case with a team and a metareview&lt;br /&gt;
|-&lt;br /&gt;
| participant&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Returns a participant given an id&lt;br /&gt;
| find a newly created participant &lt;br /&gt;
|-&lt;br /&gt;
| rscore_review&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the reviews for a participant based on id&lt;br /&gt;
| return reviews from a participant with reviews; return nothing for a participant without reviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_metareview&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the metareviews for a participant based on id&lt;br /&gt;
| return metareviews from a participant with metareviews; return nothing for a participant without metareviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_feedback&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the author feedback for a participant based on id&lt;br /&gt;
| returns feedback for a participant with feedback; return nothing for a participant without feedback.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_teammate&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns teammate reviews for a participant based on id&lt;br /&gt;
| returns teammate reviews for a participant with teammate reviews; return nothing for a participant without teammate reviews.&lt;br /&gt;
|-&lt;br /&gt;
| p_total_score&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns grade for a participant based on id&lt;br /&gt;
| returns the grade for a participant with a grade; returns total_score for a participant without a grade.&lt;br /&gt;
|-&lt;br /&gt;
| p_title&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns a pertinent title for the grades view&lt;br /&gt;
| returns text title for a participant with grade; returns nothing for a participant without grade.&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_hamer_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.5 yields CSS c2; rep 1 yields CSS c2; rep 1.5 yields CSS c3; rep 2 yields CSS c4; rep 2.1 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_lauw_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.2 yields CSS c2; rep 0.4 yields CSS c2; rep 0.6 yields CSS c3; rep 0.8 yields CSS c4; rep 0.9 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107720</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107720"/>
		<updated>2017-04-01T00:12:21Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Write test cases for all methods in grades_helper.rb by using factories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open-source software project. This gives them exposure to the technologies used in the project, as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks was:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that these files were no longer used in the application, and the overall task list was updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* For the files _participant.html.erb and view_team.html.erb:&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code (such as L8-22 in _participant.html.erb) to the helper file and assign self-explanatory method names&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Changes to _participant.html.erb, view_team.html.erb, and grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the javascript code was moved to view_team.js&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
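The core of col_sort above is an insertion sort over the per-review column totals, with the lesser flag toggling the sort direction on each click. Stripped of the DOM manipulation, the idea can be sketched in framework-free Ruby; the column swapping is simulated on a plain array, so this is an illustration of the algorithm, not the committed code.&lt;br /&gt;

```ruby
# Illustrative sketch of the col_sort insertion sort: each array element
# stands in for one review column's total score. The direction flag flips
# on every call, mirroring the 'lesser' toggle in view_team.js.
class ColumnSorter
  def initialize
    @ascending = false
  end

  # Insertion sort that swaps adjacent "columns" until ordered; the first
  # call sorts ascending, the next descending, and so on.
  def sort!(totals)
    @ascending = !@ascending
    (1...totals.length).each do |i|
      j = i
      while j > 0 && out_of_order?(totals[j], totals[j - 1])
        totals[j], totals[j - 1] = totals[j - 1], totals[j] # swap columns
        j -= 1
      end
    end
    totals
  end

  private

  # Mirrors the compare(a, b, less) helper in view_team.js.
  def out_of_order?(a, b)
    @ascending ? a < b : a > b
  end
end
```

Calling sort! twice on the same data reproduces the toggle behaviour: the first click orders the columns ascending, the second descending.&lt;br /&gt;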
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed from _participant.html.erb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in app/helpers/grades_helper.rb, in order to separate controller logic from the view.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = @participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
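All four rscore_* helpers above share the same shape: look up the participant from params[:id], compute the scores, and wrap one section in an Rscore when it is present. That repeated pattern can be reduced to a single parameterised method, sketched here framework-free with hypothetical stand-ins for the Participant and Rscore classes; this is an assumption for illustration, not a change that was committed.&lt;br /&gt;

```ruby
# Framework-free sketch of the lookup-and-wrap pattern shared by the
# rscore_review/metareview/feedback/teammate helpers. Participant and
# Rscore below are hypothetical stand-ins for the Expertiza models.
Rscore = Struct.new(:pscore, :symbol)

class Participant
  def initialize(scores)
    @scores = scores
  end

  # Stand-in for Participant#scores(questions) in Expertiza.
  def scores(_questions)
    @scores
  end
end

# One generic helper replaces the four near-identical ones: it returns an
# Rscore wrapping the requested section, or nil when that section is absent.
def rscore_for(participant, questions, symbol)
  pscore = participant.scores(questions)
  Rscore.new(pscore, symbol) if pscore[symbol]
end
```

With such a helper, rscore_review would reduce to rscore_for(@participant, @questions, :review), and the other three would follow the same one-liner.&lt;br /&gt;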
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also updated to use the new helper methods:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
require 'selenium-webdriver'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, type: :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @assignment2 = create(:assignment, name: 'whatever', max_team_size: 3)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
    @new_participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment2)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @questionnaire = create(:questionnaire)&lt;br /&gt;
    @metareview_questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
    @author_feedback_questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
    @teammate_review_questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @metareview_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @author_feedback_questionnaire)&lt;br /&gt;
    create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @teammate_review_questionnaire)&lt;br /&gt;
&lt;br /&gt;
    @questions = {}&lt;br /&gt;
    @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
    @questions[@metareview_questionnaire.symbol] = @metareview_questionnaire.questions&lt;br /&gt;
    @questions[@author_feedback_questionnaire.symbol] = @author_feedback_questionnaire.questions&lt;br /&gt;
    @questions[@teammate_review_questionnaire.symbol] = @teammate_review_questionnaire.questions&lt;br /&gt;
&lt;br /&gt;
    create(&lt;br /&gt;
      :assignment_due_date,&lt;br /&gt;
      assignment: @assignment2,&lt;br /&gt;
      deadline_type: @deadline_type,&lt;br /&gt;
      submission_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_allowed_id: @deadline_right.id,&lt;br /&gt;
      review_of_review_allowed_id: @deadline_right.id,&lt;br /&gt;
      due_at: '2015-12-30 23:30:12'&lt;br /&gt;
    )&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview after a view action' do&lt;br /&gt;
      @assignment2.max_team_size = 1&lt;br /&gt;
      @assignment2.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and with a metareview after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment2.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_X' do&lt;br /&gt;
    it 'should return records of each review type if available' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(rscore_review).to_not be_nil&lt;br /&gt;
      expect(rscore_metareview).to_not be_nil&lt;br /&gt;
      expect(rscore_feedback).to_not eq(nil)&lt;br /&gt;
      expect(rscore_teammate).to_not eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if review types are not available' do&lt;br /&gt;
      params[:id] = @participant.id&lt;br /&gt;
      expect(rscore_review).to be_nil&lt;br /&gt;
      expect(rscore_metareview).to be_nil&lt;br /&gt;
      expect(rscore_feedback).to be_nil&lt;br /&gt;
      expect(rscore_teammate).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      graded_participant = create(:participant, grade: 90)&lt;br /&gt;
      create(:assignment_questionnaire, user_id: graded_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = graded_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      create(:assignment_questionnaire, user_id: @new_participant.id, questionnaire: @questionnaire)&lt;br /&gt;
      @questions[@questionnaire.symbol] = @questionnaire.questions&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_total_score).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      @new_participant.grade = 90&lt;br /&gt;
      @new_participant.save&lt;br /&gt;
      expect(p_title).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      params[:id] = @new_participant.id&lt;br /&gt;
      expect(p_title).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_X_reputation' do&lt;br /&gt;
    hamer_input = [-0.1, 0, 0.5, 1, 1.5, 2, 2.1]&lt;br /&gt;
    lauw_input = [-0.1, 0, 0.2, 0.4, 0.6, 0.8, 0.9]&lt;br /&gt;
    output = %w(c1 c1 c2 c2 c3 c4 c5)&lt;br /&gt;
    it 'should return correct css for hamer reputations' do&lt;br /&gt;
      hamer_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_hamer_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for lauw reputations' do&lt;br /&gt;
      lauw_input.each_with_index do |e, i|&lt;br /&gt;
        expect(get_css_style_for_lauw_reputation(e)).to eq(output[i])&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
#########################&lt;br /&gt;
# Functional Cases&lt;br /&gt;
#########################&lt;br /&gt;
describe GradesHelper, type: :feature do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment)&lt;br /&gt;
    @assignment_team = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
    @participant = create(:participant, assignment: @assignment)&lt;br /&gt;
    create(:team_user, team: @assignment_team, user: User.find(@participant.user_id))&lt;br /&gt;
    login_as(@participant.name)&lt;br /&gt;
    visit '/student_task/list'&lt;br /&gt;
    expect(page).to have_content 'final2'&lt;br /&gt;
    click_link('final2')&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 1' do&lt;br /&gt;
    it &amp;quot;Javascript should work on grades Alternate View&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Alternate View'&lt;br /&gt;
      expect(page).to have_content 'Review'&lt;br /&gt;
      click_link('Alternate View')&lt;br /&gt;
      expect(page).to have_content 'Grade for submission'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  describe 'case 2' do&lt;br /&gt;
    it &amp;quot;Student should be able to view scores&amp;quot;, js: true do&lt;br /&gt;
      expect(page).to have_content 'Your scores'&lt;br /&gt;
      click_link('Your scores')&lt;br /&gt;
      expect(page).to have_content '0.00%'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after the refactoring was completed. The following test cases exercise the functionality of the code sections that were refactored. Automated versions of these tests were written using RSpec/FactoryGirl/Selenium and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
===Functional Tests===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test to see if javascript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student to test with has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it is working after refactoring logic code&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Unit Tests===&lt;br /&gt;
Unit tests were required by the project assignment for helper methods which previously had no associated unit tests. These tests verified that each discrete method returned the proper values. The tests were written using RSpec/FactoryGirl and are listed in the code above. A summary of the test cases is given below.&lt;br /&gt;
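The reputation-to-CSS mappings exercised by the last two rows of the summary below amount to a threshold lookup. The following sketch is illustrative only, with cut-offs inferred from the unit-test vectors; the actual boundaries live in grades_helper.rb and are only pinned down at the tested points:&lt;br /&gt;

```ruby
# Illustrative threshold lookup behind get_css_style_for_hamer_reputation and
# get_css_style_for_lauw_reputation. The cut-off values are inferred from the
# unit-test vectors in this document; the real boundaries in grades_helper.rb
# may differ between the tested points.
HAMER_CUTOFFS = [[0.0, 'c1'], [1.0, 'c2'], [1.5, 'c3'], [2.0, 'c4']].freeze
LAUW_CUTOFFS  = [[0.0, 'c1'], [0.4, 'c2'], [0.6, 'c3'], [0.8, 'c4']].freeze

# Returns the CSS class for the first cut-off the reputation does not exceed,
# falling back to 'c5' for anything above the highest cut-off.
def css_for(reputation, cutoffs)
  pair = cutoffs.find { |limit, _css| reputation <= limit }
  pair ? pair[1] : 'c5'
end

css_for(0.5, HAMER_CUTOFFS) # => "c2"
css_for(0.9, LAUW_CUTOFFS)  # => "c5"
```

This reproduces every input/output pair listed for both methods in the summary table.&lt;br /&gt;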
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameters&lt;br /&gt;
! Purpose&lt;br /&gt;
! Tested Scenarios&lt;br /&gt;
|-&lt;br /&gt;
| get_accordion_title&lt;br /&gt;
| last_topic, new_topic&lt;br /&gt;
| Render a proper partial for questionnaires&lt;br /&gt;
| last_topic is nil; last_topic differs from next_topic; last_topic equals next_topic&lt;br /&gt;
|-&lt;br /&gt;
| has_team_and_metareview?&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Determine whether a team and a metareview exist for a given assignment&lt;br /&gt;
| find an assignment given an assignment_id; find an assignment given a participant_id; return 0 for a case with neither a team nor a metareview; return 1 for a case with a team but no metareview; return 1 for a case with a metareview but no team; return 2 for a case with both a team and a metareview&lt;br /&gt;
|-&lt;br /&gt;
| participant&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Returns a participant given an id&lt;br /&gt;
| find a newly created participant &lt;br /&gt;
|-&lt;br /&gt;
| rscore_review&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the reviews for a participant based on id&lt;br /&gt;
| return reviews from a participant with reviews; return nothing for a participant without reviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_metareview&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the metareviews for a participant based on id&lt;br /&gt;
| return metareviews from a participant with metareviews; return nothing for a participant without metareviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_feedback&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the author feedback for a participant based on id&lt;br /&gt;
| returns feedback for a participant with feedback; return nothing for a participant without feedback.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_teammate&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns teammate reviews for a participant based on id&lt;br /&gt;
| returns teammate reviews for a participant with teammate reviews; return nothing for a participant without teammate reviews.&lt;br /&gt;
|-&lt;br /&gt;
| p_total_score&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns grade for a participant based on id&lt;br /&gt;
| returns grade for a participant with grade; returns total_score  for a participant without grade.&lt;br /&gt;
|-&lt;br /&gt;
| p_title&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns a pertinent title for the grades view&lt;br /&gt;
| returns text title for a participant with grade; returns nothing for a participant without grade.&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_hamer_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.5 yields CSS c2; rep 1 yields CSS c2; rep 1.5 yields CSS c3; rep 2 yields CSS c4; rep 2.1 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_lauw_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.2 yields CSS c2; rep 0.4 yields CSS c2; rep 0.6 yields CSS c3; rep 0.8 yields CSS c4; rep 0.9 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107152</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107152"/>
		<updated>2017-03-23T00:33:25Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Test Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open source software project. This helps them gain exposure to the technologies used in the project, as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks were:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team found that these files were no longer used in the application, so the overall tasks were updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* For files _participant.html.erb and view_team.html.erb, moving logic to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the javascript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module defined in the app/helpers/grades_helper.rb file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = @participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
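The four rscore_* helpers above differ only in the score key they read, and each one refetches the participant and recomputes the scores on every call. A possible further consolidation is sketched below; to make the pattern runnable outside Rails, Rscore is reduced to a minimal stub and fetch_scores stands in for the Participant.find(...).scores(...) lookup, so those names are illustrative, not the real Expertiza API.

```ruby
# Sketch: collapsing the four near-identical rscore_* helpers into one
# metaprogrammed definition, with memoization so repeated calls in the
# view do not redo the lookup. Rscore and FakeView are minimal stand-ins.
class Rscore
  attr_reader :pscore, :kind
  def initialize(pscore, kind)
    @pscore = pscore
    @kind = kind
  end
end

module GradesHelperSketch
  # One definition per score type instead of four copied method bodies.
  [:review, :metareview, :feedback, :teammate].each do |kind|
    define_method("rscore_#{kind}") do
      @pscore ||= fetch_scores              # memoized score hash
      ivar = "@rscore_#{kind}"
      value = instance_variable_get(ivar)
      return value if value                 # memoized Rscore
      instance_variable_set(ivar, Rscore.new(@pscore, kind)) if @pscore[kind]
    end
  end
end

class FakeView
  include GradesHelperSketch
  def initialize(scores)
    @scores = scores
  end

  # Stand-in for Participant.find(params[:id]).scores(@questions)
  def fetch_scores
    @scores
  end
end

view = FakeView.new(review: 90, teammate: 85)
p view.rscore_review.kind   # a :review Rscore is built
p view.rscore_feedback      # nil: no :feedback entry in the score hash
```

Memoizing @pscore also means the partial can call several rscore_* readers without repeating the participant lookup each time.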
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also updated to use the new helper methods:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
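Each replacement cell above still repeats the value.is_a?(Float) ? sprintf(...) : value ternary. A small formatting helper would collapse each cell to one call; format_score below is a hypothetical name used for illustration, not an existing Expertiza helper.

```ruby
# Sketch: one helper for the score-formatting ternary that is repeated
# throughout _participant.html.erb. format_score is a hypothetical name.
def format_score(value, precision = 2)
  # Floats are rounded for display; anything else (Integer, nil, 'N/A')
  # passes through unchanged, matching the original ternary's behavior.
  value.is_a?(Float) ? format("%.#{precision}f", value) : value
end

puts format_score(99.237)   # "99.24"
puts format_score(60)       # Integer passes through untouched
puts format_score(92.0, 0)  # "92" (the min/max cells use zero decimals)
```

With this in GradesHelper, each table cell in the ERB becomes a single format_score(rscore_review.my_avg) call instead of a repeated ternary.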
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
===Write test cases for all methods in grades_helper.rb by using factories===&lt;br /&gt;
The following test cases were added to test all methods in grades_helper.rb&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, :type =&amp;gt; :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      participant = create(:participant, assignment: @assignment)&lt;br /&gt;
      params[:id] = participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview deadline after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 1&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with both a team and a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 3&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant()&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_review' do&lt;br /&gt;
    it 'should return a record of type :review if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :review is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_metareview' do&lt;br /&gt;
    it 'should return a record of type :metareview if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :metareview is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_feedback' do&lt;br /&gt;
    it 'should return a record of type :feedback if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :feedback is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_teammate' do&lt;br /&gt;
    it 'should return a record of type :teammate if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :teammate is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      new_participant = create(:participant, grade: 90)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      new_participant.grade = 90&lt;br /&gt;
      new_participant.save&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_hamer_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0.5)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1.5)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2.1)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_lauw_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.2' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.2)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.4' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.4)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.6' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.6)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.8' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.8)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.9' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.9)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
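The get_css_style_for_hamer_reputation and get_css_style_for_lauw_reputation specs above pin down step functions from a reputation value to a CSS class. The cut-points in the sketch below are inferred from the expected values in those specs alone; the boundary behavior between tested points is an assumption, so this illustrates the mapping rather than reproducing the Expertiza implementation.

```ruby
# Sketch: the step functions implied by the RSpec examples above.
# Cut-points are inferred from the expected test values only.
HAMER_CUTS = [[0.0, 'c1'], [1.0, 'c2'], [1.5, 'c3'], [2.0, 'c4']].freeze
LAUW_CUTS  = [[0.0, 'c1'], [0.4, 'c2'], [0.6, 'c3'], [0.8, 'c4']].freeze

def css_for_reputation(value, cuts)
  # Return the class of the first bucket whose upper bound is not
  # exceeded; values above every cut fall into the top bucket, 'c5'.
  cuts.each do |upper, css|
    return css unless value > upper
  end
  'c5'
end

puts css_for_reputation(-0.1, HAMER_CUTS)  # "c1"
puts css_for_reputation(1.5, HAMER_CUTS)   # "c3"
puts css_for_reputation(0.9, LAUW_CUTS)    # "c5"
```

A shared threshold table like this would also let both helpers be tested by one parameterized spec instead of two near-identical describe blocks.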
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after refactoring. The following test cases exercise the refactored code sections. Automated versions of these tests were written with RSpec, FactoryGirl, and Selenium and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
===Functional Tests===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test that the JavaScript changes to view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student account used for testing:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it is working after refactoring logic code&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
Make sure the student account used for testing:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Unit Tests===&lt;br /&gt;
Unit tests were required per the project assignment for helper methods which previously had no associated unit tests. These tests verified that each discrete method returned the proper values.  The tests were written using RSpec/FactoryGirl and are listed in the code above.  A summary of the test cases is given below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; | Unit Test Summary&lt;br /&gt;
|-&lt;br /&gt;
! Method&lt;br /&gt;
! Parameters&lt;br /&gt;
! Purpose&lt;br /&gt;
! Tested Scenarios&lt;br /&gt;
|-&lt;br /&gt;
| get_accordion_title&lt;br /&gt;
| last_topic, next_topic&lt;br /&gt;
| Render a proper partial for questionnaires&lt;br /&gt;
| last_topic nil renders is_first true; differing topics render is_first false; identical topics render nothing&lt;br /&gt;
|-&lt;br /&gt;
| has_team_and_metareview?&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Determine whether a team and reviews exist for a given assignment&lt;br /&gt;
| find an assignment given an assignment_id; find an assignment given a participant_id; return 0 for a case with neither a team nor a metareview; return 1 for a case with a team but no metareview; return 1 for a case with a metareview but no team; return 2 for a case with both a team and a metareview&lt;br /&gt;
|-&lt;br /&gt;
| participant&lt;br /&gt;
| params[:id]&lt;br /&gt;
| Returns a participant given an id&lt;br /&gt;
| find a newly created participant &lt;br /&gt;
|-&lt;br /&gt;
| rscore_review&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the reviews for a participant based on id&lt;br /&gt;
| return reviews from a participant with reviews; return nothing for a participant without reviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_metareview&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the metareviews for a participant based on id&lt;br /&gt;
| return metareviews from a participant with metareviews; return nothing for a participant without metareviews.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_feedback&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns all the author feedback for a participant based on id&lt;br /&gt;
| returns feedback for a participant with feedback; return nothing for a participant without feedback.&lt;br /&gt;
|-&lt;br /&gt;
| rscore_teammate&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns teammate reviews for a participant based on id&lt;br /&gt;
| returns teammate reviews for a participant with teammate reviews; return nothing for a participant without teammate reviews.&lt;br /&gt;
|-&lt;br /&gt;
| p_total_score&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns the grade for a participant based on id&lt;br /&gt;
| returns the grade for a participant with a grade; returns the computed total_score for a participant without a grade.&lt;br /&gt;
|-&lt;br /&gt;
| p_title&lt;br /&gt;
| params[:id]&lt;br /&gt;
| returns a pertinent title for the grades view&lt;br /&gt;
| returns a text title for a participant with a grade; returns nothing for a participant without a grade.&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_hamer_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.5 yields CSS c2; rep 1 yields CSS c2; rep 1.5 yields CSS c3; rep 2 yields CSS c4; rep 2.1 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
| get_css_style_for_lauw_reputation&lt;br /&gt;
| reputation_value&lt;br /&gt;
| returns a CSS value for formatting based on reputation&lt;br /&gt;
| rep -0.1 yields CSS c1; rep 0 yields CSS c1; rep 0.2 yields CSS c2; rep 0.4 yields CSS c2; rep 0.6 yields CSS c3; rep 0.8 yields CSS c4; rep 0.9 yields CSS c5&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107139</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107139"/>
		<updated>2017-03-22T23:44:54Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Test Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, websites, etc.). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open-source software project. This helps them gain exposure to the technologies used in the project, as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved in this refactoring effort were geared towards cleaning up the grades view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks were:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that these files were no longer used in the application, so the overall tasks were updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* Move view logic from the files _participant.html.erb and view_team.html.erb to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move view logic from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in app/helpers/grades_helper.rb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = @participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
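&lt;br /&gt;
The four rscore_* helpers above repeat one pattern: look up the participant, compute the score hash, and wrap a single section in an Rscore when that section is present. A minimal, self-contained sketch of that pattern (the Rscore struct below is a hypothetical stand-in for Expertiza's real Rscore class, and the score hash stands in for Participant#scores):&lt;br /&gt;

```ruby
# Hypothetical stand-in for Expertiza's Rscore, which wraps one section
# (review, metareview, feedback, or teammate) of a participant's score hash.
Rscore = Struct.new(:pscore, :type)

# The shared pattern behind rscore_review, rscore_metareview, rscore_feedback,
# and rscore_teammate: build an Rscore only when the section exists, else nil.
def rscore_for(pscore, type)
  Rscore.new(pscore, type) if pscore[type]
end

pscore = {review: {avg: 87.5}, feedback: {avg: 92.0}}

rscore_for(pscore, :review)     # section present, returns an Rscore
rscore_for(pscore, :metareview) # section absent, returns nil
```

A single table-driven helper like this could replace the four near-identical methods, though the wiki documents them as written.&lt;br /&gt;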
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also updated to use the helper methods created above.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
===Write test cases for all methods in grades_helper.rb by using factories===&lt;br /&gt;
The following test cases were added to test all methods in grades_helper.rb&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, :type =&amp;gt; :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      participant = create(:participant, assignment: @assignment)&lt;br /&gt;
      params[:id] = participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview deadline after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 1&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 3&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant()&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_review' do&lt;br /&gt;
    it 'should return a record of type :review if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :review is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_metareview' do&lt;br /&gt;
    it 'should return a record of type :metareview if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :metareview is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_feedback' do&lt;br /&gt;
    it 'should return a record of type :feedback if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :feedback is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_teammate' do&lt;br /&gt;
    it 'should return a record of type :teammate if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :teammate is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      new_participant = create(:participant, grade: 90)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      new_participant.grade = 90&lt;br /&gt;
      new_participant.save&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_hamer_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0.5)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1.5)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2.1)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_lauw_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.2' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.2)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.4' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.4)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.6' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.6)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.8' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.8)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.9' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.9)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
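The Hamer and Lauw reputation specs above pin down the CSS class returned at each boundary value. As an illustration only, here is a pure-Ruby sketch of a threshold mapping that satisfies the Hamer expectations; the exact boundaries in grades_helper.rb are an assumption inferred from the tested inputs, not copied from the source:

```ruby
# Hypothetical threshold mapping inferred from the Hamer reputation specs:
# -0.1 and 0 map to 'c1', 0.5 and 1 to 'c2', 1.5 to 'c3', 2 to 'c4',
# and 2.1 to 'c5'. The boundary placement is an assumption, not the
# committed implementation.
def css_style_for_hamer(reputation)
  case reputation
  when ..0   then 'c1'   # reputation at or below zero
  when ..1   then 'c2'
  when ..1.5 then 'c3'
  when ..2   then 'c4'
  else            'c5'   # anything above 2
  end
end

puts css_style_for_hamer(-0.1) # c1
puts css_style_for_hamer(1.5)  # c3
puts css_style_for_hamer(2.1)  # c5
```

Ruby 2.7's beginless ranges keep the case expression compact; each `when` is checked in order, so the first matching upper bound wins.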
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
The main goal of testing was to ensure that functionality was maintained after the refactoring was completed.  The following test cases exercise the functionality of the code sections that were refactored.  Automated versions of these tests were written and appear in the GitHub repository associated with this project. The functional tests may also be run manually per the descriptions below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Verify that the JavaScript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student to test with has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it still works after the logic code was refactored&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
Make sure the student to test with has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work&amp;quot; screen for the assignment, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score&amp;quot; screen for the assignment, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107136</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107136"/>
		<updated>2017-03-22T23:40:42Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Test Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open-source software project. This gives them exposure to the technologies used in the project, as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks were:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that these files were no longer used anywhere in the application, so the overall task list was revised as follows:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
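The col_sort function above computes a per-review point total for each column and then runs a toggling insertion sort, physically moving table columns on each swap. Stripped of the DOM manipulation, the ordering it produces can be sketched in a few lines of Ruby (a hypothetical illustration; sorted_column_order is not part of the committed code):

```ruby
# Given the point total of each review column, return the column indices
# in sorted order. 'descending' mirrors the JavaScript 'lesser' toggle
# that flips the sort direction on each button click.
def sorted_column_order(totals, descending)
  keys = descending ? totals.map { |t| -t } : totals
  # each column keeps its original index so the DOM moves could be replayed
  totals.each_index.sort_by { |i| keys[i] }
end

puts sorted_column_order([30, 10, 20], true).inspect  # [0, 2, 1]
puts sorted_column_order([30, 10, 20], false).inspect # [1, 2, 0]
```

The JavaScript sorts in place with jQuery.moveColumn because the table is live in the page; the pure function above only captures the target ordering.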
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in the app/helpers/grades_helper.rb file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
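One consequence of the extraction above is that every helper method re-runs Participant.find and participant.scores on each call from the view. A minimal sketch of how ||= memoization would collapse these into a single lookup per request (GradesHelperSketch and the stubbed Participant below are hypothetical illustrations, not committed Expertiza code):

```ruby
# Stub standing in for the ActiveRecord model, counting lookups so the
# memoization effect is observable. Names follow the wiki text; this is
# an illustration, not committed Expertiza code.
class Participant
  @@find_calls = 0
  def self.find(_id)
    @@find_calls += 1
    new
  end
  def self.find_calls
    @@find_calls
  end
  def scores(_questions)
    { review: :some_review_score }
  end
end

class GradesHelperSketch
  def initialize(id)
    @id = id
  end

  # ||= runs the lookup only on the first call and caches the result
  def participant
    @participant ||= Participant.find(@id)
  end

  def pscore
    @pscore ||= participant.scores({})
  end

  def rscore_review
    pscore[:review]
  end
end

helper = GradesHelperSketch.new(1)
helper.rscore_review
helper.rscore_review
puts Participant.find_calls # 1: the participant was looked up once
```

In the real helpers a memoized participant would also have to be reset between requests, which Rails handles naturally since helper instances are per-request.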
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also replaced to use the new helper methods&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
===Write test cases for all methods in grades_helper.rb by using factories===&lt;br /&gt;
The following test cases were added to test all methods in grades_helper.rb&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, :type =&amp;gt; :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      participant = create(:participant, assignment: @assignment)&lt;br /&gt;
      params[:id] = participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview deadline after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 1&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 3&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant()&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_review' do&lt;br /&gt;
    it 'should return a record of type :review if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :review is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_metareview' do&lt;br /&gt;
    it 'should return a record of type :metareview if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :metareview is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_feedback' do&lt;br /&gt;
    it 'should return a record of type :feedback if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :feedback is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_teammate' do&lt;br /&gt;
    it 'should return a record of type :teammate if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :teammate is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      new_participant = create(:participant, grade: 90)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      new_participant.grade = 90&lt;br /&gt;
      new_participant.save&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_hamer_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0.5)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1.5)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2.1)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_lauw_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.2' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.2)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.4' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.4)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.6' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.6)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.8' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.8)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.9' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.9)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
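The reputation specs above pin down a set of score thresholds. As a minimal pure-Ruby sketch, the two CSS-style helpers could be reconstructed as follows; the thresholds are inferred from the test expectations, and the actual implementation in grades_helper.rb may differ:&lt;br /&gt;

```ruby
# Hypothetical reconstruction of the two CSS-style helpers, with thresholds
# inferred from the spec expectations above (not the shipped implementation).
def get_css_style_for_hamer_reputation(score)
  if score <= 0 then 'c1'
  elsif score <= 1 then 'c2'
  elsif score <= 1.5 then 'c3'
  elsif score <= 2 then 'c4'
  else 'c5'
  end
end

def get_css_style_for_lauw_reputation(score)
  if score <= 0 then 'c1'
  elsif score <= 0.4 then 'c2'
  elsif score <= 0.6 then 'c3'
  elsif score <= 0.8 then 'c4'
  else 'c5'
  end
end
```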
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test whether the JavaScript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure student to test with has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any JavaScript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any JavaScript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it is working after refactoring logic code&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work&amp;quot; screen for that assignment, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the score screen for that assignment, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107135</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107135"/>
		<updated>2017-03-22T23:39:18Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Test Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open-source software project. This helps them gain exposure to the technologies used in the project, as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks was:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
On working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team found that these files were no longer used in the application, and the overall tasks were updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* Move code from _participant.html.erb and view_team.html.erb to grades_helper.rb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project, and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move code from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            var inner_table = jQuery(this).find('table.tbl_questlist');&lt;br /&gt;
            var hidden_table = inner_table.eq(0).find('tr');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
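The sorting logic above amounts to: total each review column (counting only positive cell values), then reorder the columns by those totals, flipping between ascending and descending on successive clicks. A small Ruby sketch of the same idea, for illustration only (the class and method names are hypothetical; the shipped code is the JavaScript above):&lt;br /&gt;

```ruby
# Illustrative sketch of col_sort: order review columns by their totals,
# flipping direction on every call (mirrors the `lesser` toggle above).
class ColumnSorter
  def initialize
    @lesser = false
  end

  # rows: array of arrays, one inner array of numeric cells per review row.
  # Returns the column indices in their new order.
  def sort_columns(rows)
    @lesser = !@lesser
    n_cols = rows.first.length
    # Only positive cell values contribute to a column's total,
    # matching the parseInt(...) > 0 guard in the JavaScript.
    totals = (0...n_cols).map { |c| rows.sum { |r| r[c] > 0 ? r[c] : 0 } }
    order = (0...n_cols).sort_by { |c| totals[c] }
    @lesser ? order : order.reverse
  end
end
```

Each call flips the direction, matching the toggle behavior of the &amp;quot;Sort by total review score&amp;quot; button.&lt;br /&gt;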
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in the app/helpers/grades_helper.rb file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = @participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
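The rule these helpers encode for the total score and title is small enough to isolate: a manually assigned grade, when present, overrides the computed total and triggers the explanatory tooltip. A pure-function sketch of that rule (resolve_total_score and title_for are hypothetical names for illustration, not part of the Expertiza codebase):&lt;br /&gt;

```ruby
# Sketch of the override rule used by p_total_score and p_title: an
# instructor-assigned grade wins over the computed total score, and its
# presence also determines whether the tooltip text is shown.
OVERRIDE_TITLE = 'A score in blue indicates that the value was ' \
                 'overwritten by the instructor or teaching assistant.'.freeze

def resolve_total_score(grade, computed_total)
  grade.nil? ? computed_total : grade
end

def title_for(grade)
  grade.nil? ? nil : OVERRIDE_TITLE
end
```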
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also updated to use the new helper methods:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
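&lt;br /&gt;
A pattern repeated throughout the table above is the ternary that checks is_a?(Float) and then formats the value with sprintf. As a further refactoring, it could be extracted into a single helper; the sketch below is hypothetical (the name format_score is not part of the codebase):&lt;br /&gt;

```ruby
# Hypothetical helper extracting the repeated formatting ternary:
# format a score to the given number of decimal places when it is a
# Float, and pass any other value through unchanged.
def format_score(value, precision = 2)
  value.is_a?(Float) ? sprintf("%.#{precision}f", value) : value
end

puts format_score(99.2)   # Float: formatted with two decimal places
puts format_score(90)     # Integer: passed through unchanged
```

Each cell in the views above could then call such a helper instead of repeating the ternary inline.&lt;br /&gt;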
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
===Write test cases for all methods in grades_helper.rb by using factories===&lt;br /&gt;
The following test cases were added to test all methods in grades_helper.rb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, :type =&amp;gt; :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      participant = create(:participant, assignment: @assignment)&lt;br /&gt;
      params[:id] = participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview deadline after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 1&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 3&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant()&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_review' do&lt;br /&gt;
    it 'should return a record of type :review if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :review is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_metareview' do&lt;br /&gt;
    it 'should return a record of type :metareview if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :metareview is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_feedback' do&lt;br /&gt;
    it 'should return a record of type :feedback if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :feedback is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_teammate' do&lt;br /&gt;
    it 'should return a record of type :teammate if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :teammate is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      new_participant = create(:participant, grade: 90)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      new_participant.grade = 90&lt;br /&gt;
      new_participant.save&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_hamer_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0.5)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1.5)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2.1)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_lauw_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.2' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.2)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.4' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.4)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.6' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.6)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.8' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.8)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.9' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.9)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
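&lt;br /&gt;
The expectations in the get_css_style_for_hamer_reputation examples above are consistent with a simple threshold mapping from reputation value to CSS class. The following sketch is inferred from those test cases only; the actual helper in grades_helper.rb may be implemented differently:&lt;br /&gt;

```ruby
# Threshold mapping inferred from the Hamer reputation test cases above
# (inclusive upper bounds); not the project's actual implementation.
def css_style_for_hamer(reputation)
  return 'c1' unless reputation.positive?
  return 'c2' if reputation.between?(0, 1)
  return 'c3' if reputation.between?(1, 1.5)
  return 'c4' if reputation.between?(1.5, 2)
  'c5'
end
```

The Lauw expectations that follow fit the same shape with tighter thresholds (0, 0.4, 0.6, and 0.8).&lt;br /&gt;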
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test whether the JavaScript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work&amp;quot; screen for the assignment, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it works after the logic code was refactored&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
Make sure the student used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login to Expertiza&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107134</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107134"/>
		<updated>2017-03-22T23:29:59Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Test Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc.). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open source software project. This helps them gain exposure to the technologies used in the project as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks was:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to the helper file and assign a self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to the helper file and assign a self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that these files were no longer used in the application, so the overall task list was updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* For files _participant.html.erb and view_team.html.erb:&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code (such as L8-22 in _participant.html.erb) to the helper file and assign a self-explanatory method name&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===For files _participant.html.erb and view_team.html.erb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp = sum_array[j - 3];&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2];&lt;br /&gt;
            sum_array[j - 2] = tmp;&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b;&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b;&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
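The column reordering above is a plain insertion sort keyed on each column's review total, where every swap of adjacent columns is realized as a jQuery moveColumn call. Stripped of the DOM bookkeeping, the idea can be sketched as follows (a hypothetical Ruby illustration of the algorithm, not code from the commit):&lt;br /&gt;

```ruby
# Minimal sketch (hypothetical, not code from the commit) of the insertion
# sort used above: columns are ordered by their review totals, and each
# adjacent swap corresponds to one jQuery moveColumn call.
def sort_columns_by_total(totals, descending: true)
  sums = totals.dup
  moves = []
  (1...sums.length).each do |i|
    j = i
    # Shift the column left while it is out of order relative to its neighbor.
    while j > 0 && (descending ? sums[j] > sums[j - 1] : sums[j - 1] > sums[j])
      sums[j], sums[j - 1] = sums[j - 1], sums[j]
      moves << [j, j - 1] # would become a moveColumn(table, j, j - 1) call
      j -= 1
    end
  end
  [sums, moves]
end
```

Each entry in moves is a pair of adjacent column indices that were swapped, mirroring the moveColumn(table, j, j - 1) calls in the JavaScript above.&lt;br /&gt;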
====Move logical code to helper file and assign self-explanatory method name such as L8-22 in _participant.html.erb====&lt;br /&gt;
The following lines of code were removed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to properties in the GradesHelper module, defined in the app/helpers/grades_helper.rb file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:review]&lt;br /&gt;
      @rscore_review = Rscore.new(@pscore, :review)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:metareview]&lt;br /&gt;
      @rscore_metareview = Rscore.new(@pscore, :metareview)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:feedback]&lt;br /&gt;
      @rscore_feedback = Rscore.new(@pscore, :feedback)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @pscore[:teammate]&lt;br /&gt;
      @rscore_teammate = Rscore.new(@pscore, :teammate)&lt;br /&gt;
    end&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @total_score = @participant.grade&lt;br /&gt;
    else&lt;br /&gt;
      @total_score = @pscore[:total_score]&lt;br /&gt;
    end&lt;br /&gt;
    @total_score&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    if @participant.grade&lt;br /&gt;
      @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
      @title = nil&lt;br /&gt;
    end&lt;br /&gt;
    @title&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
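Note that each helper above re-runs Participant.find and the scores computation on every call. An optional refinement (sketched here, not the committed implementation) is Ruby's ||= memoization idiom, which caches the result in the instance variable so each lookup runs at most once per request. The Participant and Rscore definitions below are hypothetical stand-ins so the sketch runs outside Rails:&lt;br /&gt;

```ruby
# Hypothetical stand-ins so the sketch runs outside Rails; in Expertiza
# these would be the real Participant and Rscore classes.
Participant = Struct.new(:id, :grade) do
  def self.find(id)
    new(id, nil)
  end

  def scores(_questions)
    { review: { avg: 85 }, total_score: 85 }
  end
end

Rscore = Struct.new(:pscore, :kind)

# Memoized versions of the helpers: ||= caches the value, so the database
# lookup and score computation happen at most once per request instead of
# once per helper call.
module MemoizedGradesHelper
  def participant
    @participant ||= Participant.find(params[:id])
  end

  def pscore
    @pscore ||= participant.scores(@questions)
  end

  def rscore_review
    @rscore_review ||= (Rscore.new(pscore, :review) if pscore[:review])
  end
end
```

Repeated calls to participant then return the same cached object rather than hitting the database again.&lt;br /&gt;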
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also replaced to use the properties created above:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
===Write test cases for all methods in grades_helper.rb by using factories===&lt;br /&gt;
The following test cases were added to test all methods in grades_helper.rb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, :type =&amp;gt; :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      participant = create(:participant, assignment: @assignment)&lt;br /&gt;
      params[:id] = participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview deadline after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 1&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with a team and a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 3&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant()&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_review' do&lt;br /&gt;
    it 'should return a record of type :review if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :review is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_metareview' do&lt;br /&gt;
    it 'should return a record of type :metareview if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :metareview is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_feedback' do&lt;br /&gt;
    it 'should return a record of type :feedback if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :feedback is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_teammate' do&lt;br /&gt;
    it 'should return a record of type :teammate if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :teammate is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      new_participant = create(:participant, grade: 90)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      new_participant.grade = 90&lt;br /&gt;
      new_participant.save&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_hamer_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0.5)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1.5)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2.1)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_lauw_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(-0.1)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0)&lt;br /&gt;
      expect(result).to be == 'c1'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.2' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.2)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.4' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.4)&lt;br /&gt;
      expect(result).to be == 'c2'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.6' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.6)&lt;br /&gt;
      expect(result).to be == 'c3'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.8' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.8)&lt;br /&gt;
      expect(result).to be == 'c4'&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.9' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.9)&lt;br /&gt;
      expect(result).to be == 'c5'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
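The reputation test cases above pin down a mapping from reputation values to the CSS classes c1 through c5. One piecewise function consistent with those expectations is sketched below; the exact boundary values are assumptions inferred from the test cases, not the actual constants in grades_helper.rb:&lt;br /&gt;

```ruby
# Threshold mapping consistent with the spec expectations above. The
# boundary values are assumptions inferred from the tests, not taken
# from grades_helper.rb.
def css_style_for_hamer(value)
  return 'c1' if value <= 0.0
  return 'c2' if value <= 1.0
  return 'c3' if value <= 1.5
  return 'c4' if value <= 2.0
  'c5'
end

def css_style_for_lauw(value)
  return 'c1' if value <= 0.0
  return 'c2' if value <= 0.4
  return 'c3' if value <= 0.6
  return 'c4' if value <= 0.8
  'c5'
end
```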
&lt;br /&gt;
==Test Plans==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test to see if javascript changes for view_team.html.erb are working&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Log in to the site as student2064&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, test if the &amp;quot;Sort by total review score&amp;quot; button works and doesn't throw any javascript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test _participant.html.erb to ensure it still works after the logic code was refactored&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Login as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Your Scores&amp;quot;&lt;br /&gt;
# On the &amp;quot;Score for OSS Project&amp;quot; screen, ensure that the scores are loaded correctly.&lt;br /&gt;
# Averages should be 99.2, 90, and 60 for Submitted Work, Reviewing, and Author Feedback, respectively.&lt;br /&gt;
# The Final Score should be 99.2%.&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
| [Insert test execution steps using existing login]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107129</id>
		<title>E1728. Remove useless partials from grades view and move view logic to grades helper.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=E1728._Remove_useless_partials_from_grades_view_and_move_view_logic_to_grades_helper.rb&amp;diff=107129"/>
		<updated>2017-03-22T22:55:51Z</updated>

		<summary type="html">&lt;p&gt;Pdscott2: /* Test Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
This wiki provides details on the refactoring tasks that were undertaken as part of the continuous improvement of the Expertiza project.&lt;br /&gt;
===Background===&lt;br /&gt;
[[Expertiza_documentation|Expertiza]] is a web application where students can submit and peer-review learning objects (articles, code, web sites, etc.). The Expertiza project is supported by the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
The application provides a complete system through which students and instructors collaborate on the learning objects as well as submit, review and grade assignments for the courses.&lt;br /&gt;
&lt;br /&gt;
===Motivation===&lt;br /&gt;
By participating in the overall refactoring effort as part of the continuous improvement of Expertiza, students get an opportunity to work on an open-source software project. This gives them exposure to the technologies used in the project, as well as much-needed experience in collaborating with peers as part of the software development process.&lt;br /&gt;
&lt;br /&gt;
===Tasks===&lt;br /&gt;
The tasks involved as part of this refactoring effort were geared towards cleaning up the grade view logic.&lt;br /&gt;
&lt;br /&gt;
====Initial Task List====&lt;br /&gt;
The initial set of tasks were:&lt;br /&gt;
*  For files: _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
* For files: _scores_header.html.erb&lt;br /&gt;
# Move logical code (such as L43-96) to helper file and assign self-explanatory method name&lt;br /&gt;
* For files: _participant.html.erb, view_team.html.erb&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to helper file and assign self-explanatory method name&lt;br /&gt;
# Such as L8-22 in _participant.html.erb&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb&lt;br /&gt;
&lt;br /&gt;
====Revised Task List====&lt;br /&gt;
While working with the files _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, and _scores_header.html.erb, the team discovered that these files were no longer used anywhere in the application, so the overall tasks were updated to the following:&lt;br /&gt;
* Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.&lt;br /&gt;
* For files _participant.html.erb and view_team.html.erb, move logic to grades_helper.rb:&lt;br /&gt;
# Move javascript code to assets&lt;br /&gt;
# Move logical code to the helper file and assign self-explanatory method names (such as L8-22 in _participant.html.erb)&lt;br /&gt;
* Create test file named grades_helper_spec.rb in spec/helpers&lt;br /&gt;
* Write test cases for all methods in grades_helper.rb by using factories&lt;br /&gt;
&lt;br /&gt;
==Refactoring Tasks==&lt;br /&gt;
===Remove useless partials from grades view, such as _scores_author_feedback.html.erb, _scores_metareview.html.erb, _scores_submitted_work.html.erb, _scores_header.html.erb, etc.===&lt;br /&gt;
The named files were removed from the project and the deletions were committed to the repository.&lt;br /&gt;
&lt;br /&gt;
===Move logic from _participant.html.erb and view_team.html.erb to grades_helper.rb===&lt;br /&gt;
====Move javascript code to assets====&lt;br /&gt;
A new file was created and added to the project: app/assets/javascripts/grades/view_team.js&lt;br /&gt;
&lt;br /&gt;
For the view_team.html.erb file, the JavaScript code was moved to view_team.js:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$=jQuery;&lt;br /&gt;
&lt;br /&gt;
$(function () {&lt;br /&gt;
    $(&amp;quot;[data-toggle='tooltip']&amp;quot;).tooltip();&lt;br /&gt;
    $(&amp;quot;#scoresTable&amp;quot;).tablesorter();&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
var lesser = false;&lt;br /&gt;
// Function to sort the columns based on the total review score&lt;br /&gt;
function col_sort(m) {&lt;br /&gt;
    lesser = !lesser&lt;br /&gt;
    // Swaps two columns of the table&lt;br /&gt;
    jQuery.moveColumn = function (table, from, to) {&lt;br /&gt;
        var rows = jQuery('tr', table);&lt;br /&gt;
&lt;br /&gt;
        var hidden_child_row = table.find('tr.tablesorter-childRow');&lt;br /&gt;
&lt;br /&gt;
        hidden_child_row.each(function () {&lt;br /&gt;
            inner_table = jQuery(this).find('table.tbl_questlist')&lt;br /&gt;
            hidden_table = inner_table.eq(0).find('tr')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
            hidden_table.eq(from - 1).detach().insertBefore(hidden_table.eq(to - 1));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                hidden_table.eq(to - 1).detach().insertAfter((hidden_table.eq(from - 2)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        var cols;&lt;br /&gt;
        rows.each(function () {&lt;br /&gt;
            cols = jQuery(this).children('th, td');&lt;br /&gt;
            cols.eq(from).detach().insertBefore(cols.eq(to));&lt;br /&gt;
            if (from - to &amp;gt; 1) {&lt;br /&gt;
                cols.eq(to).detach().insertAfter((cols.eq(from - 1)));&lt;br /&gt;
&lt;br /&gt;
            }&lt;br /&gt;
        });&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Gets all the table with the class &amp;quot;tbl_heat&amp;quot;&lt;br /&gt;
    var tables = $(&amp;quot;table.tbl_heat&amp;quot;);&lt;br /&gt;
    // Get all the rows with the class accordion-toggle&lt;br /&gt;
    var tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
    // Get the cells from the last row of the table&lt;br /&gt;
    var columns = tbr.eq(tbr.length - 1).find('td');&lt;br /&gt;
    // Init an array to hold the review total&lt;br /&gt;
    var sum_array = [];&lt;br /&gt;
    // iterate through the rows and calculate the total of each review&lt;br /&gt;
    for (var l = 2; l &amp;lt; columns.length - 2; l++) {&lt;br /&gt;
        var total = 0;&lt;br /&gt;
        for (var n = 0; n &amp;lt; tbr.length; n++) {&lt;br /&gt;
            var row_slice = tbr.eq(n).find('td');&lt;br /&gt;
            if (parseInt(row_slice[l].innerHTML) &amp;gt; 0) {&lt;br /&gt;
                total = total + parseInt(row_slice[l].innerHTML)&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        sum_array.push(total)&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // The sorting algorithm&lt;br /&gt;
    for (var i = 3; i &amp;lt; columns.length - 2; i++) {&lt;br /&gt;
        var j = i;&lt;br /&gt;
        while (j &amp;gt; 2 &amp;amp;&amp;amp; compare(sum_array[j - 2], sum_array[j - 3], lesser)) {&lt;br /&gt;
            var tmp&lt;br /&gt;
            tmp = sum_array[j - 3]&lt;br /&gt;
            sum_array[j - 3] = sum_array[j - 2]&lt;br /&gt;
            sum_array[j - 2] = tmp&lt;br /&gt;
            jQuery.moveColumn($(&amp;quot;table.tbl_heat&amp;quot;).eq(m), j, j - 1);&lt;br /&gt;
            // This part is repeated since the table is updated&lt;br /&gt;
            tables = $(&amp;quot;table.tbl_heat&amp;quot;)&lt;br /&gt;
            tbr = tables.eq(m).find('tr.accordion-toggle');&lt;br /&gt;
            columns = tbr.eq(tbr.length - 1).find('td')&lt;br /&gt;
            j = j - 1;&lt;br /&gt;
&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Function to return boolean based on lesser or greater operator&lt;br /&gt;
function compare(a, b, less) {&lt;br /&gt;
    if (less) {&lt;br /&gt;
        return a &amp;lt; b&lt;br /&gt;
    } else {&lt;br /&gt;
        return a &amp;gt; b&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
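&lt;br /&gt;
The nested loop at the end of col_sort is an insertion sort over the per-column review totals; the compare function alternates between ascending and descending order because col_sort toggles the lesser flag on each call. Stripped of the DOM manipulation, the idea can be sketched as follows (sortTotals is a hypothetical name used for illustration and is not part of view_team.js):&lt;br /&gt;

```javascript
// Same comparison switch as in view_team.js.
function compare(a, b, less) {
    if (less) {
        return a < b;
    }
    return a > b;
}

// Insertion sort over an array of review totals. Each swap of adjacent
// totals corresponds, in the real code, to a jQuery.moveColumn call that
// swaps the matching table columns.
function sortTotals(totals, lesser) {
    var sums = totals.slice();
    for (var i = 1; i < sums.length; i += 1) {
        var j = i;
        while (j > 0) {
            if (!compare(sums[j], sums[j - 1], lesser)) {
                break;
            }
            var tmp = sums[j - 1];
            sums[j - 1] = sums[j];
            sums[j] = tmp;
            j -= 1;
        }
    }
    return sums;
}
```

For example, sortTotals([60, 99, 90], true) returns [60, 90, 99]; passing false reverses the order, matching what happens on a second click of the sort control.&lt;br /&gt;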
====Move logical code to helper file and assign self-explanatory method names (such as L8-22 in _participant.html.erb)====&lt;br /&gt;
The following lines of code were removed from _participant.html.erb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% &lt;br /&gt;
participant = pscore[:participant]&lt;br /&gt;
    if pscore[:review]&lt;br /&gt;
@rscore_review=Rscore.new(pscore,:review)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:metareview]&lt;br /&gt;
@rscore_metareview=Rscore.new(pscore,:metareview)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:feedback]&lt;br /&gt;
@rscore_feedback=Rscore.new(pscore,:feedback)&lt;br /&gt;
    end&lt;br /&gt;
    if pscore[:teammate]&lt;br /&gt;
@rscore_teammate=Rscore.new(pscore,:teammate)&lt;br /&gt;
    end&lt;br /&gt;
%&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and converted to helper methods in the GradesHelper module, defined in app/helpers/grades_helper.rb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def participant&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_review&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_review = Rscore.new(@pscore, :review) if @pscore[:review]&lt;br /&gt;
    @rscore_review&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_metareview&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_metareview = Rscore.new(@pscore, :metareview) if @pscore[:metareview]&lt;br /&gt;
    @rscore_metareview&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_feedback&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_feedback = Rscore.new(@pscore, :feedback) if @pscore[:feedback]&lt;br /&gt;
    @rscore_feedback&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def rscore_teammate&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @rscore_teammate = Rscore.new(@pscore, :teammate) if @pscore[:teammate]&lt;br /&gt;
    @rscore_teammate&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_total_score&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @pscore = @participant.scores(@questions)&lt;br /&gt;
    @total_score = @participant.grade ? @participant.grade : @pscore[:total_score]&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def p_title&lt;br /&gt;
    @participant = Participant.find(params[:id])&lt;br /&gt;
    @title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; if @participant.grade&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
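&lt;br /&gt;
Each of these helper methods repeats the Participant.find and scores lookups. A possible further cleanup, sketched below, would be to memoize the shared lookups so each is performed only once per request. The Participant class here is a stand-in for the real ActiveRecord model, and GradesHelperSketch is an illustrative name, not project code:&lt;br /&gt;

```ruby
# Stand-in for the real Participant model; it only counts how many
# lookups occur, to show the effect of memoization.
class Participant
  def self.find(_id)
    @finds = (@finds || 0) + 1
    new
  end

  def self.find_count
    @finds || 0
  end

  def scores(_questions)
    { review: true, total_score: 75 }
  end

  def grade
    nil
  end
end

# Sketch of a memoized version of the helpers above.
module GradesHelperSketch
  def participant
    @participant ||= Participant.find(params[:id])
  end

  def pscore
    @pscore ||= participant.scores(@questions)
  end

  def p_total_score
    participant.grade || pscore[:total_score]
  end
end
```

With this pattern, repeated calls to participant while rendering one view reuse the first lookup instead of issuing a new query each time.&lt;br /&gt;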
&lt;br /&gt;
The following lines of code in the _participant.html.erb file were also updated to use the helper methods defined above.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Original&lt;br /&gt;
! Replacement&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg): @rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg): rscore_review.my_avg %&amp;gt;%&amp;lt;BR/&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= @rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_min): @rscore_review.my_min %&amp;gt;% - &amp;lt;%= @rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_review.my_max): @rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;%= rscore_review.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_min): rscore_review.my_min %&amp;gt;% - &amp;lt;%= rscore_review.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_review.my_max): rscore_review.my_max %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_metareview &amp;amp;&amp;amp; @rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_metareview.my_avg):@rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_metareview &amp;amp;&amp;amp; rscore_metareview.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_metareview.my_avg):rscore_metareview.my_avg %&amp;gt;%&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_min):@rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= @rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,@rscore_metareview.my_max):@rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_metareview.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_min):rscore_metareview.my_min %&amp;gt;% - &amp;lt;%= rscore_metareview.my_max.is_a?(Float)?sprintf(&amp;quot;%.0f&amp;quot;,rscore_metareview.my_max):rscore_metareview.my_max %&amp;gt;%&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_feedback.nil? &amp;amp;&amp;amp; @rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,@rscore_feedback.my_avg): @rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_feedback.nil? &amp;amp;&amp;amp; rscore_feedback.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_avg.is_a?(Float)? sprintf(&amp;quot;%.2f&amp;quot;,rscore_feedback.my_avg): rscore_feedback.my_avg %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= @rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_min): @rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= @rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_feedback.my_max):@rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;TD ALIGN=&amp;quot;CENTER&amp;quot; VALIGN=&amp;quot;TOP&amp;quot;&amp;gt;&amp;lt;%= rscore_feedback.my_min.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_min): rscore_feedback.my_min %&amp;gt;% - &amp;lt;%= rscore_feedback.my_max.is_a?(Float)? sprintf(&amp;quot;%.0f&amp;quot;,rscore_feedback.my_max):rscore_feedback.my_max %&amp;gt;%&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if @rscore_teammate and @rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;% if rscore_teammate and rscore_teammate.my_avg %&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  @rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_min) : @rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += @rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,@rscore_teammate.my_max) : @rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% range =  rscore_teammate.my_min.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_min) : rscore_teammate.my_min.to_s + '%' + ' - ' %&amp;gt;&lt;br /&gt;
&amp;lt;% range += rscore_teammate.my_max.is_a?(Float) ? sprintf(&amp;quot;%.0f&amp;quot;,rscore_teammate.my_max) : rscore_teammate.my_max.to_s + '%' %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = participant.grade %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = &amp;quot;A score in blue indicates that the value was overwritten by the instructor or teaching assistant.&amp;quot; %&amp;gt;&lt;br /&gt;
 &amp;lt;% else %&amp;gt;&lt;br /&gt;
	&amp;lt;% total_score = pscore[:total_score] %&amp;gt;&lt;br /&gt;
	&amp;lt;% title = nil %&amp;gt;&lt;br /&gt;
&amp;lt;% end %&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div &amp;lt;% if title %&amp;gt;title=&amp;quot;&amp;lt;%=title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&amp;lt;div &amp;lt;% if p_title %&amp;gt;title=&amp;quot;&amp;lt;%=p_title%&amp;gt;&amp;quot; style=&amp;quot;color:#0033FF&amp;quot;&amp;lt;% end %&amp;gt;&amp;gt;&amp;lt;%= p_total_score==(-1)? &amp;quot;N/A&amp;quot;: sprintf(&amp;quot;%.2f&amp;quot;,p_total_score) %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !@rscore_review.nil? &amp;amp;&amp;amp; @rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= @rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,@rscore_review.my_avg):@rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
| &amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;% if !rscore_review.nil? &amp;amp;&amp;amp; rscore_review.my_avg %&amp;gt;&lt;br /&gt;
	&amp;lt;%= rscore_review.my_avg.is_a?(Float)?sprintf(&amp;quot;%.2f&amp;quot;,rscore_review.my_avg):rscore_review.my_avg %&amp;gt;&amp;lt;%= score_postfix %&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Create test file named grades_helper_spec.rb in spec/helpers===&lt;br /&gt;
A new test file was created and added to the project: spec/helpers/grades_helper_spec.rb&lt;br /&gt;
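With the file in place, the new examples can be run in isolation (assuming the rspec-rails setup the project already uses):&lt;br /&gt;

```shell
bundle exec rspec spec/helpers/grades_helper_spec.rb
```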
===Write test cases for all methods in grades_helper.rb by using factories===&lt;br /&gt;
The following test cases were added, covering all methods in grades_helper.rb:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'rails_helper'&lt;br /&gt;
&lt;br /&gt;
describe GradesHelper, :type =&amp;gt; :helper do&lt;br /&gt;
  before(:each) do&lt;br /&gt;
    @assignment = create(:assignment, max_team_size: 1)&lt;br /&gt;
    @deadline_type = create(:deadline_type, id: 5, name: 'metareview')&lt;br /&gt;
    @deadline_right = create(:deadline_right)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_accordion_title' do&lt;br /&gt;
    it 'should render is_first:true if last_topic is nil' do&lt;br /&gt;
      get_accordion_title(nil, 'last question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'last question', is_first: true})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render is_first:false if last_topic is not equal to next_topic' do&lt;br /&gt;
      get_accordion_title('last question', 'next question')&lt;br /&gt;
      expect(response).to render_template(partial: 'response/_accordion', locals: {title: 'next question', is_first: false})&lt;br /&gt;
    end&lt;br /&gt;
    it 'should render nothing if last_topic is equal to next_topic' do&lt;br /&gt;
      get_accordion_title('question', 'question')&lt;br /&gt;
      expect(response).to render_template(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'has_team_and_metareview?' do&lt;br /&gt;
    it 'should correctly identify the assignment from an assignment id' do&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = Assignment.find(params[:id])&lt;br /&gt;
      expect(result).to eq(@assignment)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should correctly identify the assignment from a participant id' do&lt;br /&gt;
      participant = create(:participant, assignment: @assignment)&lt;br /&gt;
      params[:id] = participant.id&lt;br /&gt;
      result = Participant.find(params[:id]).parent_id&lt;br /&gt;
      expect(result).to eq(@assignment.id)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 0 for an assignment without a team or a metareview deadline after a view action' do&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: false, true_num: 0}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment with a team but no metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 2&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: false, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 1 for an assignment without a team but with a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 1&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: false, has_metareview: true, true_num: 1}&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return 2 for an assignment with both a team and a metareview deadline after a view action' do&lt;br /&gt;
      @assignment.max_team_size = 3&lt;br /&gt;
      @assignment.save&lt;br /&gt;
      @assignment_due_date = create(:assignment_due_date, assignment: @assignment, deadline_type: @deadline_type,&lt;br /&gt;
        submission_allowed_id: @deadline_right.id, review_allowed_id: @deadline_right.id,&lt;br /&gt;
        review_of_review_allowed_id: @deadline_right.id, due_at: '2015-12-30 23:30:12')&lt;br /&gt;
&lt;br /&gt;
      params[:action] = 'view'&lt;br /&gt;
      params[:id] = @assignment.id&lt;br /&gt;
      result = has_team_and_metareview?&lt;br /&gt;
      expect(result).to be == {has_team: true, has_metareview: true, true_num: 2}&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'participant' do&lt;br /&gt;
    it 'should return the correct participant' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = participant()&lt;br /&gt;
      expect(result).to eq(new_participant)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_review' do&lt;br /&gt;
    it 'should return a record of type :review if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :review is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_review()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_metareview' do&lt;br /&gt;
    it 'should return a record of type :metareview if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:metareview_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :metareview is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_metareview()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_feedback' do&lt;br /&gt;
    it 'should return a record of type :feedback if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:author_feedback_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :feedback is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_feedback()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'rscore_teammate' do&lt;br /&gt;
    it 'should return a record of type :teammate if available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:teammate_review_questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to_not be_nil&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil if no record of type :teammate is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = rscore_teammate()&lt;br /&gt;
      expect(result).to be_nil&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'p_total_score' do&lt;br /&gt;
    it 'should return the grade if available' do&lt;br /&gt;
      new_participant = create(:participant, grade: 90)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(90)&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return :total_score if no grade is available' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      questionnaire = create(:questionnaire)&lt;br /&gt;
      assignment_questionnaire = create(:assignment_questionnaire, user_id: new_participant.id, questionnaire: questionnaire)&lt;br /&gt;
      @questions = {}&lt;br /&gt;
      @questions[questionnaire.symbol] = questionnaire.questions&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_total_score()&lt;br /&gt;
      expect(result).to eq(0)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  describe 'p_title' do&lt;br /&gt;
    it 'should return a title when the participant has a grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      new_participant.grade = 90&lt;br /&gt;
      new_participant.save&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq('A score in blue indicates that the value was overwritten by the instructor or teaching assistant.')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return nil when the participant has no grade' do&lt;br /&gt;
      new_participant = create(:participant)&lt;br /&gt;
      params[:id] = new_participant.id&lt;br /&gt;
      result = p_title()&lt;br /&gt;
      expect(result).to eq(nil)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_hamer_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(-0.1)&lt;br /&gt;
      expect(result).to eq('c1')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0)&lt;br /&gt;
      expect(result).to eq('c1')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(0.5)&lt;br /&gt;
      expect(result).to eq('c2')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1)&lt;br /&gt;
      expect(result).to eq('c2')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 1.5' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(1.5)&lt;br /&gt;
      expect(result).to eq('c3')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2)&lt;br /&gt;
      expect(result).to eq('c4')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 2.1' do&lt;br /&gt;
      result = get_css_style_for_hamer_reputation(2.1)&lt;br /&gt;
      expect(result).to eq('c5')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  describe 'get_css_style_for_lauw_reputation' do&lt;br /&gt;
    it 'should return correct css for a reputation of -0.1' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(-0.1)&lt;br /&gt;
      expect(result).to eq('c1')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0)&lt;br /&gt;
      expect(result).to eq('c1')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.2' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.2)&lt;br /&gt;
      expect(result).to eq('c2')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.4' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.4)&lt;br /&gt;
      expect(result).to eq('c2')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.6' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.6)&lt;br /&gt;
      expect(result).to eq('c3')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.8' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.8)&lt;br /&gt;
      expect(result).to eq('c4')&lt;br /&gt;
    end&lt;br /&gt;
    it 'should return correct css for a reputation of 0.9' do&lt;br /&gt;
      result = get_css_style_for_lauw_reputation(0.9)&lt;br /&gt;
      expect(result).to eq('c5')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
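The spec's last two describe blocks pin down how the reputation helpers bucket a numeric reputation score into one of five CSS classes (c1 through c5). As a minimal sketch, the threshold logic below is inferred solely from the fourteen expectations in the spec above; the actual helpers in Expertiza may draw the boundaries differently, so treat every cutoff as an assumption.

```ruby
# Sketch of the reputation-styling helpers exercised by the spec above.
# NOTE: the threshold values are assumptions inferred only from the sampled
# expectations; the real Expertiza helpers may place the boundaries elsewhere.

def get_css_style_for_hamer_reputation(reputation)
  # Hamer reputations cluster around 1.0 and can exceed 2.0.
  if reputation <= 0    then 'c1'
  elsif reputation <= 1 then 'c2'
  elsif reputation < 2  then 'c3'
  elsif reputation == 2 then 'c4'
  else                       'c5'
  end
end

def get_css_style_for_lauw_reputation(reputation)
  # Lauw reputations fall roughly in 0..1, so the bands are narrower.
  if reputation <= 0      then 'c1'
  elsif reputation <= 0.4 then 'c2'
  elsif reputation <= 0.6 then 'c3'
  elsif reputation <= 0.8 then 'c4'
  else                         'c5'
  end
end
```

These definitions reproduce exactly the sampled expectations; behavior between sampled points (for example, a Hamer reputation of 1.9) is unspecified by the spec and is guesswork here.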
&lt;br /&gt;
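The p_title examples earlier in the spec imply a simple rule: when the participant has an instructor-assigned grade, the helper returns the blue-score explanation string, otherwise nil. The sketch below captures that rule; note that the real helper takes no arguments and looks the participant up via params[:id], so passing the participant explicitly here is a simplification for a self-contained example.

```ruby
# Simplified stand-in for the Participant model; the real helper fetches the
# record using params[:id] rather than receiving it as an argument.
Participant = Struct.new(:grade)

# Returns the explanatory tooltip shown when an instructor or TA has
# overridden a score, or nil when no override grade exists.
def p_title(participant)
  return nil if participant.grade.nil?
  'A score in blue indicates that the value was overwritten by the instructor or teaching assistant.'
end
```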
==Test Plans==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 1&lt;br /&gt;
|-&lt;br /&gt;
! Test URL&lt;br /&gt;
| http://csc517.eastus.cloudapp.azure.com:8080&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test whether the JavaScript changes to view_team.html.erb work correctly&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
| &lt;br /&gt;
Make sure the student account used for testing has the following:&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Log in to the site as student2064&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, verify that the &amp;quot;Sort by total review score&amp;quot; button works and does not throw any JavaScript errors&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Log in as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, verify that the &amp;quot;Sort by total review score&amp;quot; button works and does not throw any JavaScript errors&lt;br /&gt;
|-&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Test Case 2&lt;br /&gt;
|-&lt;br /&gt;
! Test Type&lt;br /&gt;
| Functional&lt;br /&gt;
|-&lt;br /&gt;
! Scenario&lt;br /&gt;
| Test whether the view still renders correctly after logic was removed from _participant.html.erb&lt;br /&gt;
|-&lt;br /&gt;
! Pre-Conditions&lt;br /&gt;
|&lt;br /&gt;
# Has one course assigned&lt;br /&gt;
# Has one assignment as part of that course&lt;br /&gt;
# Has more than one review for that assignment&lt;br /&gt;
|-&lt;br /&gt;
! Description&lt;br /&gt;
| &lt;br /&gt;
# Log in to the site as a student&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select an assignment from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for Expertiza&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, verify that each participant's scores render correctly, with no errors from the simplified _participant.html.erb partial&lt;br /&gt;
|-&lt;br /&gt;
! Test Execution using existing login&lt;br /&gt;
|&lt;br /&gt;
# Log in as &amp;quot;student2064&amp;quot;&lt;br /&gt;
# Click on &amp;quot;Assignments&amp;quot; on the top menu&lt;br /&gt;
# Select assignment &amp;quot;OSS project&amp;quot; from the list&lt;br /&gt;
# On the &amp;quot;Submit or Review work for OSS project&amp;quot; screen, click on &amp;quot;Alternate View&amp;quot; on the &amp;quot;Your Scores&amp;quot; line item&lt;br /&gt;
# On the &amp;quot;Summary Report for assignment&amp;quot; screen, verify that each participant's scores render correctly, with no errors from the simplified _participant.html.erb partial&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pdscott2</name></author>
	</entry>
</feed>