CSC/ECE 517 Fall 2017/E1792 OSS Visualizations for instructors

This wiki page is for the description of changes made under E1792 OSS Visualizations for instructors.


__TOC__


== About Expertiza ==
[http://expertiza.ncsu.edu/ Expertiza] is an open source project based on [http://rubyonrails.org/ Ruby on Rails] framework. Expertiza allows the instructor to create new assignments and customize new or existing assignments. It also allows the instructor to create a list of topics the students can sign up for. Students can form teams in Expertiza to work on various projects and assignments. Students can also peer review other students' submissions. Instructors can also use Expertiza for interactive views of class performances and reviews.


== Introduction ==
This project aims to improve the visualizations of certain pages related to reviews and feedback in Expertiza in the instructor's view. This would aid the instructors to judge outcomes of reviews and class performance in assignments via graphs and tables, which in turn would ease the process of grading the reviews.


== Problem Statement ==
:* <b>Issue 1</b>: The scale is blue to green, which does not make sense, and the colors change randomly each time the page loads. It would be better to scale from red to green.
:* <b>Issue 2</b>: Two adjacent bars represent the responses in round 1 through round k. This makes sense only if the rubrics in all review rounds are the same. If the instructor uses the vary-rubric-by-round mechanism, this visualization does not make sense.
:* <b>Issue 3</b>: The table on the View Scores page is presorted by teams, but you can now also sort alphabetically. The cell view looks far too long and should be divided into partials.
:* <b>Issue 4</b>: An interactive visualization or table that shows how a class performed on selected rubric criteria would be immensely helpful. It would show the instructors what they need to focus more attention on.


== UML Diagram ==
<br>[[File:uml_g_b_1.jpg]]<br><br>
== Solutions Implemented ==
===Issue 1===
'''Description:''' The scale is blue to green, which does not make sense, and the colors change randomly each time the page loads. It would be better to scale from red to green.


'''Approach:''' The color coding ran from blue to green and was also assigned randomly, so one color in one graph could represent a different score in the next graph. To fix this, we redefined the highchart_colors data structure to contain the colors already defined in the grades.scss file, as suggested by the Professor. This way, no matter what scale is used, the color coding is always consistent. The color legend now also correctly explains the color code in use.


 # Colors taken from grades.scss, ordered from green (highest score) to red (lowest score)
 highchart_colors = ["#2DE636", "#BCED91", "#FFEC8B", "#FD992D", "#ff8080", "#FD422D"]


<br><i>Related screenshot</i><br>[[File:graph_g_b_1.jpg]]<br><br>


===Issue 2===
'''Description:''' Two adjacent bars represent the responses in round 1 through round k. This makes sense only if the rubrics in all review rounds are the same. If the instructor uses the vary-rubric-by-round mechanism, this visualization does not make sense.


'''Approach:''' The problem was that the data was represented rubric-wise; that is, if there are 5 rubrics, there would be 5 graphs. A graph for a particular rubric showed the performance of a team on that rubric across all rounds of submissions. This is fine when the rubrics are the same for all rounds, but since rubric 1 of round 1 may not be the same as rubric 1 of round 2, this representation can be misleading. In our approach, the graphs are rendered submission-wise: the performance of a team on all rubrics in a particular round is shown in one graph pertaining to that round of submission.


We create a hash called chart_data to hold the information for building the Highcharts stacked charts. Each key is a review question, and each value is itself a hash that maps a score to an array of counts, one entry per round of reviews. Earlier, the values instead grouped scores by rubric, which is what forced the rubric-wise graphs described above. Building the data per round means each graph shows one round of submission, so this addresses the vary-rubric-by-round problem: different rounds can now have different rubrics, and even a different number of rubric items, without affecting the visualizations for instructors.


 def get_highchart_data(team_data, assignment, min, max, number_of_review_questions)
   # chart_data holds the general information for creating the Highcharts
   # stacked charts: one entry per review question, mapping each possible
   # score to an array of per-round counts (initialized to zero)
   chart_data = {}
   for i in 1..number_of_review_questions
     chart_data[i] = Hash[(min..max).map {|score| [score, Array.new(assignment.rounds_of_reviews, 0)] }]
   end
   # Walk every review response and count how often each score was given,
   # per question (j) and per round (vm.round)
   team_data.each do |team|
     team.each do |vm|
       next if vm.round.nil?
       j = 1
       vm.list_of_rows.each do |row|
         row.score_row.each do |score|
           chart_data[j][score.score_value][vm.round - 1] += 1 unless score.score_value.nil?
         end
         j += 1
       end
     end
   end
   chart_data
 end
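For intuition, here is the shape this method returns for a hypothetical assignment with two review questions, scores 0 to 5, and two rounds of reviews (all counts below are made up for illustration):

 # Hypothetical chart_data (illustration only):
 # chart_data[question][score] => [count in round 1, count in round 2]
 chart_data = {
   1 => { 0 => [0, 0], 1 => [0, 0], 2 => [0, 0], 3 => [1, 0], 4 => [3, 2], 5 => [4, 6] },
   2 => { 0 => [0, 0], 1 => [0, 0], 2 => [0, 0], 3 => [2, 0], 4 => [4, 3], 5 => [2, 5] }
 }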


vm is an object of the model 'VmQuestionResponse' (already defined earlier). It is initialized with the following information:


 @list_of_rows = []
 @list_of_reviewers = []
 @list_of_reviews = []
 @list_of_team_participants = []
 @max_score = questionnaire.max_question_score
 @questionnaire_type = questionnaire.type
 @questionnaire_display_type = questionnaire.display_type
 @rounds = rounds
 @round = round
 @name  = questionnaire.name


Next, we build the 'series' array that the Highcharts object uses to render the graph. This array holds the actual data for the chart along with each legend name. By introducing the count_rounds counter, we compress the Highcharts legends to a standard form, showing only what is required. We also redefined the highchart_colors data structure to contain the colors already defined in the grades.scss file, as suggested by the Professor.


 def generate_highchart(chart_data, min, max, number_of_review_questions, assignment, team_data)
   # Build the 'series' array which is used directly in the Highcharts object
   # in the _team_charts view file; it holds the actual chart data along with
   # the legend names. Note: chart_data is keyed by review question (see
   # get_highchart_data above), so each question gets its own stack.
   highchart_series_data = []
   count_rounds = 0
   chart_data.each do |round, scores|
     # Push the scores in descending order so the highest score is listed first
     scores.to_a.reverse.to_h.each do |score, rubric_distribution|
       if count_rounds == 0
         highchart_series_data.push(name: "Score #{score}", data: rubric_distribution, stack: "S#{round}")
       else
         # linkedTo: "previous" ties this series to the matching one in the
         # first stack, compressing the legend to show each score only once
         highchart_series_data.push(linkedTo: "previous", name: "Rubric #{round} - Score #{score}", data: rubric_distribution, stack: "S#{round}")
       end
     end
     count_rounds += 1
   end
   # Dynamically create the categories used in the Highcharts object,
   # one per round of submission
   highchart_categories = []
   for i in 1..assignment.rounds_of_reviews
     highchart_categories.push("Submission #{i}")
   end
   # Colors (from grades.scss) used for the stacked charts, ordered from green
   # (highest score) to red (lowest); assumes scores always run from 0 to 5
   highchart_colors = ["#2DE636", "#BCED91", "#FFEC8B", "#FD992D", "#ff8080", "#FD422D"]
   [highchart_series_data, highchart_categories, highchart_colors]
 end
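Continuing the hypothetical example above, the series array produced for that chart_data would look roughly like this (illustration only):

 # Hypothetical highchart_series_data (illustration only); each data array has
 # one entry per submission round, matching the highchart_categories
 [
   { name: "Score 5", data: [4, 6], stack: "S1" },
   { name: "Score 4", data: [3, 2], stack: "S1" },
   # ...remaining scores for question 1...
   { linkedTo: "previous", name: "Rubric 2 - Score 5", data: [2, 5], stack: "S2" }
   # ...and so on for the remaining scores of question 2
 ]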



<i>Related screenshot</i><br>[[File:graph_g_b_2.jpg]]<br><br>


===Issue 3===
'''Description:''' The table is presorted by teams, but you can now also sort alphabetically. The cell view looks far too long and should be divided into partials.
'''Approach:''' We can sort the select query by the appropriate column alphabetically, or sort the table dynamically according to the user's criteria using a dynamic table format. The long-view issue can be solved by using paging.
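As a minimal sketch of these two techniques (the model and column names below are illustrative, and paging assumes the will_paginate gem, not necessarily what Expertiza uses):

 # Illustrative only: sort the query alphabetically by a column, then page it
 teams = AssignmentTeam.where(parent_id: assignment.id)
                       .order(:name)                                # alphabetical sort
                       .paginate(page: params[:page], per_page: 25) # keeps the cell view short
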
<i>Related screenshot</i><br>[[File:graph_g_b_4.jpg]]<br><br>


----
===Issue 4===
'''Description:''' An interactive visualization or table that shows how a class performed on selected rubric criteria would be immensely helpful. It would show the instructors what they need to focus more attention on.


'''Approach:''' The issue is about showing how the entire class performed on the five rubric criteria. Scores are color-coded for each rubric in each submission. When hovering over the graph, for each score the instructor can see the number and percentage of students who scored that particular point on that rubric in that submission.


The code used to fix Issue 4 is the same as that used to fix Issue 2; the graphs we render using Highcharts address this issue as well.
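As a rough illustration of where the hover numbers could come from, the per-score counts in chart_data can be turned into counts and percentages server-side (the helper below is our own sketch, not Expertiza code):

 # Illustrative only: count and percentage of responses per score for one
 # rubric question and round, derived from the chart_data structure above
 def score_distribution(chart_data, question, round_index)
   counts = chart_data[question].transform_values { |per_round| per_round[round_index] }
   total = counts.values.sum
   counts.map do |score, count|
     percentage = total.zero? ? 0.0 : (100.0 * count / total).round(1)
     [score, { count: count, percentage: percentage }]
   end.to_h
 end

 score_distribution(chart_data, 1, 0)
 #=> { 0 => { count: 0, percentage: 0.0 }, ..., 5 => { count: 4, percentage: 50.0 } }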


<i>Related screenshot</i><br>[[File:graph_g_b_3.jpg]]<br><br>


== Test Plan ==
===Issue 1: Check if the color coding runs from red to green for a range of scores===
*1) Log in as an instructor.
*2) Create a dummy assignment and some dummy reviews for it. Log out.
*3) Log in as a student. Attempt the assignment. Log out.
*4) Log in as another student and repeat step 3.
*5) Log in as either student and attempt the review. Log out.
*6) Log in as the instructor. Go to the Review Grades page and check the table. If the color code ranges from red (for the lowest score) to green (for the highest score), the test passed.


===Issue 2: Check if a different graph is shown for each round of review submission===
*1) Log in as an instructor.
*2) Create an assignment with 2 rounds of review, selecting a different rubric for each round.
*3) Log in as a student ("A") and submit the assignment. Repeat this for another student ("B").
*4) Log in as student A and perform a review. Do the same for student B.
*5) Resubmit the assignment as students A and B.
*6) Resubmit the reviews as students A and B. This time the rubric will be different from the previous round.
*7) Log in as the instructor and view the visualization of the reviews. You should see a different graph for each submission.
 
===Issue 3: Check if the table can be sorted alphabetically by the appropriate column===
*1) Log in as an instructor.
*2) Create a dummy assignment with some teams. Log out.
*3) Log in as a student, attempt the assignment, and log out.
*4) Repeat step 3 for all dummy teams.
*5) Log in as the instructor.
*6) Go to the View Scores page and check the grade table.
*7) Click on a column header and check whether the data in it is sorted alphabetically. If yes, the test passed.
 
===Issue 4: Check if the graphs show how a class performed on selected rubric criteria===
*1) Log in as the instructor.
*2) Click on the button to compute the graphs.
*3) Compare the bar graphs with the individual scores of students in each assignment.




== References ==
*[https://github.com/expertiza/expertiza Expertiza repo]
*[http://expertiza.ncsu.edu/ The live Expertiza website]
*[https://api.highcharts.com/highcharts Highcharts API]
