CSC/ECE 517 Spring 2025 - E2528 Testing for Survey Deployment

πŸ“˜ Project Overview

Problem Statement

The SurveyDeployment feature in Expertiza is responsible for deploying surveys that collect feedback from students across different scopes: assignments, courses, and globally. Each survey is tied to a questionnaire and operates within a scheduled time window (start and end dates).

When we received the project, there were no existing RSpec tests for the SurveyDeploymentHelper module.

Project Accomplishments

Developed comprehensive RSpec test cases for:

  • SurveyDeployment model
  • SurveyDeploymentHelper module
    • Created a new spec file for the helper module from scratch
    • Achieved 100% test coverage
    • Structured tests around SOLID principles
    • Ensured coverage of edge cases and large dataset performance

Files Involved

| File Path | Description |
|---|---|
| `app/models/survey_deployment.rb` | Model for managing the survey deployment lifecycle |
| `app/helpers/survey_deployment_helper.rb` | Helper methods for aggregating and filtering survey-related data |
| `spec/models/survey_deployment_spec.rb` | To be expanded for model validations and polymorphic behavior |
| `spec/helpers/survey_deployment_helper_spec.rb` | New file with complete helper test suite |

Mentor

  • Mitesh Anil Agarwal

Team Members

  • Vibhav Deo
  • Marmik Patel
  • Harsh Vora

πŸ“‚ Class and Method Overview

SurveyDeployment Model

This model ensures that surveys are deployed with valid time intervals and are associated with a questionnaire. It also acts as a parent class for specific survey deployment types such as AssignmentSurveyDeployment and CourseSurveyDeployment.

  • Key Method: valid_start_end_time?
    • Checks that both start_date and end_date are present
    • Ensures end_date is after or equal to start_date
    • Requires that end_date is in the future
  • Polymorphic Methods (to be tested):
    • response_maps: Returns response maps associated with a specific deployment (delegated to subclasses)
    • parent_name: Abstract placeholder for future UI reference

SurveyDeploymentHelper Module

This module contains utility methods that support survey response aggregation and question filtering.

  • get_responses_for_question_in_a_survey_deployment(q_id, sd_id)
    • Collects responses to a question across multiple survey types (Assignment, Course, Global)
  • allowed_question_type?(question)
    • Returns true for allowed types (Criterion, Checkbox)
    • Used to filter questions eligible for statistical reporting

πŸ” Testing Summary

Model: SurveyDeployment

Single Responsibility Principle (SRP)

Each method in the SurveyDeployment model is responsible for a distinct task.

  1. The valid_start_end_time? method encapsulates all validation logic for date consistency.
  2. Abstract methods parent_name and response_maps are defined for implementation in child classes such as AssignmentSurveyDeployment and CourseSurveyDeployment.

Open/Closed Principle (OCP)

The model is open for extension but closed for modification.

  1. Abstract methods allow subclasses to define custom behavior without changing the base class logic.
  2. This promotes maintainability and adherence to polymorphic design.
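
A minimal sketch of this extension pattern is shown below. It assumes, for illustration only, that a subclass looks up its response maps through a reviewed_object_id column; the real Expertiza classes may differ.

```ruby
# Illustrative sketch only; not the actual Expertiza source.
class SurveyDeployment < ApplicationRecord
  # Abstract placeholders, overridden by subclasses.
  def response_maps; end

  def parent_name; end
end

class AssignmentSurveyDeployment < SurveyDeployment
  # The subclass supplies its own lookup without modifying the base class.
  # The reviewed_object_id column name is an assumption for illustration.
  def response_maps
    AssignmentSurveyResponseMap.where(reviewed_object_id: id)
  end

  def parent_name
    'Assignment'
  end
end
```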

Validation Logic

Method: valid_start_end_time?

Validates that:

  1. Both start_date and end_date are present.
  2. end_date is after or equal to start_date.
  3. end_date is in the future.
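
A rough sketch of what this validation might look like follows; the error messages and exact checks are assumptions based on the description above, not the actual Expertiza code.

```ruby
# Hypothetical reconstruction of valid_start_end_time?.
# Presence of each individual date is assumed to be handled by separate validations.
def valid_start_end_time?
  if start_date.nil? && end_date.nil?
    # Base-level error when both dates are missing (Scenario 3 below).
    errors.add(:base, 'Both start date and end date are required')
  elsif start_date.present? && end_date.present? && end_date < start_date
    errors.add(:end_date, 'must be on or after the start date')
  elsif end_date.present? && end_date < Time.zone.now
    errors.add(:end_date, 'must be in the future')
  end
end
```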

Test Scenarios

Scenario 1: Missing start_date

Tests that a SurveyDeployment is invalid if the start_date is not provided. It verifies that the appropriate error message is added to the model.
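
A spec for this scenario might look roughly like the following; the :survey_deployment factory name is an assumption for illustration.

```ruby
it 'is invalid when start_date is missing' do
  # build(...) uses a hypothetical :survey_deployment factory.
  deployment = build(:survey_deployment, start_date: nil, end_date: 2.days.from_now)
  expect(deployment).not_to be_valid
  expect(deployment.errors.full_messages).not_to be_empty
end
```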

Scenario 2: Missing end_date

Ensures the model is invalid when the end_date is missing. Confirms that the validation catches this case and produces a corresponding error.

Scenario 3: Both start_date and end_date are nil

Checks the custom validation logic in valid_start_end_time? by omitting both start_date and end_date. The test confirms that a specific base-level error message is shown indicating both fields are required.

Scenario 4: end_date is before start_date

Validates that the model correctly identifies when end_date is earlier than start_date and adds an appropriate error message.

Scenario 5: end_date is in the past

Confirms that a deployment is invalid if its end_date lies in the past (i.e., before the current time), ensuring the deployment period is always forward-looking.

Scenario 6: Valid start_date and end_date

Tests a case with a valid start_date and a future end_date that follows it. The model should be valid in this scenario.

Scenario 7: start_date and end_date are the same

Ensures the model allows deployments where both dates are the same, which is logically acceptable since the duration would be zero.
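
One possible way to express this zero-length-window case (factory name and attribute values are illustrative):

```ruby
it 'is valid when start_date and end_date are identical future times' do
  same_time = 3.days.from_now
  deployment = build(:survey_deployment, start_date: same_time, end_date: same_time)
  expect(deployment).to be_valid
end
```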

Abstract Method Tests

Scenario 1: Responds to parent_name

Verifies that instances of SurveyDeployment respond to the parent_name method without raising an error, even though it is not implemented in the base class.

Scenario 2: Responds to response_maps

Similar to the above, this confirms that the SurveyDeployment base class defines the response_maps method as abstract and callable.

Scenario 3: AssignmentSurveyDeployment returns associated response maps

Tests that the AssignmentSurveyDeployment subclass implements the response_maps method properly and returns the expected response maps when queried using the instance’s ID.
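
Hedged sketches of these checks are shown below; the factory names and the reviewed_object_id column are assumptions rather than the exact project code.

```ruby
it 'responds to the abstract methods defined on the base class' do
  deployment = build(:survey_deployment)
  expect(deployment).to respond_to(:parent_name)
  expect(deployment).to respond_to(:response_maps)
end

it 'returns the associated response maps for an AssignmentSurveyDeployment' do
  deployment = create(:assignment_survey_deployment)
  map = create(:assignment_survey_response_map, reviewed_object_id: deployment.id)
  expect(deployment.response_maps).to include(map)
end
```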

Helper: SurveyDeploymentHelper

SOLID Principles:

  • SRP (Single Responsibility Principle): Each method in the helper module is focused on one clear task.
  • DIP (Dependency Inversion Principle): All logic depends on external inputs via parameters, not internal hardcoded state, making the methods testable and modular.

Object Instantiation (FactoryBot & RSpec Mocks)

In our tests, we used two main ways to create or simulate objects:

  • create(...) — FactoryBot

FactoryBot allows us to generate real ActiveRecord objects saved to the test database. These are defined in the `spec/factories/` directory.

Example:

```ruby
let(:question) { create(:question) }
```

This creates a real `Question` object using the corresponding factory.

  • double(...) — RSpec Mocks

We use `double` to simulate lightweight stand-in objects for isolated testing. These are not saved in the database.

Example:

```ruby
question = double('Question', type: 'Criterion')
```

This mock object is used to test behavior without needing a real `Question` instance.

| Object Used | Method | Description |
|---|---|---|
| `create(:model)` | FactoryBot | Creates real DB-backed model instances using factories defined in `spec/factories/` |
| `double(...)` | RSpec Mocks | Creates lightweight mock objects for isolated unit tests (not persisted in the database) |

Method: get_responses_for_question_in_a_survey_deployment

Description

This method returns an array of counts representing how many times each score (defined by the `@range_of_scores` variable) was selected in answers to a specific question within a given survey deployment. It does this by:

  • Identifying response maps linked to the survey deployment (supports types: Assignment, Course, Global).
  • Fetching all responses from those maps.
  • Counting how many times each score was recorded for the provided question across all responses.
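
A simplified sketch of this aggregation logic is given below, written against the response map classes listed later in this section. The association and column names (reviewed_object_id, response, scores, answer) are assumptions for illustration, not the exact Expertiza implementation.

```ruby
def get_responses_for_question_in_a_survey_deployment(q_id, sd_id)
  question = Question.find(q_id) # raises ActiveRecord::RecordNotFound for an unknown id
  return [] if @range_of_scores.nil?

  # Response maps of every supported survey type for this deployment.
  maps = [AssignmentSurveyResponseMap, CourseSurveyResponseMap, GlobalSurveyResponseMap]
         .flat_map { |klass| klass.where(reviewed_object_id: sd_id).to_a }

  # All answers to this question across the responses of those maps.
  answers = maps.flat_map { |m| Array(m.response) }
                .flat_map(&:scores)
                .select { |a| a.question_id == question.id }

  # One count per score in the configured range; nil and out-of-range scores drop out.
  @range_of_scores.map { |score| answers.count { |a| a.answer == score } }
end
```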

Test Scenarios

Scenario 1: Counts multiple answers of the same score

Validates that if multiple answers with the same score exist for a question, the count is aggregated correctly.
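
One way this scenario might be set up in a Rails helper spec with RSpec doubles, assuming an implementation along the lines of the sketch above (the stubbed classes and the 1..5 score range are assumptions):

```ruby
it 'counts repeated scores for the same question' do
  helper.instance_variable_set(:@range_of_scores, (1..5).to_a)

  question = double('Question', id: 10)
  answers  = Array.new(3) { double('Answer', question_id: 10, answer: 4) }
  response = double('Response', scores: answers)
  map      = double('AssignmentSurveyResponseMap', response: [response])

  allow(Question).to receive(:find).with(10).and_return(question)
  allow(AssignmentSurveyResponseMap).to receive(:where).and_return([map])
  allow(CourseSurveyResponseMap).to receive(:where).and_return([])
  allow(GlobalSurveyResponseMap).to receive(:where).and_return([])

  result = helper.get_responses_for_question_in_a_survey_deployment(10, 99)
  expect(result).to eq([0, 0, 0, 3, 0]) # three answers with score 4
end
```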

Scenario 2: Returns all zeros if no answers exist

Ensures the method returns a zero count array if there are no answers associated with the question.

Scenario 3: Handles mixed score distribution

Tests the function's ability to correctly assign counts to different scores within the same response.

Scenario 4: Aggregates across multiple response maps

Ensures answers from multiple response maps are all considered in the score aggregation.

Scenario 5: Returns zeros for invalid deployment ID

Verifies that the method gracefully returns all zeros when given a non-existent survey deployment ID.

Scenario 6: Raises error on invalid question ID

Ensures that the method raises an `ActiveRecord::RecordNotFound` error if the question ID does not exist.

Scenario 7: Handles nil @range_of_scores

Tests the behavior when `@range_of_scores` is not defined. It should return an empty array without errors.

Scenario 8: Supports custom score range (e.g., 1 to 3)

Validates correct operation when a non-default range of scores is provided.

Scenario 9: Ignores answers with scores outside the range

Ensures that scores not included in `@range_of_scores` are excluded from the count.

Scenario 10: Ignores answers with nil scores

Confirms that answers with no score (nil) do not affect the result array.

Scenario 11: Returns zeros when response maps have no responses

Tests behavior when response maps exist but have no associated responses.

Scenario 12: Ignores answers from other questions

Ensures that only answers corresponding to the specific question ID are counted.

Scenario 13: Handles high-volume datasets

Simulates a larger dataset of 50 responses and ensures the results remain accurate.

Scenario 14: Works across supported response map types

Ensures compatibility with AssignmentSurveyResponseMap, CourseSurveyResponseMap, and GlobalSurveyResponseMap.

Method: allowed_question_type?

Description

This method determines whether a given question type is eligible for statistical analysis in the UI. According to the application logic, only questions of type `'Criterion'` and `'Checkbox'` are valid for such calculations.

It returns:

  • `true` for questions of type `'Criterion'` or `'Checkbox'`
  • `false` for any other type, including unknown, `nil`, or incorrectly formatted inputs
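
A minimal sketch consistent with the behavior described above; the real helper may be written differently, but any implementation must satisfy the scenarios that follow.

```ruby
def allowed_question_type?(question)
  # String comparison keeps the check case-sensitive and rejects symbols, nil, and empty strings.
  %w[Criterion Checkbox].include?(question.type)
end
```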

Test Scenarios

Scenario 1: Accepts "Criterion" type

Confirms that the method returns `true` when the question type is `'Criterion'`, which is allowed for statistical reporting.
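
A possible spec for this scenario, using an RSpec double as introduced in the Object Instantiation section above:

```ruby
it 'returns true for a Criterion question' do
  question = double('Question', type: 'Criterion')
  expect(helper.allowed_question_type?(question)).to be true
end
```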

Scenario 2: Accepts "Checkbox" type

Ensures that `'Checkbox'` is recognized as a valid statistical question type and returns `true`.

Scenario 3: Rejects "TextArea" type

Validates that unsupported question types such as `'TextArea'` are rejected and return `false`.

Scenario 4: Rejects nil as type

Checks that if a question has a `nil` type, the method returns `false` without error.

Scenario 5: Rejects empty string as type

Ensures empty strings (`''`) are not mistakenly treated as valid question types.

Scenario 6: Rejects unknown types like "Dropdown"

Tests that the method returns `false` for unknown types not explicitly whitelisted (e.g., `'Dropdown'`).

Scenario 7: Accepts real Criterion question instance

Confirms that the method returns `true` when passed a real ActiveRecord question object of type `'Criterion'`.

Scenario 8: Enforces case sensitivity

Checks that lowercase valid types (e.g., `'criterion'`) are rejected, ensuring type matching is case-sensitive.

Scenario 9: Rejects non-string type like symbol

Verifies that a non-string type such as `:Criterion` is rejected, maintaining strict type checking.

Testing Details

Using RSpec

We implemented tests in survey_deployment_spec.rb and survey_deployment_helper_spec.rb.

How to See Test Coverage

  1. Run RSpec for spec/models/survey_deployment_spec.rb, then view the coverage report (for example: sudo su, yum install lynx, then lynx ./coverage/index.html).
  2. Run RSpec for spec/helpers/survey_deployment_helper_spec.rb, then view its coverage report in the same way.

Results

Both files are 100% covered, and additional tests were added to provide a solid groundwork for future work.

survey_deployment_spec.rb

Coverage: 100%
Hits/line: 1.9

survey_deployment_helper_spec.rb

Coverage: 100%
Hits/line: 47.5

πŸ“ˆ Final Outcomes

  • 100% test coverage for both files
  • Survey logic is now fully tested for correctness, completeness, and edge-case handling
  • Design principles such as SRP, OCP, and DIP were followed for future-proofing
  • Tests are modular, clean, and isolated with repeatable results

πŸ”— References