CSC/ECE 517 Spring 2019 - Project M1901 Implement missing WebAudio automation support

From Expertiza_Wiki

Introduction

Servo

Servo is an experimental browser engine designed to take advantage of the memory safety and concurrency features of the Rust programming language. The project was initiated by Mozilla Research, with effort from Samsung to port it to Android and ARM processors.

More information about Servo is available here.

Rust

Rust is a systems programming language that focuses on memory safety and concurrency. It offers performance comparable to C++ while guaranteeing memory safety. Rust is a brainchild of Mozilla, and a minimal program is as simple as this:

fn main() {
    println!("Hello World");
}

More information about Rust can be found here, along with why developers love Rust: it has repeatedly been voted the most loved language in Stack Overflow's developer survey.

Web Audio API

The Web Audio API handles audio operations inside an audio context and has been designed to allow modular routing. Basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. Several sources, with different types of channel layout, are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.

Audio nodes are linked into chains and simple webs by their inputs and outputs. They typically start with one or more sources. Sources provide arrays of sound intensities (samples) at very small timeslices, often tens of thousands of them per second. These could be either computed mathematically (such as OscillatorNode), or they can be recordings from sound/video files (like AudioBufferSourceNode and MediaElementAudioSourceNode) and audio streams (MediaStreamAudioSourceNode). In fact, sound files are just recordings of sound intensities themselves, which come in from microphones or electric instruments, and get mixed down into a single, complicated wave.

Outputs of these nodes could be linked to inputs of others, which mix or modify these streams of sound samples into different streams. A common modification is multiplying the samples by a value to make them louder or quieter (as is the case with GainNode). Once the sound has been sufficiently processed for the intended effect, it can be linked to the input of a destination (AudioContext.destination), which sends the sound to the speakers or headphones. This last connection is only necessary if the user is supposed to hear the audio.
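The gain modification described above is conceptually just a per-sample multiply. A minimal stand-alone sketch in plain Rust (illustrative only, not servo-media code):

```rust
// Multiply every sample by a gain factor, which is what a GainNode
// conceptually does to the stream of samples passing through it.
fn apply_gain(samples: &mut [f32], gain: f32) {
    for sample in samples.iter_mut() {
        *sample *= gain;
    }
}
```

For instance, `apply_gain(&mut samples, 0.5)` halves every sample's amplitude, making the signal quieter.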

A simple, typical workflow for web audio would look something like this:

1. Create audio context
2. Inside the context, create sources — such as <audio>, oscillator, stream
3. Create effects nodes, such as reverb, biquad filter, panner, compressor
4. Choose final destination of audio, for example your system speakers
5. Connect the sources up to the effects, and the effects to the destination.


Problem Statement: Web Audio Automation

AudioParam is used to control an AudioNode's behavior, for example volume (the gain parameter of a GainNode). A parameter's value can either be set immediately or scheduled to change at a precise time. Scheduling changes relative to AudioContext.currentTime enables effects such as volume fades and filter sweeps. This work aims to implement setValueCurveAtTime().

setValueCurveAtTime() is one such AudioParam method; it takes an array of values and schedules a change. In Web Audio the array describes a curve: intermediate values are obtained by linear interpolation between the floating-point values in the array, over the duration d starting from the start time s:

v(t) = values[N * (t - s) / d], where N is the length of the values array. After the end of the curve time interval (t >= s + d), the value remains constant at the final curve value. This persists until the next automation event.
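A direct Rust sketch of this formula (illustrative only, not the servo-media implementation), including the clamp before s and the hold after s + d:

```rust
// v(t) = values[N * (t - s) / d], clamped to the first curve value before
// the start time and held at the final value after the interval ends.
fn value_curve_at(values: &[f32], s: f32, d: f32, t: f32) -> f32 {
    let n = values.len();
    if t <= s {
        return values[0];
    }
    if t >= s + d {
        return values[n - 1]; // hold the final curve value
    }
    let idx = (n as f32 * (t - s) / d) as usize;
    values[idx.min(n - 1)]
}
```

For example, with the curve [0.0, 1.0, 2.0, 3.0] scheduled over s = 0, d = 4, the value at t = 1 is 1.0, and any t past the interval yields 3.0.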

One application of setValueCurveAtTime() is to create a tremolo effect. When neither a linear nor an exponential curve satisfies the need, the user can pass a custom array of values to setValueCurveAtTime(). This is a much preferred approach compared to issuing multiple calls to setValueAtTime().
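As an illustration, a tremolo-style gain curve can be generated once and handed to setValueCurveAtTime() in a single call. This helper is hypothetical, not part of servo-media:

```rust
use std::f32::consts::PI;

// Build a sine-shaped gain curve oscillating between 0.0 and 1.0; passing
// this one array to setValueCurveAtTime() replaces many setValueAtTime()
// calls at individual timestamps.
fn tremolo_curve(len: usize, cycles: f32) -> Vec<f32> {
    (0..len)
        .map(|i| {
            let phase = 2.0 * PI * cycles * i as f32 / len as f32;
            0.5 * (1.0 - phase.cos())
        })
        .collect()
}
```

A call like `tremolo_curve(44100, 4.0)` yields one second of samples (at 44.1 kHz) whose gain swells and fades four times.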

Build

Servo is built with Cargo, the Rust package manager. We also use Mozilla's Mach tools to orchestrate the build and other tasks.


Normal build from Master Repo

To build Servo in development mode, run the commands below. This is useful for development, but the resulting binary is very slow.

git clone https://github.com/servo/servo
cd servo
./mach build --dev
./mach run tests/html/about-mozilla.html

For benchmarking, performance testing, or real-world use, add the --release flag to create an optimized build:

./mach build --release
./mach run --release tests/html/about-mozilla.html


Mac OS

Follow the instructions posted at this URL to install GStreamer and Rust. After that, run the following commands to compile on macOS.

1. brew install gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav gst-rtsp-server gst-editing-services

2. export PKG_CONFIG_PATH=/usr/local/Cellar/libffi/3.2.1/lib/pkgconfig/

3. rustup override set nightly

After this, run the following command to compile servo/media:

 
cargo build 

After compiling, you will be able to test our example, which is placed in the examples folder.


Windows and Linux

Head over here to install the Rust toolchain. If you need any additional details, we suggest reading the Rust README. Some servo-media-specific build issues are listed under Known Issues and Fixes below.

We suggest Ubuntu 18.04 LTS with an updated gstreamer package.

Servo Media

Requirements: in order to build this crate you need to install GStreamer, which can be found here [1].

Initial Implementation of SetValueCurveAtTime

Code

The code is implemented as shown here. The code below is the main snippet for the SetValueCurveAtTime event. The values and function behavior are defined in the param.rs file.


            AutomationEvent::SetValueCurveAtTime(ref values, start, duration) => {
                let time_diff = ((duration.0 as f32) - (start.0 as f32)) as f32;
                let mut progress = ((((current_tick.0 as f32) - (start.0 as f32)) as f32) / time_diff) as f32;
                if progress < 0.0 {
                    progress = 0.0 as f32;
                }
                let n = values.len() as f32;
                let k = (((n - 1.) * progress) as f32).floor();
                let next = k + 1. as f32;
                let step = time_diff / (n - 1.);
                if next < n {
                    let time_k = (k * step) as f32;
                    let time_k_next = (next * step) as f32;
                    let v_k = values[k as usize];
                    let v_k_next = values[next as usize];
                    *value = v_k + (v_k_next - v_k) * (((current_tick.0 as f32) - time_k) / (time_k_next - time_k));
                }
                true
            }
            AutomationEvent::CancelAndHoldAtTime(..) => false,
            AutomationEvent::CancelScheduledValues(..) | AutomationEvent::SetValue(..) => {
                unreachable!("CancelScheduledValues/SetValue should never appear in the timeline")
            }


Let's walk through the code.

AutomationEvent::SetValueCurveAtTime is an event variant carrying three pieces of data: values, start, and duration.

The first two lines compute time_diff and progress: progress is the fraction of the curve interval that has elapsed at current_tick, with explicit conversions to f32.

                let time_diff = ((duration.0 as f32) - (start.0 as f32)) as f32;
                let mut progress = ((((current_tick.0 as f32) - (start.0 as f32)) as f32) / time_diff) as f32;

time_diff is the total span between start_time and duration; it is needed so that, once the current tick goes beyond time_diff from start_time, a constant value can be held later on.

                if progress < 0.0 {
                    progress = 0.0 as f32;
                }

If the current tick precedes the start time, progress would be negative, so it is clamped to 0.

                let n = values.len() as f32;
                let k = (((n - 1.) * progress) as f32).floor();
                let next = k + 1. as f32;
                let step = time_diff / (n - 1.);

n is the total length of the values array. k is the index of the curve point at or below the current progress, as in the formula above, and next is k + 1. The delta, step, is the time between consecutive curve points; adding it repeatedly walks through the values array while generating the sound wave. For instance, if the curve sweeps a frequency parameter, step determines how often the parameter moves on to the next curve value.


                if next < n {
                    let time_k = (k * step) as f32;
                    let time_k_next = (next * step) as f32;
                    let v_k = values[k as usize];
                    let v_k_next = values[next as usize];
                    *value = v_k + (v_k_next - v_k) * (((current_tick.0 as f32) - time_k) / (time_k_next - time_k));
                }

This block generates the output value by linearly interpolating between successive entries of the values array, using the step value computed above, and does so until it reaches the end of the array.


                true

Returning true keeps the event active, so the function call made from here is repeated on subsequent ticks.


            AutomationEvent::CancelAndHoldAtTime(..) => false,
            AutomationEvent::CancelScheduledValues(..) | AutomationEvent::SetValue(..) => {
                unreachable!("CancelScheduledValues/SetValue should never appear in the timeline")
            }

CancelAndHoldAtTime returns false, which ends the event and lets processing proceed to the next instructions. CancelScheduledValues and SetValue should never appear in the timeline, so the unreachable! macro flags reaching them as a bug.
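The interpolation arm walked through above can also be written as a stand-alone function for clarity. This sketch mirrors the snippet, assuming start is 0 as in the example later on, and returns None once the last curve segment has passed:

```rust
// Linear interpolation between curve points k and k + 1, mirroring the
// SetValueCurveAtTime arm: `step` is the time between consecutive points.
fn interpolate_curve(values: &[f32], start: f32, duration: f32, tick: f32) -> Option<f32> {
    let time_diff = duration - start;
    let progress = ((tick - start) / time_diff).max(0.0);
    let n = values.len() as f32;
    let k = ((n - 1.0) * progress).floor();
    let next = k + 1.0;
    if next >= n {
        return None; // past the final segment: the value is held constant
    }
    let step = time_diff / (n - 1.0);
    let (time_k, time_next) = (k * step, next * step);
    let (v_k, v_next) = (values[k as usize], values[next as usize]);
    Some(v_k + (v_next - v_k) * ((tick - time_k) / (time_next - time_k)))
}
```

With a two-point curve [0.0, 1.0] over a duration of 4, the value ramps linearly: a quarter of the way through the interval it is 0.25, halfway through it is 0.5.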

Examples

Examples are a standard feature of Cargo projects. An example exercises the function we just wrote and generates sound output. Refer to our example here for SetValueCurveAtTime().

An example is built with cargo build and run with cargo run.

Cargo.toml dictates the example's name and the file it maps to.
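For instance, a hypothetical [[example]] entry in Cargo.toml might look like this (names are illustrative; files under examples/ are also picked up by convention):

```toml
[[example]]
name = "set_value_curve"
path = "examples/set_value_curve.rs"
```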

cargo run --example set_value_curve

The above command runs the set_value_curve example, generating sound output from the terminal.

Compile

Cargo.lock records the exact versions of the dependencies required to build this project.

Cargo.toml lists all the folders, or members, to be compiled. Usually the compilation is rooted at /media.

Cargo.lock pins a number of packages, called crates, that the project uses to access its modules. Each member listed in Cargo.toml has its own Cargo.toml that declares dependencies for the build; build-dependencies are used for dependencies needed at build time.


.cargo/config holds Cargo's local build configuration, similar to a dotenv file for a Rust project.
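As an illustration, a .cargo/config might contain local build settings such as the following (hypothetical values):

```toml
# Local Cargo configuration, kept alongside the project
[build]
# extra flags passed to rustc on every build
rustflags = ["-C", "target-cpu=native"]
```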

Test Setup

If you would like to test our build, please look into the Fixes section before proceeding. We suggest macOS or Ubuntu 16.04 LTS for development and testing; a virtual machine such as VirtualBox is another option. We spent countless days fighting libraries and never-ending apt installs, so avoid Ubuntu 18.04 for this build test.

Once cargo build completes, open target/debug to view the generated examples.



To run an example after the cargo build is complete, use cargo run --example <bin_name>


Test Case

After you run cargo build, you should see the executable of the example inside the target/debug folder. The following is the test file we have written to check the functioning of SetValueCurveAtTime.

extern crate servo_media;
extern crate servo_media_auto;

use servo_media::audio::constant_source_node::ConstantSourceNodeOptions;
use servo_media::audio::gain_node::GainNodeOptions;
use servo_media::audio::node::{AudioNodeInit, AudioNodeMessage, AudioScheduledSourceNodeMessage};
use servo_media::audio::param::{ParamType, UserAutomationEvent};
use servo_media::ServoMedia;
use std::sync::Arc;
use std::{thread, time};

fn run_example(servo_media: Arc<ServoMedia>) {
    let context = servo_media.create_audio_context(Default::default());
    let dest = context.dest_node();

Initializing the values array is done in the following:


    //Initializing the values vector for the SetValueCurve function
    let values: Vec<f32> = vec![
        0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5,
        0.5, 0.5, 0.0, 0.0, 0.0, 0.0,
    ];
    let start_time = 0.;
    let end_time = 5.;
    let n = values.len() as f32;
    let value_next = values[(n - 1.) as usize];

Initializing the nodes is done in the following code:

 
    let cs = context.create_node(
        AudioNodeInit::ConstantSourceNode(ConstantSourceNodeOptions::default()),
        Default::default(),
    );

    let mut gain_options = GainNodeOptions::default();
    gain_options.gain = 0.0;
    let gain = context.create_node(
        AudioNodeInit::GainNode(gain_options.clone()),
        Default::default(),
    );

    let osc = context.create_node(
        AudioNodeInit::OscillatorNode(Default::default()),
        Default::default(),
    );

    context.connect_ports(osc.output(0), gain.input(0));
    context.connect_ports(cs.output(0), gain.param(ParamType::Gain));
    context.connect_ports(gain.output(0), dest.input(0));

    let _ = context.resume();
    context.message_node(
        osc,
        AudioNodeMessage::AudioScheduledSourceNode(AudioScheduledSourceNodeMessage::Start(0.)),
    );

    context.message_node(
        gain,
        AudioNodeMessage::AudioScheduledSourceNode(AudioScheduledSourceNodeMessage::Start(0.)),
    );

    context.message_node(
        cs,
        AudioNodeMessage::AudioScheduledSourceNode(AudioScheduledSourceNodeMessage::Start(0.)),
    );

Calling of the SetValueCurveAtTime function is done in the following block of code:

    context.message_node(
        cs,
        AudioNodeMessage::SetParam(
            ParamType::Offset,
            UserAutomationEvent::SetValueCurveAtTime(values, start_time, end_time),
        ),

    );

    context.message_node(
        osc,
        AudioNodeMessage::SetParam(
            ParamType::Frequency,
            UserAutomationEvent::SetValueAtTime(value_next, end_time),
        ),
    );



    thread::sleep(time::Duration::from_millis(7000));
    let _ = context.close();
}

fn main() {
    ServoMedia::init::<servo_media_auto::Backend>();
    if let Ok(servo_media) = ServoMedia::get() {
        run_example(servo_media);
    } else {
        unreachable!();
    }
}

In the above program, the values vector is passed into the SetValueCurveAtTime function to modulate the output. The output can also be checked in the test-case video at the end of this page. We can modify the values vector as we want and observe the output. One more variation we tried used a sine curve. Replace the values vector in the set_value_curve example with the following:

    use std::f64::consts::PI;
    let curvelength = 44100;
    let mut values: Vec<f32> = Vec::with_capacity(curvelength);
    for i in 0..curvelength {
        values.push(((PI * i as f64 / curvelength as f64) as f32).sin());
    }

The output of this function can be seen in the test video attached at the end of this page.

Video of Testing (Audio Output)

Test-1 Test-2

Status of the Project

The build is ready and the pull request has been merged into servo/media.

Final Project

The final project relies on our previous work. The next part focuses on making the implemented code accessible from the Servo browser engine. This requires interfacing the DOM-facing audioparam.rs with the setValueCurveAtTime implementation in param.rs: audioparam.rs calls the UserAutomationEvent for setValueCurveAtTime defined in param.rs.

Once the initial function call with its arguments is wired up, testing is performed with ./mach test-wpt tests/wpt/web-platform-tests/webaudio/the-audio-api/the-audioparam-interface. The the-audioparam-interface folder contains a series of test cases emulating the behavior of the function calls; setValueCurveAtTime must reproduce the values tested in the previous example, but now exercised from an .html webpage with DOM elements to test the complete functionality.

Servo webAudio tests are performed using testharness.js.

./mach create-wpt tests/wpt/path/to/new/test.html

would create test.html using the WPT template for JavaScript tests. Once the skeleton code is completed with the necessary function calls, the WPT tests are executed using

./mach test-wpt tests/wpt/path/to/new/test.html
./mach test-wpt tests/wpt/path/to/new/reftest.html
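For reference, a completed test skeleton might look like the following. This is a hypothetical file, not one of the upstream tests; the InvalidStateError expectation mirrors the length check in the implementation shown later on this page.

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>AudioParam.setValueCurveAtTime smoke test</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
test(() => {
  const ctx = new OfflineAudioContext(1, 44100, 44100);
  const gain = ctx.createGain();
  // A curve with fewer than 2 values must throw InvalidStateError.
  assert_throws_dom("InvalidStateError", () => {
    gain.gain.setValueCurveAtTime(new Float32Array([1.0]), 0, 1);
  });
}, "setValueCurveAtTime rejects a curve with fewer than 2 values");
</script>
```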

Reftests are used in Servo to test rendering behavior. They describe interactions made up of several webpages (think of passing or referencing values to another webpage) with assertions about the expected outcome of the interaction.

./mach create-wpt --reftest tests/wpt/path/to/new/reftest.html --reference tests/wpt/path/to/reference.html


reference.html and reftest.html will be created using the WPT reftest template. Once the reftest is run, the upstream test expectations need to be updated using


./mach update-wpt --sync
./mach test-wpt --log-raw=update.log
./mach update-wpt update.log


When a reftest fails, the raw log is needed to diagnose the behavior; it is generated using the command below. The log is fed into the reftest analyzer to find the failing tests. The reftest harness is also capable of pixel-level comparison of the test and reference screenshots (in case of rendering differences).

./mach test-wpt --log-raw wpt.log

Design and Process Synopsis

This project has 4 known issues listed under the WebAudio title, each addressing a different problem. One issue was to be solved in the previous project, and one of the remaining issues is addressed in this project. Since we are implementing a few methods against predefined interfaces of the Servo browser, we do not need to follow any particular design principles. The flow of work is documented as follows:

1. Implement ValueCurveAtTime automation

This is the first issue, which was addressed and successfully closed during the initial part of this project (it was part of the 3rd project). The steps followed are shown in the following diagram.



2. Implement AudioParam.setValueCurveAtTime

This second issue builds on the first one and focuses on implementing the setValueCurveAtTime method on AudioParam. The next focus of this project will likely be this issue, which deals with the JavaScript API values for the curves.



3. Implement deprecated setPosition and setOrientation methods for AudioListener

The third issue deals with a few missing APIs on AudioListener. We need to add the deprecated methods setPosition and setOrientation to the audiolistener.rs file so that they properly update the relevant AudioParam members of AudioListener with the passed arguments. This should stop the endless list of JavaScript errors that occurred while these deprecated methods were unimplemented.



4. Implement WaveShaper node

This final issue implements a new node, the WaveShaper node. It is an independent issue and may be done before or after the other issues.


Implemented Changes

Implementation of AudioParam.setValueCurveAtTime #22897

The pull request has been merged in the commit which implements AudioParam.setValueCurveAtTime.



    // https://webaudio.github.io/web-audio-api/#dom-audioparam-setvaluecurveattime

    // Declare a function
    fn SetValueCurveAtTime(
        &self,
    // Declaration of variable types
        values: Vec<Finite<f32>>,
        start_time: Finite<f64>,
        end_time: Finite<f64>,
    ) -> Fallible<DomRoot<AudioParam>> {

    // Validate that the start time is not negative
        if *start_time < 0. {
            return Err(Error::Range(format!(
                "start time {} should not be negative",
                *start_time
            )));
        }

    // The values array passed in should have a length of at least 2
        if values.len() < 2 {
            return Err(Error::InvalidState);
        }


    // End time should not be negative
        if *end_time < 0. {
            return Err(Error::Range(format!(
                "end time {} should not be negative",
                *end_time
            )));
        }


    // Sends the curve to the audio thread as a SetValueCurveAtTime event
        self.message_node(AudioNodeMessage::SetParam(
            self.param,
            UserAutomationEvent::SetValueCurveAtTime(

    // Unwrap each Finite<f32> in the array into a plain f32 while collecting
                values.into_iter().map(|v| *v).collect(),
                *start_time,
                *end_time,
            ),
        ));
        Ok(DomRoot::from_ref(self))
    }


The snippet below declares the setValueCurveAtTime interface in WebIDL, making it accessible for use.


    [Throws] AudioParam setValueCurveAtTime(sequence<float> values,
                                   double startTime,
                                   double duration);
Testing

Testing is descriptive; expected results are recorded in .ini files. For example:

  // This records an expected failure for a test with a strict error tolerance

  [X Max error for test 8 at offset 10584 is not less than or equal to 0.0000037194. Got 0.4649144411087036.]
    expected: FAIL


The files that test the implementation are:

Checks interface through the browser window, idlharness.https.window.js.ini

Tests the function for exception handling in overall system design in AudioParam, audioparam-exceptional-values.html.ini

Tests the function when accessed from another fn (method chaining), audioparam-method-chaining.html.ini

Tests the function for exception handling within the individual fn, audioparam-setValueCurve-exceptions.html.ini

Tests the function, audioparam-setValueCurveAtTime.html.ini



Implementation of setPosition and setOrientation methods for AudioListener #22898

The pull request has been merged in the commit which implements these methods for AudioListener.


    // Load required crates for the usage
    use crate::dom::bindings::error::Fallible;
    use crate::dom::bindings::num::Finite;
    use crate::dom::bindings::codegen::Bindings::AudioParamBinding::AudioParamMethods;

    // https://webaudio.github.io/web-audio-api/#dom-audiolistener-setorientation

    // Declare the SetOrientation function
    fn SetOrientation(

    // Declares variable types
        &self,
        x: Finite<f32>,
        y: Finite<f32>,
        z: Finite<f32>,
        xUp: Finite<f32>,
        yUp: Finite<f32>,
        zUp: Finite<f32>,
    ) -> Fallible<DomRoot<AudioListener>> {

    // Forward the passed values to the corresponding AudioParam members
        self.forward_x.SetValue(x);
        self.forward_y.SetValue(y);
        self.forward_z.SetValue(z);
        self.up_x.SetValue(xUp);
        self.up_y.SetValue(yUp);
        self.up_z.SetValue(zUp);
        Ok(DomRoot::from_ref(self))
    }

    // https://webaudio.github.io/web-audio-api/#dom-audiolistener-setposition

    // Declare the SetPosition function
    fn SetPosition(

    // Declares variable types
        &self,
        x: Finite<f32>,
        y: Finite<f32>,
        z: Finite<f32>,
    ) -> Fallible<DomRoot<AudioListener>> {

    // Forward the passed values to the corresponding AudioParam members
        self.position_x.SetValue(x);
        self.position_y.SetValue(y);
        self.position_z.SetValue(z);
        Ok(DomRoot::from_ref(self))
    }


The functions are enabled by declaring them in AudioListener.webidl.

The files that test the implementation are:

Checks interface through the browser window, idlharness.https.window.js.ini

Tests the function when accessed from its parent fn, panner-equalpower.html.ini

Testing

The commands to run the automated tests for the AudioParam interface are as follows:

 

./mach test-wpt tests/wpt/web-platform-tests/webaudio/the-audio-api/the-audioparam-interface --log-raw /tmp/servo.log

./mach update-wpt /tmp/servo.log

These commands run the automated tests and store the logs in servo.log; update-wpt then updates the test expectations automatically. The effect can be seen in the modified .ini files in the changes linked above.

Conclusion

Major browsers support the WebAudio standard, and we have implemented some of its missing automation features in the Servo browser. Initially we worked on the servo/media crate, and later carried these changes over to the main Servo build. We also worked with the AudioListener interface and implemented some of its automation methods. We had three pull requests [1] [2] [3] merged with Mozilla during the course of this project, and completed the tasks given to us while learning a lot about Rust and about open-source contribution.

Presentation

YouTube Presentation

Known Issues and Fixes

Windows Specific during build

https://github.com/holochain/holochain-cmd/issues/29

https://stackoverflow.com/questions/53136717/errore0554-feature-may-not-be-used-on-the-stable-release-channel-couldnt

https://github.com/servo/servo/issues/21429