M1901 Implement missing WebAudio automation support


Introduction

Servo

Servo is a modern, high-performance browser engine written in the Rust programming language and designed for both application and embedded use. It is currently developed on 64-bit OS X, 64-bit Linux, and Android.

More information about Servo is available here

Rust

Rust is a systems programming language that focuses on memory safety and concurrency. It is similar to C++ but guarantees memory safety while maintaining high performance. Rust is the brainchild of Mozilla, and it is as simple as this:

fn main() {
    println!("Hello World");
}

More information about Rust can be found here, and here is why we love Rust.

Web Audio API

The Web Audio API involves handling audio operations inside an audio context, and has been designed to allow modular routing. Basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. Several sources — with different types of channel layout — are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.

Audio nodes are linked into chains and simple webs by their inputs and outputs. They typically start with one or more sources. Sources provide arrays of sound intensities (samples) at very small timeslices, often tens of thousands of them per second. These could be either computed mathematically (such as OscillatorNode), or they can be recordings from sound/video files (like AudioBufferSourceNode and MediaElementAudioSourceNode) and audio streams (MediaStreamAudioSourceNode). In fact, sound files are just recordings of sound intensities themselves, which come in from microphones or electric instruments, and get mixed down into a single, complicated wave.

Outputs of these nodes could be linked to inputs of others, which mix or modify these streams of sound samples into different streams. A common modification is multiplying the samples by a value to make them louder or quieter (as is the case with GainNode). Once the sound has been sufficiently processed for the intended effect, it can be linked to the input of a destination (AudioContext.destination), which sends the sound to the speakers or headphones. This last connection is only necessary if the user is supposed to hear the audio.

A simple, typical workflow for web audio would look something like this:

1. Create audio context
2. Inside the context, create sources — such as <audio>, oscillator, stream
3. Create effects nodes, such as reverb, biquad filter, panner, compressor
4. Choose final destination of audio, for example your system speakers
5. Connect the sources up to the effects, and the effects to the destination.
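
Putting these steps together, a minimal sketch of the workflow could look like the following (names such as audioCtx, oscillator, and gainNode are illustrative, not mandated by the API):

var audioCtx = new AudioContext();            // 1. create the audio context
var oscillator = audioCtx.createOscillator(); // 2. create a source inside the context
var gainNode = audioCtx.createGain();         // 3. create an effect node (a simple volume control)
gainNode.gain.value = 0.5;                    //    halve the volume
oscillator.connect(gainNode);                 // 5. connect the source to the effect...
gainNode.connect(audioCtx.destination);       // 4./5. ...and the effect to the system speakers
oscillator.start();                           // start producing sound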


Web Audio Automation

AudioParam is used to control an aspect of an AudioNode's functioning, for example volume (the gain parameter of a GainNode). A parameter's value can either be set to a particular value immediately or scheduled to change at precise times. Scheduled changes, used together with AudioContext.currentTime, enable effects such as volume fades and filter sweeps. This work aims to implement setValueCurveAtTime().
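
As a minimal sketch of such scheduling (the three-second fade below is an illustrative value, not part of this work), a gain parameter can be ramped down relative to AudioContext.currentTime:

var audioCtx = new AudioContext();
var gainNode = audioCtx.createGain();
gainNode.connect(audioCtx.destination);
var now = audioCtx.currentTime;
gainNode.gain.setValueAtTime(1.0, now);              // start at full volume now
gainNode.gain.linearRampToValueAtTime(0.0, now + 3); // fade to silence over 3 seconds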

setValueCurveAtTime() is one such AudioParam method: it takes an array of floating-point values as input and schedules a change. In Web Audio the array describes a curve, which is followed by linear interpolation between the values of the array over the duration d starting at the start time s:

v(t) = values[N * (t - s) / d], where N is the number of values in the array. After the end of the curve's time interval (t >= s + d), the value remains constant at the final curve value. This persists until the next automation event.
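
To make the formula concrete, here is a small sketch of how v(t) could be evaluated by hand; curveValueAt is a hypothetical helper, not part of the Web Audio API, and it uses N - 1 intervals so that the last array entry is reached exactly at s + d:

// Hypothetical helper: evaluates the curve at time t by linear interpolation.
function curveValueAt(values, startTime, duration, t) {
  var N = values.length;
  if (t <= startTime) return values[0];
  if (t >= startTime + duration) return values[N - 1]; // hold the final curve value
  var position = (N - 1) * (t - startTime) / duration; // fractional index into the array
  var k = Math.floor(position);
  var frac = position - k;
  // Interpolate linearly between the two surrounding curve values.
  return values[k] + (values[k + 1] - values[k]) * frac;
}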

One application of setValueCurveAtTime() is to create a tremolo effect. When neither a linear nor an exponential curve satisfies the need, the user can build a custom curve and pass it to setValueCurveAtTime() as an array of values together with a start time and duration. This is a much cleaner approach than making many separate calls to setValueAtTime(), as the snippet below shows:


// Assumes a plain gain node routed to the speakers.
var context = new AudioContext();
var gainNode = context.createGain();
gainNode.connect(context.destination);

var DURATION = 2;
var FREQUENCY = 1;
var SCALE = 0.4;
// Split the time into valueCount discrete steps.
var valueCount = 4096;
// Create a sinusoidal value curve.
var values = new Float32Array(valueCount);
for (var i = 0; i < valueCount; i++) {
  var percent = (i / valueCount) * DURATION * FREQUENCY;
  values[i] = 1 + (Math.sin(percent * 2 * Math.PI) * SCALE);
  // Set the last value to one, to restore the gain to normal at the end.
  if (i == valueCount - 1) {
    values[i] = 1;
  }
}
// Apply it to the gain node immediately, and make it last for 2 seconds.
gainNode.gain.setValueCurveAtTime(values, context.currentTime, DURATION);