M1900: Implement missing WebAudio node support

Background information

Major browsers support the WebAudio standard, which can be used to build complex media playback applications from low-level building blocks. Servo is a new, experimental browser engine that supports some of these building blocks (called audio nodes); the goal of this project is to improve compatibility with web content that relies on the WebAudio API by implementing missing pieces of incomplete node types (OscillatorNode) along with entirely missing nodes (ConstantSourceNode, StereoPannerNode).

Tracking issues

  • ConstantSourceNode
  • StereoPannerNode

Useful References

  • Setup for modifying the standalone media backend implementation
  • Implementation of audio node processing algorithms
  • Runnable examples of audio node processing algorithms
  • Example pull request implementing a missing node type in Servo (another example)
  • Example pull request implementing the processing backend for a missing node type
  • Setup for making changes to Servo's code
  • Documentation for introducing new WebIDL-based DOM interfaces to Servo
  • Documentation for integrating a version of servo-media that contains your local changes into your local Servo build

Initial steps (OSS Project)

  1. Email the mozilla.dev.servo mailing list (be sure to subscribe to it first!) introducing your group and asking any necessary questions.
  2. Create the DOM interface for ConstantSourceNode and implement the createConstantSource API for BaseAudioContext.
  3. Connect the DOM interface to the underlying backend node by processing the previously unimplemented message type (a simplified sketch of what the backend does with such a message appears after this list).
  4. Update the expected test results for the relevant tests.
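
A simplified sketch of what step 3 amounts to on the backend side. This is not servo-media's actual API: the type, enum, and method names below are invented for illustration only. Per the Web Audio specification, a ConstantSourceNode fills every sample of its output block with the current value of its offset parameter while it is playing.

	/// Size of one render quantum in the Web Audio spec.
	const QUANTUM: usize = 128;
	
	/// Hypothetical node-specific messages sent from the DOM thread to the
	/// audio thread (servo-media's real message types are named differently).
	enum ConstantSourceMessage {
	    SetOffset(f32),
	    Start,
	    Stop,
	}
	
	/// Minimal stand-in for the backend processor of a ConstantSourceNode.
	struct ConstantSourceProcessor {
	    offset: f32, // current value of the `offset` AudioParam (automation omitted)
	    playing: bool,
	}
	
	impl ConstantSourceProcessor {
	    fn handle_message(&mut self, msg: ConstantSourceMessage) {
	        match msg {
	            ConstantSourceMessage::SetOffset(v) => self.offset = v,
	            ConstantSourceMessage::Start => self.playing = true,
	            ConstantSourceMessage::Stop => self.playing = false,
	        }
	    }
	
	    /// Fill one output block: a constant signal equal to `offset` while
	    /// playing, silence otherwise.
	    fn process(&self, output: &mut [f32; QUANTUM]) {
	        let value = if self.playing { self.offset } else { 0.0 };
	        for sample in output.iter_mut() {
	            *sample = value;
	        }
	    }
	}
	
	fn main() {
	    let mut node = ConstantSourceProcessor { offset: 1.0, playing: false };
	    node.handle_message(ConstantSourceMessage::Start);
	    node.handle_message(ConstantSourceMessage::SetOffset(0.25));
	    let mut block = [0.0f32; QUANTUM];
	    node.process(&mut block);
	    assert!(block.iter().all(|&s| s == 0.25));
	}

In Servo itself the equivalent of handle_message is the arm of the backend's message dispatch that was previously left unimplemented; "connecting" the DOM interface means the message constructed on the DOM side finally has a receiver that updates the node's state.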

Subsequent steps (Final Project)

  1. Implement an audio node message specific to OscillatorNode (use BiquadFilterNode as a model) that updates the node's oscillator type (a hedged sketch of such a message follows this list).
  2. Implement the type attribute setter for the OscillatorNode interface which uses the new oscillator node message.
  3. Implement the backend for StereoPannerNode in the media crate by creating a new node implementation using PannerNode as a guide. The processing algorithm is described in the specification (its mono-input case is sketched after this list). Create a runnable example based on the example for PannerNode.
  4. Create the DOM interface for StereoPannerNode and implement the createStereoPannerNode API for BaseAudioContext.
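
For steps 1 and 2, the BiquadFilterNode pattern is a node-specific message enum whose variants each carry one state update from the DOM thread to the audio thread; the WebIDL attribute setter constructs and sends the message. The sketch below is standalone and hedged: the enum, struct, and method names are placeholders, not servo-media's actual identifiers.

	/// Stand-in for the spec's OscillatorType enum.
	#[derive(Clone, Copy, Debug, PartialEq)]
	enum OscillatorType {
	    Sine,
	    Square,
	    Sawtooth,
	    Triangle,
	}
	
	/// Hypothetical OscillatorNode-specific message, analogous to the
	/// BiquadFilterNode messages used as a model in step 1.
	enum OscillatorNodeMessage {
	    SetOscillatorType(OscillatorType),
	}
	
	struct OscillatorProcessor {
	    oscillator_type: OscillatorType,
	}
	
	impl OscillatorProcessor {
	    fn handle_message(&mut self, msg: OscillatorNodeMessage) {
	        match msg {
	            OscillatorNodeMessage::SetOscillatorType(t) => self.oscillator_type = t,
	        }
	    }
	}
	
	fn main() {
	    // The DOM-side `type` attribute setter (step 2) would build and send
	    // this message; here it is applied directly.
	    let mut osc = OscillatorProcessor { oscillator_type: OscillatorType::Sine };
	    osc.handle_message(OscillatorNodeMessage::SetOscillatorType(OscillatorType::Square));
	    assert_eq!(osc.oscillator_type, OscillatorType::Square);
	}

For step 3, the specification's StereoPannerNode processing algorithm is equal-power panning. The sketch below covers only the mono-input case (the spec also defines a stereo-input variant) and is independent of servo-media's types.

	use std::f32::consts::FRAC_PI_2;
	
	/// Equal-power panning gains for a mono input, following the Web Audio
	/// spec: pan is clamped to [-1, 1] and mapped onto x in [0, 1].
	fn stereo_panner_gains(pan: f32) -> (f32, f32) {
	    let pan = pan.max(-1.0).min(1.0);
	    let x = (pan + 1.0) / 2.0;
	    ((x * FRAC_PI_2).cos(), (x * FRAC_PI_2).sin())
	}
	
	/// Apply the gains to one block of mono samples, producing stereo output.
	fn process_mono(input: &[f32], pan: f32) -> (Vec<f32>, Vec<f32>) {
	    let (gain_l, gain_r) = stereo_panner_gains(pan);
	    let left = input.iter().map(|s| s * gain_l).collect();
	    let right = input.iter().map(|s| s * gain_r).collect();
	    (left, right)
	}
	
	fn main() {
	    // pan = 0.0 centres the signal: both channels get cos(pi/4), about 0.707.
	    let (l, r) = process_mono(&[1.0f32; 4], 0.0);
	    assert!((l[0] - 0.7071).abs() < 1e-3 && (r[0] - 0.7071).abs() < 1e-3);
	}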

Design Choices

Our project did not introduce any new design patterns; the work consisted of implementing missing methods and node types within Servo's existing WebAudio architecture.

Completed Work

We have created the DOM interface for ConstantSourceNode and implemented the createConstantSource API for BaseAudioContext.

Pull Request

DOM implementation:

	use crate::dom::audioparam::AudioParam;
	use crate::dom::audioscheduledsourcenode::AudioScheduledSourceNode;
	use crate::dom::baseaudiocontext::BaseAudioContext;
	use crate::dom::bindings::codegen::Bindings::AudioNodeBinding::{
	    ChannelCountMode, ChannelInterpretation,
	};
	use crate::dom::bindings::codegen::Bindings::AudioParamBinding::AutomationRate;
	use crate::dom::bindings::codegen::Bindings::ConstantSourceNodeBinding::ConstantSourceNodeMethods;
	use crate::dom::bindings::codegen::Bindings::ConstantSourceNodeBinding::{
	    self, ConstantSourceOptions,
	};
	use crate::dom::bindings::error::Fallible;
	use crate::dom::bindings::reflector::reflect_dom_object;
	use crate::dom::bindings::root::{Dom, DomRoot};
	use crate::dom::window::Window;
	use dom_struct::dom_struct;
	use servo_media::audio::node::AudioNodeInit;
	use servo_media::audio::constant_source_node::ConstantSourceNodeOptions as ServoMediaConstantSourceOptions;
	use servo_media::audio::param::ParamType;
	use std::f32;
	
	#[dom_struct]
	pub struct ConstantSourceNode {
	    source_node: AudioScheduledSourceNode,
	    offset: Dom<AudioParam>,
	}
	
	impl ConstantSourceNode {
	    #[allow(unrooted_must_root)]
	    pub fn new_inherited(
	        window: &Window,
	        context: &BaseAudioContext,
	        options: &ConstantSourceOptions,
	    ) -> Fallible<ConstantSourceNode> {
	        let node_options =
	            options
	                .parent
	                .unwrap_or(2, ChannelCountMode::Max, ChannelInterpretation::Speakers);
	        let source_node = AudioScheduledSourceNode::new_inherited(
	            AudioNodeInit::ConstantSourceNode(options.into()),
	            context,
	            node_options,
	            0, /* inputs */
	            1, /* outputs */
	        )?;
	        let node_id = source_node.node().node_id();
	        let offset = AudioParam::new(
	            window,
	            context,
	            node_id,
	            ParamType::Offset,
	            AutomationRate::A_rate,
	            1.,
	            f32::MIN,
	            f32::MAX,
	        );
	
	        Ok(ConstantSourceNode {
	            source_node,
	            offset: Dom::from_ref(&offset),
	        })
	    }
	
	    #[allow(unrooted_must_root)]
	    pub fn new(
	        window: &Window,
	        context: &BaseAudioContext,
	        options: &ConstantSourceOptions,
	    ) -> Fallible<DomRoot<ConstantSourceNode>> {
	        let node = ConstantSourceNode::new_inherited(window, context, options)?;
	        Ok(reflect_dom_object(
	            Box::new(node),
	            window,
	            ConstantSourceNodeBinding::Wrap,
	        ))
	    }
	
	    pub fn Constructor(
	        window: &Window,
	        context: &BaseAudioContext,
	        options: &ConstantSourceOptions,
	    ) -> Fallible<DomRoot<ConstantSourceNode>> {
	        ConstantSourceNode::new(window, context, options)
	    }
	}
	
	impl ConstantSourceNodeMethods for ConstantSourceNode {
	    fn Offset(&self) -> DomRoot<AudioParam> {
	        DomRoot::from_ref(&self.offset)
	    }
	}
	
	impl<'a> From<&'a ConstantSourceOptions> for ServoMediaConstantSourceOptions {
	    fn from(options: &'a ConstantSourceOptions) -> Self {
	        Self {
	            offset: *options.offset,
	        }
	    }
	}

ConstantSourceNode WebIDL definition:

dictionary ConstantSourceOptions: AudioNodeOptions {
   float offset = 1;
};
	
[Exposed=Window,
Constructor (BaseAudioContext context, optional ConstantSourceOptions options)]
interface ConstantSourceNode : AudioScheduledSourceNode {
   readonly attribute AudioParam offset;
};
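
The excerpt above shows the node itself; the createConstantSource factory on BaseAudioContext mentioned under Completed Work is not included in it. The fragment below is only a sketch of the shape such a method takes, assuming Servo's codegen naming convention (the WebIDL method createConstantSource surfaces as the Rust trait method CreateConstantSource on BaseAudioContextMethods) and assuming the generated ConstantSourceOptions dictionary offers an argument-free empty() constructor; it is not copied from the pull request and will not compile outside Servo.

	// Sketch only (inside the BaseAudioContextMethods impl). The method and
	// helper names are assumptions about Servo's generated bindings, not
	// verbatim code from the pull request.
	fn CreateConstantSource(&self) -> Fallible<DomRoot<ConstantSourceNode>> {
	    ConstantSourceNode::new(
	        &self.global().as_window(),
	        self,
	        &ConstantSourceOptions::empty(),
	    )
	}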