CSC/ECE 517 Fall 2017/M1753 OffscreenCanvas
The HTML specification defines a <canvas> element that can use a 2D or 3D rendering context to draw graphics. The biggest limitation of <canvas> is that all of the JavaScript that manipulates it has to run on the main thread; as the UI complexity of an application increases, developers inadvertently hit a performance wall. A new specification was recently developed that defines a canvas that can be used without being associated with an in-page canvas element, allowing for more efficient rendering. The OffscreenCanvas API achieves pre-rendering by using a separate off-screen canvas to render temporary images and then drawing the off-screen canvas back onto the visible one on the main thread.<ref>https://www.html5rocks.com/en/tutorials/canvas/performance/</ref><ref>https://github.com/servo/servo/wiki/Offscreen-canvas-project</ref>
Introduction
The OffscreenCanvas API provides a way to interact with the same canvas APIs but in a different thread. This allows rendering to progress no matter what is going on in the main thread.
1.1 Background
Servo is a project to develop a new Web browser engine that aims to take advantage of parallelism at many levels. Servo is written in Rust, a new language designed specifically with Servo's requirements in mind. Rust is a systems programming language focused on three goals: safety, speed, and concurrency<ref>https://doc.rust-lang.org/book/first-edition/</ref>. Rust provides a task-parallel infrastructure and a strong type system that enforces memory safety and data-race freedom<ref>https://github.com/servo/servo/wiki/Design</ref>. Servo is now focused both on supporting a full Web browser, through the purely HTML-based user interface Browser.html, and on creating a solid, embeddable engine.
The canvas element is part of HTML5 and allows for dynamic, scriptable rendering of 2D shapes and bitmap images. The canvas element should be provided with content that conveys essentially the same function or purpose as the canvas's bitmap. It has two attributes that control the size of the element's bitmap: width and height. The intrinsic dimensions of the canvas element when it represents embedded content are equal to the dimensions of the element's bitmap<ref>https://html.spec.whatwg.org/multipage/canvas.html#the-offscreencanvas-interface</ref>. The biggest limitation of the canvas element is that all of the JavaScript that manipulates it has to run on the main thread; rendering can take a long time and lock up the main JavaScript thread, causing the rest of the UI to become slow or unresponsive.
The OffscreenCanvas interface provides a canvas that can be pre-rendered off screen. It is available in both the window and worker contexts. Pre-rendering means using a separate offscreen canvas on which to render temporary images, and then rendering the offscreen canvases back onto the visible one<ref>https://www.html5rocks.com/en/tutorials/canvas/performance/</ref>. This API is the first that allows a thread other than the main thread to change what is displayed to the user. This allows rendering to progress no matter what is going on in the main thread<ref>https://hacks.mozilla.org/2016/01/webgl-off-the-main-thread/</ref>. This API is currently implemented for WebGL1 and WebGL2 contexts only. Our project is to implement the Offscreen API for all 2D, WebGL, WebGL2, and BitmapRenderer contexts.
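As a minimal sketch of the worker side of this (names such as workerRender are illustrative and not part of the project; in a real worker, OffscreenCanvas and postMessage are globals, but the function takes them as parameters here so the logic can be followed and exercised outside a browser):

```javascript
// Worker-side sketch: render a frame into an OffscreenCanvas and ship the
// result to the main thread as a transferable ImageBitmap.
function workerRender(OffscreenCanvasCtor, postMessageFn) {
  const canvas = new OffscreenCanvasCtor(300, 150);
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = 'rebeccapurple';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  // "Tear off" the rendered image as an ImageBitmap.
  const bitmap = canvas.transferToImageBitmap();
  // Listing the bitmap in the transfer list moves it instead of copying it.
  postMessageFn({ frame: bitmap }, [bitmap]);
}
```

Because the rendering happens entirely in the worker, a long-running draw here cannot block the main thread's event loop.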
1.2 Project Overview and Scope
The OffscreenCanvas API project is composed of two primary interfaces: OffscreenCanvas, which provides a canvas that can be rendered off screen, and OffscreenCanvasRenderingContext2D, a rendering context for drawing to the bitmap of an OffscreenCanvas object. Together they achieve pre-rendering by using a separate off-screen canvas to render temporary images and then drawing the off-screen canvas back onto the visible one on the main thread.
Application Architecture
2.1 Expected Behavior
Rendering heavy shaders in servo should not slow down the main thread (e.g. other UI components). Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.
2.2 Processing Model
The processing model involves synchronous display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to "tear off" the most recently rendered image from the OffscreenCanvas. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling getContext. Each frame is displayed in the second canvas using the transferImageBitmap method on this rendering context.
File:Tasks-comm.png
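The display side of this model can be sketched as follows (a sketch only; the visible canvas is passed in as a parameter so the logic can run outside a browser, and note that later drafts of the spec renamed the display method to transferFromImageBitmap):

```javascript
// Main-thread sketch: show an ImageBitmap (e.g. received from a worker) in
// a visible canvas via an ImageBitmapRenderingContext, without copying the
// pixel data. Later spec drafts call the method transferFromImageBitmap;
// earlier drafts used transferImageBitmap.
function displayFrame(visibleCanvas, bitmap) {
  const ctx = visibleCanvas.getContext('bitmaprenderer');
  ctx.transferFromImageBitmap(bitmap);
  return ctx;
}
```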
2.3 Implementation Steps
- Create the OffscreenCanvas and OffscreenCanvasRenderingContext2d interfaces.
- Hide the new interfaces by default
- Enable the existing automated tests for this feature
- Implement the OffscreenCanvas constructor that creates a new canvas
- Implement the OffscreenCanvas.getContext ignoring the WebGL requirements
- Extract all relevant canvas operations from CanvasRenderingContext2d
- Implement the convertToBlob API to allow testing the contents of the canvas
- Support offscreen webgl contexts in a similar fashion to the 2d context
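The first script-visible steps above (the constructor and getContext) amount to the following usage, sketched with the constructor passed in as a parameter so it can be exercised outside a browser (makeOffscreen is an illustrative name, not project code):

```javascript
// Sketch of the behavior targeted by the constructor and getContext steps:
// an OffscreenCanvas is created with explicit bitmap dimensions (no <canvas>
// element involved) and then asked for a rendering context by name.
function makeOffscreen(width, height, OffscreenCanvasCtor) {
  const canvas = new OffscreenCanvasCtor(width, height);
  // '2d' here; 'webgl', 'webgl2', and 'bitmaprenderer' are the other kinds.
  const ctx = canvas.getContext('2d');
  return { canvas, ctx };
}
```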
Implementation
- Extract all relevant canvas operations from CanvasRenderingContext2d into an implementation shared with OffscreenCanvasRenderingContext2d
- Create a trait that abstracts away any operation that currently uses self.canvas in the 2d canvas rendering context, since the offscreen canvas rendering context has no associated <canvas> element
- Implement the convertToBlob API to allow testing the contents of the canvas<ref>https://html.spec.whatwg.org/multipage/canvas.html#dom-offscreencanvas-converttoblob</ref>
- Support offscreen webgl contexts in a similar fashion to the 2d context, by sharing relevant code between OffscreenCanvasRenderingContext2d and WebGLRenderingContext
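convertToBlob is Promise-based, which is what makes it useful for asserting on canvas contents in tests. A sketch of how a test might read back the encoded bytes (canvasToBytes is an illustrative helper, not project code; the canvas is passed in as a parameter so the flow can run outside a browser):

```javascript
// Sketch: read back an OffscreenCanvas's contents as encoded image bytes via
// the Promise-returning convertToBlob API, e.g. to compare against a
// reference image in a test.
async function canvasToBytes(canvas, mimeType) {
  const blob = await canvas.convertToBlob({ type: mimeType });
  const buffer = await blob.arrayBuffer();
  return new Uint8Array(buffer);
}
```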
Testing Details
- 1. Building
Servo is built with the Rust package manager, Cargo.
The following commands build Servo in development mode; note that the resulting binary is unoptimized and very slow.
git clone https://github.com/jitendra29/servo.git
cd servo
./mach build --dev
./mach run tests/html/about-mozilla.html
- 2. Testing
Servo provides automated tests. Adding the --release flag creates an optimized build:
./mach build --release
./mach run --release tests/html/about-mozilla.html
Notes:
- We have not added any new tests to the test suite, as Servo follows TDD and tests for OffscreenCanvas were written previously. We are adjusting some test expectations so that the tests pass with our implementation.
- Our project cannot be tested from the UI, since it implements JavaScript-visible features (OffscreenCanvas) inside Servo. However, you can check that it does not break existing code and that the browser runs correctly by loading a test page in Servo after performing the build described above.
Design Pattern
Design patterns are not applicable, as our task involves extracting the relevant canvas operations from an already existing file and implementing a new API.
Pull Request
Here is our pull request. The link shows all code changes made while implementing the steps above, as well as integration test progress.
References
<references/>