Hardware Acceleration in Silverlight for Windows Embedded (Compact 7)

3/12/2014

Many modern device platforms include on-board graphics processing units (GPUs) with two-dimensional capabilities, three-dimensional capabilities, or both. Microsoft Silverlight for Windows Embedded can use a GPU to accelerate certain types of animations. Hardware acceleration works by offloading critical composition steps in the rendering process from the CPU to the GPU. Silverlight for Windows Embedded supports hardware-accelerated graphics through both Microsoft DirectDraw and OpenGL.

For information on how to implement hardware acceleration, see Implement Hardware Acceleration for Graphics in Silverlight for Windows Embedded.

How Silverlight for Windows Embedded Supports Hardware Acceleration

In a Silverlight for Windows Embedded application, the UI elements in a visual tree can be divided into two sets:

  • Static items
  • Moving (or animated) items

The pixel-based images for each of these sets are stored on the GPU as textures and then, for each frame of the animation, they are composed by the GPU. For the example UI shown in Example of the Rendering Process in Graphics and Rendering Process in Silverlight for Windows Embedded, the translation transform for the moving globe is changed slightly for each frame, creating the illusion of motion.

DirectDraw

To use the DirectDraw implementation of hardware acceleration, you must use a GPU or video hardware with a DirectDraw interface that supports the following:

  • Per-pixel and constant, premultiplied, hardware-accelerated alpha blits
  • Hardware-accelerated blits with the SRCCOPY raster operation
  • Hardware-accelerated color fills
  • 20 MB of video memory (or system memory that the GPU can access directly)

DirectDraw is an older technology and supports only a limited set of transformations: translation, scaling from 50 percent to 200 percent, and rotation in 90-degree increments.
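
These limits can be expressed as a simple capability check. The following sketch is illustrative only; the function name and signature are hypothetical, not part of the Silverlight for Windows Embedded API, and it assumes rotation is accelerated in 90-degree increments:

```c
#include <stdbool.h>

/* Hypothetical helper: returns true when a requested transform falls
 * within the limits the DirectDraw path can accelerate (translation,
 * scaling between 50% and 200%, rotation in 90-degree steps). */
bool can_accelerate_with_ddraw(double scale, int rotation_degrees)
{
    /* Scaling is accelerated only from 50 percent to 200 percent. */
    if (scale < 0.5 || scale > 2.0)
        return false;

    /* Rotation is accelerated only in multiples of 90 degrees. */
    if (rotation_degrees % 90 != 0)
        return false;

    /* Translation is always supported, so no check is needed for it. */
    return true;
}
```

A transform that fails this check would fall back to software composition on the CPU.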

OpenGL

To use the OpenGL implementation of hardware acceleration, you must use a GPU with an OpenGL interface that supports the following:

  • An OpenGL Embedded Systems (ES) 2.0 driver, included in the board support package (BSP)
  • A simple vertex shader
  • A simple fragment shader

For details about using shaders, see "Adding Support for Binary Shaders to the BSP" in Graphics and Performance in Silverlight for Windows Embedded.

To support OpenGL for Embedded Systems (OpenGL ES) hardware acceleration in Silverlight for Windows Embedded, use the architecture shown in the following image.

Hardware acceleration architecture for OpenGL

  1. XAML Run-time Core. This component is the software portion of the Silverlight for Windows Embedded rendering engine. This component works with the OpenGL Plug-in to provide acceleration.

  2. OpenGL Plug-in. This component handles the interaction with the OpenGL driver. Silverlight for Windows Embedded contains a sample version of a plug-in that supports OpenGL ES 2.0. To customize the OpenGL (for example, to support OpenGL 1.2), you modify this component.

  3. OpenGL Driver. The OpenGL driver is provided as a binary by the GPU provider and is specific to the chipset on the BSP.

  4. Vertex/Fragment Shader. The vertex/fragment shader is about 25 lines of code that manage the interaction with the OpenGL Plug-in. Silverlight for Windows Embedded includes sample shader code, which must be compiled for the target GPU into a Shaders.dll library. Consult your hardware provider for instructions on compiling shaders for the target GPU.

    Note

    If Silverlight for Windows Embedded cannot find the Shaders.dll library for the GPU, it will compile the default shaders at run time. However, many OpenGL drivers do not support run-time compilation, and compiling the shaders at run time can result in poor performance.
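
The "simple" shader pair described above typically looks like the following. This is a generic OpenGL ES 2.0 textured-quad sketch, carried as C string literals the way a plug-in might hold its default sources; it is not the actual Silverlight for Windows Embedded sample code, and all names are illustrative:

```c
#include <string.h>

/* Generic ES 2.0 vertex shader sketch: transform each vertex by a
 * composition matrix and pass the texture coordinate through. */
static const char *kVertexShader =
    "attribute vec4 a_position;\n"
    "attribute vec2 a_texCoord;\n"
    "uniform mat4 u_transform;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    gl_Position = u_transform * a_position;\n"
    "    v_texCoord = a_texCoord;\n"
    "}\n";

/* Generic ES 2.0 fragment shader sketch: sample the cached texture and
 * modulate by the element's opacity. */
static const char *kFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "uniform float u_opacity;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_texCoord) * u_opacity;\n"
    "}\n";

const char *vertex_shader_source(void)   { return kVertexShader; }
const char *fragment_shader_source(void) { return kFragmentShader; }
```

Precompiling sources like these into Shaders.dll avoids the run-time compilation path that the note above warns about.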

The technique used by OpenGL technology is to split each pixel-based image into two triangular pieces and store them in the GPU as textures. (GPUs typically use textured triangles as the building blocks for rendering three-dimensional objects.) During composition, the two triangles are drawn as a "triangle strip," creating a rectangular shape on the screen, as shown in the following figure.

Graphics primitives: triangle strips

The GPU can compose these triangle strips very rapidly (with trivial CPU use), and the GPU supports a number of simple transformations. The most important are translation, scaling, rotation, deformation, and plane projection. For more information, see the list at the end of this topic.
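
The strip-to-triangle rule can be shown with a small sketch: triangle i of an n-vertex strip uses vertices i, i+1, and i+2, so a rectangle needs only four vertices. The names and vertex layout here are illustrative:

```c
#include <stddef.h>

/* Rectangle corners in triangle-strip order: bottom-left, bottom-right,
 * top-left, top-right. Two triangles cover the whole rectangle. */
static const float kQuadStrip[4][2] = {
    { 0.0f, 0.0f },  /* v0: bottom-left  */
    { 1.0f, 0.0f },  /* v1: bottom-right */
    { 0.0f, 1.0f },  /* v2: top-left     */
    { 1.0f, 1.0f },  /* v3: top-right    */
};

/* A strip of n vertices produces n - 2 triangles. */
size_t strip_triangle_count(size_t vertex_count)
{
    return vertex_count < 3 ? 0 : vertex_count - 2;
}

/* Writes the vertex indices of triangle t (0-based) within a strip:
 * triangle t uses vertices t, t + 1, and t + 2. */
void strip_triangle_indices(size_t t, size_t out[3])
{
    out[0] = t;
    out[1] = t + 1;
    out[2] = t + 2;
}

/* Accessor for the sample strip's x coordinates. */
float strip_vertex_x(size_t i)
{
    return kQuadStrip[i][0];
}
```

With this layout, triangle 0 is (v0, v1, v2) and triangle 1 is (v1, v2, v3), which together tile the rectangle.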

OpenGL Rendering Process

When OpenGL hardware acceleration is used, the Silverlight for Windows Embedded rendering process is slightly different. Continuing with our simple UI example in Graphics and Rendering Process in Silverlight for Windows Embedded, the Silverlight for Windows Embedded renderer performs the following steps when it renders the image for the first time:

  1. Sets up for rasterization.
    The rendering engine asks the OpenGL plug-in for a buffer that is the size of the display window.
  2. Rasterizes the first set of non-cached items.
    The rendering engine steps (in z-order) through the visual tree until a cached object is encountered. It rasterizes and composes (using opacity information) each object into the buffer obtained in step 1. Note that this is all done in the CPU. In our example, the renderer processes objects in the following order: _Background, _LightHexes, _DarkHexes, _Border, _Label, and _Button (and its children).
  3. Sends the buffer to the GPU.
    When all of the items in the first set have been processed, the buffer is marked as dirty (indicating that it needs to be refreshed on the screen) and sent to the GPU, using the appropriate calls to the OpenGL Plug-in.
  4. Rasterizes the first cached item.
    When the rendering engine reaches a UI element that is cached, it asks the OpenGL plug-in for a buffer that is the size of the element. Then, the rendering engine steps through the visual tree for the cached element, rasterizing each object into the buffer. In our example, _Globe does not have any children, so just the single item is rasterized to the buffer.
  5. Sends the buffer to the GPU.
    When everything has been processed for the cached UI element, the buffer is marked as dirty and sent to the GPU.
  6. Composes the buffers.
    In our example, there are now two buffers in the GPU. Note that in a larger or more complicated example there would be many buffers. Each of these buffers is stored as two triangular textures. Starting at the bottom of the z-order, the GPU composes the textured triangles corresponding to the buffers from step 2 and step 3.

During each frame of animation, if the XAML that corresponds to one of the buffers does not change, that buffer doesn’t need to be redrawn. The XAML run-time engine keeps track of animation changes. For a cached item, simple transformations (translate, scale, rotate, and skew) and opacity changes, which are applied to the cached object as a whole, do not require it to be redrawn.
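
A sketch of that bookkeeping, with hypothetical types and names: transform-only changes update composition state and leave the cached pixels alone, while a content change marks the buffer dirty so it is rasterized again:

```c
#include <stdbool.h>

/* Hypothetical per-buffer state for a cached UI element. */
typedef struct {
    bool  dirty;      /* true when the buffer must be re-rasterized */
    float offset_x;   /* translation applied at composition time */
    float offset_y;
    float opacity;    /* opacity applied at composition time */
} CachedBuffer;

/* A transform-only animation step: no re-rasterization required.
 * The GPU simply redraws the existing texture at the new position. */
void apply_translation(CachedBuffer *b, float dx, float dy)
{
    b->offset_x += dx;
    b->offset_y += dy;
    /* b->dirty is deliberately left untouched. */
}

/* A content change (the XAML behind the buffer changed): the buffer
 * must be rasterized again and re-sent to the GPU. */
void invalidate_content(CachedBuffer *b)
{
    b->dirty = true;
}
```

Keeping translate, scale, rotate, skew, and opacity changes on the transform-only path is what lets an animation run with trivial CPU use.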

The GPU can compose textures very rapidly (and with trivial CPU usage), and the GPU can support a number of simple transformations. The most important are:

  • Translation. Changes the location of the object
  • Scaling. Zooms in and out and creates the illusion of depth
  • Rotation. Turns the object about a point or an axis
  • Deformation. Changes the skew or aspect ratio of the object
  • Plane Projection. Represents the object by mapping it to a two-dimensional plane

The amount of video memory on the GPU determines the size and number of buffers that it can use for compositing graphics.

See Also

Concepts

Graphics and Performance in Silverlight for Windows Embedded