
TextureView

The TextureView class is a view object that combines a view with a SurfaceTexture.

Rendering with OpenGL ES

A TextureView object wraps a SurfaceTexture, responding to callbacks and acquiring new buffers. When a TextureView acquires new buffers, it issues a view invalidate request and draws using the contents of the newest buffer as its data source, rendering wherever and however the view state indicates it should.

OpenGL ES (GLES) can render on a TextureView by passing the SurfaceTexture to the EGL creation call, but this creates a problem. When GLES renders on a TextureView, BufferQueue producers and consumers are in the same thread, which can cause the buffer swap call to stall or fail. For example, if a producer submits several buffers in quick succession from the UI thread, the EGL buffer swap call needs to dequeue a buffer from the BufferQueue. However, because the consumer and producer are on the same thread, there won't be any buffers available and the swap call hangs or fails.
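The EGL side of this looks roughly as follows. This is a sketch, not a runnable example; it assumes an EGL display, config, and context (eglDisplay, eglConfig, eglContext) were already set up with eglGetDisplay/eglInitialize/eglChooseConfig/eglCreateContext, and that surfaceTexture came from the TextureView:

```java
import android.opengl.EGL14;
import android.opengl.EGLSurface;

// Sketch: create an EGL window surface backed by the TextureView's
// SurfaceTexture. eglDisplay, eglConfig, eglContext, and surfaceTexture
// are assumed to exist from earlier setup.
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, surfaceTexture,
        new int[] {EGL14.EGL_NONE}, 0);
EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

// ... issue GLES draw calls ...

// eglSwapBuffers() queues the rendered buffer to the SurfaceTexture's
// BufferQueue. If the producer and consumer share the UI thread, this
// is the call that can stall or fail.
EGL14.eglSwapBuffers(eglDisplay, eglSurface);
```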

To ensure that the buffer swap doesn't stall, BufferQueue always needs a buffer available to be dequeued. To implement this, BufferQueue discards the contents of the previously acquired buffer when a new buffer is queued and places restrictions on minimum and maximum buffer counts to prevent a consumer from consuming all buffers at once.
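The effect of this policy can be modeled in plain Java. This is a toy model only; the real BufferQueue is a C++ system component, and the class and method names below are invented for illustration:

```java
import java.util.ArrayDeque;

// Toy model of a queue that never lets the producer stall: when the
// queue is full, the oldest queued buffer is discarded so a slot is
// always available. Names are illustrative, not AOSP APIs.
class DropOldestQueue {
    private final ArrayDeque<int[]> queued = new ArrayDeque<>();
    private final int maxQueued;

    DropOldestQueue(int maxQueued) {
        this.maxQueued = maxQueued;
    }

    // Producer side: queue a filled buffer, discarding the oldest
    // queued buffer if the queue is already at its maximum count.
    void queueBuffer(int[] buffer) {
        if (queued.size() == maxQueued) {
            queued.pollFirst(); // discard the stale buffer's contents
        }
        queued.addLast(buffer);
    }

    // Consumer side: acquire the newest available buffer, or null.
    int[] acquireNewest() {
        return queued.pollLast();
    }

    int size() {
        return queued.size();
    }
}
```

Because a slot is reclaimed whenever the queue is full, the producer's queue call always succeeds, which mirrors why the EGL buffer swap no longer hangs.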

Choosing SurfaceView or TextureView

SurfaceView and TextureView fill similar roles and are both citizens of the view hierarchy. However, SurfaceView and TextureView have different implementations. A SurfaceView takes the same parameters as other views, but SurfaceView contents are transparent when rendered.

A TextureView has better alpha and rotation handling than a SurfaceView, but a SurfaceView has performance advantages when compositing UI elements layered over videos. When a client renders with a SurfaceView, the SurfaceView provides the client with a separate composition layer. SurfaceFlinger composes the separate layer as a hardware overlay if supported by the device. When a client renders with a TextureView, the UI toolkit composites the TextureView's content into the view hierarchy with the GPU. Updates to the content may cause other view elements to redraw, for example, if the other views are positioned on top of a TextureView. After view rendering completes, SurfaceFlinger composites the app UI layer and all other layers, so that every visible pixel is composited twice.

Case Study: Grafika's Play Video

Grafika's Play Video includes a pair of video players, one implemented with TextureView and one implemented with SurfaceView. The video decoding portion of the activity sends frames from MediaCodec to a surface for both TextureView and SurfaceView. The biggest difference between the implementations is the steps required to present the correct aspect ratio.

Scaling SurfaceView requires a custom implementation of FrameLayout. WindowManager needs to send a new window position and new size values to SurfaceFlinger. Scaling a TextureView's SurfaceTexture requires configuring a transformation matrix with TextureView#setTransform().
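The TextureView transform boils down to a center-anchored scale. A minimal sketch of the math in plain Java (the helper name is invented for illustration; in a real app you would load the returned factors into an android.graphics.Matrix with setScale(sx, sy, viewWidth / 2f, viewHeight / 2f) and pass it to TextureView#setTransform()):

```java
// Computes the X/Y scale factors that letterbox a video of
// (videoWidth x videoHeight) inside a view of (viewWidth x viewHeight),
// preserving the video's aspect ratio. Helper name is illustrative.
class AspectScale {
    static float[] scaleToFit(int viewWidth, int viewHeight,
                              int videoWidth, int videoHeight) {
        // By default, TextureView stretches the SurfaceTexture to fill
        // the view, so one axis is scaled back down to restore the ratio.
        float viewAspect = (float) viewWidth / viewHeight;
        float videoAspect = (float) videoWidth / videoHeight;
        if (videoAspect > viewAspect) {
            // Video is wider than the view: shrink vertically.
            return new float[] {1f, viewAspect / videoAspect};
        } else {
            // Video is taller than the view: shrink horizontally.
            return new float[] {videoAspect / viewAspect, 1f};
        }
    }
}
```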

After presenting the correct aspect ratio, both implementations follow the same pattern. When SurfaceView/TextureView creates the surface, the app code enables playback. When the user taps play, the app starts a video decoding thread with the surface as the output target. After that, the app code doesn't need to do anything else; SurfaceFlinger (for the SurfaceView) or the TextureView handles composition and display.

Case Study: Grafika's Double Decode

Grafika's Double Decode demonstrates manipulation of the SurfaceTexture inside a TextureView.

Grafika's Double Decode uses a pair of TextureView objects to show two videos playing side by side, simulating a video conferencing app. When the orientation of the screen changes and the activity restarts, the MediaCodec decoders don't stop, simulating playback of a real-time video stream. To improve efficiency, the client should keep the surface alive. The surface is a handle to the producer interface in the SurfaceTexture's BufferQueue. Because the TextureView manages the SurfaceTexture, the client needs to keep the SurfaceTexture alive to keep the surface alive.

To keep the SurfaceTexture alive, Grafika's Double Decode obtains references to the SurfaceTextures from the TextureView objects and saves them in a static field. Then, Grafika's Double Decode returns false from TextureView.SurfaceTextureListener#onSurfaceTextureDestroyed() to prevent the destruction of the SurfaceTexture. The SurfaceTexture that TextureView passes to onSurfaceTextureDestroyed() can then be maintained across the activity configuration change, and the client passes it to the new TextureView through setSurfaceTexture().
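Sketched in Java, the pattern looks roughly like this. It is a non-runnable sketch of the approach, not Grafika's exact code; sSavedSurfaceTexture stands in for the static field that survives the restart, and startDecoder() is a hypothetical helper:

```java
import android.graphics.SurfaceTexture;
import android.view.TextureView;

// Sketch of the keep-alive pattern; names are illustrative.
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
        if (sSavedSurfaceTexture == null) {
            sSavedSurfaceTexture = st;   // first launch: remember it
            startDecoder(st);            // hypothetical helper
        } else {
            // Restart: re-attach the saved SurfaceTexture, which the
            // decoder is still feeding, to the new TextureView.
            textureView.setSurfaceTexture(sSavedSurfaceTexture);
        }
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
        // Returning false tells TextureView not to release the
        // SurfaceTexture; the app now owns it across the restart.
        return false;
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) {}
});
```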

Separate threads drive each video decoder. Mediaserver sends buffers with decoded output to the SurfaceTextures, the BufferQueue consumers. The TextureView objects perform rendering and execute on the UI thread.

Implementing Grafika's Double Decode with SurfaceView is harder than implementing with TextureView because SurfaceView objects destroy surfaces during orientation changes. Additionally, using SurfaceView objects adds two layers, which isn't ideal because of the limitations on the number of overlays available on the hardware.