Package: android.hardware.graphics.common@1.0

types

Properties

PixelFormat

enum PixelFormat: int32_t

Pixel formats for graphics buffers.

Details
Members
RGBA_8888 = 0x1
32-bit format that has 8-bit R, G, B, and A components, in that order, from the lowest memory address to the highest memory address.
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
RGBX_8888 = 0x2
32-bit format that has 8-bit R, G, B, and unused components, in that order, from the lowest memory address to the highest memory address.
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
RGB_888 = 0x3
24-bit format that has 8-bit R, G, and B components, in that order, from the lowest memory address to the highest memory address.
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
RGB_565 = 0x4
16-bit packed format that has 5-bit R, 6-bit G, and 5-bit B components, in that order, from the most-significant bits to the least-significant bits.
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
BGRA_8888 = 0x5
32-bit format that has 8-bit B, G, R, and A components, in that order, from the lowest memory address to the highest memory address.
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
YCBCR_422_SP = 0x10
Legacy formats deprecated in favor of YCBCR_420_888.
YCRCB_420_SP = 0x11
YCBCR_422_I = 0x14
RGBA_FP16 = 0x16
64-bit format that has 16-bit R, G, B, and A components, in that order, from the lowest memory address to the highest memory address.
The component values are signed floats, whose interpretation is defined by the dataspace.
RAW16 = 0x20
RAW16 is a single-channel, 16-bit, little endian format, typically representing raw Bayer-pattern images from an image sensor, with minimal processing.
The exact pixel layout of the data in the buffer is sensor-dependent, and needs to be queried from the camera device.
Generally, not all 16 bits are used; more common values are 10 or 12 bits. If not all bits are used, the lower-order bits are filled first. All parameters to interpret the raw data (black and white points, color space, etc.) must be queried from the camera device.
This format assumes:
- an even width
- an even height
- a horizontal stride multiple of 16 pixels
- a vertical stride equal to the height
- strides are specified in pixels, not in bytes
size = stride * height * 2
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_*
- BufferUsage::CPU_*
- BufferUsage::RENDERSCRIPT
The mapping of the dataspace to buffer contents for RAW16 is as follows:
Dataspace value      | Buffer contents
---------------------+-----------------------------------------
Dataspace::ARBITRARY | Raw image sensor data, layout is as
                     | defined above.
Dataspace::DEPTH     | Unprocessed implementation-dependent raw
                     | depth measurements, opaque with 16 bit
                     | samples.
Other                | Unsupported
BLOB = 0x21
BLOB is used to carry task-specific data which does not have a standard image structure. The details of the format are left to the two endpoints.
A typical use case is for transporting JPEG-compressed images from the Camera HAL to the framework or to applications.
Buffers of this format must have a height of 1, and width equal to their size in bytes.
The mapping of the dataspace to buffer contents for BLOB is as follows:
Dataspace value      | Buffer contents
---------------------+-----------------------------------------
Dataspace::JFIF      | An encoded JPEG image
Dataspace::DEPTH     | An android_depth_points buffer
Dataspace::SENSOR    | Sensor event data
Other                | Unsupported
IMPLEMENTATION_DEFINED = 0x22
A format indicating that the choice of format is entirely up to the allocator.
The allocator should examine the usage bits passed in when allocating a buffer with this format, and it should derive the pixel format from those usage flags. This format must never be used with any of the BufferUsage::CPU_* usage flags.
Even when the internally chosen format has an alpha component, the clients must assume the alpha value to be 1.0.
The interpretation of the component values is defined by the dataspace.
YCBCR_420_888 = 0x23
This format allows platforms to use an efficient YCbCr/YCrCb 4:2:0 buffer layout, while still describing the general format in a layout-independent manner. While called YCbCr, it can be used to describe formats with either chromatic ordering, as well as whole planar or semiplanar layouts.
This format must be accepted by the allocator when BufferUsage::CPU_* are set.
Buffers with this format must be locked with IMapper::lockYCbCr. Locking with IMapper::lock must return an error.
The interpretation of the component values is defined by the dataspace.
RAW_OPAQUE = 0x24
RAW_OPAQUE is a format for unprocessed raw image buffers coming from an image sensor. The actual structure of buffers of this format is implementation-dependent.
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_*
- BufferUsage::CPU_*
- BufferUsage::RENDERSCRIPT
The mapping of the dataspace to buffer contents for RAW_OPAQUE is as follows:
Dataspace value      | Buffer contents
---------------------+-----------------------------------------
Dataspace::ARBITRARY | Raw image sensor data
Other                | Unsupported
RAW10 = 0x25
RAW10 is a single-channel, 10-bit per pixel, densely packed in each row, unprocessed format, usually representing raw Bayer-pattern images coming from an image sensor.
In an image buffer with this format, starting from the first pixel of each row, each 4 consecutive pixels are packed into 5 bytes (40 bits). Each one of the first 4 bytes contains the top 8 bits of each pixel. The fifth byte contains the 2 least significant bits of the 4 pixels. The exact layout of the data for each 4 consecutive pixels is illustrated below (Pi[j] stands for the jth bit of the ith pixel):
         bit 7                                          bit 0
        =====|=====|=====|=====|=====|=====|=====|=====|
Byte 0: |P0[9]|P0[8]|P0[7]|P0[6]|P0[5]|P0[4]|P0[3]|P0[2]|
        |-----|-----|-----|-----|-----|-----|-----|-----|
Byte 1: |P1[9]|P1[8]|P1[7]|P1[6]|P1[5]|P1[4]|P1[3]|P1[2]|
        |-----|-----|-----|-----|-----|-----|-----|-----|
Byte 2: |P2[9]|P2[8]|P2[7]|P2[6]|P2[5]|P2[4]|P2[3]|P2[2]|
        |-----|-----|-----|-----|-----|-----|-----|-----|
Byte 3: |P3[9]|P3[8]|P3[7]|P3[6]|P3[5]|P3[4]|P3[3]|P3[2]|
        |-----|-----|-----|-----|-----|-----|-----|-----|
Byte 4: |P3[1]|P3[0]|P2[1]|P2[0]|P1[1]|P1[0]|P0[1]|P0[0]|
        ===============================================
This format assumes:
- a width multiple of 4 pixels
- an even height
- a vertical stride equal to the height
- strides are specified in bytes, not in pixels
size = stride * height
When stride is equal to width * (10 / 8), there will be no padding bytes at the end of each row, and the entire image data is densely packed. When stride is larger than width * (10 / 8), padding bytes will be present at the end of each row (including the last row).
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_*
- BufferUsage::CPU_*
- BufferUsage::RENDERSCRIPT
The mapping of the dataspace to buffer contents for RAW10 is as follows:
Dataspace value      | Buffer contents
---------------------+-----------------------------------------
Dataspace::ARBITRARY | Raw image sensor data
Other                | Unsupported
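The 4-pixels-in-5-bytes packing described above can be sketched as a small helper. This is an illustrative sketch, not HAL code; `unpack_raw10_group` is a hypothetical name that follows the byte layout in the diagram:

```python
def unpack_raw10_group(chunk):
    """Unpack one 5-byte RAW10 group into 4 pixel values (10 bits each).

    Bytes 0-3 hold the top 8 bits of pixels 0-3; byte 4 holds the
    2 LSBs of each pixel, with pixel 0 in its lowest 2 bits.
    """
    b0, b1, b2, b3, b4 = chunk
    return [
        (b0 << 2) | (b4 & 0x3),
        (b1 << 2) | ((b4 >> 2) & 0x3),
        (b2 << 2) | ((b4 >> 4) & 0x3),
        (b3 << 2) | ((b4 >> 6) & 0x3),
    ]
```

A full row would be decoded by applying this to each 5-byte group up to width * (10 / 8) bytes, skipping any row padding beyond that.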
RAW12 = 0x26
RAW12 is a single-channel, 12-bit per pixel, densely packed in each row, unprocessed format, usually representing raw Bayer-pattern images coming from an image sensor.
In an image buffer with this format, starting from the first pixel of each row, each two consecutive pixels are packed into 3 bytes (24 bits). The first and second bytes contain the top 8 bits of the first and second pixels. The third byte contains the 4 least significant bits of the two pixels. The exact layout of the data for each two consecutive pixels is illustrated below (Pi[j] stands for the jth bit of the ith pixel):
         bit 7                                           bit 0
        ======|======|=====|=====|=====|=====|=====|=====|
Byte 0: |P0[11]|P0[10]|P0[9]|P0[8]|P0[7]|P0[6]|P0[5]|P0[4]|
        |------|------|-----|-----|-----|-----|-----|-----|
Byte 1: |P1[11]|P1[10]|P1[9]|P1[8]|P1[7]|P1[6]|P1[5]|P1[4]|
        |------|------|-----|-----|-----|-----|-----|-----|
Byte 2: |P1[3] |P1[2] |P1[1]|P1[0]|P0[3]|P0[2]|P0[1]|P0[0]|
        =====================================================
This format assumes:
- a width multiple of 4 pixels
- an even height
- a vertical stride equal to the height
- strides are specified in bytes, not in pixels
size = stride * height
When stride is equal to width * (12 / 8), there will be no padding bytes at the end of each row, and the entire image data is densely packed. When stride is larger than width * (12 / 8), padding bytes will be present at the end of each row (including the last row).
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_*
- BufferUsage::CPU_*
- BufferUsage::RENDERSCRIPT
The mapping of the dataspace to buffer contents for RAW12 is as follows:
Dataspace value      | Buffer contents
---------------------+-----------------------------------------
Dataspace::ARBITRARY | Raw image sensor data
Other                | Unsupported
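The 2-pixels-in-3-bytes packing described above can be sketched the same way as RAW10. Again an illustrative sketch with a hypothetical helper name, following the byte layout in the diagram:

```python
def unpack_raw12_pair(chunk):
    """Unpack one 3-byte RAW12 group into 2 pixel values (12 bits each).

    Bytes 0-1 hold the top 8 bits of pixels 0-1; byte 2 holds the
    4 LSBs of each pixel, with pixel 0 in its lowest 4 bits.
    """
    b0, b1, b2 = chunk
    return [(b0 << 4) | (b2 & 0xF), (b1 << 4) | (b2 >> 4)]
```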
RGBA_1010102 = 0x2B
32-bit packed format that has 2-bit A, 10-bit B, G, and R components, in that order, from the most-significant bits to the least-significant bits.
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
Y8 = 0x20203859
Y8 is a YUV planar format comprised of a WxH Y plane, with each pixel being represented by 8 bits. It is equivalent to just the Y plane from YV12.
This format assumes:
- an even width
- an even height
- a horizontal stride multiple of 16 pixels
- a vertical stride equal to the height
size = stride * height
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_*
- BufferUsage::CPU_*
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace.
Y16 = 0x20363159
Y16 is a YUV planar format comprised of a WxH Y plane, with each pixel being represented by 16 bits. It is just like Y8, but has double the bits per pixel (little endian).
This format assumes:
- an even width
- an even height
- a horizontal stride multiple of 16 pixels
- a vertical stride equal to the height
- strides are specified in pixels, not in bytes
size = stride * height * 2
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_*
- BufferUsage::CPU_*
The component values are unsigned normalized to the range [0, 1], whose interpretation is defined by the dataspace. When the dataspace is Dataspace::DEPTH, each pixel is a distance value measured by a depth camera, plus an associated confidence value.
YV12 = 0x32315659
YV12 is a 4:2:0 YCrCb planar format comprised of a WxH Y plane followed by (W/2) x (H/2) Cr and Cb planes.
This format assumes:
- an even width
- an even height
- a horizontal stride multiple of 16 pixels
- a vertical stride equal to the height
y_size = stride * height
c_stride = ALIGN(stride / 2, 16)
c_size = c_stride * height / 2
size = y_size + c_size * 2
cr_offset = y_size
cb_offset = y_size + c_size
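The YV12 layout formulas above can be sketched as a small helper. This is an illustrative sketch under the format's stated assumptions (even dimensions, stride a multiple of 16 pixels); `yv12_layout` and `align` are hypothetical names:

```python
def align(x, a):
    """Round x up to the nearest multiple of a (a must be a power of 2)."""
    return (x + a - 1) & ~(a - 1)

def yv12_layout(height, stride):
    """Compute YV12 plane sizes and offsets from the formulas above.

    stride is the Y-plane stride in pixels.
    """
    y_size = stride * height
    c_stride = align(stride // 2, 16)
    c_size = c_stride * height // 2
    return {
        "c_stride": c_stride,
        "size": y_size + c_size * 2,
        "cr_offset": y_size,        # Cr plane immediately follows Y
        "cb_offset": y_size + c_size,  # Cb plane follows Cr
    }
```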
This range is reserved for vendor extensions. Formats in this range must support BufferUsage::GPU_TEXTURE. Clients must assume they do not have an alpha component.
This format must be accepted by the allocator when used with the following usage flags:
- BufferUsage::CAMERA_* - BufferUsage::CPU_* - BufferUsage::GPU_TEXTURE
The component values are unsigned normalized to the range[0, 1], whose interpretation is defined by the dataspace.
Annotations
export
name="android_pixel_format_t" , value_prefix="HAL_PIXEL_FORMAT_"

BufferUsage

enum BufferUsage: uint64_t

Buffer usage definitions.

Details
Members
CPU_READ_MASK = 0xfULL
bit 0-3 is an enum
CPU_READ_NEVER = 0
buffer is never read by CPU
CPU_READ_RARELY = 2
buffer is rarely read by CPU
CPU_READ_OFTEN = 3
buffer is often read by CPU
CPU_WRITE_MASK = 0xfULL << 4
bit 4-7 is an enum
CPU_WRITE_NEVER = 0 << 4
buffer is never written by CPU
CPU_WRITE_RARELY = 2 << 4
buffer is rarely written by CPU
CPU_WRITE_OFTEN = 3 << 4
buffer is often written by CPU
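Since the CPU read and write fields are 4-bit enums rather than independent flag bits, they must be masked out before comparison. A minimal sketch (hypothetical helper name; the mask and shift values mirror the definitions above):

```python
CPU_READ_MASK = 0xF          # bits 0-3
CPU_WRITE_MASK = 0xF << 4    # bits 4-7

def cpu_usage(usage):
    """Split the CPU read/write enum values out of a 64-bit usage value."""
    return (usage & CPU_READ_MASK, (usage & CPU_WRITE_MASK) >> 4)
```

For example, a usage of CPU_READ_OFTEN | CPU_WRITE_RARELY | GPU_TEXTURE decodes to read enum 3 and write enum 2.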
GPU_TEXTURE = 1ULL << 8
buffer is used as a GPU texture
GPU_RENDER_TARGET = 1ULL << 9
buffer is used as a GPU render target
COMPOSER_OVERLAY = 1ULL << 11
buffer is used as a composer HAL overlay layer
COMPOSER_CLIENT_TARGET = 1ULL << 12
buffer is used as a composer HAL client target
PROTECTED = 1ULL << 14
Buffer is allocated with hardware-level protection against copying the contents (or information derived from the contents) into unprotected memory.
COMPOSER_CURSOR = 1ULL << 15
buffer is used as a hwcomposer HAL cursor layer
VIDEO_ENCODER = 1ULL << 16
buffer is used as a video encoder input
CAMERA_OUTPUT = 1ULL << 17
buffer is used as a camera HAL output
CAMERA_INPUT = 1ULL << 18
buffer is used as a camera HAL input
RENDERSCRIPT = 1ULL << 20
buffer is used as a renderscript allocation
VIDEO_DECODER = 1ULL << 22
buffer is used as a video decoder output
SENSOR_DIRECT_DATA = 1ULL << 23
buffer is used as a sensor direct report output
GPU_DATA_BUFFER = 1ULL << 24
buffer is used as an OpenGL shader storage or uniform buffer object
VENDOR_MASK = 0xfULL << 28
bits 28-31 are reserved for vendor extensions
VENDOR_MASK_HI = 0xffffULL << 48
bits 48-63 are reserved for vendor extensions

Transform

enum Transform: int32_t

Transformation definitions

Details
Members
FLIP_H = 1 << 0
Horizontal flip. FLIP_H/FLIP_V is applied before ROT_90.
FLIP_V = 1 << 1
Vertical flip. FLIP_H/FLIP_V is applied before ROT_90.
ROT_90 = 1 << 2
90 degree clockwise rotation. FLIP_H/FLIP_V is applied before ROT_90.
ROT_180 = FLIP_H | FLIP_V
Commonly used combinations.
ROT_270 = FLIP_H | FLIP_V | ROT_90
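The flips-before-rotation ordering above can be sketched as a coordinate mapping. This is an illustrative sketch, not HAL code; `apply_transform` is a hypothetical name, and ROT_90 is treated as a clockwise rotation of a w x h buffer into an h x w one:

```python
FLIP_H, FLIP_V, ROT_90 = 1, 2, 4  # mirror the enum values above

def apply_transform(x, y, w, h, t):
    """Map a source pixel (x, y) in a w x h buffer through a Transform.

    FLIP_H/FLIP_V are applied before ROT_90, matching the enum docs.
    """
    if t & FLIP_H:
        x = w - 1 - x
    if t & FLIP_V:
        y = h - 1 - y
    if t & ROT_90:
        x, y = h - 1 - y, x
    return x, y
```

Applying FLIP_H | FLIP_V reproduces ROT_180, and FLIP_H | FLIP_V | ROT_90 reproduces ROT_270, consistent with the combination members above.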
Annotations
export
name="android_transform_t" , value_prefix="HAL_TRANSFORM_"

Dataspace

enum Dataspace: int32_t

Dataspace Definitions

Dataspace is the definition of how pixel values should be interpreted.

For many formats, this is the colorspace of the image data, which includes primaries (including white point) and the transfer characteristic function, which describes both gamma curve and numeric range (within the bit depth).

Other dataspaces include depth measurement data from a depth camera.

A dataspace is comprised of a number of fields.

Version

The top 2 bits represent the revision of the field specification. This is currently always 0.

bits    31-30                      29 - 0
       +-----+----------------------------------------------------+
fields | Rev |              Revision specific fields              |
       +-----+----------------------------------------------------+

Field layout for version = 0:

A dataspace is comprised of the following fields: Standard, Transfer function, and Range.

bits    31-30 29-27  26-22    21-16             15-0
       +-----+-----+--------+--------+----------------------------+
fields |  0  |Range|Transfer|Standard|      Legacy and custom     |
       +-----+-----+--------+--------+----------------------------+
        VV    RRR   TTTTT    SSSSSS   LLLLLLLL LLLLLLLL

If the range, transfer and standard fields are all 0 (i.e. the top 16 bits are all zeroes), the bottom 16 bits contain either a legacy dataspace value or a custom value.
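The field layout above can be sketched as a bitfield split. This is an illustrative sketch; `split_dataspace` is a hypothetical name, and the shifts and masks mirror the STANDARD/TRANSFER/RANGE definitions that follow:

```python
STANDARD_SHIFT, TRANSFER_SHIFT, RANGE_SHIFT = 16, 22, 27
STANDARD_MASK = 63 << STANDARD_SHIFT  # bits 21-16
TRANSFER_MASK = 31 << TRANSFER_SHIFT  # bits 26-22
RANGE_MASK = 7 << RANGE_SHIFT         # bits 29-27

def split_dataspace(ds):
    """Split a v0 dataspace into (standard, transfer, range, legacy/custom)."""
    return (
        (ds & STANDARD_MASK) >> STANDARD_SHIFT,
        (ds & TRANSFER_MASK) >> TRANSFER_SHIFT,
        (ds & RANGE_MASK) >> RANGE_SHIFT,
        ds & 0xFFFF,
    )
```

For a composed value such as STANDARD_BT709 | TRANSFER_SRGB | RANGE_FULL the top fields are nonzero, while a legacy value such as 0x201 keeps the top 16 bits zero.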

Details
Members
UNKNOWN = 0x0
Default-assumption data space, when not explicitly specified.
It is safest to assume the buffer is an image with sRGB primaries and encoding ranges, but the consumer and/or the producer of the data may simply be using defaults. No automatic gamma transform should be expected, except for a possible display gamma transform when drawn to a screen.
ARBITRARY = 0x1
Arbitrary dataspace with manually defined characteristics. Definition for colorspaces or other meaning must be communicated separately.
This is used when specifying primaries, transfer characteristics, etc. separately.
A typical use case is in video encoding parameters (e.g. for H.264), where a colorspace can have separately defined primaries, transfer characteristics, etc.
STANDARD_SHIFT = 16
Color-description aspects
The following aspects define various characteristics of the color specification. These represent bitfields, so that a data space value can specify each of them independently.
STANDARD_MASK = 63 << STANDARD_SHIFT
Standard aspect
Defines the chromaticity coordinates of the source primaries in terms of the CIE 1931 definition of x and y specified in ISO 11664-1.
STANDARD_UNSPECIFIED = 0 << STANDARD_SHIFT
Chromaticity coordinates are unknown or are determined by the application. Implementations shall use the following suggested standards:
All YCbCr formats: BT709 if size is 720p or larger (since most video content is letterboxed, this corresponds to a width of 1280 or greater, or a height of 720 or greater); BT601_625 if size is smaller than 720p or is JPEG. All RGB formats: BT709.
For all other formats standard is undefined, and implementations should use an appropriate standard for the data represented.
STANDARD_BT709 = 1 << STANDARD_SHIFT
Primaries:        x       y
  green       0.300   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.2126, KB = 0.0722 luminance interpretation for RGB conversion.
STANDARD_BT601_625 = 2 << STANDARD_SHIFT
Primaries:        x       y
  green       0.290   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
KR = 0.299, KB = 0.114. This adjusts the luminance interpretation for RGB conversion from the one purely determined by the primaries to minimize the color shift into RGB space that uses BT.709 primaries.
STANDARD_BT601_625_UNADJUSTED = 3 << STANDARD_SHIFT
Primaries:        x       y
  green       0.290   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.222, KB = 0.071 luminance interpretation for RGB conversion.
STANDARD_BT601_525 = 4 << STANDARD_SHIFT
Primaries:        x       y
  green       0.310   0.595
  blue        0.155   0.070
  red         0.630   0.340
  white (D65) 0.3127  0.3290
KR = 0.299, KB = 0.114. This adjusts the luminance interpretation for RGB conversion from the one purely determined by the primaries to minimize the color shift into RGB space that uses BT.709 primaries.
STANDARD_BT601_525_UNADJUSTED = 5 << STANDARD_SHIFT
Primaries:        x       y
  green       0.310   0.595
  blue        0.155   0.070
  red         0.630   0.340
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.212, KB = 0.087 luminance interpretation for RGB conversion (as in SMPTE 240M).
STANDARD_BT2020 = 6 << STANDARD_SHIFT
Primaries:        x       y
  green       0.170   0.797
  blue        0.131   0.046
  red         0.708   0.292
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.2627, KB = 0.0593 luminance interpretation for RGB conversion.
STANDARD_BT2020_CONSTANT_LUMINANCE = 7 << STANDARD_SHIFT
Primaries:        x       y
  green       0.170   0.797
  blue        0.131   0.046
  red         0.708   0.292
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.2627, KB = 0.0593 luminance interpretation for RGB conversion using the linear domain.
STANDARD_BT470M = 8 << STANDARD_SHIFT
Primaries:        x       y
  green       0.21    0.71
  blue        0.14    0.08
  red         0.67    0.33
  white (C)   0.310   0.316
Use the unadjusted KR = 0.30, KB = 0.11 luminance interpretation for RGB conversion.
STANDARD_FILM = 9 << STANDARD_SHIFT
Primaries:        x       y
  green       0.243   0.692
  blue        0.145   0.049
  red         0.681   0.319
  white (C)   0.310   0.316
Use the unadjusted KR = 0.254, KB = 0.068 luminance interpretation for RGB conversion.
STANDARD_DCI_P3 = 10 << STANDARD_SHIFT
SMPTE EG 432-1 and SMPTE RP 431-2 (DCI-P3).
Primaries:        x       y
  green       0.265   0.690
  blue        0.150   0.060
  red         0.680   0.320
  white (D65) 0.3127  0.3290
STANDARD_ADOBE_RGB = 11 << STANDARD_SHIFT
Adobe RGB
Primaries:        x       y
  green       0.210   0.710
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
TRANSFER_SHIFT = 22
TRANSFER_MASK = 31 << TRANSFER_SHIFT
Transfer aspect
Transfer characteristics are the opto-electronic transfer characteristic at the source as a function of linear optical intensity (luminance).
For digital signals, E corresponds to the recorded value. Normally, the transfer function is applied in RGB space to each of the R, G and B components independently. This may result in color shift that can be minimized by applying the transfer function in Lab space only for the L component. Implementations may apply the transfer function in RGB space for all pixel formats if desired.
TRANSFER_UNSPECIFIED = 0 << TRANSFER_SHIFT
Transfer characteristics are unknown or are determined by the application.
Implementations should use the following transfer functions:
For YCbCr formats: use TRANSFER_SMPTE_170M. For RGB formats: use TRANSFER_SRGB.
For all other formats transfer function is undefined, and implementations should use an appropriate standard for the data represented.
TRANSFER_LINEAR = 1 << TRANSFER_SHIFT
Transfer characteristic curve:
E = L
L - luminance of image 0 <= L <= 1 for conventional colorimetry
E - corresponding electrical signal
TRANSFER_SRGB = 2 << TRANSFER_SHIFT
Transfer characteristic curve:
E = 1.055 * L^(1/2.4) - 0.055  for 0.0031308 <= L <= 1
  = 12.92 * L                  for 0 <= L < 0.0031308
L - luminance of image 0 <= L <= 1 for conventional colorimetry
E - corresponding electrical signal
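The two-branch sRGB curve above translates directly to code. An illustrative sketch (hypothetical function name):

```python
def srgb_oetf(L):
    """sRGB opto-electronic transfer function, per the curve above.

    L is linear luminance in [0, 1]; returns the electrical signal E.
    """
    if L < 0.0031308:
        return 12.92 * L
    return 1.055 * L ** (1 / 2.4) - 0.055
```

Note the two branches meet (to within rounding) at L = 0.0031308, so the curve is continuous.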
TRANSFER_SMPTE_170M = 3 << TRANSFER_SHIFT
BT.601 525, BT.601 625, BT.709, BT.2020
Transfer characteristic curve:
E = 1.099 * L^0.45 - 0.099  for 0.018 <= L <= 1
  = 4.500 * L               for 0 <= L < 0.018
L - luminance of image 0 <= L <= 1 for conventional colorimetry
E - corresponding electrical signal
TRANSFER_GAMMA2_2 = 4 << TRANSFER_SHIFT
Assumed display gamma 2.2.
Transfer characteristic curve:
E = L^(1/2.2)
L - luminance of image 0 <= L <= 1 for conventional colorimetry
E - corresponding electrical signal
TRANSFER_GAMMA2_6 = 5 << TRANSFER_SHIFT
Assumed display gamma 2.6.
Transfer characteristic curve:
E = L^(1/2.6)
L - luminance of image 0 <= L <= 1 for conventional colorimetry
E - corresponding electrical signal
TRANSFER_GAMMA2_8 = 6 << TRANSFER_SHIFT
Assumed display gamma 2.8.
Transfer characteristic curve:
E = L^(1/2.8)
L - luminance of image 0 <= L <= 1 for conventional colorimetry
E - corresponding electrical signal
TRANSFER_ST2084 = 7 << TRANSFER_SHIFT
SMPTE ST 2084 (Dolby Perceptual Quantizer)
Transfer characteristic curve:
E = ((c1 + c2 * L^n) / (1 + c3 * L^n))^m
c1 = c3 - c2 + 1 = 3424 / 4096 = 0.8359375
c2 = 32 * 2413 / 4096 = 18.8515625
c3 = 32 * 2392 / 4096 = 18.6875
m = 128 * 2523 / 4096 = 78.84375
n = 0.25 * 2610 / 4096 = 0.1593017578125
L - luminance of image 0 <= L <= 1 for HDR colorimetry. L = 1 corresponds to 10000 cd/m2
E - corresponding electrical signal
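The PQ curve above can be sketched directly from its constants. An illustrative sketch (hypothetical function name); note that c1 = c3 - c2 + 1 guarantees E(1) = 1 exactly:

```python
def st2084_oetf(L):
    """SMPTE ST 2084 (PQ) curve from the constants above.

    L is normalized so that L = 1 corresponds to 10000 cd/m^2.
    """
    c2 = 32 * 2413 / 4096   # 18.8515625
    c3 = 32 * 2392 / 4096   # 18.6875
    c1 = c3 - c2 + 1        # 0.8359375
    m = 128 * 2523 / 4096   # 78.84375
    n = 0.25 * 2610 / 4096  # 0.1593017578125
    Ln = L ** n
    return ((c1 + c2 * Ln) / (1 + c3 * Ln)) ** m
```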
TRANSFER_HLG = 8 << TRANSFER_SHIFT
ARIB STD-B67 Hybrid Log Gamma
Transfer characteristic curve:
E = r * L^0.5          for 0 <= L <= 1
  = a * ln(L - b) + c  for 1 < L
a = 0.17883277
b = 0.28466892
c = 0.55991073
r = 0.5
L - luminance of image 0 <= L for HDR colorimetry. L = 1 corresponds to reference white level of 100 cd/m2
E - corresponding electrical signal
RANGE_SHIFT = 27
RANGE_MASK = 7 << RANGE_SHIFT
Range aspect
Defines the range of values corresponding to the unit range of 0-1. This is defined for YCbCr only, but can be expanded to RGB space.
RANGE_UNSPECIFIED = 0 << RANGE_SHIFT
Range is unknown or is determined by the application. Implementations shall use the following suggested ranges:
All YCbCr formats: limited range. All RGB or RGBA formats (including RAW and Bayer): full range. All Y formats: full range.
For all other formats range is undefined, and implementations should use an appropriate range for the data represented.
RANGE_FULL = 1 << RANGE_SHIFT
Full range uses all values for Y, Cb and Cr from 0 to 2^b-1, where b is the bit depth of the color format.
RANGE_LIMITED = 2 << RANGE_SHIFT
Limited range uses values 16/256*2^b to 235/256*2^b for Y, and 1/16*2^b to 15/16*2^b for Cb, Cr, R, G and B, where b is the bit depth of the color format.
E.g. for 8-bit-depth formats: Luma (Y) samples should range from 16 to 235, inclusive; Chroma (Cb, Cr) samples should range from 16 to 240, inclusive.
For 10-bit-depth formats: Luma (Y) samples should range from 64 to 940, inclusive; Chroma (Cb, Cr) samples should range from 64 to 960, inclusive.
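The limited-range bounds above follow directly from the 16/256, 235/256, 1/16 and 15/16 fractions. An illustrative sketch (hypothetical function name) that reproduces the 8-bit and 10-bit examples:

```python
def limited_range(b):
    """Limited-range (luma, chroma) bounds for bit depth b, per the text."""
    luma = (16 * 2 ** b // 256, 235 * 2 ** b // 256)
    chroma = (2 ** b // 16, 15 * 2 ** b // 16)
    return luma, chroma
```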
RANGE_EXTENDED = 3 << RANGE_SHIFT
Extended range is used for scRGB, and is intended for use with floating point pixel formats. [0.0 - 1.0] is the standard sRGB space. Values outside the range 0.0 - 1.0 can encode color outside the sRGB gamut. Used to blend/merge multiple dataspaces on a single display.
SRGB_LINEAR = 0x200
sRGB linear encoding:
The red, green, and blue components are stored in sRGB space, but are linear, not gamma-encoded. The RGB primaries and the white point are the same as BT.709.
The values are encoded using the full range ([0, 255] for 8-bit) for all components.
V0_SRGB_LINEAR = STANDARD_BT709 | TRANSFER_LINEAR | RANGE_FULL
V0_SCRGB_LINEAR = STANDARD_BT709 | TRANSFER_LINEAR | RANGE_EXTENDED
scRGB linear encoding:
The red, green, and blue components are stored in extended sRGB space, but are linear, not gamma-encoded. The RGB primaries and the white point are the same as BT.709.
The values are floating point. A pixel value of 1.0, 1.0, 1.0 corresponds to sRGB white (D65) at 80 nits. Values beyond the range [0.0 - 1.0] would correspond to other color spaces and/or HDR content.
SRGB = 0x201
sRGB gamma encoding:
The red, green and blue components are stored in sRGB space, and converted to linear space when read, using the SRGB transfer function for each of the R, G and B components. When written, the inverse transformation is performed.
The alpha component, if present, is always stored in linear space and is left unmodified when read or written.
Use full range and BT.709 standard.
V0_SRGB = STANDARD_BT709 | TRANSFER_SRGB | RANGE_FULL
V0_SCRGB = STANDARD_BT709 | TRANSFER_SRGB | RANGE_EXTENDED
scRGB:
The red, green, and blue components are stored in extended sRGB space, but are linear, not gamma-encoded. The RGB primaries and the white point are the same as BT.709.
The values are floating point. A pixel value of 1.0, 1.0, 1.0 corresponds to sRGB white (D65) at 80 nits. Values beyond the range [0.0 - 1.0] would correspond to other color spaces and/or HDR content.
JFIF = 0x101
JPEG File Interchange Format(JFIF)
Same model as BT.601-625, but all values (Y, Cb, Cr) range from 0 to 255.
Use full range, BT.601 transfer and BT.601_625 standard.
V0_JFIF = STANDARD_BT601_625 | TRANSFER_SMPTE_170M | RANGE_FULL
BT601_625 = 0x102
ITU-R Recommendation 601 (BT.601) - 625-line
Standard-definition television, 625 lines (PAL)
Use limited range, BT.601 transfer and BT.601_625 standard.
V0_BT601_625 = STANDARD_BT601_625 | TRANSFER_SMPTE_170M | RANGE_LIMITED
BT601_525 = 0x103
ITU-R Recommendation 601 (BT.601) - 525-line
Standard-definition television, 525 lines (NTSC)
Use limited range, BT.601 transfer and BT.601_525 standard.
V0_BT601_525 = STANDARD_BT601_525 | TRANSFER_SMPTE_170M | RANGE_LIMITED
BT709 = 0x104
ITU-R Recommendation 709 (BT.709)
High-definition television
Use limited range, BT.709 transfer and BT.709 standard.
V0_BT709 = STANDARD_BT709 | TRANSFER_SMPTE_170M | RANGE_LIMITED
DCI_P3_LINEAR = STANDARD_DCI_P3 | TRANSFER_LINEAR | RANGE_FULL
SMPTE EG 432-1 and SMPTE RP 431-2.
Digital Cinema DCI-P3
Use full range, linear transfer and D65 DCI-P3 standard
DCI_P3 = STANDARD_DCI_P3 | TRANSFER_GAMMA2_6 | RANGE_FULL
SMPTE EG 432-1 and SMPTE RP 431-2.
Digital Cinema DCI-P3
Use full range, gamma 2.6 transfer and D65 DCI-P3 standard. Note: the application is responsible for gamma encoding the data, as a 2.6 gamma encoding is not supported in HW.
DISPLAY_P3_LINEAR = STANDARD_DCI_P3 | TRANSFER_LINEAR | RANGE_FULL
Display P3
Display P3 uses the same primaries and white-point as DCI-P3; the linear transfer function makes this the same as DCI_P3_LINEAR.
DISPLAY_P3 = STANDARD_DCI_P3 | TRANSFER_SRGB | RANGE_FULL
Display P3
Use the same primaries and white-point as DCI-P3, but the sRGB transfer function.
ADOBE_RGB = STANDARD_ADOBE_RGB | TRANSFER_GAMMA2_2 | RANGE_FULL
Adobe RGB
Use full range, gamma 2.2 transfer and Adobe RGB primaries. Note: the application is responsible for gamma encoding the data, as a 2.2 gamma encoding is not supported in HW.
BT2020_LINEAR = STANDARD_BT2020 | TRANSFER_LINEAR | RANGE_FULL
ITU-R Recommendation 2020 (BT.2020)
Ultra High-definition television
Use full range, linear transfer and BT2020 standard
BT2020 = STANDARD_BT2020 | TRANSFER_SMPTE_170M | RANGE_FULL
ITU-R Recommendation 2020 (BT.2020)
Ultra High-definition television
Use full range, BT.709 transfer and BT2020 standard
BT2020_PQ = STANDARD_BT2020 | TRANSFER_ST2084 | RANGE_FULL
ITU-R Recommendation 2020 (BT.2020)
Ultra High-definition television
Use full range, SMPTE 2084 (PQ) transfer and BT2020 standard
DEPTH = 0x1000
The buffer contains depth ranging measurements from a depth camera. This value is valid with formats:
HAL_PIXEL_FORMAT_Y16: 16-bit samples, consisting of a depth measurement and an associated confidence value. The 3 MSBs of the sample make up the confidence value, and the 13 LSBs of the sample make up the depth measurement. For the confidence section, 0 means 100% confidence, 1 means 0% confidence. The mapping to a linear float confidence value between 0.f and 1.f can be obtained with:
float confidence = (((depthSample >> 13) - 1) & 0x7) / 7.0f;
The depth measurement can be extracted simply with:
uint16_t range = (depthSample & 0x1FFF);
HAL_PIXEL_FORMAT_BLOB: a depth point cloud, as a variable-length float (x, y, z, confidence) coordinate point list. The point cloud will be represented with the android_depth_points structure.
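The Y16 depth-sample decoding described above can be sketched as a helper. This is an illustrative sketch (hypothetical function name) that mirrors the two C expressions given in the text:

```python
def decode_depth_sample(depth_sample):
    """Decode a Dataspace::DEPTH Y16 sample into (range, confidence).

    The 3 MSBs are the confidence value (0 means 100%, 1 means 0%),
    and the 13 LSBs are the depth measurement.
    """
    rng = depth_sample & 0x1FFF
    confidence = (((depth_sample >> 13) - 1) & 0x7) / 7.0
    return rng, confidence
```

Confidence bits of 0 map to 1.0 (fully confident) and confidence bits of 1 map to 0.0, matching the stated convention.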
SENSOR = 0x1001
The buffer contains sensor events from sensor direct report. This value is valid with formats:
HAL_PIXEL_FORMAT_BLOB: an array of sensor event structures that forms a lock-free queue. The format of the sensor event structure is specified in the Sensors HAL.
Annotations
export
name="android_dataspace_t" , value_prefix="HAL_DATASPACE_"

ColorMode

enum ColorMode: int32_t

Color modes that may be supported by a display.

Definitions: Rendering intent generally defines the goal in mapping a source (input) color to a destination device color for a given color mode.

It is important to keep in mind three cases where mapping may be applied:
1. The source gamut is much smaller than the destination (display) gamut
2. The source gamut is much larger than the destination gamut (this will ordinarily be handled using colorimetric rendering, below)
3. The source and destination gamuts are roughly equal, although not completely overlapping
Also, a common requirement for mappings is that skin tones should be preserved, or at least remain natural in appearance.

Colorimetric Rendering Intent (All cases): Colorimetric indicates that colors should be preserved. In the case that the source gamut lies wholly within the destination gamut or is about the same (#1, #3), this will simply mean that no manipulations (no saturation boost, for example) are applied. In the case where some source colors lie outside the destination gamut (#2, #3), those will need to be mapped to colors that are within the destination gamut, while the already in-gamut colors remain unchanged.

Non-colorimetric transforms can take many forms. There are no hard rules, and it's left to the implementation to define. Two common intents are described below.

Stretched-Gamut Enhancement Intent (Source < Destination): When the destination gamut is much larger than the source gamut (#1), the source primaries may be redefined to reflect the full extent of the destination space, or to reflect an intermediate gamut. Skin-tone preservation would likely be applied. An example might be sRGB input displayed on a DCI-P3 capable device, with skin-tone preservation.

Within-Gamut Enhancement Intent (Source >= Destination): When the device (destination) gamut is not larger than the source gamut (#2 or #3), but the appearance of a larger gamut is desired, techniques such as saturation boost may be applied to the source colors. Skin-tone preservation may be applied. There is no unique method for within-gamut enhancement; it would be defined within a flexible color mode.

Details
Members
NATIVE = 0
DEFAULT is the "native" gamut of the display.
White Point: Vendor/OEM defined
Panel Gamma: Vendor/OEM defined (typically 2.2)
Rendering Intent: Vendor/OEM defined (typically 'enhanced')
STANDARD_BT601_625 = 1
STANDARD_BT601_625 corresponds with display settings that implement ITU-R Recommendation BT.601 (Rec. 601), 625-line version.
Rendering Intent: Colorimetric
Primaries:
                x       y
  green       0.290   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
KR = 0.299, KB = 0.114. This adjusts the luminance interpretation for RGB conversion from the one purely determined by the primaries, to minimize the color shift into an RGB space that uses the BT.709 primaries.
Gamma Correction (GC):
  if Vlinear < 0.018
    Vnonlinear = 4.500 * Vlinear
  else
    Vnonlinear = 1.099 * (Vlinear)^(0.45) - 0.099
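The adjusted luma weights quoted above can be sketched in code. The helper name is hypothetical; KG follows from KG = 1 - KR - KB (here 0.587), and the weights map gamma-corrected R'G'B' values to luma for the YCbCr conversion.

```cpp
// Sketch: luma from the adjusted BT.601 weights (hypothetical helper).
// KG is derived as 1 - KR - KB.
float bt601Luma(float r, float g, float b) {
    const float kr = 0.299f;
    const float kb = 0.114f;
    return kr * r + (1.0f - kr - kb) * g + kb * b;
}
```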
STANDARD_BT601_625_UNADJUSTED = 2
Primaries:
                x       y
  green       0.290   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.222, KB = 0.071 luminance interpretation for RGB conversion.
Gamma Correction (GC):
  if Vlinear < 0.018
    Vnonlinear = 4.500 * Vlinear
  else
    Vnonlinear = 1.099 * (Vlinear)^(0.45) - 0.099
STANDARD_BT601_525 = 3
Primaries:
                x       y
  green       0.310   0.595
  blue        0.155   0.070
  red         0.630   0.340
  white (D65) 0.3127  0.3290
KR = 0.299, KB = 0.114. This adjusts the luminance interpretation for RGB conversion from the one purely determined by the primaries, to minimize the color shift into an RGB space that uses the BT.709 primaries.
Gamma Correction (GC):
  if Vlinear < 0.018
    Vnonlinear = 4.500 * Vlinear
  else
    Vnonlinear = 1.099 * (Vlinear)^(0.45) - 0.099
STANDARD_BT601_525_UNADJUSTED = 4
Primaries:
                x       y
  green       0.310   0.595
  blue        0.155   0.070
  red         0.630   0.340
  white (D65) 0.3127  0.3290
Use the unadjusted KR = 0.212, KB = 0.087 luminance interpretation for RGB conversion (as in SMPTE 240M).
Gamma Correction (GC):
  if Vlinear < 0.018
    Vnonlinear = 4.500 * Vlinear
  else
    Vnonlinear = 1.099 * (Vlinear)^(0.45) - 0.099
STANDARD_BT709 = 5
STANDARD_BT709 corresponds with display settings that implement ITU-R Recommendation BT.709 (Rec. 709) for high-definition television.
Rendering Intent: Colorimetric
Primaries:
                x       y
  green       0.300   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
HDTV REC709 Inverse Gamma Correction (IGC): V represents the normalized (range [0, 1]) value of R, G, or B.
  if Vnonlinear < 0.081
    Vlinear = Vnonlinear / 4.5
  else
    Vlinear = ((Vnonlinear + 0.099) / 1.099)^(1/0.45)
HDTV REC709 Gamma Correction (GC):
  if Vlinear < 0.018
    Vnonlinear = 4.5 * Vlinear
  else
    Vnonlinear = 1.099 * (Vlinear)^(0.45) - 0.099
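The REC709 IGC/GC pair above transcribes directly to code. This is a sketch with hypothetical helper names; V is normalized to [0, 1], and the two functions are inverses of each other.

```cpp
#include <cmath>

// Sketch: REC709 inverse gamma correction (IGC), nonlinear -> linear.
float rec709Eotf(float vNonlinear) {
    if (vNonlinear < 0.081f)
        return vNonlinear / 4.5f;
    return std::pow((vNonlinear + 0.099f) / 1.099f, 1.0f / 0.45f);
}

// Sketch: REC709 gamma correction (GC), linear -> nonlinear.
float rec709Oetf(float vLinear) {
    if (vLinear < 0.018f)
        return 4.5f * vLinear;
    return 1.099f * std::pow(vLinear, 0.45f) - 0.099f;
}
```

A quick sanity check is that a round trip through both functions returns the original value, and that 1.0 maps to 1.0.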
DCI_P3 = 6
DCI_P3 corresponds with display settings that implement SMPTE EG 432-1 and SMPTE RP 431-2.
Rendering Intent: Colorimetric
Primaries:
                x       y
  green       0.265   0.690
  blue        0.150   0.060
  red         0.680   0.320
  white (D65) 0.3127  0.3290
Gamma: 2.6
SRGB = 7
SRGB corresponds with display settings that implement the sRGB color space. Uses the same primaries as ITU-R Recommendation BT.709.
Rendering Intent: Colorimetric
Primaries:
                x       y
  green       0.300   0.600
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
PC/Internet (sRGB) Inverse Gamma Correction (IGC):
  if Vnonlinear ≤ 0.03928
    Vlinear = Vnonlinear / 12.92
  else
    Vlinear = ((Vnonlinear + 0.055) / 1.055)^(2.4)
PC/Internet (sRGB) Gamma Correction (GC):
  if Vlinear ≤ 0.0031308
    Vnonlinear = 12.92 * Vlinear
  else
    Vnonlinear = 1.055 * (Vlinear)^(1/2.4) - 0.055
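The sRGB transfer functions above can likewise be sketched directly. Helper names are hypothetical; note the piecewise thresholds differ from REC709 even though the primaries match.

```cpp
#include <cmath>

// Sketch: sRGB inverse gamma correction (IGC), nonlinear -> linear.
float srgbEotf(float vNonlinear) {
    if (vNonlinear <= 0.03928f)
        return vNonlinear / 12.92f;
    return std::pow((vNonlinear + 0.055f) / 1.055f, 2.4f);
}

// Sketch: sRGB gamma correction (GC), linear -> nonlinear.
float srgbOetf(float vLinear) {
    if (vLinear <= 0.0031308f)
        return 12.92f * vLinear;
    return 1.055f * std::pow(vLinear, 1.0f / 2.4f) - 0.055f;
}
```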
ADOBE_RGB = 8
ADOBE_RGB corresponds with the RGB color space developed by Adobe Systems, Inc. in 1998.
Rendering Intent: Colorimetric
Primaries:
                x       y
  green       0.210   0.710
  blue        0.150   0.060
  red         0.640   0.330
  white (D65) 0.3127  0.3290
Gamma: 2.2
DISPLAY_P3 = 9
DISPLAY_P3 is a color space that uses the DCI_P3 primaries, the D65 white point, and the sRGB transfer functions.
Rendering Intent: Colorimetric
Primaries:
                x       y
  green       0.265   0.690
  blue        0.150   0.060
  red         0.680   0.320
  white (D65) 0.3127  0.3290
PC/Internet (sRGB) Gamma Correction (GC):
  if Vlinear ≤ 0.0030186
    Vnonlinear = 12.92 * Vlinear
  else
    Vnonlinear = 1.055 * (Vlinear)^(1/2.4) - 0.055
Note: In most cases the sRGB transfer function will be fine.
Annotations
export
name="android_color_mode_t", value_prefix="HAL_COLOR_MODE_"

ColorTransform

enum ColorTransform: int32_t

Color transforms that may be applied by hardware composer to the whole display.

Details
Members
IDENTITY = 0
Applies no transform to the output color
ARBITRARY_MATRIX = 1
Applies an arbitrary transform defined by a 4x4 affine matrix
VALUE_INVERSE = 2
Applies a transform that inverts the value or luminance of the color, but does not modify hue or saturation
GRAYSCALE = 3
Applies a transform that maps all colors to shades of gray
CORRECT_PROTANOPIA = 4
Applies a transform which corrects for protanopic color blindness
CORRECT_DEUTERANOPIA = 5
Applies a transform which corrects for deuteranopic color blindness
CORRECT_TRITANOPIA = 6
Applies a transform which corrects for tritanopic color blindness
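Several of the members above can be expressed as a single 4x4 matrix applied per pixel, which is what ARBITRARY_MATRIX generalizes. The sketch below uses row-major storage and BT.709 luma weights for a GRAYSCALE matrix; both are illustrative assumptions, not statements of the HAL's actual convention.

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;  // row-major in this sketch

// Sketch: apply a 4x4 color matrix to an RGBA color, as a compositor
// might for ARBITRARY_MATRIX (row-major storage is an assumption here).
Vec4 applyColorMatrix(const Mat4& m, const Vec4& c) {
    Vec4 out{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * c[j];
    return out;
}

// Example: GRAYSCALE expressed as a matrix, using BT.709 luma weights
// (an illustrative choice; the enum does not mandate specific weights).
const Mat4 kGrayscale = {{
    {0.2126f, 0.7152f, 0.0722f, 0.0f},
    {0.2126f, 0.7152f, 0.0722f, 0.0f},
    {0.2126f, 0.7152f, 0.0722f, 0.0f},
    {0.0f,    0.0f,    0.0f,    1.0f},
}};
```

Applying `kGrayscale` to pure red yields equal R, G, and B components (the red luma weight) with alpha untouched.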
Annotations
export
name="android_color_transform_t", value_prefix="HAL_COLOR_TRANSFORM_"

Hdr

enum Hdr: int32_t

Supported HDR formats. Must be kept in sync with equivalents in Display.java.

Details
Members
DOLBY_VISION = 1
Device supports Dolby Vision HDR
HDR10 = 2
Device supports HDR10
HLG = 3
Device supports hybrid log-gamma HDR
Annotations
export
name="android_hdr_t", value_prefix="HAL_HDR_"