webp-lossless-bitstream-spec: minor wording updates

Mostly grammatical changes and the addition/subtraction of commas from the
AUTH48 portion of the RFC review process.

The serial comma changes are based on the Chicago Manual of Style
(CMOS), 17th edition.

Bug: webp:611
Change-Id: I5ae2d1cc0196009dbf3a4c2195cc73c2ef809b49
Author: James Zern
Date:   2023-06-05 16:32:02 -07:00
Parent: 7f75c91ced
Commit: 29c9f2d410


@@ -22,10 +22,10 @@ lossless format stores and restores the pixel values exactly, including the
 color values for pixels whose alpha value is 0. The format uses subresolution
 images, recursively embedded into the format itself, for storing statistical
 data about the images, such as the used entropy codes, spatial predictors, color
-space conversion, and color table. LZ77, prefix coding, and a color cache are
-used for compression of the bulk data. Decoding speeds faster than PNG have been
-demonstrated, as well as 25% denser compression than can be achieved using
-today's PNG format.
+space conversion, and color table. A universal algorithm for sequential data
+compression (LZ77), prefix coding, and a color cache are used for compression of
+the bulk data. Decoding speeds faster than PNG have been demonstrated, as well
+as 25% denser compression than can be achieved using today's PNG format.
 
 * TOC placeholder
@@ -40,7 +40,7 @@ image. It is intended as a detailed reference for the WebP lossless encoder and
 decoder implementation.
 
 In this document, we extensively use C programming language syntax to describe
-the bitstream, and assume the existence of a function for reading bits,
+the bitstream and assume the existence of a function for reading bits,
 `ReadBits(n)`. The bytes are read in the natural order of the stream containing
 them, and bits of each byte are read in least-significant-bit-first order. When
 multiple bits are read at the same time, the integer is constructed from the
@@ -61,14 +61,14 @@ b |= ReadBits(1) << 1;
 We assume that each color component, that is, alpha, red, blue and green, is
 represented using an 8-bit byte. We define the corresponding type as uint8. A
-whole ARGB pixel is represented by a type called uint32, an unsigned integer
-consisting of 32 bits. In the code showing the behavior of the transformations,
-alpha value is codified in bits 31..24, red in bits 23..16, green in bits 15..8
-and blue in bits 7..0, but implementations of the format are free to use another
-representation internally.
+whole ARGB pixel is represented by a type called uint32, which is an unsigned
+integer consisting of 32 bits. In the code showing the behavior of the
+transformations, these values are codified in the following bits: alpha in bits
+31..24, red in bits 23..16, green in bits 15..8 and blue in bits 7..0; however,
+implementations of the format are free to use another representation internally.
 
-Broadly, a WebP lossless image contains header data, transform information and
-actual image data. Headers contain width and height of the image. A WebP
+Broadly, a WebP lossless image contains header data, transform information, and
+actual image data. Headers contain the width and height of the image. A WebP
 lossless image can go through four different types of transformation before
 being entropy encoded. The transform information in the bitstream contains the
 data required to apply the respective inverse transforms.
@@ -84,7 +84,7 @@ ARGB image
 : A two-dimensional array containing ARGB pixels.
 
 color cache
-: A small hash-addressed array to store recently used colors, to be able to
+: A small hash-addressed array to store recently used colors to be able to
   recall them with shorter codes.
 
 color indexing image
@@ -96,20 +96,16 @@ color transform image
   color components.
 
 distance mapping
-: Changes LZ77 distances to have the smallest values for pixels in 2D
-  proximity.
+: Changes LZ77 distances to have the smallest values for pixels in
+  two-dimensional proximity.
 
 entropy image
 : A two-dimensional subresolution image indicating which entropy coding should
   be used in a respective square in the image, that is, each pixel is a meta
   prefix code.
 
-prefix code
-: A classic way to do entropy coding where a smaller number of bits are used
-  for more frequent codes.
-
 LZ77
-: Dictionary-based sliding window compression algorithm that either emits
+: A dictionary-based sliding window compression algorithm that either emits
   symbols or describes them as sequences of past symbols.
 
 meta prefix code
@@ -120,16 +116,20 @@ predictor image
 : A two-dimensional subresolution image indicating which spatial predictor is
   used for a particular square in the image.
 
+prefix code
+: A classic way to do entropy coding where a smaller number of bits are used
+  for more frequent codes.
+
 prefix coding
-: A way to entropy code larger integers that codes a few bits of the integer
+: A way to entropy code larger integers, which codes a few bits of the integer
   using an entropy code and codifies the remaining bits raw. This allows for
   the descriptions of the entropy codes to remain relatively small even when
   the range of symbols is large.
 
 scan-line order
-: A processing order of pixels, left-to-right, top-to-bottom, starting from
-  the left-hand-top pixel, proceeding to the right. Once a row is completed,
-  continue from the left-hand column of the next row.
+: A processing order of pixels (left to right and top to bottom), starting
+  from the left-hand-top pixel. Once a row is completed, continue from the
+  left-hand column of the next row.
 
 3 RIFF Header
 -------------
@@ -137,16 +137,16 @@ scan-line order
 The beginning of the header has the RIFF container. This consists of the
 following 21 bytes:
 
-1. String "RIFF"
-2. A little-endian 32 bit value of the block length, the whole size
-   of the block controlled by the RIFF header. Normally this equals
+1. String 'RIFF'.
+2. A little-endian, 32-bit value of the block length, which is the whole size
+   of the block controlled by the RIFF header. Normally, this equals
    the payload size (file size minus 8 bytes: 4 bytes for the 'RIFF'
    identifier and 4 bytes for storing the value itself).
-3. String "WEBP" (RIFF container name).
-4. String "VP8L" (chunk tag for lossless encoded image data).
-5. A little-endian 32-bit value of the number of bytes in the
+3. String 'WEBP' (RIFF container name).
+4. String 'VP8L' (FourCC for lossless-encoded image data).
+5. A little-endian, 32-bit value of the number of bytes in the
    lossless stream.
-6. One byte signature 0x2f.
+6. 1-byte signature 0x2f.
 
 The first 28 bits of the bitstream specify the width and height of the image.
 Width and height are decoded as 14-bit integers as follows:
@@ -181,10 +181,10 @@ Transformations are reversible manipulations of the image data that can reduce
 the remaining symbolic entropy by modeling spatial and color correlations.
 Transformations can make the final compression more dense.
 
-An image can go through four types of transformation. A 1 bit indicates the
+An image can go through four types of transformations. A 1 bit indicates the
 presence of a transform. Each transform is allowed to be used only once. The
-transformations are used only for the main level ARGB image: the subresolution
-images have no transforms, not even the 0 bit indicating the end-of-transforms.
+transformations are used only for the main-level ARGB image; the subresolution
+images have no transforms, not even the 0 bit indicating the end of transforms.
 
 Typically, an encoder would use these transforms to reduce the Shannon entropy
 in the residual image. Also, the transform data can be decided based on entropy
@@ -201,7 +201,7 @@ while (ReadBits(1)) { // Transform present.
 // Decode actual image data (Section 4).
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If a transform is present then the next two bits specify the transform type.
+If a transform is present, then the next two bits specify the transform type.
 There are four types of transforms.
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,7 +215,7 @@ enum TransformType {
 The transform type is followed by the transform data. Transform data contains
 the information required to apply the inverse transform and depends on the
-transform type. Next we describe the transform data for different types.
+transform type. Next, we describe the transform data for different types.
 
 ### 4.1 Predictor Transform
@@ -225,11 +225,11 @@ that neighboring pixels are often correlated. In the predictor transform, the
 current pixel value is predicted from the pixels already decoded (in scan-line
 order) and only the residual value (actual - predicted) is encoded. The
 _prediction mode_ determines the type of prediction to use. We divide the image
-into squares and all the pixels in a square use the same prediction mode.
+into squares, and all the pixels in a square use the same prediction mode.
 
 The first 3 bits of prediction data define the block width and height in number
-of bits. The number of block columns, `block_xsize`, is used in indexing
-two-dimensionally.
+of bits. The number of block columns, `block_xsize`, is used in two-dimensional
+indexing.
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 int size_bits = ReadBits(3) + 2;
@@ -240,9 +240,9 @@ int block_xsize = DIV_ROUND_UP(image_width, 1 << size_bits);
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The transform data contains the prediction mode for each block of the image. All
-the `block_width * block_height` pixels of a block use same prediction mode. The
-prediction modes are treated as pixels of an image and encoded using the same
-techniques described in [Chapter 5](#image-data).
+the `block_width * block_height` pixels of a block use the same prediction mode.
+The prediction modes are treated as pixels of an image and encoded using the
+same techniques described in [Chapter 5](#image-data).
 
 For a pixel _x, y_, one can compute the respective filter block address by:
@@ -255,7 +255,7 @@ There are 14 different prediction modes. In each prediction mode, the current
 pixel value is predicted from one or more neighboring pixels whose values are
 already known.
 
-We choose the neighboring pixels (TL, T, TR, and L) of the current pixel (P) as
+We chose the neighboring pixels (TL, T, TR, and L) of the current pixel (P) as
 follows:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -267,12 +267,12 @@ X X X X X X X X X X X
 X X X X X X X X X X X
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-where TL means top-left, T top, TR top-right, L left pixel. At the time of
-predicting a value for P, all pixels O, TL, T, TR and L have already been
-processed, and pixel P and all pixels X are unknown.
+where TL means top-left, T means top, TR means top-right, and L means left. At
+the time of predicting a value for P, all O, TL, T, TR and L pixels have already
+been processed, and the P pixel and all X pixels are unknown.
 
-Given the above neighboring pixels, the different prediction modes are defined
-as follows.
+Given the preceding neighboring pixels, the different prediction modes are
+defined as follows.
 
 | Mode | Predicted value of each channel of the current pixel |
 | ------ | ------------------------------------------------------- |
@@ -304,7 +304,7 @@ The Select predictor is defined as follows:
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 uint32 Select(uint32 L, uint32 T, uint32 TL) {
-  // L = left pixel, T = top pixel, TL = top left pixel.
+  // L = left pixel, T = top pixel, TL = top-left pixel.
 
   // ARGB component estimates for prediction.
   int pAlpha = ALPHA(L) + ALPHA(T) - ALPHA(TL);
@@ -351,25 +351,26 @@ int ClampAddSubtractHalf(int a, int b) {
 There are special handling rules for some border pixels. If there is a
 prediction transform, regardless of the mode \[0..13\] for these pixels, the
-predicted value for the left-topmost pixel of the image is 0xff000000, L-pixel
-for all pixels on the top row, and T-pixel for all pixels on the leftmost
-column.
+predicted value for the left-topmost pixel of the image is 0xff000000, all
+pixels on the top row are L-pixel, and all pixels on the leftmost column are
+T-pixel.
 
 Addressing the TR-pixel for pixels on the rightmost column is
 exceptional. The pixels on the rightmost column are predicted by using the modes
-\[0..13\] just like pixels not on the border, but the leftmost pixel on the same
-row as the current pixel is instead used as the TR-pixel.
+\[0..13\], just like pixels not on the border, but the leftmost pixel on the
+same row as the current pixel is instead used as the TR-pixel.
 
 ### 4.2 Color Transform
 
-The goal of the color transform is to decorrelate the R, G and B values of each
-pixel. The color transform keeps the green (G) value as it is, transforms red
-(R) based on green and transforms blue (B) based on green and then based on red.
+The goal of the color transform is to decorrelate the R, G, and B values of each
+pixel. The color transform keeps the green (G) value as it is, transforms the
+red (R) value based on the green value, and transforms the blue (B) value based
+on the green value and then on the red value.
 
 As is the case for the predictor transform, first the image is divided into
-blocks and the same transform mode is used for all the pixels in a block. For
-each block there are three types of color transform elements.
+blocks, and the same transform mode is used for all the pixels in a block. For
+each block, there are three types of color transform elements.
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 typedef struct {
@@ -379,7 +380,7 @@ typedef struct {
 } ColorTransformElement;
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The actual color transformation is done by defining a color transform delta. The
+The actual color transform is done by defining a color transform delta. The
 color transform delta depends on the `ColorTransformElement`, which is the same
 for all the pixels in a particular block. The delta is subtracted during the
 color transform. The inverse color transform then is just adding those deltas.
@@ -405,7 +406,7 @@ void ColorTransform(uint8 red, uint8 blue, uint8 green,
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 `ColorTransformDelta` is computed using a signed 8-bit integer representing a
-3.5-fixed-point number, and a signed 8-bit RGB color channel (c) \[-128..127\]
+3.5-fixed-point number and a signed 8-bit RGB color channel (c) \[-128..127\]
 and is defined as follows:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -415,16 +416,16 @@ int8 ColorTransformDelta(int8 t, int8 c) {
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 A conversion from the 8-bit unsigned representation (uint8) to the 8-bit signed
-one (int8) is required before calling `ColorTransformDelta()`. It should be
-performed using 8-bit two's complement (that is: uint8 range \[128..255\] is
-mapped to the \[-128..-1\] range of its converted int8 value).
+one (int8) is required before calling `ColorTransformDelta()`. The signed value
+should be interpreted as an 8-bit two's complement number (that is: uint8 range
+\[128..255\] is mapped to the \[-128..-1\] range of its converted int8 value).
 
 The multiplication is to be done using more precision (with at least 16-bit
 precision). The sign extension property of the shift operation does not matter
-here: only the lowest 8 bits are used from the result, and there the sign
+here; only the lowest 8 bits are used from the result, and there the sign
 extension shifting and unsigned shifting are consistent with each other.
 
-Now we describe the contents of color transform data so that decoding can apply
+Now, we describe the contents of color transform data so that decoding can apply
 the inverse color transform and recover the original red and blue values. The
 first 3 bits of the color transform data contain the width and height of the
 image block in number of bits, just like the predictor transform:
@@ -436,7 +437,7 @@ int block_height = 1 << size_bits;
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The remaining part of the color transform data contains `ColorTransformElement`
-instances corresponding to each block of the image. `ColorTransformElement`
+instances, corresponding to each block of the image. `ColorTransformElement`
 instances are treated as pixels of an image and encoded using the methods
 described in [Chapter 5](#image-data).
@@ -470,8 +471,8 @@ void InverseTransform(uint8 red, uint8 green, uint8 blue,
 The subtract green transform subtracts green values from red and blue values of
 each pixel. When this transform is present, the decoder needs to add the green
-value to both red and blue. There is no data associated with this transform. The
-decoder applies the inverse transform as follows:
+value to both the red and blue values. There is no data associated with this
+transform. The decoder applies the inverse transform as follows:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 void AddGreenToBlueAndRed(uint8 green, uint8 *red, uint8 *blue) {
@@ -480,7 +481,7 @@ void AddGreenToBlueAndRed(uint8 green, uint8 *red, uint8 *blue) {
 }
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This transform is redundant as it can be modeled using the color transform, but
+This transform is redundant, as it can be modeled using the color transform, but
 since there is no additional data here, the subtract green transform can be
 coded using fewer bits than a full-blown color transform.
@@ -491,30 +492,30 @@ If there are not many unique pixel values, it may be more efficient to create a
 color index array and replace the pixel values by the array's indices. The color
 indexing transform achieves this. (In the context of WebP lossless, we
 specifically do not call this a palette transform because a similar but more
-dynamic concept exists in WebP lossless encoding: color cache).
+dynamic concept exists in WebP lossless encoding: color cache.)
 
 The color indexing transform checks for the number of unique ARGB values in the
 image. If that number is below a threshold (256), it creates an array of those
 ARGB values, which is then used to replace the pixel values with the
 corresponding index: the green channel of the pixels are replaced with the
-index; all alpha values are set to 255; all red and blue values to 0.
+index, all alpha values are set to 255, and all red and blue values to 0.
 
-The transform data contains color table size and the entries in the color table.
-The decoder reads the color indexing transform data as follows:
+The transform data contains the color table size and the entries in the color
+table. The decoder reads the color indexing transform data as follows:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-// 8 bit value for color table size
+// 8-bit value for the color table size
 int color_table_size = ReadBits(8) + 1;
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The color table is stored using the image storage format itself. The color table
 can be obtained by reading an image, without the RIFF header, image size, and
-transforms, assuming a height of one pixel and a width of `color_table_size`.
+transforms, assuming the height of 1 pixel and the width of `color_table_size`.
 The color table is always subtraction-coded to reduce image entropy. The deltas
 of palette colors contain typically much less entropy than the colors
 themselves, leading to significant savings for smaller images. In decoding,
 every final color in the color table can be obtained by adding the previous
-color component values by each ARGB component separately, and storing the least
+color component values by each ARGB component separately and storing the least
 significant 8 bits of the result.
 
 The inverse transform for the image is simply replacing the pixel values (which
@@ -526,14 +527,14 @@ is done based on the green component of the ARGB color.
 argb = color_table[GREEN(argb)];
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If the index is equal or larger than `color_table_size`, the argb color value
+If the index is equal to or larger than `color_table_size`, the argb color value
 should be set to 0x00000000 (transparent black).
 
 When the color table is small (equal to or less than 16 colors), several pixels
 are bundled into a single pixel. The pixel bundling packs several (2, 4, or 8)
 pixels into a single pixel, reducing the image width respectively. Pixel
 bundling allows for a more efficient joint distribution entropy coding of
-neighboring pixels, and gives some arithmetic coding-like benefits to the
+neighboring pixels and gives some arithmetic coding-like benefits to the
 entropy code, but it can only be used when there are 16 or fewer unique values.
 
 `color_table_size` specifies how many pixels are combined:
@@ -551,7 +552,7 @@ if (color_table_size <= 2) {
 }
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-`width_bits` has a value of 0, 1, 2 or 3. A value of 0 indicates no pixel
+`width_bits` has a value of 0, 1, 2, or 3. A value of 0 indicates no pixel
 bundling is to be done for the image. A value of 1 indicates that two pixels are
 combined, and each pixel has a range of \[0..15\]. A value of 2 indicates that
 four pixels are combined, and each pixel has a range of \[0..3\]. A value of 3
@@ -560,18 +561,18 @@ that is, a binary value.
 The values are packed into the green component as follows:
 
-* `width_bits` = 1: for every x value where x ≡ 0 (mod 2), a green
-  value at x is positioned into the 4 least-significant bits of the
-  green value at x / 2, a green value at x + 1 is positioned into the
-  4 most-significant bits of the green value at x / 2.
-* `width_bits` = 2: for every x value where x ≡ 0 (mod 4), a green
+* `width_bits` = 1: For every x value, where x ≡ 0 (mod 2), a green
+  value at x is positioned into the 4 least significant bits of the
+  green value at x / 2, and a green value at x + 1 is positioned into the
+  4 most significant bits of the green value at x / 2.
+* `width_bits` = 2: For every x value, where x ≡ 0 (mod 4), a green
   value at x is positioned into the 2 least-significant bits of the
-  green value at x / 4, green values at x + 1 to x + 3 are positioned in order
-  to the more significant bits of the green value at x / 4.
-* `width_bits` = 3: for every x value where x ≡ 0 (mod 8), a green
-  value at x is positioned into the least-significant bit of the green
-  value at x / 8, green values at x + 1 to x + 7 are positioned in order to
-  the more significant bits of the green value at x / 8.
+  green value at x / 4, and green values at x + 1 to x + 3 are positioned in
+  order to the more significant bits of the green value at x / 4.
+* `width_bits` = 3: For every x value, where x ≡ 0 (mod 8), a green
+  value at x is positioned into the least significant bit of the green
+  value at x / 8, and green values at x + 1 to x + 7 are positioned in order
+  to the more significant bits of the green value at x / 8.
 
 5 Image Data
@@ -588,18 +589,18 @@ We use image data in five different roles:
    [meta prefix codes](#decoding-of-meta-prefix-codes). The red and green
    components of a pixel define the meta prefix code used in a particular
    block of the ARGB image.
-1. Predictor image: Stores the metadata for
-   [Predictor Transform](#predictor-transform). The green component of a pixel
+1. Predictor image: Stores the metadata for the
+   [predictor transform](#predictor-transform). The green component of a pixel
    defines which of the 14 predictors is used within a particular block of the
    ARGB image.
-1. Color transform image. It is created by `ColorTransformElement` values
-   (defined in [Color Transform](#color-transform)) for different blocks of
+1. Color transform image: Created by `ColorTransformElement` values
+   (defined in ["Color Transform"](#color-transform)) for different blocks of
    the image. Each `ColorTransformElement` `'cte'` is treated as a pixel whose
    alpha component is `255`, red component is `cte.red_to_blue`, green
-   component is `cte.green_to_blue` and blue component is `cte.green_to_red`.
+   component is `cte.green_to_blue`, and blue component is `cte.green_to_red`.
 1. Color indexing image: An array of size `color_table_size` (up to 256
    ARGB values) storing the metadata for the
-   [Color Indexing Transform](#color-indexing-transform). This is stored as an
+   [color indexing transform](#color-indexing-transform). This is stored as an
    image of width `color_table_size` and height `1`.
 
 ### 5.2 Encoding of Image Data
@ -613,13 +614,13 @@ several blocks may share the same entropy codes.
**Rationale:** Storing an entropy code incurs a cost. This cost can be minimized **Rationale:** Storing an entropy code incurs a cost. This cost can be minimized
if statistically similar blocks share an entropy code, thereby storing that code if statistically similar blocks share an entropy code, thereby storing that code
only once. For example, an encoder can find similar blocks by clustering them only once. For example, an encoder can find similar blocks by clustering them
using their statistical properties, or by repeatedly joining a pair of randomly using their statistical properties or by repeatedly joining a pair of randomly
selected clusters when it reduces the overall amount of bits needed to encode selected clusters when it reduces the overall amount of bits needed to encode
the image. the image.
Each pixel is encoded using one of the three possible methods: Each pixel is encoded using one of the three possible methods:
1. Prefix coded literal: each channel (green, red, blue and alpha) is 1. Prefix-coded literals: each channel (green, red, blue, and alpha) is
entropy-coded independently; entropy-coded independently;
2. LZ77 backward reference: a sequence of pixels are copied from elsewhere 2. LZ77 backward reference: a sequence of pixels are copied from elsewhere
in the image; or in the image; or
@ -628,9 +629,9 @@ Each pixel is encoded using one of the three possible methods:
The following subsections describe each of these in detail. The following subsections describe each of these in detail.
#### 5.2.1 Prefix Coded Literals #### 5.2.1 Prefix-Coded Literals
The pixel is stored as prefix coded values of green, red, blue and alpha (in The pixel is stored as prefix-coded values of green, red, blue, and alpha (in
that order). See [Section 6.2.3](#decoding-entropy-coded-image-data) for that order). See [Section 6.2.3](#decoding-entropy-coded-image-data) for
details. details.
@ -646,12 +647,12 @@ Backward references are tuples of _length_ and _distance code_:
The length and distance values are stored using **LZ77 prefix coding**. The length and distance values are stored using **LZ77 prefix coding**.
LZ77 prefix coding divides large integer values into two parts: the _prefix LZ77 prefix coding divides large integer values into two parts: the _prefix
code_ and the _extra bits_: the prefix code is stored using an entropy code, code_ and the _extra bits_. The prefix code is stored using an entropy code,
while the extra bits are stored as they are (without an entropy code). while the extra bits are stored as they are (without an entropy code).
**Rationale**: This approach reduces the storage requirement for the entropy **Rationale**: This approach reduces the storage requirement for the entropy
code. Also, large values are usually rare, and so extra bits would be used for code. Also, large values are usually rare, so extra bits would be used for very
very few values in the image. Thus, this approach results in better compression few values in the image. Thus, this approach results in better compression
overall. overall.
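The split can be sketched as follows. This is an illustrative reconstruction
of the value from its prefix code and raw extra bits, consistent with the
prefix-code table: codes 0..3 encode the values 1..4 directly, and
`extra_bits_value` stands for the bits read verbatim from the stream.

```c
/* Reconstruct a length/distance value from its LZ77 prefix code and the
 * raw extra bits. Codes 0..3 carry no extra bits; for larger codes the
 * number of extra bits grows with the code, so rare large values cost
 * extra bits while common small values stay cheap. */
static int PrefixCodeToValue(int prefix_code, int extra_bits_value) {
  if (prefix_code < 4) return prefix_code + 1;
  {
    const int extra_bits = (prefix_code - 2) >> 1;
    const int offset = (2 + (prefix_code & 1)) << extra_bits;
    return offset + extra_bits_value + 1;
  }
}
```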
The following table denotes the prefix codes and extra bits used for storing

@@ -697,16 +698,16 @@ previously seen pixel, from which the pixels are to be copied. This subsection

defines the mapping between a distance code and the position of a previous
pixel.

Distance codes larger than 120 denote the pixel distance in scan-line order,
offset by 120.

The smallest distance codes \[1..120\] are special and are reserved for a close
neighborhood of the current pixel. This neighborhood consists of 120 pixels:

* Pixels that are 1 to 7 rows above the current pixel and are up to 8 columns
  to the left or up to 7 columns to the right of the current pixel. \[Total
  such pixels = `7 * (8 + 1 + 7) = 112`\].
* Pixels that are in the same row as the current pixel and are up to 8 columns
  to the left of the current pixel. \[`8` such pixels\].

The mapping between distance code `i` and the neighboring pixel offset
@@ -735,8 +736,8 @@ The mapping between distance code `i` and the neighboring pixel offset

For example, the distance code `1` indicates an offset of `(0, 1)` for the
neighboring pixel, that is, the pixel above the current pixel (0 pixel
difference in the X direction and 1 pixel difference in the Y direction).
Similarly, the distance code `3` indicates the top-left pixel.

The decoder can convert a distance code `i` to a scan-line order distance `dist`
as follows:

@@ -749,7 +750,7 @@

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if (dist < 1) {
  dist = 1;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

where `distance_map` is the mapping noted above, and `xsize` is the width of the
image in pixels.
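Putting the two cases together, the conversion can be sketched as below. Only
the first four `(xoffset, yoffset)` entries of the neighborhood table are
shown here for illustration; the full 120-entry mapping is the one given in
the specification.

```c
/* First entries of the distance_map table: code 1 is the pixel above,
 * code 2 the pixel to the left, code 3 the top-left pixel, and so on. */
static const int kDistanceMap[][2] = {
  {0, 1}, {1, 0}, {1, 1}, {-1, 1}
};

/* Convert a distance code to a scan-line-order distance. Codes > 120 are
 * plain distances offset by 120; smaller codes index the neighborhood
 * table, and the result is clamped to at least 1. */
static int DistCodeToScanlineDist(int xsize, int dist_code) {
  if (dist_code > 120) return dist_code - 120;
  {
    const int xoffset = kDistanceMap[dist_code - 1][0];
    const int yoffset = kDistanceMap[dist_code - 1][1];
    int dist = xoffset + yoffset * xsize;
    if (dist < 1) dist = 1;
    return dist;
  }
}
```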
@@ -760,7 +761,7 @@ Color cache stores a set of colors that have been recently used in the image.

**Rationale:** This way, the recently used colors can sometimes be referred to
more efficiently than emitting them using the other two methods (described in
Sections [5.2.1](#prefix-coded-literals) and [5.2.2](#lz77-backward-reference)).

Color cache codes are stored as follows. First, there is a 1-bit value that
indicates if the color cache is used. If this bit is 0, no color cache codes

@@ -773,7 +774,7 @@

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int color_cache_code_bits = ReadBits(4);
int color_cache_size = 1 << color_cache_code_bits;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`color_cache_code_bits` defines the size of the color_cache (1 <<
`color_cache_code_bits`). The range of allowed values for
`color_cache_code_bits` is \[1..11\]. Compliant decoders must indicate a
corrupted bitstream for other values.
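The range check above can be sketched as a small helper (an illustrative
function, not part of the reference decoder), returning `-1` where a
compliant decoder must treat the bitstream as corrupt:

```c
/* Returns the color cache size for a given color_cache_code_bits value,
 * or -1 when the value falls outside the allowed range [1..11] and the
 * bitstream must be reported as corrupted. */
static int ValidColorCacheSize(int color_cache_code_bits) {
  if (color_cache_code_bits < 1 || color_cache_code_bits > 11) return -1;
  return 1 << color_cache_code_bits;
}
```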
@@ -799,7 +800,7 @@ Most of the data is coded using a [canonical prefix code][canonical_huff].

Hence, the codes are transmitted by sending the _prefix code lengths_, as
opposed to the actual _prefix codes_.

In particular, the format uses **spatially variant prefix coding**. In other
words, different blocks of the image can potentially use different entropy
codes.

@@ -827,7 +828,7 @@ This section describes how to read the prefix code lengths from the bitstream.

The prefix code lengths can be coded in two ways. The method used is specified
by a 1-bit value.

* If this bit is 1, it is a _simple code length code_.
* If this bit is 0, it is a _normal code length code_.

In both cases, there can be unused code lengths that are still part of the

@@ -851,9 +852,9 @@ The first bit indicates the number of symbols:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int num_symbols = ReadBits(1) + 1;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following are the symbol values.

The first symbol is coded using 1 or 8 bits, depending on the value of
`is_first_8bits`. The range is \[0..1\] or \[0..255\], respectively. The second
symbol, if present, is always assumed to be in the range \[0..255\] and coded
using 8 bits.
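A minimal sketch of reading a simple code length code, under the assumption
that `num_symbols` is followed in the bitstream by the `is_first_8bits` flag
and then the symbol values; the least-significant-bit-first `BitReader` here
is a toy stand-in for the `ReadBits` function assumed throughout this
document, not the reference implementation.

```c
#include <stdint.h>

/* Toy LSB-first bit reader standing in for ReadBits. */
typedef struct { const uint8_t* buf; int bit_pos; } BitReader;

static int ReadBitsFrom(BitReader* br, int n) {
  int i, v = 0;
  for (i = 0; i < n; ++i, ++br->bit_pos) {
    v |= ((br->buf[br->bit_pos >> 3] >> (br->bit_pos & 7)) & 1) << i;
  }
  return v;
}

/* Reads a simple code length code: one or two symbols, the first coded in
 * 1 or 8 bits depending on is_first_8bits, the second (if present) always
 * in 8 bits. Returns the number of symbols read. */
static int ReadSimpleCode(BitReader* br, int symbols[2]) {
  const int num_symbols = ReadBitsFrom(br, 1) + 1;
  const int is_first_8bits = ReadBitsFrom(br, 1);
  symbols[0] = ReadBitsFrom(br, is_first_8bits ? 8 : 1);
  if (num_symbols == 2) symbols[1] = ReadBitsFrom(br, 8);
  return num_symbols;
}
```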
@@ -886,7 +887,7 @@

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int num_code_lengths = 4 + ReadBits(4);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If `num_code_lengths` is > 19, the bitstream is invalid.

The code lengths are themselves encoded using prefix codes; lower-level code
lengths, `code_length_code_lengths`, first have to be read. The rest of those
`code_length_code_lengths` (according to the order in `kCodeLengthCodeOrder`)
are zeros.

@@ -916,20 +917,20 @@ to `max_symbol` code lengths.

* Code \[0..15\] indicates literal code lengths.
    * Value 0 means no symbols have been coded.
    * Values \[1..15\] indicate the bit length of the respective code.
* Code 16 repeats the previous nonzero value \[3..6\] times, that is,
  `3 + ReadBits(2)` times. If code 16 is used before a nonzero
  value has been emitted, a value of 8 is repeated.
* Code 17 emits a streak of zeros of length \[3..10\], that is, `3 +
  ReadBits(3)` times.
* Code 18 emits a streak of zeros of length \[11..138\], that is,
  `11 + ReadBits(7)` times.
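The three repeat codes can be sketched with one illustrative helper (names
are this note's, not the reference decoder's); `repeat_count` stands for the
already-decoded `3 + ReadBits(2)`, `3 + ReadBits(3)`, or `11 + ReadBits(7)`
value, and `prev_nonzero` is the last nonzero length emitted, or 0 if none.

```c
/* Expand one repeat code into the code length array: code 16 repeats the
 * previous nonzero length (or 8 when no nonzero length has been emitted
 * yet); codes 17 and 18 emit streaks of zeros. Returns the next write
 * position. */
static int ExpandRepeatCode(int code, int repeat_count,
                            int* lengths, int pos, int prev_nonzero) {
  const int value = (code == 16) ? (prev_nonzero ? prev_nonzero : 8) : 0;
  int i;
  for (i = 0; i < repeat_count; ++i) lengths[pos + i] = value;
  return pos + repeat_count;
}
```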
Once code lengths are read, a prefix code for each symbol type (A, R, G, B, and
distance) is formed using their respective alphabet sizes:

* G channel: 256 + 24 + `color_cache_size`
* Other literals (A, R, and B): 256
* Distance code: 40

The Normal Code Length Code must code a full decision tree, that is, the sum of
`2 ^ (-length)` for all non-zero codes must be exactly one. There is, however,

@@ -958,8 +959,8 @@ value:

The entropy image defines which prefix codes are used in different parts of the
image, as described below.

The first 3 bits contain the `prefix_bits` value. The dimensions of the entropy
image are derived from `prefix_bits`:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int prefix_bits = ReadBits(3) + 2;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -977,9 +978,9 @@ The next bits contain an entropy image of width `prefix_xsize` and height

For any given pixel (x, y), there is a set of five prefix codes associated with
it. These codes are (in bitstream order):

* **Prefix code #1**: used for green channel, backward-reference length, and
  color cache.
* **Prefix code #2, #3, and #4**: used for red, blue, and alpha channels,
  respectively.
* **Prefix code #5**: used for backward-reference distance.

@@ -1011,42 +1012,43 @@

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int meta_prefix_code = (entropy_image[position] >> 8) & 0xffff;
PrefixCodeGroup prefix_group = prefix_code_groups[meta_prefix_code];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

where we have assumed the existence of a `PrefixCodeGroup` structure, which
represents a set of five prefix codes. Also, `prefix_code_groups` is an array of
`PrefixCodeGroup` (of size `num_prefix_groups`).
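The lookup can be sketched end to end for a pixel (x, y). The `position`
computation from block coordinates is an assumption of this sketch (the
entropy image is sampled at `prefix_bits` block resolution); the red/green
extraction matches the expression shown above.

```c
#include <stdint.h>

/* Return the meta prefix code for pixel (x, y): sample the entropy image
 * at block resolution, then take the code packed into the red and green
 * channels of that ARGB pixel. */
static int MetaPrefixCode(const uint32_t* entropy_image, int prefix_xsize,
                          int prefix_bits, int x, int y) {
  const int position =
      (y >> prefix_bits) * prefix_xsize + (x >> prefix_bits);
  return (int)((entropy_image[position] >> 8) & 0xffff);
}
```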
The decoder then uses prefix code group `prefix_group` to decode the pixel
(x, y), as explained in ["Decoding Entropy-Coded Image
Data"](#decoding-entropy-coded-image-data).

#### 6.2.3 Decoding Entropy-Coded Image Data

For the current position (x, y) in the image, the decoder first identifies the
corresponding prefix code group (as explained in the last section). Given the
prefix code group, the pixel is read and decoded as follows.

Next, read the symbol S from the bitstream using prefix code #1. Note that S is
any integer in the range `0` to
`(256 + 24 + ` [`color_cache_size`](#color-cache-code)` - 1)`.

The interpretation of S depends on its value:

1. If S < 256
    1. Use S as the green component.
    1. Read red from the bitstream using prefix code #2.
    1. Read blue from the bitstream using prefix code #3.
    1. Read alpha from the bitstream using prefix code #4.
1. If S >= 256 && S < 256 + 24
    1. Use S - 256 as a length prefix code.
    1. Read extra bits for the length from the bitstream.
    1. Determine backward-reference length L from the length prefix code and
       the extra bits read.
    1. Read the distance prefix code from the bitstream using prefix code #5.
    1. Read extra bits for the distance from the bitstream.
    1. Determine backward-reference distance D from the distance prefix code
       and the extra bits read.
    1. Copy the L pixels (in scan-line order) from the sequence of pixels
       prior to them by D pixels.
1. If S >= 256 + 24
    1. Use S - (256 + 24) as the index into the color cache.
    1. Get ARGB color from the color cache at that index.
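The three-way dispatch on S can be sketched as follows; the enum and function
names are illustrative, not part of the format.

```c
enum SymbolClass { kLiteral, kBackwardRef, kColorCacheHit };

/* Classify a symbol S read with prefix code #1:
 * S < 256            -> a literal (S is the green component),
 * 256 <= S < 256+24  -> a backward reference (S - 256 is a length prefix
 *                       code),
 * S >= 256+24        -> a color cache hit (S - (256+24) is the index). */
static enum SymbolClass ClassifySymbol(int s) {
  if (s < 256) return kLiteral;
  if (s < 256 + 24) return kBackwardRef;
  return kColorCacheHit;
}
```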
@@ -1055,7 +1057,7 @@ The interpretation of S depends on its value:

---------------------------------

Below is a view into the format in Augmented Backus-Naur Form ([ABNF]). It does
not cover all details. The end-of-image (EOI) is only implicitly coded into the
number of pixels (xsize * ysize).

@@ -1124,7 +1126,7 @@

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lz77-coded-image =
    *((argb-pixel / lz77-copy / color-cache-code) lz77-coded-image)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is a possible example sequence:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RIFF-header image-size %b1 subtract-green-tx