Compare commits


23 Commits
main ... 0.6.1

Author SHA1 Message Date
bbaaf8dbef Fix invalid incremental decoding check.
(cherry picked from commit 95ea5226c8)

Change-Id: I80c2165aa9fdf43077db155d2d00e0e99db73eab
2023-10-09 16:24:38 +02:00
a298d9d127 Fix OOB write in BuildHuffmanTable.
First, BuildHuffmanTable is called to check if the data is valid.
If it is and the table is not big enough, more memory is allocated.

This will make sure that valid (but unoptimized because of unbalanced
codes) streams are still decodable.
(cherry picked from commit 902bc91)

Change-Id: I3abe4db460dcac62c14a84832284c0b530630af2
2023-10-09 16:24:38 +02:00
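
A minimal sketch of the validate-then-allocate pattern this commit describes, in plain C with hypothetical names (build_table() and build_with_growth() are illustrative stand-ins, not libwebp's BuildHuffmanTable API):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a BuildHuffmanTable-like routine: with a NULL
 * output it only validates the code lengths and returns the table size it
 * would need; with a real buffer it also fills the table. */
static int build_table(int* out, const int* code_lengths, int n) {
  int size = 0, i;
  for (i = 0; i < n; ++i) {
    if (code_lengths[i] < 0 || code_lengths[i] > 15) return 0;  /* invalid */
    size += (code_lengths[i] > 0);   /* one slot per used symbol (pretend) */
  }
  if (out != NULL) {
    int k = 0;
    for (i = 0; i < n; ++i) {
      if (code_lengths[i] > 0) out[k++] = i;   /* "fill" the table */
    }
  }
  return size;
}

/* The pattern from the commit: validate first (no write), grow the buffer
 * only if needed, then fill memory that is known to be big enough. */
static int build_with_growth(int** table, int* capacity,
                             const int* code_lengths, int n) {
  const int needed = build_table(NULL, code_lengths, n);
  if (needed == 0) return 0;                       /* invalid stream */
  if (needed > *capacity) {
    int* const bigger =
        (int*)realloc(*table, (size_t)needed * sizeof(**table));
    if (bigger == NULL) return 0;                  /* OOM */
    *table = bigger;
    *capacity = needed;
  }
  return build_table(*table, code_lengths, n) != 0;
}

int main(void) {
  int capacity = 2;
  int* table = (int*)malloc((size_t)capacity * sizeof(*table));
  const int lengths[6] = {2, 2, 2, 2, 0, 0};
  if (table != NULL && build_with_growth(&table, &capacity, lengths, 6)) {
    printf("built a table of capacity %d\n", capacity);
  }
  free(table);
  return 0;
}
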
ef52aca922 Limit memory allocation when reading invalid Huffman codes.
This is a backported fix for: CVE-2020-36332

This is a merge of:
dce5d76431
39cb9aad85
067031eaed

Change-Id: Iab84d2ca459327cdcee1038499842d30370fe486
2023-10-09 16:24:38 +02:00
e194928e8b Modernize CMake.
This is mostly to be compliant with CMake CI tests.

Change-Id: I4bb20d7f93b3808bbb1374cef4fd4cb9767e91e0
2023-10-09 16:24:38 +02:00
5357804f52 EncodeAlphaInternal: clear result->bw on error
This avoids a double free should the function fail prior to
VP8BitWriterInit() while a previous trial result's buffer is still carried
over. Previously, in ApplyFiltersAndEncode(), trial.bw (holding a previous
iteration's buffer) would be freed, followed by best.bw pointing to the
same buffer.

Since:
187d379d add a fallback to ALPHA_NO_COMPRESSION

In addition, check the return value of VP8BitWriterInit() in this
function.

Bug: webp:603
Change-Id: Ic258381ee26c8c16bc211d157c8153831c8c6910
(cherry picked from commit a486d800b6)
2023-02-28 00:27:13 +00:00
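
A rough illustration of the defensive pattern applied here, assuming simplified stand-in types (this BitWriter is not libwebp's VP8BitWriter): after wiping the temporary writer on failure, the caller-visible result is cleared as well, so a buffer carried over from an earlier trial can no longer be freed twice.

#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for a bit writer owning a heap buffer. */
typedef struct {
  unsigned char* buf;
  size_t size;
} BitWriter;

static void BitWriterWipeOut(BitWriter* const bw) {
  free(bw->buf);
  bw->buf = NULL;
  bw->size = 0;
}

static int EncodeTrial(BitWriter* const result, int simulate_failure) {
  BitWriter tmp = { NULL, 0 };
  tmp.buf = (unsigned char*)malloc(16);
  tmp.size = (tmp.buf != NULL) ? 16 : 0;
  if (tmp.buf == NULL || simulate_failure) {
    BitWriterWipeOut(&tmp);
    /* The fix: also clear the caller-visible writer so a stale buffer from a
     * previous trial can no longer be freed a second time further up. */
    memset(result, 0, sizeof(*result));
    return 0;
  }
  *result = tmp;            /* success: hand ownership to the caller */
  return 1;
}

int main(void) {
  BitWriter best = { NULL, 0 };
  (void)EncodeTrial(&best, 1);   /* a failing trial */
  BitWriterWipeOut(&best);       /* safe: best.buf is guaranteed NULL */
  return 0;
}
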
5c0690bc75 GetBackwardReferences: fail on alloc error
Previously, failures in the call to
VP8LBackwardReferencesTraceBackwards() were ignored; though this wouldn't
result in a crash, it would produce non-deterministic output.

Change-Id: Id9890a60883c3270ec75e968506d46eea32b76d4
(cherry picked from commit e3cfafaf71)
(cherry picked from commit 20ef03ee35)
(cherry picked from commit 89e226a3c7)
2022-04-06 21:57:08 -07:00
91cc4e377f BackwardReferencesHashChainDistanceOnly: fix segfault on OOM
Change the CostManager allocation to calloc to avoid freeing undefined
pointer values in CostManagerClear() should the cost_model allocation
succeed but the cost_manager allocation fail.

since:
v0.5.0-93-g3e023c17 Speed-up BackwardReferencesHashChainDistanceOnly.

Tested:
for i in `seq 1 639`; do
  export MALLOC_FAIL_AT=$i
  ./examples/cwebp -m 6 -q 100 -lossless jpeg_file
done

Bug: webp:565
Change-Id: I376d81e6f41eb73529053e9e30c142b4b4f6b45b
(cherry picked from commit a828a59b49)
(cherry picked from commit dd80bb4343)
(cherry picked from commit 4d0964cd0c)
2022-04-06 21:57:08 -07:00
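
A small self-contained illustration of why calloc is the safer allocator here (names are simplified; this is not the real CostManager): the generic Clear() routine frees embedded pointers, so a struct obtained from plain malloc must not reach it before every field has been assigned.

#include <stdlib.h>

typedef struct {
  int* cache;      /* freed by CostManagerClear() */
  int* intervals;  /* freed by CostManagerClear() */
} CostManager;

static void CostManagerClear(CostManager* const m) {
  free(m->cache);
  free(m->intervals);
  m->cache = NULL;
  m->intervals = NULL;
}

int main(void) {
  /* With malloc, 'cache' and 'intervals' would hold indeterminate values;
   * calling CostManagerClear() on an early error path would then free wild
   * pointers. calloc guarantees they start as NULL, which free() accepts. */
  CostManager* const m = (CostManager*)calloc(1, sizeof(*m));
  if (m == NULL) return 1;
  /* ... suppose a later allocation fails before the fields are filled ... */
  CostManagerClear(m);   /* safe: both pointers are NULL */
  free(m);
  return 0;
}
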
c0299b779c VP8LEncodeStream: fix segfault on OOM
Initialize bw_side before calling EncoderAnalyze() and EncoderInit(), which
may fail; previously a failure there would cause a free of an invalid
pointer in VP8LBitWriterWipeOut().

since at least:
v0.6.0-120-gf8c2ac15 Multi-thread the lossless cruncher.

Tested:
for i in `seq 1 639`; do
  export MALLOC_FAIL_AT=$i
  ./examples/cwebp -m 6 -q 100 -lossless jpeg_file
done

Bug: webp:565
Change-Id: I1c95883834b6e4b13aee890568ce3bad0f4266f0
(cherry picked from commit fe153fae98)
(cherry picked from commit ddd65f0d19)
(cherry picked from commit 5d805f7205)
2022-04-06 21:57:08 -07:00
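
A compact sketch of the ordering fix, with simplified stand-in names rather than libwebp's real API: put the side bit writer into a defined state before the first call that can fail, so the shared error path never wipes an uninitialized writer.

#include <stdlib.h>
#include <string.h>

typedef struct { unsigned char* buf; } SideWriter;

static int SideWriterInit(SideWriter* const w) {
  w->buf = (unsigned char*)malloc(64);
  return w->buf != NULL;
}
static void SideWriterWipeOut(SideWriter* const w) {
  free(w->buf);
  w->buf = NULL;
}
static int Analyze(void) { return 0; }   /* pretend this step fails */

static int EncodeStream(void) {
  SideWriter side;
  memset(&side, 0, sizeof(side));   /* defined state before anything can fail */
  if (!SideWriterInit(&side)) goto Error;
  if (!Analyze()) goto Error;       /* fails, but the cleanup below stays safe */
  /* ... encode ... */
  SideWriterWipeOut(&side);
  return 1;
 Error:
  SideWriterWipeOut(&side);         /* never sees an uninitialized pointer */
  return 0;
}

int main(void) {
  (void)EncodeStream();
  return 0;
}
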
36fa3a48f7 alpha_processing_neon.c: fix 0x01... typo
One instance was overlong, leading to an int64->uint32 conversion warning.

Change-Id: I56d5ab75d89960c79293f62cd489d7ab519bbc34
(cherry picked from commit 03d1219055)
2022-03-08 19:38:13 +00:00
6debf34c54 alpha_processing_neon.c: fix Dispatch/ExtractAlpha_NEON
The trailing width % 8 bytes would clear the upper bytes of
alpha_mask, as they are processed one at a time.

since:
49d0280d NEON: implement several alpha-processing functions

Change-Id: Iff76c0af3094597285a6aa6ed032b345f9856aae
(cherry picked from commit 924e7ca654)
2022-03-03 18:07:34 +00:00
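
A scalar illustration of the masking bug fixed by this pair of NEON patches (plain C, no NEON, hypothetical helper): when the leftover width % 8 pixels are ANDed into a 32-bit all-ones accumulator one byte at a time, its upper bytes are cleared and the "all alpha is 0xff" test fails; accumulating in a single byte and replicating it afterwards (the 0x01010101 multiply in the diff) keeps the comparison meaningful.

#include <stdint.h>
#include <stdio.h>

/* Returns 1 if every alpha value is 0xff. The vector part (not shown) leaves
 * a per-byte AND of 8 pixels at a time in 'vector_mask'; the tail pixels are
 * folded in one byte at a time. */
static int AllOpaque(const uint8_t* alpha, int n, uint32_t vector_mask) {
  uint32_t alpha_mask = 0xffu;              /* one-byte-wide accumulator */
  int i;
  for (i = (n / 8) * 8; i < n; ++i) {       /* leftover width % 8 pixels */
    alpha_mask &= alpha[i];
  }
  alpha_mask *= 0x01010101u;                /* replicate the byte to all lanes */
  alpha_mask &= vector_mask;
  return alpha_mask == 0xffffffffu;
}

int main(void) {
  const uint8_t alpha[10] = {0xff, 0xff, 0xff, 0xff, 0xff,
                             0xff, 0xff, 0xff, 0xff, 0xff};
  /* Had the accumulator started as 0xffffffff and been ANDed with single
   * bytes, it would end up as 0x000000ff and this would wrongly report 0. */
  printf("%d\n", AllOpaque(alpha, 10, 0xffffffffu));  /* prints 1 */
  return 0;
}
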
f9298cb8b4 Make sure partition #0 is read before VP8 data in IDecode.
BUG=oss-fuzz:9186,webp:512

Change-Id: Ie0b264b6422774343206ddba3c2820a0cf37ffc0
(cherry picked from commit 5f0f5c07c4)
(cherry picked from commit 99d0790233)
2021-03-23 17:09:21 -07:00
2cb7701480 fix read-overflow while parsing VP8X chunk
The available size was not checked before parsing the VP8X data

BUG=oss-fuzz:9100,oss-fuzz:9123,webp:512

Change-Id: I0143cc4554883c1015e2f084a0e371229e04a8ca
(cherry picked from commit 95fd650706)
(cherry picked from commit c0226fd91c)
2021-03-23 17:09:16 -07:00
35de4be698 Fix VP8IoTeardownHook being called twice on worker sync failure
idec_dec.c, DecodeRemaining: set the decoder state to ERROR to prevent VP8ExitCritical from being called again.

BUG=webp:512

Change-Id: Id5f893f45c348e1c529680d930e640f780a73d4c
(cherry picked from commit 9e729fe19b)
(cherry picked from commit a14e0f6465)
2021-03-23 17:09:08 -07:00
641fbb5e89 fix endian problems in pattern copy
CopyBlock8b() was over-using memcpy() of 16b values.

BUG=webp:393,webp:512

Change-Id: Id56f10d334b9a453fbcf50dabfaa63529bcff7e5
(cherry picked from commit 211f37ee63)
(cherry picked from commit 667d17a8a4)
2021-03-23 17:09:03 -07:00
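
A small endianness demonstration of the issue behind this fix (illustrative only, not the CopyBlock8b code itself): copying two bytes into a wider integer with memcpy yields a host-order value, so a pattern built that way replicates differently on big- and little-endian machines, while explicit shifts are unambiguous.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  const uint8_t src[2] = {0xAA, 0xBB};
  uint32_t via_memcpy = 0;
  uint32_t via_shifts;

  /* Host-dependent: 0x0000BBAA on little-endian, 0xAABB0000 on big-endian
   * (the two bytes land in the lowest-addressed bytes of the integer). */
  memcpy(&via_memcpy, src, sizeof(uint16_t));

  /* Host-independent: always the same numeric value. */
  via_shifts = ((uint32_t)src[0] << 8) | src[1];   /* 0x0000AABB */

  printf("memcpy: 0x%08X  shifts: 0x%08X\n",
         (unsigned)via_memcpy, (unsigned)via_shifts);
  return 0;
}
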
b5e0b231c1 muxread,anmf: fail on multiple image chunks
Treat an ANMF chunk containing multiple VP8/VP8L image chunks as malformed.
This fixes a WebPMuxImage::img_ leak.

Though the invalid free in #9106 was avoided (under ubsan) by:
be738c6d muxread,ChunkVerifyAndAssign: validate chunk_size
that file would still cause a leak similar to #9099.

BUG=oss-fuzz:9099,oss-fuzz:9106,webp:512

Change-Id: Ib873446a1188afeeb2fe5d53a86b75e0c5de9573
(cherry picked from commit eb82ce76dd)
(cherry picked from commit f4cf238a41)
2021-03-23 17:08:55 -07:00
2ccbb406e1 fix alpha-filtering crash when the filter radius is larger than the image width
(we also limit the radius based on the height, for good measure, although it's not an asan bug)

fixes oss-fuzz issue #9105

BUG=webp:512

Change-Id: Ie0d79dd81480dc4e2b653b7e992e5cdcd3dfa834
(cherry picked from commit 1344a2e947)
(cherry picked from commit 61ff26aeeb)
2021-03-23 17:08:47 -07:00
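
A worked instance of the clamp this commit adds to WebPDequantizeLevels (the numbers are chosen for illustration): with strength 100 the radius is 4, i.e. a 9-pixel window, which cannot fit a 5-pixel-wide image and is therefore reduced.

#include <stdio.h>

int main(void) {
  const int strength = 100, width = 5, height = 64;
  int radius = 4 * strength / 100;                          /* 4 -> window 9 */
  if (2 * radius + 1 > width) radius = (width - 1) >> 1;    /* 9 > 5 -> 2 */
  if (2 * radius + 1 > height) radius = (height - 1) >> 1;  /* no change */
  printf("clamped radius: %d (window %d on a %d-pixel row)\n",
         radius, 2 * radius + 1, width);
  return 0;
}
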
47768596f6 muxread,ChunkVerifyAndAssign: validate chunk_size
before accounting for padding, which might overflow if chunk_size is >
MAX_CHUNK_PAYLOAD.

BUG=webp:387,webp:388,webp:512

Change-Id: I3985b8817ed4faaec0629102c5333c228a0e9c98
(cherry picked from commit be738c6d39)
(cherry picked from commit 6f643f2417)
2021-03-23 17:08:41 -07:00
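
Roughly what the overflow looks like in the padding computation (CHUNK_HEADER_SIZE is 8 in WebP; a 32-bit size is used here just to make the wraparound easy to see):

#include <stdint.h>
#include <stdio.h>

#define CHUNK_HEADER_SIZE 8

/* Same shape as the mux helper: header plus payload rounded up to even. */
static uint32_t SizeWithPadding32(uint32_t chunk_size) {
  return CHUNK_HEADER_SIZE + ((chunk_size + 1) & ~1U);
}

int main(void) {
  /* A hostile chunk_size at the type's maximum wraps around... */
  const uint32_t chunk_size = 0xFFFFFFFFu;
  const uint32_t padded = SizeWithPadding32(chunk_size);
  /* ...so 'padded' is tiny (8) and a later "does it fit in the buffer?"
   * check passes even though the declared payload is ~4 GiB. Hence the
   * added check against MAX_CHUNK_PAYLOAD before doing this arithmetic. */
  printf("chunk_size=%u -> padded=%u\n", (unsigned)chunk_size, (unsigned)padded);
  return 0;
}
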
12669892f6 muxread,CreateInternal: fix riff size checks
Previously, when adjusting the size down based on a smaller riff_size, the
checks were insufficient to prevent 'size -= RIFF_HEADER_SIZE' from rolling
over, causing ChunkVerifyAndAssign to over-read. The new checks are
imported from demux.c.

BUG=webp:386,webp:512

Change-Id: If863c4a9892977b9ade7dd894392a0ecae13775c
(cherry picked from commit 2c70ad76c9)
(cherry picked from commit 706ff9c325)
2021-03-23 17:08:35 -07:00
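
And the companion rollover on the consuming side (RIFF_HEADER_SIZE is 12 in WebP): with an unsigned size, subtracting the header from a value that was clamped below it wraps to an enormous byte count, which is what the demux.c-style checks now rule out. A tiny demonstration:

#include <stdint.h>
#include <stdio.h>

#define RIFF_HEADER_SIZE 12

int main(void) {
  /* Pretend a malformed file declares a riff_size of 4, and the old code
   * clamped the remaining byte count down to it before subtracting. */
  size_t size = 4;
  size -= RIFF_HEADER_SIZE;   /* unsigned wraparound: SIZE_MAX - 7 */
  printf("bytes the old loop believed were left: %zu\n", size);
  return 0;
}
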
528c8909ef Fix for thread race heap-use-after-free
BUG=webp:385,webp:512

Change-Id: I3a300b45ccae33470888cf2e35a7e937579c9409
(cherry picked from commit 569001f19f)
(cherry picked from commit a0b85e4a36)
2021-03-23 17:08:30 -07:00
16fc937d2e fix invalid check for buffer size
BUG=webp:383,webp:512

Change-Id: I8ebbb5ca4860d73c3b59b12e238b54a89184bed0
(cherry picked from commit 71ed73cf86)
(cherry picked from commit dad31750e3)
2021-03-23 17:08:25 -07:00
1f14632a18 gif2webp: fix transcode of loop count=65535
With loop_compatibility disabled (the default), non-zero loop counts are
incremented by 1 for browser rendering compatibility. The max, 65535, is a
special case, as the muxer will fail if it is exceeded; avoid the increment
in this case. This isn't 100% correct, but should be close enough given the
high number of iterations.

BUG=webp:382,webp:512

Change-Id: Icde3e98a58e9ee89604a72fafda30ab71060dec5
(cherry picked from commit af0e4fbb06)
(cherry picked from commit 4b282e13ad)
2021-03-23 17:08:16 -07:00
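
The resulting GIF-to-WebP loop-count mapping, condensed into one hypothetical helper for illustration: 0 stays infinite, intermediate counts gain one iteration for browser compatibility, and 65535 passes through unchanged so the muxer does not reject it.

#include <stdio.h>

/* Illustrative condensation of the gif2webp behaviour with the default
 * settings (loop_compatibility disabled). */
static int MapGifLoopCount(int loop_count) {
  if (loop_count > 0 && loop_count < 65535) loop_count += 1;
  return loop_count;   /* 0 (infinite) and 65535 (muxer max) pass through */
}

int main(void) {
  printf("%d %d %d %d\n",
         MapGifLoopCount(0),       /* 0: infinite loop, unchanged */
         MapGifLoopCount(1),       /* 2 */
         MapGifLoopCount(100),     /* 101 */
         MapGifLoopCount(65535));  /* 65535: off by one, but accepted */
  return 0;
}
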
dcf860bad1 Import,RGBA: fix for BigEndian import
+ simplification of the logic

Change-Id: Ia20ce844793ed35ea03a17cef45838f3d0ae4afa
(cherry picked from commit 3b07d32712)
2018-02-18 20:29:30 -08:00
ab7b23e93c ReadWebP: fix for big-endian
Change-Id: I36b3c12ccf02eb5dad350c460387c0528fff8df3
(cherry picked from commit 3005237a5d)
2018-02-18 20:29:24 -08:00
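
The byte-layout fact behind both big-endian fixes: a packed uint32_t ARGB pixel stores its alpha byte at offset 0 in memory on big-endian hosts and at offset 3 on little-endian hosts, which is what the WORDS_BIGENDIAN / ALPHA_OFFSET branches in the diffs below encode. A tiny self-check:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  const uint32_t argb = 0xFF102030u;   /* A=0xFF R=0x10 G=0x20 B=0x30 */
  uint8_t bytes[4];
  memcpy(bytes, &argb, sizeof(argb));
  /* Big-endian: FF 10 20 30 (alpha first); little-endian: 30 20 10 FF. */
  printf("in-memory order: %02X %02X %02X %02X -> alpha offset %d\n",
         (unsigned)bytes[0], (unsigned)bytes[1],
         (unsigned)bytes[2], (unsigned)bytes[3],
         (bytes[0] == 0xFF) ? 0 : 3);
  return 0;
}
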
26 changed files with 513 additions and 162 deletions

View File

@ -29,6 +29,7 @@ endif()
# Include dependencies.
include(cmake/deps.cmake)
include(GNUInstallDirs)
################################################################################
# Options.
@ -120,13 +121,32 @@ target_link_libraries(webpdecoder ${WEBP_DEP_LIBRARIES})
# Build the webp library.
add_library(webpencode OBJECT ${WEBP_ENC_SRCS})
target_include_directories(
webpencode PRIVATE ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/src)
add_library(webpdsp OBJECT ${WEBP_DSP_COMMON_SRCS} ${WEBP_DSP_DEC_SRCS}
${WEBP_DSP_ENC_SRCS})
${WEBP_DSP_ENC_SRCS})
target_include_directories(webpdsp PRIVATE ${CMAKE_CURRENT_BINARY_DIR}
${CMAKE_CURRENT_SOURCE_DIR})
add_library(webputils OBJECT ${WEBP_UTILS_COMMON_SRCS} ${WEBP_UTILS_DEC_SRCS}
${WEBP_UTILS_ENC_SRCS})
${WEBP_UTILS_ENC_SRCS})
target_include_directories(webputils PRIVATE ${CMAKE_CURRENT_BINARY_DIR}
${CMAKE_CURRENT_SOURCE_DIR})
add_library(webp $<TARGET_OBJECTS:webpdecode> $<TARGET_OBJECTS:webpdsp>
$<TARGET_OBJECTS:webpencode> $<TARGET_OBJECTS:webputils>)
$<TARGET_OBJECTS:webpencode> $<TARGET_OBJECTS:webputils>)
if(XCODE)
libwebp_add_stub_file(webp)
endif()
target_link_libraries(webp ${WEBP_DEP_LIBRARIES})
target_include_directories(
webp PRIVATE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR}
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
$<INSTALL_INTERFACE:include>)
set_target_properties(
webp
PROPERTIES PUBLIC_HEADER "${CMAKE_CURRENT_SOURCE_DIR}/src/webp/decode.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/encode.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/types.h")
# Make sure the OBJECT libraries are built with position independent code
# (it is not ON by default).
@ -136,6 +156,17 @@ set_target_properties(webpdecode webpdspdecode webputilsdecode
# Build the webp demux library.
add_library(webpdemux ${WEBP_DEMUX_SRCS})
target_link_libraries(webpdemux webp)
target_include_directories(
webpdemux PRIVATE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR}
PUBLIC $<INSTALL_INTERFACE:include>)
set_target_properties(
webpdemux
PROPERTIES
PUBLIC_HEADER
"${CMAKE_CURRENT_SOURCE_DIR}/src/webp/decode.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/demux.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/mux_types.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/types.h")
# Set the version numbers.
function(parse_version FILE NAME VAR)
@ -242,6 +273,11 @@ if(WEBP_BUILD_GIF2WEBP OR WEBP_BUILD_IMG2WEBP)
parse_version(mux/Makefile.am webpmux WEBP_MUX_SOVERSION)
set_target_properties(webpmux PROPERTIES VERSION ${PACKAGE_VERSION}
SOVERSION ${WEBP_MUX_SOVERSION})
set_target_properties(
webpmux
PROPERTIES PUBLIC_HEADER "${CMAKE_CURRENT_SOURCE_DIR}/src/webp/mux.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/mux_types.h;\
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/types.h;")
list(APPEND INSTALLED_LIBRARIES webpmux)
endif()
@ -314,16 +350,18 @@ add_definitions(-DHAVE_CONFIG_H)
include_directories(${CMAKE_CURRENT_BINARY_DIR})
# Install the different headers and libraries.
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/src/webp/decode.h
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/demux.h
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/encode.h
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/mux.h
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/mux_types.h
${CMAKE_CURRENT_SOURCE_DIR}/src/webp/types.h
DESTINATION include/webp)
install(TARGETS ${INSTALLED_LIBRARIES}
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib)
install(
TARGETS ${INSTALLED_LIBRARIES}
EXPORT WebPTargets
PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/webp
INCLUDES
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
set(ConfigPackageLocation ${CMAKE_INSTALL_DATADIR}/WebP/cmake/)
install(EXPORT WebPTargets NAMESPACE WebP::
DESTINATION ${ConfigPackageLocation})
# Create the CMake version file.
include(CMakePackageConfigHelpers)
@ -340,7 +378,7 @@ configure_package_config_file(
${CMAKE_CURRENT_SOURCE_DIR}/cmake/WebPConfig.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/WebPConfig.cmake
INSTALL_DESTINATION ${ConfigPackageLocation}
)
PATH_VARS CMAKE_INSTALL_INCLUDEDIR)
# Install the generated CMake files.
install(

View File

@ -1,6 +1,19 @@
set(WebP_VERSION @PROJECT_VERSION@)
set(WEBP_VERSION ${WebP_VERSION})
@PACKAGE_INIT@
set(WebP_INCLUDE_DIRS "webp")
set(WEBP_INCLUDE_DIRS ${WebP_INCLUDE_DIRS})
if(@WEBP_USE_THREAD@)
include(CMakeFindDependencyMacro)
find_dependency(Threads REQUIRED)
endif()
include("${CMAKE_CURRENT_LIST_DIR}/WebPTargets.cmake")
set_and_check(WebP_INCLUDE_DIR "@PACKAGE_CMAKE_INSTALL_INCLUDEDIR@")
set(WebP_INCLUDE_DIRS ${WebP_INCLUDE_DIR})
set(WEBP_INCLUDE_DIRS ${WebP_INCLUDE_DIR})
set(WebP_LIBRARIES "@INSTALLED_LIBRARIES@")
set(WEBP_LIBRARIES "${WebP_LIBRARIES}")
check_required_components(WebP)

View File

@ -143,8 +143,18 @@ static int CompareAnimatedImagePair(const AnimatedImage* const img1,
if (!ok) return 0; // These are fatal failures, can't proceed.
if (is_multi_frame_image) { // Checks relevant for multi-frame images only.
ok = CompareValues(img1->loop_count, img2->loop_count,
"Loop count mismatch") && ok;
int max_loop_count_workaround = 0;
// Transcodes to webp increase the gif loop count by 1 for compatibility.
// When the gif has the maximum value the webp value will be off by one.
if ((img1->format == ANIM_GIF && img1->loop_count == 65536 &&
img2->format == ANIM_WEBP && img2->loop_count == 65535) ||
(img1->format == ANIM_WEBP && img1->loop_count == 65535 &&
img2->format == ANIM_GIF && img2->loop_count == 65536)) {
max_loop_count_workaround = 1;
}
ok = (max_loop_count_workaround ||
CompareValues(img1->loop_count, img2->loop_count,
"Loop count mismatch")) && ok;
ok = CompareBackgroundColor(img1->bgcolor, img2->bgcolor,
premultiply) && ok;
}

View File

@ -275,6 +275,7 @@ static int ReadAnimatedWebP(const char filename[],
prev_frame_timestamp = timestamp;
}
ok = dump_ok;
if (ok) image->format = ANIM_WEBP;
End:
WebPAnimDecoderDelete(dec);
@ -684,6 +685,7 @@ static int ReadAnimatedGIF(const char filename[], AnimatedImage* const image,
}
}
}
image->format = ANIM_GIF;
DGifCloseFile(gif, NULL);
return 1;
}

View File

@ -22,6 +22,11 @@
extern "C" {
#endif
typedef enum {
ANIM_GIF,
ANIM_WEBP
} AnimatedFileFormat;
typedef struct {
uint8_t* rgba; // Decoded and reconstructed full frame.
int duration; // Frame duration in milliseconds.
@ -29,6 +34,7 @@ typedef struct {
} DecodedFrame;
typedef struct {
AnimatedFileFormat format;
uint32_t canvas_width;
uint32_t canvas_height;
uint32_t bgcolor;

View File

@ -460,7 +460,7 @@ int main(int argc, const char *argv[]) {
stored_loop_count = 1;
loop_count = 1;
}
} else if (loop_count > 0) {
} else if (loop_count > 0 && loop_count < 65535) {
// adapt GIF's semantic to WebP's (except in the infinite-loop case)
loop_count += 1;
}

View File

@ -9,6 +9,10 @@
//
// WebP decode.
#ifdef HAVE_CONFIG_H
#include "webp/config.h"
#endif
#include "./webpdec.h"
#include <stdio.h>
@ -162,7 +166,11 @@ int ReadWebP(const uint8_t* const data, size_t data_size,
break;
}
if (pic->use_argb) {
#ifdef WORDS_BIGENDIAN
output_buffer->colorspace = MODE_ARGB;
#else
output_buffer->colorspace = MODE_BGRA;
#endif
output_buffer->u.RGBA.rgba = (uint8_t*)pic->argb;
output_buffer->u.RGBA.stride = pic->argb_stride * sizeof(uint32_t);
output_buffer->u.RGBA.size = output_buffer->u.RGBA.stride * pic->height;

View File

@ -74,7 +74,8 @@ static VP8StatusCode CheckDecBuffer(const WebPDecBuffer* const buffer) {
} else { // RGB checks
const WebPRGBABuffer* const buf = &buffer->u.RGBA;
const int stride = abs(buf->stride);
const uint64_t size = MIN_BUFFER_SIZE(width, height, stride);
const uint64_t size =
MIN_BUFFER_SIZE(width * kModeBpp[mode], height, stride);
ok &= (size <= buf->size);
ok &= (stride >= width * kModeBpp[mode]);
ok &= (buf->rgba != NULL);

View File

@ -283,10 +283,8 @@ static void RestoreContext(const MBContext* context, VP8Decoder* const dec,
static VP8StatusCode IDecError(WebPIDecoder* const idec, VP8StatusCode error) {
if (idec->state_ == STATE_VP8_DATA) {
VP8Io* const io = &idec->io_;
if (io->teardown != NULL) {
io->teardown(io);
}
// Synchronize the thread, clean-up and check for errors.
VP8ExitCritical((VP8Decoder*)idec->dec_, &idec->io_);
}
idec->state_ = STATE_ERROR;
return error;
@ -451,7 +449,10 @@ static VP8StatusCode DecodeRemaining(WebPIDecoder* const idec) {
VP8Decoder* const dec = (VP8Decoder*)idec->dec_;
VP8Io* const io = &idec->io_;
assert(dec->ready_);
// Make sure partition #0 has been read before, to set dec to ready_.
if (!dec->ready_) {
return IDecError(idec, VP8_STATUS_BITSTREAM_ERROR);
}
for (; dec->mb_y_ < dec->mb_h_; ++dec->mb_y_) {
if (idec->last_mb_y_ != dec->mb_y_) {
if (!VP8ParseIntraModeRow(&dec->br_, dec)) {
@ -491,6 +492,7 @@ static VP8StatusCode DecodeRemaining(WebPIDecoder* const idec) {
}
// Synchronize the thread and check for errors.
if (!VP8ExitCritical(dec, io)) {
idec->state_ = STATE_ERROR; // prevent re-entry in IDecError
return IDecError(idec, VP8_STATUS_USER_ABORT);
}
dec->ready_ = 0;
@ -571,6 +573,10 @@ static VP8StatusCode IDecode(WebPIDecoder* idec) {
status = DecodePartition0(idec);
}
if (idec->state_ == STATE_VP8_DATA) {
const VP8Decoder* const dec = (VP8Decoder*)idec->dec_;
if (dec == NULL) {
return VP8_STATUS_SUSPENDED; // can't continue if we have no decoder.
}
status = DecodeRemaining(idec);
}
if (idec->state_ == STATE_VP8L_HEADER) {

View File

@ -253,11 +253,11 @@ static int ReadHuffmanCodeLengths(
int symbol;
int max_symbol;
int prev_code_len = DEFAULT_CODE_LENGTH;
HuffmanCode table[1 << LENGTHS_TABLE_BITS];
HuffmanTables tables;
if (!VP8LBuildHuffmanTable(table, LENGTHS_TABLE_BITS,
code_length_code_lengths,
NUM_CODE_LENGTH_CODES)) {
if (!VP8LHuffmanTablesAllocate(1 << LENGTHS_TABLE_BITS, &tables) ||
!VP8LBuildHuffmanTable(&tables, LENGTHS_TABLE_BITS,
code_length_code_lengths, NUM_CODE_LENGTH_CODES)) {
goto End;
}
@ -277,7 +277,7 @@ static int ReadHuffmanCodeLengths(
int code_len;
if (max_symbol-- == 0) break;
VP8LFillBitWindow(br);
p = &table[VP8LPrefetchBits(br) & LENGTHS_TABLE_MASK];
p = &tables.curr_segment->start[VP8LPrefetchBits(br) & LENGTHS_TABLE_MASK];
VP8LSetBitPos(br, br->bit_pos_ + p->bits);
code_len = p->value;
if (code_len < kCodeLengthLiterals) {
@ -300,6 +300,7 @@ static int ReadHuffmanCodeLengths(
ok = 1;
End:
VP8LHuffmanTablesDeallocate(&tables);
if (!ok) dec->status_ = VP8_STATUS_BITSTREAM_ERROR;
return ok;
}
@ -307,7 +308,8 @@ static int ReadHuffmanCodeLengths(
// 'code_lengths' is pre-allocated temporary buffer, used for creating Huffman
// tree.
static int ReadHuffmanCode(int alphabet_size, VP8LDecoder* const dec,
int* const code_lengths, HuffmanCode* const table) {
int* const code_lengths,
HuffmanTables* const table) {
int ok = 0;
int size = 0;
VP8LBitReader* const br = &dec->br_;
@ -362,12 +364,18 @@ static int ReadHuffmanCodes(VP8LDecoder* const dec, int xsize, int ysize,
VP8LMetadata* const hdr = &dec->hdr_;
uint32_t* huffman_image = NULL;
HTreeGroup* htree_groups = NULL;
HuffmanCode* huffman_tables = NULL;
HuffmanCode* next = NULL;
HuffmanTables* huffman_tables = &hdr->huffman_tables_;
int num_htree_groups = 1;
int num_htree_groups_max = 1;
int max_alphabet_size = 0;
int* code_lengths = NULL;
const int table_size = kTableSize[color_cache_bits];
int* mapping = NULL;
int ok = 0;
// Check the table has been 0 initialized (through InitMetadata).
assert(huffman_tables->root.start == NULL);
assert(huffman_tables->curr_segment == NULL);
if (allow_recursion && VP8LReadBits(br, 1)) {
// use meta Huffman codes.
@ -384,10 +392,36 @@ static int ReadHuffmanCodes(VP8LDecoder* const dec, int xsize, int ysize,
// The huffman data is stored in red and green bytes.
const int group = (huffman_image[i] >> 8) & 0xffff;
huffman_image[i] = group;
if (group >= num_htree_groups) {
num_htree_groups = group + 1;
if (group >= num_htree_groups_max) {
num_htree_groups_max = group + 1;
}
}
// Check the validity of num_htree_groups_max. If it seems too big, use a
// smaller value for later. This will prevent big memory allocations to end
// up with a bad bitstream anyway.
// The value of 1000 is totally arbitrary. We know that num_htree_groups_max
// is smaller than (1 << 16) and should be smaller than the number of pixels
// (though the format allows it to be bigger).
if (num_htree_groups_max > 1000 || num_htree_groups_max > xsize * ysize) {
// Create a mapping from the used indices to the minimal set of used
// values [0, num_htree_groups)
mapping = (int*)WebPSafeMalloc(num_htree_groups_max, sizeof(*mapping));
if (mapping == NULL) {
dec->status_ = VP8_STATUS_OUT_OF_MEMORY;
goto Error;
}
// -1 means a value is unmapped, and therefore unused in the Huffman
// image.
memset(mapping, 0xff, num_htree_groups_max * sizeof(*mapping));
for (num_htree_groups = 0, i = 0; i < huffman_pixs; ++i) {
// Get the current mapping for the group and remap the Huffman image.
int* const mapped_group = &mapping[huffman_image[i]];
if (*mapped_group == -1) *mapped_group = num_htree_groups++;
huffman_image[i] = *mapped_group;
}
} else {
num_htree_groups = num_htree_groups_max;
}
}
if (br->eos_) goto Error;
@ -403,83 +437,99 @@ static int ReadHuffmanCodes(VP8LDecoder* const dec, int xsize, int ysize,
}
}
huffman_tables = (HuffmanCode*)WebPSafeMalloc(num_htree_groups * table_size,
sizeof(*huffman_tables));
htree_groups = VP8LHtreeGroupsNew(num_htree_groups);
code_lengths = (int*)WebPSafeCalloc((uint64_t)max_alphabet_size,
sizeof(*code_lengths));
if (htree_groups == NULL || code_lengths == NULL || huffman_tables == NULL) {
if (htree_groups == NULL || code_lengths == NULL ||
!VP8LHuffmanTablesAllocate(num_htree_groups * table_size,
huffman_tables)) {
dec->status_ = VP8_STATUS_OUT_OF_MEMORY;
goto Error;
}
next = huffman_tables;
for (i = 0; i < num_htree_groups; ++i) {
HTreeGroup* const htree_group = &htree_groups[i];
HuffmanCode** const htrees = htree_group->htrees;
int size;
int total_size = 0;
int is_trivial_literal = 1;
int max_bits = 0;
for (j = 0; j < HUFFMAN_CODES_PER_META_CODE; ++j) {
int alphabet_size = kAlphabetSize[j];
htrees[j] = next;
if (j == 0 && color_cache_bits > 0) {
alphabet_size += 1 << color_cache_bits;
}
size = ReadHuffmanCode(alphabet_size, dec, code_lengths, next);
if (size == 0) {
goto Error;
}
if (is_trivial_literal && kLiteralMap[j] == 1) {
is_trivial_literal = (next->bits == 0);
}
total_size += next->bits;
next += size;
if (j <= ALPHA) {
int local_max_bits = code_lengths[0];
int k;
for (k = 1; k < alphabet_size; ++k) {
if (code_lengths[k] > local_max_bits) {
local_max_bits = code_lengths[k];
}
for (i = 0; i < num_htree_groups_max; ++i) {
// If the index "i" is unused in the Huffman image, just make sure the
// coefficients are valid but do not store them.
if (mapping != NULL && mapping[i] == -1) {
for (j = 0; j < HUFFMAN_CODES_PER_META_CODE; ++j) {
int alphabet_size = kAlphabetSize[j];
if (j == 0 && color_cache_bits > 0) {
alphabet_size += (1 << color_cache_bits);
}
// Passing in NULL so that nothing gets filled.
if (!ReadHuffmanCode(alphabet_size, dec, code_lengths, NULL)) {
goto Error;
}
max_bits += local_max_bits;
}
}
htree_group->is_trivial_literal = is_trivial_literal;
htree_group->is_trivial_code = 0;
if (is_trivial_literal) {
const int red = htrees[RED][0].value;
const int blue = htrees[BLUE][0].value;
const int alpha = htrees[ALPHA][0].value;
htree_group->literal_arb =
((uint32_t)alpha << 24) | (red << 16) | blue;
if (total_size == 0 && htrees[GREEN][0].value < NUM_LITERAL_CODES) {
htree_group->is_trivial_code = 1;
htree_group->literal_arb |= htrees[GREEN][0].value << 8;
} else {
HTreeGroup* const htree_group =
&htree_groups[(mapping == NULL) ? i : mapping[i]];
HuffmanCode** const htrees = htree_group->htrees;
int size;
int total_size = 0;
int is_trivial_literal = 1;
int max_bits = 0;
for (j = 0; j < HUFFMAN_CODES_PER_META_CODE; ++j) {
int alphabet_size = kAlphabetSize[j];
if (j == 0 && color_cache_bits > 0) {
alphabet_size += (1 << color_cache_bits);
}
size =
ReadHuffmanCode(alphabet_size, dec, code_lengths, huffman_tables);
htrees[j] = huffman_tables->curr_segment->curr_table;
if (size == 0) {
goto Error;
}
if (is_trivial_literal && kLiteralMap[j] == 1) {
is_trivial_literal = (htrees[j]->bits == 0);
}
total_size += htrees[j]->bits;
huffman_tables->curr_segment->curr_table += size;
if (j <= ALPHA) {
int local_max_bits = code_lengths[0];
int k;
for (k = 1; k < alphabet_size; ++k) {
if (code_lengths[k] > local_max_bits) {
local_max_bits = code_lengths[k];
}
}
max_bits += local_max_bits;
}
}
htree_group->is_trivial_literal = is_trivial_literal;
htree_group->is_trivial_code = 0;
if (is_trivial_literal) {
const int red = htrees[RED][0].value;
const int blue = htrees[BLUE][0].value;
const int alpha = htrees[ALPHA][0].value;
htree_group->literal_arb = ((uint32_t)alpha << 24) | (red << 16) | blue;
if (total_size == 0 && htrees[GREEN][0].value < NUM_LITERAL_CODES) {
htree_group->is_trivial_code = 1;
htree_group->literal_arb |= htrees[GREEN][0].value << 8;
}
}
htree_group->use_packed_table =
!htree_group->is_trivial_code && (max_bits < HUFFMAN_PACKED_BITS);
if (htree_group->use_packed_table) BuildPackedTable(htree_group);
}
htree_group->use_packed_table = !htree_group->is_trivial_code &&
(max_bits < HUFFMAN_PACKED_BITS);
if (htree_group->use_packed_table) BuildPackedTable(htree_group);
}
WebPSafeFree(code_lengths);
ok = 1;
// All OK. Finalize pointers and return.
// All OK. Finalize pointers.
hdr->huffman_image_ = huffman_image;
hdr->num_htree_groups_ = num_htree_groups;
hdr->htree_groups_ = htree_groups;
hdr->huffman_tables_ = huffman_tables;
return 1;
Error:
WebPSafeFree(code_lengths);
WebPSafeFree(huffman_image);
WebPSafeFree(huffman_tables);
VP8LHtreeGroupsFree(htree_groups);
return 0;
WebPSafeFree(mapping);
if (!ok) {
WebPSafeFree(huffman_image);
VP8LHuffmanTablesDeallocate(huffman_tables);
VP8LHtreeGroupsFree(htree_groups);
}
return ok;
}
//------------------------------------------------------------------------------
@ -884,7 +934,11 @@ static WEBP_INLINE void CopyBlock8b(uint8_t* const dst, int dist, int length) {
#endif
break;
case 2:
#if !defined(WORDS_BIGENDIAN)
memcpy(&pattern, src, sizeof(uint16_t));
#else
pattern = ((uint32_t)src[0] << 8) | src[1];
#endif
#if defined(__arm__) || defined(_M_ARM)
pattern |= pattern << 16;
#elif defined(WEBP_USE_MIPS_DSP_R2)
@ -1183,9 +1237,20 @@ static int DecodeImageData(VP8LDecoder* const dec, uint32_t* const data,
}
br->eos_ = VP8LIsEndOfStream(br);
if (dec->incremental_ && br->eos_ && src < src_end) {
// In incremental decoding:
// br->eos_ && src < src_last: if 'br' reached the end of the buffer and
// 'src_last' has not been reached yet, there is not enough data. 'dec' has to
// be reset until there is more data.
// !br->eos_ && src < src_last: this cannot happen as either the buffer is
// fully read, either enough has been read to reach 'src_last'.
// src >= src_last: 'src_last' is reached, all is fine. 'src' can actually go
// beyond 'src_last' in case the image is cropped and an LZ77 goes further.
// The buffer might have been enough or there is some left. 'br->eos_' does
// not matter.
assert(!dec->incremental_ || (br->eos_ && src < src_last) || src >= src_last);
if (dec->incremental_ && br->eos_ && src < src_last) {
RestoreState(dec);
} else if (!br->eos_) {
} else if ((dec->incremental_ && src >= src_last) || !br->eos_) {
// Process the remaining rows corresponding to last row-block.
if (process_func != NULL) {
process_func(dec, row > last_row ? last_row : row);
@ -1304,7 +1369,7 @@ static void ClearMetadata(VP8LMetadata* const hdr) {
assert(hdr != NULL);
WebPSafeFree(hdr->huffman_image_);
WebPSafeFree(hdr->huffman_tables_);
VP8LHuffmanTablesDeallocate(&hdr->huffman_tables_);
VP8LHtreeGroupsFree(hdr->htree_groups_);
VP8LColorCacheClear(&hdr->color_cache_);
VP8LColorCacheClear(&hdr->saved_color_cache_);
@ -1620,7 +1685,7 @@ int VP8LDecodeImage(VP8LDecoder* const dec) {
// Sanity checks.
if (dec == NULL) return 0;
assert(dec->hdr_.huffman_tables_ != NULL);
assert(dec->hdr_.huffman_tables_.root.start != NULL);
assert(dec->hdr_.htree_groups_ != NULL);
assert(dec->hdr_.num_htree_groups_ > 0);

View File

@ -51,7 +51,7 @@ typedef struct {
uint32_t *huffman_image_;
int num_htree_groups_;
HTreeGroup *htree_groups_;
HuffmanCode *huffman_tables_;
HuffmanTables huffman_tables_;
} VP8LMetadata;
typedef struct VP8LDecoder VP8LDecoder;

View File

@ -366,6 +366,16 @@ static WEBP_INLINE uint32_t MakeARGB32(int a, int r, int g, int b) {
return (((uint32_t)a << 24) | (r << 16) | (g << 8) | b);
}
#ifdef WORDS_BIGENDIAN
static void PackARGB_C(const uint8_t* a, const uint8_t* r, const uint8_t* g,
const uint8_t* b, int len, uint32_t* out) {
int i;
for (i = 0; i < len; ++i) {
out[i] = MakeARGB32(a[4 * i], r[4 * i], g[4 * i], b[4 * i]);
}
}
#endif
static void PackRGB_C(const uint8_t* r, const uint8_t* g, const uint8_t* b,
int len, int step, uint32_t* out) {
int i, offset = 0;
@ -381,6 +391,10 @@ int (*WebPDispatchAlpha)(const uint8_t*, int, int, int, uint8_t*, int);
void (*WebPDispatchAlphaToGreen)(const uint8_t*, int, int, int, uint32_t*, int);
int (*WebPExtractAlpha)(const uint8_t*, int, int, int, uint8_t*, int);
void (*WebPExtractGreen)(const uint32_t* argb, uint8_t* alpha, int size);
#ifdef WORDS_BIGENDIAN
void (*WebPPackARGB)(const uint8_t* a, const uint8_t* r, const uint8_t* g,
const uint8_t* b, int, uint32_t*);
#endif
void (*WebPPackRGB)(const uint8_t* r, const uint8_t* g, const uint8_t* b,
int len, int step, uint32_t* out);
@ -405,6 +419,9 @@ WEBP_TSAN_IGNORE_FUNCTION void WebPInitAlphaProcessing(void) {
WebPMultRow = WebPMultRow_C;
WebPApplyAlphaMultiply4444 = ApplyAlphaMultiply_16b_C;
#ifdef WORDS_BIGENDIAN
WebPPackARGB = PackARGB_C;
#endif
WebPPackRGB = PackRGB_C;
#if !WEBP_NEON_OMIT_C_CODE
WebPApplyAlphaMultiply = ApplyAlphaMultiply_C;
@ -451,6 +468,9 @@ WEBP_TSAN_IGNORE_FUNCTION void WebPInitAlphaProcessing(void) {
assert(WebPDispatchAlphaToGreen != NULL);
assert(WebPExtractAlpha != NULL);
assert(WebPExtractGreen != NULL);
#ifdef WORDS_BIGENDIAN
assert(WebPPackARGB != NULL);
#endif
assert(WebPPackRGB != NULL);
assert(WebPHasAlpha8b != NULL);
assert(WebPHasAlpha32b != NULL);

View File

@ -125,6 +125,49 @@ static void MultARGBRow_MIPSdspR2(uint32_t* const ptr, int width,
}
}
#ifdef WORDS_BIGENDIAN
static void PackARGB_MIPSdspR2(const uint8_t* a, const uint8_t* r,
const uint8_t* g, const uint8_t* b, int len,
uint32_t* out) {
int temp0, temp1, temp2, temp3, offset;
const int rest = len & 1;
const uint32_t* const loop_end = out + len - rest;
const int step = 4;
__asm__ volatile (
"xor %[offset], %[offset], %[offset] \n\t"
"beq %[loop_end], %[out], 0f \n\t"
"2: \n\t"
"lbux %[temp0], %[offset](%[a]) \n\t"
"lbux %[temp1], %[offset](%[r]) \n\t"
"lbux %[temp2], %[offset](%[g]) \n\t"
"lbux %[temp3], %[offset](%[b]) \n\t"
"ins %[temp1], %[temp0], 16, 16 \n\t"
"ins %[temp3], %[temp2], 16, 16 \n\t"
"addiu %[out], %[out], 4 \n\t"
"precr.qb.ph %[temp0], %[temp1], %[temp3] \n\t"
"sw %[temp0], -4(%[out]) \n\t"
"addu %[offset], %[offset], %[step] \n\t"
"bne %[loop_end], %[out], 2b \n\t"
"0: \n\t"
"beq %[rest], $zero, 1f \n\t"
"lbux %[temp0], %[offset](%[a]) \n\t"
"lbux %[temp1], %[offset](%[r]) \n\t"
"lbux %[temp2], %[offset](%[g]) \n\t"
"lbux %[temp3], %[offset](%[b]) \n\t"
"ins %[temp1], %[temp0], 16, 16 \n\t"
"ins %[temp3], %[temp2], 16, 16 \n\t"
"precr.qb.ph %[temp0], %[temp1], %[temp3] \n\t"
"sw %[temp0], 0(%[out]) \n\t"
"1: \n\t"
: [temp0]"=&r"(temp0), [temp1]"=&r"(temp1), [temp2]"=&r"(temp2),
[temp3]"=&r"(temp3), [offset]"=&r"(offset), [out]"+&r"(out)
: [a]"r"(a), [r]"r"(r), [g]"r"(g), [b]"r"(b), [step]"r"(step),
[loop_end]"r"(loop_end), [rest]"r"(rest)
: "memory"
);
}
#endif // WORDS_BIGENDIAN
static void PackRGB_MIPSdspR2(const uint8_t* r, const uint8_t* g,
const uint8_t* b, int len, int step,
uint32_t* out) {
@ -172,6 +215,9 @@ extern void WebPInitAlphaProcessingMIPSdspR2(void);
WEBP_TSAN_IGNORE_FUNCTION void WebPInitAlphaProcessingMIPSdspR2(void) {
WebPDispatchAlpha = DispatchAlpha_MIPSdspR2;
WebPMultARGBRow = MultARGBRow_MIPSdspR2;
#ifdef WORDS_BIGENDIAN
WebPPackARGB = PackARGB_MIPSdspR2;
#endif
WebPPackRGB = PackRGB_MIPSdspR2;
}

View File

@ -83,7 +83,7 @@ static void ApplyAlphaMultiply_NEON(uint8_t* rgba, int alpha_first,
static int DispatchAlpha_NEON(const uint8_t* alpha, int alpha_stride,
int width, int height,
uint8_t* dst, int dst_stride) {
uint32_t alpha_mask = 0xffffffffu;
uint32_t alpha_mask = 0xffu;
uint8x8_t mask8 = vdup_n_u8(0xff);
uint32_t tmp[2];
int i, j;
@ -107,6 +107,7 @@ static int DispatchAlpha_NEON(const uint8_t* alpha, int alpha_stride,
dst += dst_stride;
}
vst1_u8((uint8_t*)tmp, mask8);
alpha_mask *= 0x01010101;
alpha_mask &= tmp[0];
alpha_mask &= tmp[1];
return (alpha_mask != 0xffffffffu);
@ -134,7 +135,7 @@ static void DispatchAlphaToGreen_NEON(const uint8_t* alpha, int alpha_stride,
static int ExtractAlpha_NEON(const uint8_t* argb, int argb_stride,
int width, int height,
uint8_t* alpha, int alpha_stride) {
uint32_t alpha_mask = 0xffffffffu;
uint32_t alpha_mask = 0xffu;
uint8x8_t mask8 = vdup_n_u8(0xff);
uint32_t tmp[2];
int i, j;
@ -156,6 +157,7 @@ static int ExtractAlpha_NEON(const uint8_t* argb, int argb_stride,
alpha += alpha_stride;
}
vst1_u8((uint8_t*)tmp, mask8);
alpha_mask *= 0x01010101;
alpha_mask &= tmp[0];
alpha_mask &= tmp[1];
return (alpha_mask == 0xffffffffu);

View File

@ -166,6 +166,13 @@ extern "C" {
#define WEBP_SWAP_16BIT_CSP 0
#endif
// some endian fix (e.g.: mips-gcc doesn't define __BIG_ENDIAN__)
#if !defined(WORDS_BIGENDIAN) && \
(defined(__BIG_ENDIAN__) || defined(_M_PPC) || \
(defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)))
#define WORDS_BIGENDIAN
#endif
typedef enum {
kSSE2,
kSSE3,
@ -578,6 +585,13 @@ void WebPMultRow_C(uint8_t* const ptr, const uint8_t* const alpha,
int width, int inverse);
void WebPMultARGBRow_C(uint32_t* const ptr, int width, int inverse);
#ifdef WORDS_BIGENDIAN
// ARGB packing function: a/r/g/b input is rgba or bgra order.
extern void (*WebPPackARGB)(const uint8_t* a, const uint8_t* r,
const uint8_t* g, const uint8_t* b, int len,
uint32_t* out);
#endif
// RGB packing function. 'step' can be 3 or 4. r/g/b input is rgb or bgr order.
extern void (*WebPPackRGB)(const uint8_t* r, const uint8_t* g, const uint8_t* b,
int len, int step, uint32_t* out);

View File

@ -13,6 +13,7 @@
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include "src/enc/vp8i_enc.h"
#include "src/dsp/dsp.h"
@ -148,6 +149,7 @@ static int EncodeAlphaInternal(const uint8_t* const data, int width, int height,
}
} else {
VP8LBitWriterWipeOut(&tmp_bw);
memset(&result->bw, 0, sizeof(result->bw));
return 0;
}
}
@ -162,7 +164,7 @@ static int EncodeAlphaInternal(const uint8_t* const data, int width, int height,
header = method | (filter << 2);
if (reduce_levels) header |= ALPHA_PREPROCESSED_LEVELS << 4;
VP8BitWriterInit(&result->bw, ALPHA_HEADER_LEN + output_size);
if (!VP8BitWriterInit(&result->bw, ALPHA_HEADER_LEN + output_size)) ok = 0;
ok = ok && VP8BitWriterAppend(&result->bw, &header, ALPHA_HEADER_LEN);
ok = ok && VP8BitWriterAppend(&result->bw, output, output_size);

View File

@ -577,7 +577,7 @@ static int BackwardReferencesHashChainDistanceOnly(
(CostModel*)WebPSafeCalloc(1ULL, cost_model_size);
VP8LColorCache hashers;
CostManager* cost_manager =
(CostManager*)WebPSafeMalloc(1ULL, sizeof(*cost_manager));
(CostManager*)WebPSafeCalloc(1ULL, sizeof(*cost_manager));
int offset_prev = -1, len_prev = -1;
double offset_cost = -1;
int first_offset_is_constant = -1; // initialized with 'impossible' value

View File

@ -910,13 +910,14 @@ static VP8LBackwardRefs* GetBackwardReferences(
quality >= 25) {
const VP8LHashChain* const hash_chain_tmp =
(lz77_type_best == kLZ77Standard) ? hash_chain : &hash_chain_box;
if (VP8LBackwardReferencesTraceBackwards(width, height, argb, *cache_bits,
hash_chain_tmp, best, worst)) {
double bit_cost_trace;
VP8LHistogramCreate(histo, worst, *cache_bits);
bit_cost_trace = VP8LHistogramEstimateBits(histo);
if (bit_cost_trace < bit_cost_best) best = worst;
double bit_cost_trace;
if (!VP8LBackwardReferencesTraceBackwards(width, height, argb, *cache_bits,
hash_chain_tmp, best, worst)) {
goto Error;
}
VP8LHistogramCreate(histo, worst, *cache_bits);
bit_cost_trace = VP8LHistogramEstimateBits(histo);
if (bit_cost_trace < bit_cost_best) best = worst;
}
BackwardReferences2DLocality(width, best);

View File

@ -28,11 +28,11 @@
// If defined, use table to compute x / alpha.
#define USE_INVERSE_ALPHA_TABLE
static const union {
uint32_t argb;
uint8_t bytes[4];
} test_endian = { 0xff000000u };
#define ALPHA_IS_LAST (test_endian.bytes[3] == 0xff)
#ifdef WORDS_BIGENDIAN
#define ALPHA_OFFSET 0 // uint32_t 0xff000000 is 0xff,00,00,00 in memory
#else
#define ALPHA_OFFSET 3 // uint32_t 0xff000000 is 0x00,00,00,ff in memory
#endif
//------------------------------------------------------------------------------
// Detection of non-trivial transparency
@ -61,7 +61,7 @@ int WebPPictureHasTransparency(const WebPPicture* picture) {
return CheckNonOpaque(picture->a, picture->width, picture->height,
1, picture->a_stride);
} else {
const int alpha_offset = ALPHA_IS_LAST ? 3 : 0;
const int alpha_offset = ALPHA_OFFSET;
return CheckNonOpaque((const uint8_t*)picture->argb + alpha_offset,
picture->width, picture->height,
4, picture->argb_stride * sizeof(*picture->argb));
@ -990,10 +990,10 @@ static int PictureARGBToYUVA(WebPPicture* picture, WebPEncCSP colorspace,
return WebPEncodingSetError(picture, VP8_ENC_ERROR_INVALID_CONFIGURATION);
} else {
const uint8_t* const argb = (const uint8_t*)picture->argb;
const uint8_t* const r = ALPHA_IS_LAST ? argb + 2 : argb + 1;
const uint8_t* const g = ALPHA_IS_LAST ? argb + 1 : argb + 2;
const uint8_t* const b = ALPHA_IS_LAST ? argb + 0 : argb + 3;
const uint8_t* const a = ALPHA_IS_LAST ? argb + 3 : argb + 0;
const uint8_t* const a = argb + (0 ^ ALPHA_OFFSET);
const uint8_t* const r = argb + (1 ^ ALPHA_OFFSET);
const uint8_t* const g = argb + (2 ^ ALPHA_OFFSET);
const uint8_t* const b = argb + (3 ^ ALPHA_OFFSET);
picture->colorspace = WEBP_YUV420;
return ImportYUVAFromRGBA(r, g, b, a, 4, 4 * picture->argb_stride,
@ -1044,7 +1044,8 @@ int WebPPictureYUVAToARGB(WebPPicture* picture) {
const int argb_stride = 4 * picture->argb_stride;
uint8_t* dst = (uint8_t*)picture->argb;
const uint8_t *cur_u = picture->u, *cur_v = picture->v, *cur_y = picture->y;
WebPUpsampleLinePairFunc upsample = WebPGetLinePairConverter(ALPHA_IS_LAST);
WebPUpsampleLinePairFunc upsample =
WebPGetLinePairConverter(ALPHA_OFFSET > 0);
// First row, with replicated top samples.
upsample(cur_y, NULL, cur_u, cur_v, cur_u, cur_v, dst, NULL, width);
@ -1087,6 +1088,7 @@ static int Import(WebPPicture* const picture,
const uint8_t* rgb, int rgb_stride,
int step, int swap_rb, int import_alpha) {
int y;
// swap_rb -> b,g,r,a , !swap_rb -> r,g,b,a
const uint8_t* r_ptr = rgb + (swap_rb ? 2 : 0);
const uint8_t* g_ptr = rgb + 1;
const uint8_t* b_ptr = rgb + (swap_rb ? 0 : 2);
@ -1104,16 +1106,25 @@ static int Import(WebPPicture* const picture,
WebPInitAlphaProcessing();
if (import_alpha) {
// dst[] byte order is {a,r,g,b} for big-endian, {b,g,r,a} for little endian
uint32_t* dst = picture->argb;
const int do_copy =
(!swap_rb && !ALPHA_IS_LAST) || (swap_rb && ALPHA_IS_LAST);
const int do_copy = (ALPHA_OFFSET == 3) && swap_rb;
assert(step == 4);
for (y = 0; y < height; ++y) {
if (do_copy) {
memcpy(dst, rgb, width * 4);
} else {
#ifdef WORDS_BIGENDIAN
// BGRA or RGBA input order.
const uint8_t* a_ptr = rgb + 3;
WebPPackARGB(a_ptr, r_ptr, g_ptr, b_ptr, width, dst);
r_ptr += rgb_stride;
g_ptr += rgb_stride;
b_ptr += rgb_stride;
#else
// RGBA input order. Need to swap R and B.
VP8LConvertBGRAToRGBA((const uint32_t*)rgb, width, (uint8_t*)dst);
#endif
}
rgb += rgb_stride;
dst += picture->argb_stride;

View File

@ -1755,11 +1755,16 @@ WebPEncodingError VP8LEncodeStream(const WebPConfig* const config,
const WebPWorkerInterface* const worker_interface = WebPGetWorkerInterface();
int ok_main;
if (enc_main == NULL || !VP8LBitWriterInit(&bw_side, 0)) {
WebPEncodingSetError(picture, VP8_ENC_ERROR_OUT_OF_MEMORY);
VP8LEncoderDelete(enc_main);
return 0;
}
// Analyze image (entropy, num_palettes etc)
if (enc_main == NULL ||
!EncoderAnalyze(enc_main, crunch_configs, &num_crunch_configs_main,
if (!EncoderAnalyze(enc_main, crunch_configs, &num_crunch_configs_main,
&red_and_blue_always_zero) ||
!EncoderInit(enc_main) || !VP8LBitWriterInit(&bw_side, 0)) {
!EncoderInit(enc_main)) {
err = VP8_ENC_ERROR_OUT_OF_MEMORY;
goto Error;
}

View File

@ -14,6 +14,7 @@
#ifndef WEBP_MUX_MUXI_H_
#define WEBP_MUX_MUXI_H_
#include <assert.h>
#include <stdlib.h>
#include "src/dec/vp8i_dec.h"
#include "src/dec/vp8li_dec.h"
@ -143,13 +144,13 @@ void ChunkListDelete(WebPChunk** const chunk_list);
// Returns size of the chunk including chunk header and padding byte (if any).
static WEBP_INLINE size_t SizeWithPadding(size_t chunk_size) {
assert(chunk_size <= MAX_CHUNK_PAYLOAD);
return CHUNK_HEADER_SIZE + ((chunk_size + 1) & ~1U);
}
// Size of a chunk including header and padding.
static WEBP_INLINE size_t ChunkDiskSize(const WebPChunk* chunk) {
const size_t data_size = chunk->data_.size;
assert(data_size < MAX_CHUNK_PAYLOAD);
return SizeWithPadding(data_size);
}

View File

@ -59,6 +59,7 @@ static WebPMuxError ChunkVerifyAndAssign(WebPChunk* chunk,
// Sanity checks.
if (data_size < CHUNK_HEADER_SIZE) return WEBP_MUX_NOT_ENOUGH_DATA;
chunk_size = GetLE32(data + TAG_SIZE);
if (chunk_size > MAX_CHUNK_PAYLOAD) return WEBP_MUX_BAD_DATA;
{
const size_t chunk_disk_size = SizeWithPadding(chunk_size);
@ -137,6 +138,7 @@ static int MuxImageParse(const WebPChunk* const chunk, int copy_data,
wpi->is_partial_ = 1; // Waiting for a VP8 chunk.
break;
case WEBP_CHUNK_IMAGE:
if (wpi->img_ != NULL) goto Fail; // Only 1 image chunk allowed.
if (ChunkSetNth(&subchunk, &wpi->img_, 1) != WEBP_MUX_OK) goto Fail;
if (!MuxImageFinalize(wpi)) goto Fail;
wpi->is_partial_ = 0; // wpi is completely filled.
@ -187,7 +189,7 @@ WebPMux* WebPMuxCreateInternal(const WebPData* bitstream, int copy_data,
size = bitstream->size;
if (data == NULL) return NULL;
if (size < RIFF_HEADER_SIZE) return NULL;
if (size < RIFF_HEADER_SIZE + CHUNK_HEADER_SIZE) return NULL;
if (GetLE32(data + 0) != MKFOURCC('R', 'I', 'F', 'F') ||
GetLE32(data + CHUNK_HEADER_SIZE) != MKFOURCC('W', 'E', 'B', 'P')) {
return NULL;
@ -196,8 +198,6 @@ WebPMux* WebPMuxCreateInternal(const WebPData* bitstream, int copy_data,
mux = WebPMuxNew();
if (mux == NULL) return NULL;
if (size < RIFF_HEADER_SIZE + TAG_SIZE) goto Err;
tag = GetLE32(data + RIFF_HEADER_SIZE);
if (tag != kChunks[IDX_VP8].tag &&
tag != kChunks[IDX_VP8L].tag &&
@ -205,13 +205,17 @@ WebPMux* WebPMuxCreateInternal(const WebPData* bitstream, int copy_data,
goto Err; // First chunk should be VP8, VP8L or VP8X.
}
riff_size = SizeWithPadding(GetLE32(data + TAG_SIZE));
if (riff_size > MAX_CHUNK_PAYLOAD || riff_size > size) {
goto Err;
} else {
if (riff_size < size) { // Redundant data after last chunk.
size = riff_size; // To make sure we don't read any data beyond mux_size.
}
riff_size = GetLE32(data + TAG_SIZE);
if (riff_size > MAX_CHUNK_PAYLOAD) goto Err;
// Note this padding is historical and differs from demux.c which does not
// pad the file size.
riff_size = SizeWithPadding(riff_size);
if (riff_size < CHUNK_HEADER_SIZE) goto Err;
if (riff_size > size) goto Err;
// There's no point in reading past the end of the RIFF chunk.
if (size > riff_size + CHUNK_HEADER_SIZE) {
size = riff_size + CHUNK_HEADER_SIZE;
}
end = data + size;
@ -260,6 +264,7 @@ WebPMux* WebPMuxCreateInternal(const WebPData* bitstream, int copy_data,
chunk_list = MuxGetChunkListFromId(mux, id); // List to add this chunk.
if (ChunkSetNth(&chunk, chunk_list, 0) != WEBP_MUX_OK) goto Err;
if (id == WEBP_CHUNK_VP8X) { // grab global specs
if (data_size < CHUNK_HEADER_SIZE + VP8X_CHUNK_SIZE) goto Err;
mux->canvas_width_ = GetLE24(data + 12) + 1;
mux->canvas_height_ = GetLE24(data + 15) + 1;
}

View File

@ -19,13 +19,6 @@
#include "src/dsp/dsp.h"
#include "src/webp/types.h"
// some endian fix (e.g.: mips-gcc doesn't define __BIG_ENDIAN__)
#if !defined(WORDS_BIGENDIAN) && \
(defined(__BIG_ENDIAN__) || defined(_M_PPC) || \
(defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)))
#define WORDS_BIGENDIAN
#endif
#if defined(WORDS_BIGENDIAN)
#define HToLE32 BSwap32
#define HToLE16 BSwap16

View File

@ -91,7 +91,8 @@ static int BuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
assert(code_lengths_size != 0);
assert(code_lengths != NULL);
assert(root_table != NULL);
assert((root_table != NULL && sorted != NULL) ||
(root_table == NULL && sorted == NULL));
assert(root_bits > 0);
// Build histogram of code lengths.
@ -120,16 +121,22 @@ static int BuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
for (symbol = 0; symbol < code_lengths_size; ++symbol) {
const int symbol_code_length = code_lengths[symbol];
if (code_lengths[symbol] > 0) {
sorted[offset[symbol_code_length]++] = symbol;
if (sorted != NULL) {
sorted[offset[symbol_code_length]++] = symbol;
} else {
offset[symbol_code_length]++;
}
}
}
// Special case code with only one value.
if (offset[MAX_ALLOWED_CODE_LENGTH] == 1) {
HuffmanCode code;
code.bits = 0;
code.value = (uint16_t)sorted[0];
ReplicateValue(table, 1, total_size, code);
if (sorted != NULL) {
HuffmanCode code;
code.bits = 0;
code.value = (uint16_t)sorted[0];
ReplicateValue(table, 1, total_size, code);
}
return total_size;
}
@ -151,6 +158,7 @@ static int BuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
if (num_open < 0) {
return 0;
}
if (root_table == NULL) continue;
for (; count[len] > 0; --count[len]) {
HuffmanCode code;
code.bits = (uint8_t)len;
@ -172,17 +180,21 @@ static int BuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
for (; count[len] > 0; --count[len]) {
HuffmanCode code;
if ((key & mask) != low) {
table += table_size;
if (root_table != NULL) table += table_size;
table_bits = NextTableBitSize(count, len, root_bits);
table_size = 1 << table_bits;
total_size += table_size;
low = key & mask;
root_table[low].bits = (uint8_t)(table_bits + root_bits);
root_table[low].value = (uint16_t)((table - root_table) - low);
if (root_table != NULL) {
root_table[low].bits = (uint8_t)(table_bits + root_bits);
root_table[low].value = (uint16_t)((table - root_table) - low);
}
}
if (root_table != NULL) {
code.bits = (uint8_t)(len - root_bits);
code.value = (uint16_t)sorted[symbol++];
ReplicateValue(&table[key >> root_bits], step, table_size, code);
}
code.bits = (uint8_t)(len - root_bits);
code.value = (uint16_t)sorted[symbol++];
ReplicateValue(&table[key >> root_bits], step, table_size, code);
key = GetNextKey(key, len);
}
}
@ -202,22 +214,83 @@ static int BuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
((1 << MAX_CACHE_BITS) + NUM_LITERAL_CODES + NUM_LENGTH_CODES)
// Cut-off value for switching between heap and stack allocation.
#define SORTED_SIZE_CUTOFF 512
int VP8LBuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
int VP8LBuildHuffmanTable(HuffmanTables* const root_table, int root_bits,
const int code_lengths[], int code_lengths_size) {
int total_size;
const int total_size =
BuildHuffmanTable(NULL, root_bits, code_lengths, code_lengths_size, NULL);
assert(code_lengths_size <= MAX_CODE_LENGTHS_SIZE);
if (total_size == 0 || root_table == NULL) return total_size;
if (root_table->curr_segment->curr_table + total_size >=
root_table->curr_segment->start + root_table->curr_segment->size) {
// If 'root_table' does not have enough memory, allocate a new segment.
// The available part of root_table->curr_segment is left unused because we
// need a contiguous buffer.
const int segment_size = root_table->curr_segment->size;
struct HuffmanTablesSegment* next =
(HuffmanTablesSegment*)WebPSafeMalloc(1, sizeof(*next));
if (next == NULL) return 0;
// Fill the new segment.
// We need at least 'total_size' but if that value is small, it is better to
// allocate a big chunk to prevent more allocations later. 'segment_size' is
// therefore chosen (any other arbitrary value could be chosen).
next->size = total_size > segment_size ? total_size : segment_size;
next->start =
(HuffmanCode*)WebPSafeMalloc(next->size, sizeof(*next->start));
if (next->start == NULL) {
WebPSafeFree(next);
return 0;
}
next->curr_table = next->start;
next->next = NULL;
// Point to the new segment.
root_table->curr_segment->next = next;
root_table->curr_segment = next;
}
if (code_lengths_size <= SORTED_SIZE_CUTOFF) {
// use local stack-allocated array.
uint16_t sorted[SORTED_SIZE_CUTOFF];
total_size = BuildHuffmanTable(root_table, root_bits,
code_lengths, code_lengths_size, sorted);
} else { // rare case. Use heap allocation.
BuildHuffmanTable(root_table->curr_segment->curr_table, root_bits,
code_lengths, code_lengths_size, sorted);
} else { // rare case. Use heap allocation.
uint16_t* const sorted =
(uint16_t*)WebPSafeMalloc(code_lengths_size, sizeof(*sorted));
if (sorted == NULL) return 0;
total_size = BuildHuffmanTable(root_table, root_bits,
code_lengths, code_lengths_size, sorted);
BuildHuffmanTable(root_table->curr_segment->curr_table, root_bits,
code_lengths, code_lengths_size, sorted);
WebPSafeFree(sorted);
}
return total_size;
}
int VP8LHuffmanTablesAllocate(int size, HuffmanTables* huffman_tables) {
// Have 'segment' point to the first segment for now, 'root'.
HuffmanTablesSegment* const root = &huffman_tables->root;
huffman_tables->curr_segment = root;
// Allocate root.
root->start = (HuffmanCode*)WebPSafeMalloc(size, sizeof(*root->start));
if (root->start == NULL) return 0;
root->curr_table = root->start;
root->next = NULL;
root->size = size;
return 1;
}
void VP8LHuffmanTablesDeallocate(HuffmanTables* const huffman_tables) {
HuffmanTablesSegment *current, *next;
if (huffman_tables == NULL) return;
// Free the root node.
current = &huffman_tables->root;
next = current->next;
WebPSafeFree(current->start);
current->start = NULL;
current->next = NULL;
current = next;
// Free the following nodes.
while (current != NULL) {
next = current->next;
WebPSafeFree(current->start);
WebPSafeFree(current);
current = next;
}
}

View File

@ -43,6 +43,29 @@ typedef struct {
// or non-literal symbol otherwise
} HuffmanCode32;
// Contiguous memory segment of HuffmanCodes.
typedef struct HuffmanTablesSegment {
HuffmanCode* start;
// Pointer to where we are writing into the segment. Starts at 'start' and
// cannot go beyond 'start' + 'size'.
HuffmanCode* curr_table;
// Pointer to the next segment in the chain.
struct HuffmanTablesSegment* next;
int size;
} HuffmanTablesSegment;
// Chained memory segments of HuffmanCodes.
typedef struct HuffmanTables {
HuffmanTablesSegment root;
// Currently processed segment. At first, this is 'root'.
HuffmanTablesSegment* curr_segment;
} HuffmanTables;
// Allocates a HuffmanTables with 'size' contiguous HuffmanCodes. Returns 0 on
// memory allocation error, 1 otherwise.
int VP8LHuffmanTablesAllocate(int size, HuffmanTables* huffman_tables);
void VP8LHuffmanTablesDeallocate(HuffmanTables* const huffman_tables);
#define HUFFMAN_PACKED_BITS 6
#define HUFFMAN_PACKED_TABLE_SIZE (1u << HUFFMAN_PACKED_BITS)
@ -78,7 +101,7 @@ void VP8LHtreeGroupsFree(HTreeGroup* const htree_groups);
// the huffman table.
// Returns built table size or 0 in case of error (invalid tree or
// memory error).
int VP8LBuildHuffmanTable(HuffmanCode* const root_table, int root_bits,
int VP8LBuildHuffmanTable(HuffmanTables* const root_table, int root_bits,
const int code_lengths[], int code_lengths_size);
#ifdef __cplusplus

View File

@ -261,9 +261,15 @@ static void CleanupParams(SmoothParams* const p) {
int WebPDequantizeLevels(uint8_t* const data, int width, int height, int stride,
int strength) {
const int radius = 4 * strength / 100;
int radius = 4 * strength / 100;
if (strength < 0 || strength > 100) return 0;
if (data == NULL || width <= 0 || height <= 0) return 0; // bad params
// limit the filter size to not exceed the image dimensions
if (2 * radius + 1 > width) radius = (width - 1) >> 1;
if (2 * radius + 1 > height) radius = (height - 1) >> 1;
if (radius > 0) {
SmoothParams p;
memset(&p, 0, sizeof(p));