this will avoid the "dec_neon.o has no symbol" warning
no change in binary size observed on Linux.
Change-Id: Ifd83dfc6a0c61905481599b06cb5e711f55efa7d
the max wasn't checked, leading to a possible rollover (integer overflow), which
might be exploitable.
additionally, check the RIFF size early to avoid similar issues.
pulled from chromium:
http://codereview.chromium.org/11229048/
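A minimal sketch of the kind of early check involved (names and the exact
limit are illustrative, not the actual libwebp code):

    #include <stdint.h>

    #define MAX_CHUNK_PAYLOAD (~0U - 10U)   /* illustrative upper bound */

    static int CheckChunkSize(uint32_t chunk_size, uint64_t riff_size) {
      if (chunk_size > MAX_CHUNK_PAYLOAD) return 0;  // would wrap once padded
      // pad to even size in 64-bit so the rounding itself can't wrap
      const uint64_t padded_size = (uint64_t)chunk_size + (chunk_size & 1);
      return padded_size <= riff_size;  // chunk must fit in the RIFF payload
    }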
Change-Id: Ifebc712bf3d3de0129b76ca4c57c68e062abc429
Query the converter to ensure the format is supported; add BGR formats,
as RGBA was failing for PNG on Windows XP.
Fixes issue 129
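For reference, the query relies on the standard IWICFormatConverter::CanConvert
call; a hedged C sketch (converter creation and error handling omitted, target
format chosen for illustration):

    #define COBJMACROS
    #include <wincodec.h>

    // Ask WIC whether converting from 'src' to 32bpp BGRA is supported,
    // instead of assuming the conversion will succeed.
    static int CanConvertToBGRA(IWICFormatConverter* const converter,
                                const WICPixelFormatGUID* const src) {
      BOOL can_convert = FALSE;
      const HRESULT hr = IWICFormatConverter_CanConvert(
          converter, src, &GUID_WICPixelFormat32bppBGRA, &can_convert);
      return SUCCEEDED(hr) && can_convert;
    }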
Change-Id: I02e0d74b3b21337bc5fffd6a5dc158b7809b9aa9
This is mostly for experimentation!
The USE_YUVj flag needs to be defined in the code to enable it.
suggested by benwreder at hotmail dot com
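For context (my gloss, not part of the change): "YUVj" is the JPEG-style
full-range YUV colorspace, which uses the full [0,255] range rather than the
[16,235] luma range of regular BT.601. A sketch of the luma difference:

    // BT.601 limited range:   y = 16 + 0.257*r + 0.504*g + 0.098*b
    // JPEG full range (YUVj): y =      0.299*r + 0.587*g + 0.114*b
    static int RGBToYFullRange(int r, int g, int b) {
      return (int)(0.299 * r + 0.587 * g + 0.114 * b + .5);
    }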
Change-Id: If0b8e2c1863efc08ce097de6de20f4c7efc3f7e8
LSIM stands for "local similarity": before matching
a compressed pixel to the source, we search around in the source
and minimise the squared error. So this is close to a PSNR calculation,
but it mitigates some of PSNR's limitations (sensitivity to pure
translation and noise, for instance).
There's a new -print_lsim option to cwebp too.
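A sketch of the search (illustrative only, not the actual implementation;
single plane, and 'radius' is an assumed parameter):

    #include <stdint.h>

    // For each compressed pixel, search a (2*radius+1)^2 window in the
    // source, keep the smallest squared error, then average.
    static double LSim(const uint8_t* src, const uint8_t* cmp,
                       int w, int h, int radius) {
      double total = 0.;
      int x, y, i, j;
      for (y = 0; y < h; ++y) {
        for (x = 0; x < w; ++x) {
          double best = 255. * 255.;
          for (j = -radius; j <= radius; ++j) {
            for (i = -radius; i <= radius; ++i) {
              const int sx = x + i, sy = y + j;
              if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                const double d = (double)cmp[y * w + x] - src[sy * w + sx];
                if (d * d < best) best = d * d;
              }
            }
          }
          total += best;
        }
      }
      return total / (w * h);  // can then be mapped to dB, like PSNR
    }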
Change-Id: Ia38561034c7a90e71d2ea0f55bb1de527eda245b
Make the heuristic for combining histograms a function of compression
quality. This change speeds up compression for qualities below 75; the
compression time/density remains unchanged for quality 75 and higher.
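Purely illustrative of the shape of such a heuristic (not the actual code or
thresholds):

    // Spend less effort combining histograms at low quality settings;
    // keep the previous (full) behaviour from quality 75 upwards.
    static int HistogramCombineEffort(int quality, int max_effort) {
      return (quality >= 75) ? max_effort
                             : 1 + (max_effort * quality) / 75;
    }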
Change-Id: I94513d51078340fbc0737d459fab2cebdd2d6082
correct has_alpha check; previously it was controlled by keep_alpha,
which overrode the source format check.
fixes issue #127
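In essence (an illustrative reduction, not the exact diff):

    // before: the flag followed the user option alone
    //   has_alpha = keep_alpha;
    // after:  the source format must actually carry alpha as well
    //   has_alpha = source_has_alpha && keep_alpha;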
Change-Id: I949be90419b03610c64900be0fd37f83b70cbe73
the multiplications for total_size were done in integer precision,
possibly overflowing, before the result was promoted to 64-bit for the
addition
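The pattern, with illustrative names:

    #include <stdint.h>

    static uint64_t TotalSize(int w, int h, uint64_t extra) {
      // buggy: w * h * 4 is evaluated in int and may overflow before
      // the (already too late) promotion for the addition:
      //   return w * h * 4 + extra;
      return (uint64_t)w * h * 4 + extra;  // promote before multiplying
    }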
Change-Id: Id5c127c8a497ce5de89a276c17f36b59eeb67c21
huff_image_size was a size_t (= 32 bits in 32-bit builds), which could
roll over, causing an incorrectly sized allocation and a crash in
lossless encoding.
fixes issue #128
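The usual guard, sketched (not the exact fix):

    #include <stdint.h>
    #include <stdlib.h>

    static void* SafeMalloc(uint64_t nmemb, uint64_t elem_size) {
      uint64_t total_size;
      if (elem_size != 0 && nmemb > UINT64_MAX / elem_size) return NULL;
      total_size = nmemb * elem_size;
      // reject sizes that would not round-trip through size_t
      // (32 bits in 32-bit builds)
      if (total_size != (uint64_t)(size_t)total_size) return NULL;
      return malloc((size_t)total_size);
    }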
Change-Id: I175c8c6132ba9792034807c5c1028dfddfeb4ea5
in debug mode, some float operations see their intermediate
values stored in memory rather than staying in the FPU (which
uses 80-bit precision).
Several fixes are possible (breaking long calculations into
atomic steps, for instance), but the simplest is just to turn
the cost[] array into float* instead of double*.
The code is a tad faster, and I didn't see any major output
size difference.
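The classic illustration of the underlying x87 behaviour (not code from the
patch itself):

    // On x87 the quotient is computed in an 80-bit register; spilling
    // it to a 64-bit double rounds it. Depending on whether 'c' stays
    // in the FPU (typical in optimized builds) or goes through memory
    // (typical in debug builds), this may return 1 or 0.
    static int ConsistentDivision(double a, double b) {
      const double c = a / b;
      return c == a / b;
    }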
Change-Id: Icf1f833e15f8ee4ecc7f9a521d07fdc96ef711aa
Change-Id: I36d3765e94d2b5529b321c186ccee1744785c5b3
fixes:
error: ISO C++ forbids forward references to 'enum' types
since:
28d25c8 replace 'typedef struct {} X;' by 'typedef struct X X; struct X {};'
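The offending pattern, for reference (illustrative type names):

    /* Some C compilers accept this as an extension, but C++ rejects it:
       error: ISO C++ forbids forward references to 'enum' types */
    typedef enum Foo Foo;
    /* fine in both languages once the enumerators are visible: */
    typedef enum Bar { BAR_A, BAR_B } Bar;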
Returning 0 (equal) can lead to undefined behaviour.
And in our case we'll never have equal keys (asserts were added for that).
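A sketch of a comparison callback in that spirit (key type and names are
illustrative, not the actual code):

    #include <assert.h>
    #include <stdint.h>

    // Keys are known to be distinct, so never report equality; assert
    // instead of silently returning 0.
    static int CompareKeys(const void* const a, const void* const b) {
      const uint32_t ka = *(const uint32_t*)a;
      const uint32_t kb = *(const uint32_t*)b;
      assert(ka != kb);
      return (ka < kb) ? -1 : 1;
    }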
Change-Id: Ifaf202df321d3f877ad2a03de42e0d6cdd1b2388
SBITS=8 is reported to be 20-30% faster on ARM (where 64-bit ops
are expensive).
Also use 32 bits for i32.
Change-Id: Id6a7197d805061aeb8832f20432512d0d930ebfa
fixes the 'blocky sky problem' (a saturation problem: when luma was
flat, chroma noise took over, resulting in random segment ids being
assigned, where a single uniform segment would have been better).
+ side clean-up and readability/experimentability MACRO'ization
+ added '-map 7' option
Change-Id: I35982a9e43c0fecbfdd7b05e4813e8ba8c121d71
this will avoid the "dec_neon.o has no symbol" warning
no change in binary size observed on Linux.
Change-Id: Ia27ae2bc5a03d714afa7e46671fdcf4cb630784d