doing so is not part of ISO C; removes some pedantic warnings
Change-Id: I739ad8c5cacc133e2546e9f45c0db9d92fb93d7e
(cherry picked from commit 96e948d7b0)
The output surface CAN be changed in between calls to
WebPIUpdate() or WebPIAppend(), but with precautions.
Change-Id: I899afbd95738a6a8e0e7000f8daef3e74c99ddd8
(cherry picked from commit ff885bfe1f)
This applies to images with optional chunks (e.g. images with an ALPH
chunk, an ICCP chunk, etc.). Before this, incremental decoding worked like
non-incremental decoding for such files; that is, no rows were decoded
until all data was available.
The change is in 2 parts:
- During optional chunk parsing, don't wait for the full VP8/VP8L chunk.
- Remap 'alpha_data' pointer whenever a new buffer is allocated/used in
WebPIAppend() and WebPIUpdate().
Change-Id: I6cfd6ca1f334b9c6610fcbf662cd85fa494f2a91
(cherry picked from commit ead4d47859)
Start VP8EncLoop/VP8EncTokenLoop only if VP8EncStartAlpha succeeded.
Change-Id: Id1faca3e6def88102329ae2b4974bd4d6d4c4a7a
(cherry picked from commit 67708d6701)
new option: -blend_alpha 0xrrggbb
also: don't force picture.use_argb value for lossless. Instead,
delay the YUVA<->ARGB conversion till WebPEncode() is called.
This makes the blending more accurate when the source is ARGB
and lossy compression is used (YUVA).
This has an effect on cropping/rescaling. E.g. for PNG, these
are now done in ARGB colorspace instead of YUV when lossy compression
is used.
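For illustration, the per-pixel blend against a solid 0xrrggbb background
could look like this minimal sketch (hypothetical helper names, not cwebp's
actual code; assumes straight, non-premultiplied alpha):

    #include <stdint.h>

    /* Blend one channel of a straight-alpha pixel over an opaque background. */
    static uint8_t BlendChannel(uint8_t src, uint8_t bg, uint8_t alpha) {
      return (uint8_t)((src * alpha + bg * (255 - alpha) + 127) / 255);
    }

    /* rgba[] is one pixel; background_rgb is the 0xrrggbb value given
     * to -blend_alpha. */
    static void BlendPixel(uint8_t rgba[4], uint32_t background_rgb) {
      const uint8_t a = rgba[3];
      rgba[0] = BlendChannel(rgba[0], (background_rgb >> 16) & 0xff, a);
      rgba[1] = BlendChannel(rgba[1], (background_rgb >>  8) & 0xff, a);
      rgba[2] = BlendChannel(rgba[2], (background_rgb >>  0) & 0xff, a);
      rgba[3] = 255;  /* the result is fully opaque */
    }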
Change-Id: I18571f1b1179881737a8dbd23ad0aa8cddae3c6b
(cherry picked from commit e7d9548c9b)
Tuned the cross_color transform parameter (step) for lower quality
levels. This change gives a speedup of 20% at lower qualities (25) and 10% at
a moderate quality level (50), with a loss of 0.25% in compression density.
Also removed the TODO for the cross_color transform: observed good correlation
of this with the predict transform.
Change-Id: I8a1044e9f24e6a5f84295c030fd444d0eec7d154
rather than symlink the webm/vpx terms, use the same header as libvpx to
reference in-tree files
based on the discussion in:
https://codereview.chromium.org/12771026/
Change-Id: Ia3067ecddefaa7ee01550136e00f7b3f086d4af4
The auto-infer logic for detecting the 'Alpha' use case
(via the check '(palette[i] & 0x00ff00ffu) != 0') fails
for this corner-case image with all-black pixels (rgb = 0)
and varying alpha values.
-> switch to the generic use-LUT detection
Change-Id: I982a8b28c8bcc43e3dc68ac358f978a4bcc14c36
Added a 1-pixel cache for palette colors, for faster lookup.
This speeds up images that require ApplyPalette by 6.5% for lossless
compression.
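The idea in a nutshell, as a sketch (illustrative names; the real
ApplyPalette code differs):

    #include <stdint.h>

    /* Map ARGB pixels to palette indices. Neighboring pixels are often
     * equal, so remember the last match and test it before doing the full
     * search. Assumes every pixel value is present in palette[]. */
    static void PalettizeRow(const uint32_t* const row, int width,
                             const uint32_t* const palette, int palette_size,
                             uint8_t* const indices) {
      int cached = 0;
      int x, i;
      for (x = 0; x < width; ++x) {
        if (row[x] != palette[cached]) {     /* cache miss -> linear search */
          for (i = 0; i < palette_size; ++i) {
            if (row[x] == palette[i]) { cached = i; break; }
          }
        }
        indices[x] = (uint8_t)cached;
      }
    }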
Change-Id: Id0c5174d797ffabdb09905c2ba76e60601b686f8
'mem' was being offset once by DO_ALIGN() and then shifted by 'nz_size',
which would end up accounting for more than ALIGN_CST and exceed the
allocation.
broken since:
9bf3129 align VP8Encoder::nz_ allocation
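For reference, the safe pattern is to pad the allocation by ALIGN_CST and
align the base pointer only once; a simplified sketch (not the actual
VP8Encoder code):

    #include <stdint.h>
    #include <stdlib.h>

    #define ALIGN_CST 31
    #define DO_ALIGN(PTR) \
      (uint8_t*)(((uintptr_t)(PTR) + ALIGN_CST) & ~(uintptr_t)ALIGN_CST)

    int main(void) {
      const size_t nz_size = 1024;
      /* Pad by ALIGN_CST so that aligning stays within the allocation. */
      uint8_t* const mem = (uint8_t*)malloc(nz_size + ALIGN_CST);
      if (mem != NULL) {
        uint8_t* const nz = DO_ALIGN(mem);  /* align once, here only */
        /* use nz[0] .. nz[nz_size - 1]; offsetting 'nz' any further would
         * consume more than the ALIGN_CST bytes of padding. */
        nz[0] = 0;
        free(mem);
      }
      return 0;
    }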
Change-Id: I04a4e0bbf80d909253ce057f8550ed98e0cf1054
* "declaration of ‘index’ shadows a global declaration [-Wshadow]"
* "signed and unsigned type in conditional expression [-Wsign-compare]"
Change-Id: I891182d919b18b6c84048486e0385027bd93b57d
Earlier, such images used roughly 9 * width * height bytes for
decoding. Now they take 6 * width * height bytes.
Change-Id: Ie4a681ca5074d96d64f30b2597fafdca648dd8f7
Simply get rid of an intermediate buffer of size width x height, by
using the fact that stride == width in this case.
Change-Id: I92376a2561a3beb6e723e8bcf7340c7f348e02c2
no precision loss observed
speed is not really faster (0.5% at most), as the forward WHT isn't called often.
also: replaced an "int << 3" (undefined by the C spec) with an "int * 8"
( supersedes https://gerrit.chromium.org/gerrit/#/c/48739/ )
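For context, the C rule being sidestepped, as a tiny self-contained example:

    #include <stdio.h>

    int main(void) {
      const int v = -5;
      /* 'v << 3' is undefined for negative v (C99 6.5.7p4);
       * 'v * 8' is well-defined and gives the intended result. */
      printf("%d\n", v * 8);   /* prints -40 */
      return 0;
    }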
Change-Id: I2d980ec2f20f4ff6be5636105ff4f1c70ffde401
it's not often the case, but it could happen that chroma has non-zero
coefficients while luma has none. In such a case, we should skip luma right away.
Change-Id: I9515573ffaec8aad8b069d2c02ffbda4a6eff97c
Removed a call to WebPParseHeaders() inside VP8GetHeaders(). This was not needed
anyway, as all call flows already call WebPParseHeaders() before calling
VP8GetHeaders().
This avoids duplicate calls to WebPParseHeaders().
Change-Id: Icb2d618bd26c44220d956c17a69c9c45a62d5237
The output surface CAN be changed in between calls to
WebPIUpdate() or WebPIAppend(), but with precautions.
Change-Id: I899afbd95738a6a8e0e7000f8daef3e74c99ddd8
This applies to images with optional chunks (e.g. images with an ALPH
chunk, an ICCP chunk, etc.). Before this, incremental decoding worked like
non-incremental decoding for such files; that is, no rows were decoded
until all data was available.
The change is in 2 parts:
- During optional chunk parsing, don't wait for the full VP8/VP8L chunk.
- Remap 'alpha_data' pointer whenever a new buffer is allocated/used in
WebPIAppend() and WebPIUpdate().
Change-Id: I6cfd6ca1f334b9c6610fcbf662cd85fa494f2a91
new option: -blend_alpha 0xrrggbb
also: don't force picture.use_argb value for lossless. Instead,
delay the YUVA<->ARGB conversion till WebPEncode() is called.
This makes the blending more accurate when the source is ARGB
and lossy compression is used (YUVA).
This has an effect on cropping/rescaling. E.g. for PNG, these
are now done in ARGB colorspace instead of YUV when lossy compression
is used.
Change-Id: I18571f1b1179881737a8dbd23ad0aa8cddae3c6b
user can now call WebPEncode() with any YUVA or ARGB format, for
lossy or lossless compression
also: simplified error reporting, which is done in WebPPictureARGBToYUVA()
and WebPPictureYUVAToARGB()
Change-Id: Ifb68909217175bcf5a050e5c68d06de9849468f7
(cherry picked from commit 07d87bda1b)
transfer ownership of the chunk passed to ChunkSetNth(). This prevents
freeing the chunk in, e.g., MuxImageParse() when a partial assignment
(alpha but no image) has occurred.
Change-Id: Ia69656b04fdf50f098f3816b54abd4e191248de3
Saturation was done on the input coeff, not the quantized one.
This saturation is not absolutely needed: the output of FTransformWHT
is in range [-16320, 16321]. At quality 100, the max quantization step is 8,
so the maximal range used by QuantizeBlock() is [-2040, 2040].
But there's some extra bias (mtx->bias_[] and mtx->sharpen_[]) so
it's better to leave this saturation check for now.
addresses issue #145
Change-Id: I4b14f71cdc80c46f9eaadb2a4e8e03d396879d28
* merge cost calculation functions (BitsEntropy() and HuffmanCost())
* have HistogramAdd() specialized into separate functions
* use threshold to bail-out early
* revamp code a bit
* also: save memory by freeing histogram_image
Change-Id: I8ee5d2cfa1462d5d6ea6361f5c89925a3720ef55
* VP8L shouldn't have an alpha chunk
* expect an animation to only contain frames, not a mix of image chunks
* enforce ANIM/ANMF order
* expect a full frame in a complete file
Change-Id: I953a8b6058f9bc00f1d042635548f158abdf6fce
subdirectories with more than one target can have the install targets
run in parallel with make -jN. group the shared headers in one place to
produce a common install target.
Change-Id: I1f3aa338a8ee6d681de1e5d0b2c6244d2c3d5451
Reported to eventually be 4% on ARM
(see https://code.google.com/p/webp/issues/detail?id=134 for details)
We might activate it selectively later...
Output values are not bitwise the same as the LUT-based
version, but the difference is only +/-1 at most.
Change-Id: I1cc790ff4459885ed2ae2e72f31c5f3740095f07
this allows DLLs to be built under mingw and sets up a more obvious
dependency in the shared objects
Change-Id: I33e8a6132a16ca49563492438a1b3b74be9ed6a1
expands to e.g., -lpthread/-pthread, etc. if --enable-threading is used
and additional libs/flags are required
Change-Id: I8e8a7b44450bee32ddc58097e1e309d972b1092a
This is required for WebP lossy+alpha images, where the alpha channel takes
60-70% of the compression (CPU) cycles.
Also evaluated on a 1000-PNG corpus; overall compression speed
is 15-40% better for lossy (PNG+alpha) compression.
The pure lossless compression numbers are almost the same (or slightly
better) with this change.
Change-Id: I9e5ae7372ed6227a9a5b64cd9cff84c747195a57
using a token-buffer (that is: slightly more memory, O(output_size))
This change is ON by default. To return to previous behaviour, use
'cwebp -low_memory' or set config.low_memory to true.
Side-effect of this new mode: it forces 1 partition only (which was
the default anyway), and makes some statistics about the bitstream
no longer available. cwebp will no longer report 'intra4-coeffs', etc.
This mode also doesn't work (yet) with multi-pass, and -low_memory
is currently forced for multi-pass.
also: reversed the flag: USE_TOKEN_BUFFER -> DISABLE_TOKEN_BUFFER
also: fixed the kAverageBytesPerMB estimate
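From the library side, the equivalent of 'cwebp -low_memory' is just a
config flag; a sketch assuming the public encode.h API:

    #include "webp/encode.h"

    int InitLowMemoryConfig(WebPConfig* const config, float quality) {
      if (!WebPConfigInit(config)) return 0;  /* version check failed */
      config->quality = quality;
      config->low_memory = 1;  /* restore the previous (pre-token-buffer) mode */
      return WebPValidateConfig(config);
    }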
Change-Id: I4ea80382038d6df4309663e0cb7bd88d9bca9cf1
When an ANMF/FRGM chunk size (read from the file) is smaller than the ANMF/FRGM
header size (which is constant and implicit), the parser should report an error.
Change-Id: I91d71889937f5133a97f1e83d5254cb2d7f37028
broken since:
ad25032 Merge "multi-threaded alpha encoding for lossy"
this produced an error due to an empty VP8TBuffer struct.
Change-Id: I640809d07d20092c1d660e2b59b58a62a12e4371
Also tweak the code for single image and fragmented image cases, so that
dmux->num_frames_ is always correct.
Change-Id: I31e6904222e4d96a54a0d8c8aa73d43b7a9094e7
new option: 'cwebp -mt ...'
new config flag: config.thread_level
(allowed thread_level values are 0 or 1 for now; maybe more later...)
If -mt is activated (and WEBP_USE_THREAD is used for the compile), the
alpha compression will be done in parallel with RGB coding for lossy.
This can save quite a bit of latency...
Has no effect for lossless encoding.
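Programmatic equivalent of 'cwebp -mt', as a sketch using the public
WebPConfig:

    #include "webp/encode.h"

    int InitThreadedConfig(WebPConfig* const config) {
      if (!WebPConfigInit(config)) return 0;  /* version check failed */
      /* 1 = compress alpha in parallel with the RGB planes (lossy only). */
      config->thread_level = 1;
      return WebPValidateConfig(config);
    }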
Change-Id: I769d0bf90e7380cf99344ad62cd77277f4df5a46
It now returns ALPHA_FLAG for lossless images with alpha, that don't
have a VP8X chunk.
This is consistent with similar methods WebPMuxGetFeatures() and
WebPGetFeatures().
Change-Id: Ia3a4ca8f3e0f102f478bd33e0727ca5be98593df
larger values are still dealt with in the .cc
~5% faster encoding
Output size is slightly different (variably), because of
different floating-point calculation ordering.
Change-Id: I6ede18b09c753997cf78aa1199a807d9ddb5d4b4
-> split libraries further into decoder / encoder
-> add libwebpdecoder.a in Makefile.unix
-> make dwebp link against libwebpdecoder.a in Makefile.unix
also: in makefile.unix, pass EXTRA_FLAGS to LDFLAGS too
(otherwise, -m32 wouldn't work, e.g.)
Change-Id: Ief3da02a729dd86bbaf949ed048836716941657f
signed integer overflow behavior is undefined; split PrefixEncode() into
two branches to avoid this.
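A sketch of the safe shape (illustrative, not libwebp's exact code): keep
the value unsigned, and handle small distances in their own branch.

    #include <stdint.h>

    static int BitsLog2Floor(uint32_t v) {   /* v > 0 */
      int log = 0;
      while (v >>= 1) ++log;
      return log;
    }

    /* Unsigned arithmetic throughout: no signed shift/overflow UB. */
    static int PrefixCode(uint32_t dist) {   /* dist >= 1 */
      if (dist <= 2) return (int)dist - 1;   /* small values: direct code */
      {
        const int log = BitsLog2Floor(--dist);
        return 2 * log + (int)((dist >> (log - 1)) & 1);
      }
    }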
Change-Id: I6e2761d0d77f0aaceafdc4e07232e089c22beb64
* add SSE2 variant for lossless
* speed-up TransformColor calls using specialized TransformColorBlue/Red
* Fuse the Shannon Entropy calls to compute it for X and X+Y simultaneously.
The latter changes the output size a little bit.
Change-Id: Ie5df94da78bf51a58da859c9099b56340da9ec89
Simplify and re-organize the VP8L bit-reader functions
(e.g.: the 40-bit look-ahead code wasn't helping much)
Speed-up with LBITS=64, on arm7-a:
=> before:
./dwebp_justify_24_neon -v bryce_ll.webp
Time to decode picture: 11.393s
File bryce_ll.webp can be decoded (dimensions: 11158 x 2156).
...
=> after (LBITS=64): Time to decode picture: 9.953s
Making the VP8L bit-reader work in 32-bit mode is going to be
harder (because we need to be able to read two symbols
at a time, each with a max length of 15 bits).
Change-Id: I89746fb103b87b5e2fd40a3208a6fbc584b88297
This flag will make the code use no uint64, no asm, and no fancy
trick, but instead aim at being as simple and straightforward as
possible.
Main use is to help emscripten generate proper JS code.
More code needs to be simplified later.
Also: tuned the BITS value to be 24 and made use of WEBP_RIGHT_JUSTIFY.
Here are typical timings for decoding a large image:
ARM7-a:
dwebp_justify_32_neon Time to decode picture: 3.280s
dwebp_justify_24_neon Time to decode picture: 2.640s
dwebp_justify_16_neon Time to decode picture: 2.723s
dwebp_justify_8_neon Time to decode picture: 2.802s
dwebp_justify_32 Time to decode picture: 4.264s
dwebp_justify_24 Time to decode picture: 3.696s
dwebp_justify_16 Time to decode picture: 3.779s
dwebp_justify_8 Time to decode picture: 3.834s
dwebp_32_neon Time to decode picture: 4.010s
dwebp_24_neon Time to decode picture: 2.725s
dwebp_16_neon Time to decode picture: 2.852s
dwebp_8_neon Time to decode picture: 2.778s
dwebp_32 Time to decode picture: 4.587s
dwebp_24 Time to decode picture: 3.800s
dwebp_16 Time to decode picture: 3.902s
dwebp_8 Time to decode picture: 3.815s
REFERENCE (HEAD) Time to decode picture: 3.818s
x86_64:
dwebp_justify_32 Time to decode picture: 0.473s
dwebp_justify_24 Time to decode picture: 0.434s
dwebp_justify_16 Time to decode picture: 0.450s
dwebp_justify_8 Time to decode picture: 0.467s
dwebp_32 Time to decode picture: 0.474s
dwebp_24 Time to decode picture: 0.468s
dwebp_16 Time to decode picture: 0.468s
dwebp_8 Time to decode picture: 0.481s
REFERENCE (HEAD) Time to decode picture: 0.436s
i386:
dwebp_justify_32 Time to decode picture: 0.723s
dwebp_justify_24 Time to decode picture: 0.618s
dwebp_justify_16 Time to decode picture: 0.626s
dwebp_justify_8 Time to decode picture: 0.651s
dwebp_32 Time to decode picture: 0.744s
dwebp_24 Time to decode picture: 0.627s
dwebp_16 Time to decode picture: 0.642s
dwebp_8 Time to decode picture: 0.642s
Change-Id: Ie56c7235733a24f94fbfc2e4351aae36ec39c225
This option remaps internal parameters to better match
the expected compression curve of JPEG and produce output files
of similar size, but with better quality.
Change-Id: I96a1cbb480b1f6a0c6845a23c33dfd63f197b689
If it's not a partial file and parser returns PARSE_NEED_MORE_DATA, then
consider it to be PARSE_ERROR.
Change-Id: Id652a345bd2a9f574970272dd0a00517de113215
If a NULL pre-allocated buffer is passed, a buffer will be automatically
allocated.
+ add some parameter checks.
reported in http://code.google.com/p/webp/issues/detail?id=139
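Usage then becomes, assuming the public decode.h API:

    #include <stddef.h>
    #include "webp/decode.h"

    /* Passing a NULL buffer: the decoder allocates the output surface
     * itself and releases it in WebPIDelete(). */
    WebPIDecoder* NewAutoAllocatingDecoder(void) {
      return WebPINewRGB(MODE_RGBA, NULL, 0, 0);
    }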
Change-Id: I9e14ed97db30ee12e46b5e92aac7eeaaeb99bfd5
store values in a temporary variable before calling functions that take
vector types.
This removes non-standard constructs such as:
(uint8x8x2_t){{ a, b }}
fixing:
src/dsp/upsampling_neon.c:69:32: error: macro "vst2_u8" passed 3
arguments, but takes just 2
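The shape of the fix, sketched (ARM/NEON only, so this compiles only with
NEON support):

    #include <arm_neon.h>

    static void StoreTwoLanes(uint8_t* const out,
                              const uint8x8_t a, const uint8x8_t b) {
      /* non-standard compound literal, rejected by some compilers:
       *   vst2_u8(out, (uint8x8x2_t){{ a, b }});
       * portable form: build a named temporary first. */
      uint8x8x2_t pair;
      pair.val[0] = a;
      pair.val[1] = b;
      vst2_u8(out, pair);
    }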
Change-Id: Ib4368e16e3a3efac18024f02be94e76243ade2dc
Fixes: https://code.google.com/p/webp/issues/detail?id=140
- along the lines of the SSE chroma upsampling.
Total speedup is ~30%.
There is a 4% speed loss on the YuvToRgbXX conversion when using tables
instead of 14-bit fixed precision. TODO(later): investigate, and compare
to x86.
see http://code.google.com/p/webp/issues/detail?id=134
Change-Id: Idc2261037cd13b4553ca20ecc4c4007099c37009
When the config option '--enable-libwebpdecoder' is specified, the
lean decoder library 'libwebpdecoder' will be created in addition to
libwebp. Also, the dwebp binary will be linked against libwebpdecoder if
this config option is specified.
Change-Id: I9de3e149b59c9a8390fae2ba660941749640e54a
Actually, it turns out we now should never call these functions
with a zero size, otherwise something is wrong in the logic.
Change-Id: Ie414fcbec95486c169190470a71f2cff0843782a
also change the lossless encoder logic, which was relying on an explicit
NULL return from WebPSafeMalloc(0).
Also renamed the function to CheckSizeArgumentsOverflow().
addresses issue #138
Change-Id: Ibbd51cc0281e60e86dfd4c5496274399e4c0f7f3
- precompute the filtering strength once and for all at the beginning,
instead of per macroblock
- reduce size of VP8MB struct from 8 bytes to 4.
- removed VP8StoreBlock() accordingly
Change-Id: Icf3d329473e21c464770be3d72a04c9ee4c321f2
GetCoeffs() is (by far) the most CPU-consuming function of the decoder.
No speed change (unfortunately), but the main loop is somewhat clearer.
Change-Id: I78f1c10cadc2c8696c041f5cbda86cab92cc6598
* treat the last coeff as a special case
* re-arrange the inner code to be shorter
* replace some VP8EncBands[n] by n, for n = 0 or 1
Change-Id: I71e17b014cffad7b073e787fde06260905a6953f
The main advantage is that you can sometimes avoid the use of uint64_t,
sticking to 32-bit arithmetic only.
The default is still BITS=32; this is mainly "just in case".
Change-Id: Id694028793117ba822c37d46ef6c52fa0afed4ac
- Separate out mux.h and demux.h
- muxtypes.h: new header for data types common to mux/demux
- Move some misc read/write utilities to utils/utils.h
- Remove some duplicate methods.
- Separate out mux/demux libraries
Change-Id: If9b9569b10d55d922ad9317ef51710544315d6de
We don't need to use the exact forward transform,
since it's only a rough evaluation.
-> Removed some shifts and rounding constants.
Change-Id: I3fdf8b4fe9720473894155e1ad0345f4d1fd9a33
In particular, this removes any unnecessary FRGM/ANMF/ANIM chunks, and
indirectly leads to removal of unnecessary VP8X chunks as well.
This is especially useful for GIF to WebP conversion - it saves 56 bytes
(ANMF: 16+8 bytes, ANIM: 6+8 bytes, VP8X: 10+8 bytes) for non-animated GIFs.
Change-Id: I3b50a96ca585844c421b0fa4cd8593e52c3f95c5
This is a correction to the following change:
a00a3daf5b Use 'frgm' instead of 'tile' in
webpmux parameters
Change-Id: I8fa0bce98efdde38827fd25712017a98a6ea7388
- Allow a duration of 0
- Rename LOOP chunk to ANIM and add the background color field to it.
- Add a disposal method field for each animation frame.
- Modify webpmux.c binary interface to allow the input of background color
and disposal methods. Also make '-loop' and '-bgcolor' arguments optional
with some default values.
Change-Id: I807372a61cdb8a0d3080ae3552caf2848070bf4d
10-15% faster encoding.
Almost the same output, binary-wise. The main difference is
that we can't compute the uv_alpha susceptibility, which means there
can be subtle differences with different -sns values.
Change-Id: Id1b1a50929bf125b6372212fee1ed75a3bed975f
- Also, use the term 'fragments' instead of 'tiling' in code
- This makes code consistent with the spec.
Change-Id: Ibeccffc35db23bbedb88cc5e18e29e51621931f8
- Make ANMF and FRGM chunks hierarchical so that they encompass all chunks of
that frame.
- Use this in demuxer: stop parsing a frame if all image data for it isn't
available yet. Thus, we have a frame-level incremental support; that is,
all frames that are fully available can be parsed.
- Note: We still keep incremental support for single images - so that they can
be decoded with incremental decoding.
Change-Id: Id1585b16b06caee1d84009c42a25d2de29fa6135
Use separate fourCCs "XMP " and "EXIF" instead of a common "META"
Also, some refactoring in webpmux.c
Change-Id: Iad3337e5c1b81e785c60670ce28b1f536dd7ee31
The number of pairs selected is limited to between 25% of the histogram
images (at the start) and the number of histogram images left at any iteration.
Increase the range of iter_mult.
Removed min_cluster_size as parameter for tuning HistogramCombine.
Change-Id: Ia4068cd7af4d0f63c5af9001aceda8a40b9de740
* commit 'v0.2.1':
Update ChangeLog
update NEWS
bump version to 0.2.1
libwebp: validate chunk size in ParseOptionalChunks
cwebp (windows): fix alpha image import on XP
autoconf/libwebp: enable dll builds for mingw
[cd]webp: always output windows errors
fix double to float conversion warning
cwebp: fix jpg encodes on XP
VP8LAllocateHistogramSet: fix overflow in size calculation
GetHistoBits: fix integer overflow
EncodeImageInternal: fix uninitialized free
fix the -g/O3 discrepancy for 32bit compile
fix the BITS=8 case
Make *InitSSE2() functions be empty on non-SSE2 platform
make *InitSSE2() functions be empty on non-SSE2 platform
make VP8DspInitNEON() public
Conflicts:
src/Makefile.am
src/dsp/dec_neon.c
Change-Id: Iddc5152e4a6892db96c12d7c3f74adbc85fe6178
Contributed by Wayne Chen (datoudatou at gmail dot com)
+ some header cleanup
+ remove the NEON suffix in static functions
Change-Id: I75bf5e9b54cf5e1acc53764c6f081d61690f8e3d
(implements the backward and forward transforms in the encoder)
original patch by Wayne Chen (datoudatou at gmail dot com)
Change-Id: Ic00f3bffcdf7a924f043006728735c810ee47a57
- Changed the dynamic range where the more aggressive
(BackwardReferencesTraceBackward) heuristic is run: from quality > 10
(instead of quality > 25).
- Limit the backward-ref Window size to 16*width & 256*width for lower
qualities ([0, 25[ & [25, 50[) respectively, instead of 1M window.
- Evaluate the params for HashChainFindCopy outside this function call
and pass them in, instead of recomputing them for every call.
Change-Id: If9eedfc14b978e7632d7cf69c96186e2910b0554
the max wasn't checked, leading to a rollover case, possibly exploitable.
additionally check the RIFF size early, to avoid similar issues.
pulled from chromium:
http://codereview.chromium.org/11229048/
Change-Id: I4050b13a7e61ec023c0ef50958c45f651cf34c49
the multiplications done for total_size would be done with integers,
possibly overflowing, before being promoted to 64-bit for the addition
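The general fix pattern, for reference (illustrative function, not the
exact code): promote before multiplying, not after.

    #include <stdint.h>

    uint64_t TotalSize(int width, int height, int bytes_per_pixel) {
      /* Without the cast, width * height * bytes_per_pixel is evaluated in
       * 'int' and can overflow before being widened for the return type. */
      return (uint64_t)width * height * bytes_per_pixel;
    }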
Change-Id: I32c3a6400fc2ef120c38e01a8693f4cb1727234d
huff_image_size was a size_t (=32 bits with 32-bit builds) which could
rollover causing an incorrectly sized allocation and a crash in lossless
encoding.
fixes issue #128
Change-Id: I0f20cee98c29b2b40b02607930b6b7a7ca56996d
in debug mode, some float operations see their intermediate
values stored in memory rather than staying in the FPU (which
uses 80-bit precision).
Several fixes are possible (breaking long calculations into
atomic steps, for instance), but the simplest of all is just
turning the cost[] array into float* instead of double*.
The code is a tad faster, and I didn't see any major output
size difference.
Change-Id: I053e6d340850f02761687e072b0782c6734d4bf8
this will avoid the "dec_neon.o has no symbol" warning
no change in binary size observed on linux.
Change-Id: Ifd83dfc6a0c61905481599b06cb5e711f55efa7d
the max wasn't checked, leading to a rollover case, possibly exploitable.
additionally check the RIFF size early, to avoid similar issues.
pulled from chromium:
http://codereview.chromium.org/11229048/
Change-Id: Ifebc712bf3d3de0129b76ca4c57c68e062abc429
This is mostly for experimentation!
You need to define the USE_YUVj flag in the code for that.
suggested by benwreder at hotmail dot com
Change-Id: If0b8e2c1863efc08ce097de6de20f4c7efc3f7e8
LSIM stands for "local similarity": before matching
a compressed pixel to the source, we search around it in the source
and minimise the squared error. So this is close to a PSNR calculation,
but it mitigates some of its limitations (pure translation and noise,
for instance).
There's a new -print_lsim option to cwebp too.
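A simplified single-channel sketch of the metric as described above (not
cwebp's exact implementation):

    #include <stdint.h>

    /* For each compressed pixel, search a (2r+1)x(2r+1) window around the
     * same position in the source and keep the minimum squared error. */
    static uint64_t LSimSum(const uint8_t* const src, const uint8_t* const cmp,
                            int w, int h, int r) {
      uint64_t total = 0;
      int x, y, i, j;
      for (y = 0; y < h; ++y) {
        for (x = 0; x < w; ++x) {
          uint64_t best = ~(uint64_t)0;
          for (j = -r; j <= r; ++j) {
            for (i = -r; i <= r; ++i) {
              const int sx = x + i, sy = y + j;
              if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                const int d = (int)src[sy * w + sx] - (int)cmp[y * w + x];
                if ((uint64_t)(d * d) < best) best = (uint64_t)(d * d);
              }
            }
          }
          total += best;   /* r == 0 reduces this to plain SSE (PSNR-like) */
        }
      }
      return total;
    }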
Change-Id: Ia38561034c7a90e71d2ea0f55bb1de527eda245b
Make the heuristic for combining Histograms a function of compression
quality. This change will speed-up compression time for compression
quality less than 75. The compression time/density remains unchanged
for compression quality 75 and higher.
Change-Id: I94513d51078340fbc0737d459fab2cebdd2d6082
the multiplications done for total_size would be done with integers,
possibly overflowing, before being promoted to 64-bit for the addition
Change-Id: Id5c127c8a497ce5de89a276c17f36b59eeb67c21
huff_image_size was a size_t (=32 bits with 32-bit builds) which could
rollover causing an incorrectly sized allocation and a crash in lossless
encoding.
fixes issue #128
Change-Id: I175c8c6132ba9792034807c5c1028dfddfeb4ea5
in debug mode, some float operations see their intermediate
values stored in memory rather than staying in the FPU (which
uses 80-bit precision).
Several fixes are possible (breaking long calculations into
atomic steps, for instance), but the simplest of all is just
turning the cost[] array into float* instead of double*.
The code is a tad faster, and I didn't see any major output
size difference.
Change-Id: Icf1f833e15f8ee4ecc7f9a521d07fdc96ef711aa
Change-Id: I36d3765e94d2b5529b321c186ccee1744785c5b3
fixes:
error: ISO C++ forbids forward references to 'enum' types
since:
28d25c8 replace 'typedef struct {} X;' by 'typedef struct X X; struct X {};'
Returning 0 (equal) can lead to undefined behaviour.
And in our case, we'll never have equal keys (added asserts for that).
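The resulting comparator shape, sketched with a hypothetical key type:

    #include <assert.h>

    typedef struct { unsigned key; } Entry;

    static int CompareEntries(const void* const a, const void* const b) {
      const unsigned ka = ((const Entry*)a)->key;
      const unsigned kb = ((const Entry*)b)->key;
      assert(ka != kb);           /* equal keys never occur here */
      return (ka < kb) ? -1 : 1;  /* strict ordering: never return 0 */
    }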
Change-Id: Ifaf202df321d3f877ad2a03de42e0d6cdd1b2388
SBITS=8 is reported 20-30% faster on ARM (where 64bit ops
are expensive).
Also use 32bits for i32.
Change-Id: Id6a7197d805061aeb8832f20432512d0d930ebfa
fixes the 'blocky sky problem' (a saturation problem: when luma was flat,
chroma noise was taking over, resulting in random segment ids being
assigned, when just using a common uniform segment would have been better).
+ side clean-up and readability/experimentability MACRO'ization
+ added '-map 7' option
Change-Id: I35982a9e43c0fecbfdd7b05e4813e8ba8c121d71
this will avoid the "dec_neon.o has no symbol" warning
no change in binary size observed on linux.
Change-Id: Ia27ae2bc5a03d714afa7e46671fdcf4cb630784d
lossy was rounding with a bias toward opaque:
[232+, 8] -> [15, 1]
now both paths use the range:
[240+, 16] -> [15, 1]
Change-Id: I3da2063b4959b9e9f45bae09e640acc1f43470c5
With this, MODE_rgbA can safely be used without speed penalty
even in case of pure-lossy alpha-less input.
It's also an optimization when cropping a fully-opaque region from
an image with alpha: premultiply is then skipped
Change-Id: Ibee28c75744f193dacdfccd5a2e7cd1e44604db6
* green was not descaled properly
* alpha was over-dithered, making the value '0x0f' not be a fixed point
* the alpha value was not restored correctly.
Change-Id: Ia4a4d75bdad41257f7c07ef76a487065ac36fede
Fix the lossless decoder for the case when it has to apply other
inverse transforms before applying Color indexing inverse transform.
The main idea is to make ColorIndexingInverse virtually in-place: we
use the fact that the argb_cache is allocated to accommodate all
*unpacked* pixels of a macro-row, not just *packed* pixels.
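The trick in miniature, for a 1-bit-per-pixel palette (illustrative; the
real code handles the other packings too): unpacking back-to-front never
overwrites packed bytes that are still to be read.

    #include <stdint.h>

    /* 'row' starts with (width + 7) / 8 packed bytes but has room for
     * 'width' unpacked pixels. Since x >> 3 < x for all x >= 1, reading
     * row[x >> 3] while writing row[x] from the end backwards is safe. */
    static void UnpackInPlace(uint8_t* const row, int width) {
      int x;
      for (x = width - 1; x >= 0; --x) {
        row[x] = (row[x >> 3] >> (x & 7)) & 1;
      }
    }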
Change-Id: I27f11f3043f863dfd753cc2580bc5b36376800c4
Added a threshold of MAX_COLORS_FOR_GRAPH for color-palettes, above
which the graph hint is ignored.
Change-Id: Ia5d7f45e52731b6eaf2806999d6be82861744fd3
spurious in this case, but addresses e.g.,
... potentially uninitialized local variable 'weighted_average' used
Change-Id: Ib99998bf49e4af7a82ee66f13fb850ca5b17dc71
Order-by-cost is mostly unchanged (up to a scaling constant, 1/log(2))
(except for a few minor diffs in < 2% of cases)
+ remove unused field cost_mode->cache_bits_
Change-Id: I714f8ab12f49a23f5d499a64c741382c9b489a3e
no speed difference observed from removing the test before calling BitWriterResize().
+ remove some unnecessary memset() in VP8LBitWriter
+ fix mixed code/variable-decl in BIG_ENDIAN mode
Change-Id: I36be61f83d10a43e4682b680c2dae0e494da4218
* ~1-4% faster
* if it's not used, don't use it
* remove the special handling of cache_bits = 0
* remove some tests in the loops
Change-Id: I19d87c3ca731052ff532ea8b2d8e89816507b75f
For low-color images, it may be better to not use color-palettes.
Users should treat this as just another hint (as with Photo &
Picture) and another parameter for tuning the compression density.
The optimum compression can still be obtained by running (outer loop)
compression with all possible tunable parameters.
Change-Id: Icb1a4face2a84774e16e801aee4a8ae97e232e8a
also relocate user_data from WebPAuxStats to the WebPPicture struct to
make clearing easier while placing it closer to the progress hook with
which it's used.
prior to this change some spurious lossless data could be reported in
the lossy (sans alpha) encoding case. additionally user_data could be
lost during lossless encoding.
Change-Id: I929fae3dfde4d445ff81bbaad51445ea586dd80b
* Extend AuxStats with new fields
it's slightly ABI-incompatible, but I guess it's ok for 0.1.99+
I expect to add more stats later, possibly (predictor stats, etc.)
* Have cwebp report the features used by lossless
compression (either for alpha or full lossless coding)
* Print the PSNR for alpha (useful in case of -alpha_q)
* clean-up alpha.c signatures
+ misc cleanup (added const '* const ptr', etc.)
Change-Id: I157a21581f1793cb0c6cc0882e7b0a2dde68a970
* commit 'v0.1.99': (39 commits)
Update ChangeLog
add extra precision about default values and behaviour
header/doc clean up
Makefile.vc: fix webpmux.exe *-dynamic builds
remove INAM, ICOP, ... chunks from the test webp file.
harmonize authors as "Name (mail@address)"
makefile.unix: provide examples/webpmux target
update NEWS
README: cosmetics
man/cwebp.1: wording, change the date
add a very crude progress report for lossless
rename 'use_argb_input' to 'use_argb'
add some padding bytes areas for later use
fixing the findings by Frederic Kayser to the bitstream spec
add missing ABI compatibility checks
Doc: container spec text tweaks
add ABI compatibility check
mux.h: remove '* const' from function parameters
encode.h: remove '* const' from function parameters
decode.h: remove '* const' from function parameters
...
Conflicts:
src/mux/muxinternal.c
Change-Id: I635d095c451742e878088464fe6232637a331511
Put emphasis on RGBA decoding instead of RGB/BGR
Clarify doc at some rough spots
Misc typo fix and cosmetics
Change-Id: Ic5fcfcc5bf4d612c5de23b0a5499f1fadde55bfe
- WEBP_MUX_INVALID_PARAMETER: was used only at one place, and that too
should actually be an assert().
- WEBP_MUX_ERROR: was never used.
Change-Id: I8883cb4dfae7a7918507501f21fced0c04dda36a
minor revision shouldn't matter, we only check major revision number.
Bumped all version numbers so that incompatibility starts *now*
Change-Id: Id06c20f03039845ae4cfb3fd121807b931d67ee4
the VP8 decode functions do not need to be public; only
GetInfo/CheckSignature need to be marked extern as they're used by
libwebpmux.
Change-Id: Id9ab4d6166b0271cf5d04563c6dac1fcc84adbdc
- Match offsets, duration, width/height for frames/tiles and enforce
some constraints.
- Note that this also means using 'int's instead of 'uint32_t's for
16-bit and 24-bit fields.
Change-Id: If0b229ad9fce296372d961104aa36731a3b1304b
could have led to some negative overflow on 32bit arch
(if it was not for the "total_size == (size_t)total_size" test)
Change-Id: I7640340b605b9c674d30dd58a1e2144707299683
the extended file format is still under development and related
libs/binaries are not fit for release
configure:
--enable/disable-libwebpmux; default is disabled.
makefile.unix:
src/mux/libwebpmux.a and examples/webpmux must be explicitly specified
Makefile.vc:
$(DIRLIB)\libwebpmux.lib and $(DIRBIN)\webpmux.exe must be explicitly
specified
Change-Id: I8246746b256010dd2a2e4de58291222d7eaf0457
The image_info_ was used only in GetImageCanvasWidthHeight(). So, now
we infer it from data there.
This removal fixes a bug: earlier, 'image_info' wasn't initialized in
the WebPMuxCreate() flow, and so the canvas width/height were being
calculated to be zero.
Also, a related refactoring: Combine CreateImageInfo() and
CreateDataFromImageInfo() into a single function CreateFrameTileData().
Change-Id: I7b0afb0d36dc6e13b9d6a1135fb027aa4e03716c
Evaluated the impact of this change over 1000 image corpus.
The compression density is up (on average) by 1.2%, and encoding time has
gone down considerably, from 716 ms to 146 ms per file
(a 4.9X improvement in encoding time).
Change-Id: Ida562cc0bfe18c9d6f5f00873c95f8396b480eab
Adds new methods WebPPictureARGBToYUVA() and WebPPictureYUVAToARGB()
Depending on the value of picture->use_argb_input,
the main call WebPEncode() will convert appropriately.
Note that both conversions are lossy, so it's recommended to:
* use YUVA input for lossy compression (picture->use_argb_input=0)
* use ARGB input for lossless compression (picture->use_argb_input=1)
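Typical usage, as a sketch against this version of the public API:

    #include "webp/encode.h"

    /* Both conversions are lossy, per the note above. */
    int RoundTrip(WebPPicture* const pic) {
      if (!WebPPictureARGBToYUVA(pic, WEBP_YUV420)) return 0;
      return WebPPictureYUVAToARGB(pic);
    }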
Change-Id: I8269d607723ee8a1136b9f4999f7ff4e657bbb04
* add a real, proper pointer for holding the memory chunk
(instead of using the y and argb fields)
* polish the doc with details
* add a WebPPictureView() that extracts a view from a picture
without any copy (kind of a fast-crop).
* properly snap the top-left corner for Crop/View. Previously,
the luma position was not snapped, and was off compared to the chroma.
Change-Id: I8a3620c7f5fc6f7d1f8dd89d9da167c91e237439
'Set' and 'Get' methods for images take/return a bitstream as input,
instead of separate 'image' and 'alpha' arguments.
Also,
- Make WebPDataCopy() a public API
- Use WebPData for storing data in WebPChunk.
- Fix a potential memleak.
Change-Id: I4bf5ee6b39971384cb124b5b43921c27e9aabf3e
Check for valid bounds on 'dist' in the backward reference case.
Clamp it to 1 in case of zero or negative values.
Change-Id: I78e956d4595955efa02b1f9628b475093f6ee001