new option: -blend_alpha 0xrrggbb
also: don't force picture.use_argb value for lossless. Instead,
delay the YUVA<->ARGB conversion till WebPEncode() is called.
This makes the blending more accurate when the source is ARGB
and lossy compression is used (YUVA).
This has an effect on cropping/rescaling. E.g. for PNG, these
are now done in ARGB colorspace instead of YUV when lossy compression
is used.
Change-Id: I18571f1b1179881737a8dbd23ad0aa8cddae3c6b
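A minimal sketch of the resulting flow from the C-API side (illustrative only; it
assumes the usual libwebp entry points and the picture.use_argb flag mentioned above):

  #include <stdio.h>
  #include <stdlib.h>
  #include "webp/encode.h"

  /* Feed ARGB data to a lossy encode; the ARGB->YUVA conversion now happens
   * inside WebPEncode() rather than being forced at import time. */
  static int EncodeLossyFromARGB(const uint8_t* rgba, int w, int h, int stride) {
    WebPConfig config;
    WebPPicture pic;
    WebPMemoryWriter wrt;
    int ok;
    if (!WebPConfigPreset(&config, WEBP_PRESET_DEFAULT, 75.f)) return 0;
    if (!WebPPictureInit(&pic)) return 0;
    pic.width = w;
    pic.height = h;
    pic.use_argb = 1;                /* keep the source in ARGB form */
    WebPMemoryWriterInit(&wrt);
    pic.writer = WebPMemoryWrite;
    pic.custom_ptr = &wrt;
    ok = WebPPictureImportRGBA(&pic, rgba, stride) &&
         WebPEncode(&config, &pic);  /* conversion to YUVA happens here (lossy) */
    WebPPictureFree(&pic);
    if (ok) fprintf(stderr, "output size: %u bytes\n", (unsigned)wrt.size);
    free(wrt.mem);
    return ok;
  }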
user can now call WebPEncode() with any YUVA or ARGB format, for
lossy or lossless compression
also: simplified error reporting, which is done in WebPPictureARGBToYUVA()
and WebPPictureYUVAToARGB()
Change-Id: Ifb68909217175bcf5a050e5c68d06de9849468f7
(cherry picked from commit 07d87bda1b)
Saturation was done on the input coefficient, not the quantized one.
This saturation is not absolutely needed: the output of FTransformWHT
is in the range [-16320, 16321]. At quality 100, the max quantization step is 8,
so the maximal range used by QuantizeBlock() is [-2040, 2040].
But there's some extra bias (mtx->bias_[] and mtx->sharpen_[]) so
it's better to leave this saturation check for now.
addresses issue #145
Change-Id: I4b14f71cdc80c46f9eaadb2a4e8e03d396879d28
* merge cost calculation functions (BitsEntropy() and HuffmanCost())
* have HistogramAdd() specialized into separate functions
* use threshold to bail-out early
* revamp code a bit
* also: save memory by freeing histogram_image
Change-Id: I8ee5d2cfa1462d5d6ea6361f5c89925a3720ef55
This is required for WebP lossy+Alpha images, where the Alpha channel takes
60-70% of the compression (CPU) cycles.
Also evaluated on a 1000-PNG corpus: the overall compression speed
is 15-40% better for lossy (PNG+Alpha) compression.
The pure lossless compression numbers are almost the same (or slightly
better) with this change.
Change-Id: I9e5ae7372ed6227a9a5b64cd9cff84c747195a57
using a token buffer (that is: slightly more memory, O(output_size))
This change is ON by default. To return to previous behaviour, use
'cwebp -low_memory' or set config.low_memory to true.
Side-effect of this new mode: it forces 1 partition only (which was
default anyway), and makes some statistics about the bitstream
no longer available. cwebp will no longer report 'intra4-coeffs', etc.
This mode also doesn't work (yet) with multi-pass, and -low_memory
is currently forced for multi-pass.
also: reversed the flag: USE_TOKEN_BUFFER -> DISABLE_TOKEN_BUFFER
also: fixed the kAverageBytesPerMB estimate
Change-Id: I4ea80382038d6df4309663e0cb7bd88d9bca9cf1
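A minimal sketch of opting back into the previous behaviour from the C API
(illustrative; config.low_memory is the flag named above):

  #include "webp/encode.h"

  /* Revert to the pre-token-buffer behaviour (same as 'cwebp -low_memory'). */
  static int SetupLowMemoryConfig(WebPConfig* const config, float quality) {
    if (!WebPConfigInit(config)) return 0;  /* defaults + ABI version check */
    config->quality = quality;
    config->low_memory = 1;  /* smaller memory footprint; also forces a single
                              * partition and drops some bitstream statistics */
    return WebPValidateConfig(config);
  }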
broken since:
ad25032 Merge "multi-threaded alpha encoding for lossy"
this produced an error due to an empty VP8TBuffer struct.
Change-Id: I640809d07d20092c1d660e2b59b58a62a12e4371
new option: 'cwebp -mt ...'
new config flag: config.thread_level
(allowed thread_level values are 0 or 1 for now; maybe more later...)
If -mt is activated (and WEBP_USE_THREAD is used for compilation), the alpha compression
will be done in parallel with the RGB coding for lossy. This can save quite a bit of latency...
Has no effect for lossless encoding.
Change-Id: I769d0bf90e7380cf99344ad62cd77277f4df5a46
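A minimal sketch of the equivalent C-API setup (illustrative; config.thread_level
is the new flag):

  #include "webp/encode.h"

  /* Enable parallel alpha compression (same as 'cwebp -mt'). */
  static int SetupThreadedConfig(WebPConfig* const config, float quality) {
    if (!WebPConfigInit(config)) return 0;
    config->quality = quality;
    config->thread_level = 1;  /* only 0 or 1 for now; has no effect unless
                                * libwebp was built with WEBP_USE_THREAD */
    return WebPValidateConfig(config);
  }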
signed integer overflow behavior is undefined; split PrefixEncode() into
two branches to avoid this.
Change-Id: I6e2761d0d77f0aaceafdc4e07232e089c22beb64
This option remaps internal parameters to better match
the expected compression curve of JPEG and produce output files
of similar size, but with better quality.
Change-Id: I96a1cbb480b1f6a0c6845a23c33dfd63f197b689
also change the lossless encoder logic, which was relying on an explicit
NULL return from WebPSafeMalloc(0)
renamed the function to the more explicit CheckSizeArgumentsOverflow()
addresses issue #138
Change-Id: Ibbd51cc0281e60e86dfd4c5496274399e4c0f7f3
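A sketch of the kind of overflow guard such a size check performs (hypothetical
helper, not the actual utils code):

  #include <stddef.h>
  #include <stdint.h>

  /* Returns 1 if nmemb * size can be computed without wrapping and still
   * fits in a size_t, 0 otherwise. */
  static int SizeArgumentsFit(uint64_t nmemb, size_t size) {
    uint64_t total;
    if (size != 0 && nmemb > UINT64_MAX / size) return 0;  /* 64-bit overflow */
    total = nmemb * (uint64_t)size;
    return total == (uint64_t)(size_t)total;               /* fits a size_t? */
  }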
* treat the last coeff as a special case
* re-arrange the inner code to be shorter
* replace some VP8EncBands[n] by n, for n = 0 or 1
Change-Id: I71e17b014cffad7b073e787fde06260905a6953f
- Separate out mux.h and demux.h
- muxtypes.h: new header for data types common to mux/demux
- Move some misc read/write utilities to utils/utils.h
- Remove some duplicate methods.
- Separate out mux/demux libraries
Change-Id: If9b9569b10d55d922ad9317ef51710544315d6de
10-15% faster encoding.
Almost the same output, binary-wise. The main difference is
that we can't compute the uv_alpha susceptibility, which means there
can be subtle differences with different -sns values.
Change-Id: Id1b1a50929bf125b6372212fee1ed75a3bed975f
The number of pairs selected is limited between 25% of the histogram
images (at the start) and the number of histogram images left at any iteration.
Increased the range of iter_mult.
Removed min_cluster_size as a parameter for tuning HistogramCombine.
Change-Id: Ia4068cd7af4d0f63c5af9001aceda8a40b9de740
* commit 'v0.2.1':
Update ChangeLog
update NEWS
bump version to 0.2.1
libwebp: validate chunk size in ParseOptionalChunks
cwebp (windows): fix alpha image import on XP
autoconf/libwebp: enable dll builds for mingw
[cd]webp: always output windows errors
fix double to float conversion warning
cwebp: fix jpg encodes on XP
VP8LAllocateHistogramSet: fix overflow in size calculation
GetHistoBits: fix integer overflow
EncodeImageInternal: fix uninitialized free
fix the -g/O3 discrepancy for 32bit compile
fix the BITS=8 case
Make *InitSSE2() functions be empty on non-SSE2 platform
make *InitSSE2() functions be empty on non-SSE2 platform
make VP8DspInitNEON() public
Conflicts:
src/Makefile.am
src/dsp/dec_neon.c
Change-Id: Iddc5152e4a6892db96c12d7c3f74adbc85fe6178
- Changed the dynamic range where the more aggressive
(BackwardReferencesTraceBackward) heuristic is run: from quality > 10
(instead of quality > 25).
- Limit the backward-ref window size to 16*width & 256*width for lower
qualities ([0, 25[ & [25, 50[ respectively), instead of a 1M window.
- Evaluate the params for HashChainFindCopy outside this function call
and pass them, instead of recomputing them for every call.
Change-Id: If9eedfc14b978e7632d7cf69c96186e2910b0554
the multiplications done for total_size would be done with integers,
possibly overflowing, before being promoted to 64-bit for the addition
Change-Id: I32c3a6400fc2ef120c38e01a8693f4cb1727234d
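A sketch of the fix pattern (illustrative names and values, not the actual code):

  #include <stdint.h>

  static uint64_t AddImageSize(uint64_t total_size, int width, int height, int bpp) {
    /* Wrong: 'width * height * bpp' is evaluated in (32-bit) int and can wrap
     * before being promoted to 64 bits for the addition:
     *   return total_size + width * height * bpp;
     * Right: promote to 64 bits before multiplying. */
    return total_size + (uint64_t)width * height * bpp;
  }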
huff_image_size was a size_t (=32 bits with 32-bit builds) which could
roll over, causing an incorrectly sized allocation and a crash in lossless
encoding.
fixes issue #128
Change-Id: I0f20cee98c29b2b40b02607930b6b7a7ca56996d
in debug mode, some float operations see their intermediate
values stored in memory rather than staying in the FPU (which
uses 80-bit precision).
Several fixes are possible (breaking long calculations into
atomic steps, for instance), but the simplest of all is just
turning the cost[] array into float* instead of double*.
The code is a tad faster, and I didn't see any major output
size difference.
Change-Id: I053e6d340850f02761687e072b0782c6734d4bf8
LSIM stands for "local similarity": before matching
a compressed pixel to the source, we search around in the source
and minimise the squared error. So, this is close to PSNR calculation,
but mitigates some of its limitations (pure translation and noise for instance).
There's a new -print_lsim option to cwebp too.
Change-Id: Ia38561034c7a90e71d2ea0f55bb1de527eda245b
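A rough sketch of the idea (hypothetical code; the actual metric, window radius
and normalization live in the library's distortion code):

  #include <float.h>
  #include <stdint.h>

  /* For each compressed pixel, search a small window around the same position
   * in the source and keep the smallest squared error ("local similarity"). */
  static double LocalSimScore(const uint8_t* src, const uint8_t* cmp,
                              int width, int height, int radius) {
    double total = 0.;
    int x, y, dx, dy;
    for (y = 0; y < height; ++y) {
      for (x = 0; x < width; ++x) {
        double best = DBL_MAX;
        for (dy = -radius; dy <= radius; ++dy) {
          for (dx = -radius; dx <= radius; ++dx) {
            const int sx = x + dx, sy = y + dy;
            if (sx >= 0 && sx < width && sy >= 0 && sy < height) {
              const double diff = (double)src[sy * width + sx] - cmp[y * width + x];
              if (diff * diff < best) best = diff * diff;
            }
          }
        }
        total += best;                 /* accumulate the per-pixel minimum */
      }
    }
    return total / (width * height);   /* mean of the per-pixel minima */
  }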
Make the heuristic for combining Histograms a function of compression
quality. This change will speed-up compression time for compression
quality less than 75. The compression time/density remains unchanged
for compression quality 75 and higher.
Change-Id: I94513d51078340fbc0737d459fab2cebdd2d6082
the multiplications done for total_size would be done with integers,
possibly overflowing, before being promoted to 64-bit for the addition
Change-Id: Id5c127c8a497ce5de89a276c17f36b59eeb67c21
huff_image_size was a size_t (=32 bits with 32-bit builds) which could
roll over, causing an incorrectly sized allocation and a crash in lossless
encoding.
fixes issue #128
Change-Id: I175c8c6132ba9792034807c5c1028dfddfeb4ea5
in debug mode, some float operations see their intermediate
values stored in memory rather than staying in the FPU (which
uses 80-bit precision).
Several fixes are possible (breaking long calculations into
atomic steps, for instance), but the simplest of all is just
turning the cost[] array into float* instead of double*.
The code is a tad faster, and I didn't see any major output
size difference.
Change-Id: Icf1f833e15f8ee4ecc7f9a521d07fdc96ef711aa
Returning 0 (equal) can lead to undefined behaviour.
And, in our case we'll never have equal keys (added asserts for that)
Change-Id: Ifaf202df321d3f877ad2a03de42e0d6cdd1b2388
fixes the 'blocky sky problem' (a saturation problem: when luma was flat,
chroma noise was taking over, resulting in random segment ids being assigned,
when just using a common uniform segment would have been better).
+ side clean-up and readability/experimentability MACRO'ization
+ added '-map 7' option
Change-Id: I35982a9e43c0fecbfdd7b05e4813e8ba8c121d71
Added a threshold of MAX_COLORS_FOR_GRAPH for color-palettes, above
which the graph hint is ignored.
Change-Id: Ia5d7f45e52731b6eaf2806999d6be82861744fd3
spurious in this case, but addresses e.g.,
... potentially uninitialized local variable 'weighted_average' used
Change-Id: Ib99998bf49e4af7a82ee66f13fb850ca5b17dc71
Order-by-cost mostly unchanged (up to a scaling constant 1/log(2))
(except for a few minor diffs in < 2% of cases)
+ remove unused field cost_mode->cache_bits_
Change-Id: I714f8ab12f49a23f5d499a64c741382c9b489a3e
* ~1-4% faster
* if it's not used, don't use it
* remove the special handling of cache_bits = 0
* remove some tests in the loops
Change-Id: I19d87c3ca731052ff532ea8b2d8e89816507b75f
For low-color images, it may be better to not use color-palettes.
Users should treat this as just another hint (as with Photo &
Picture) and another parameter for tuning the compression density.
The optimum compression can still be obtained by running (outer loop)
compression with all possible tunable parameters.
Change-Id: Icb1a4face2a84774e16e801aee4a8ae97e232e8a
also relocate user_data from WebPAuxStats to the WebPPicture struct to
make clearing easier while placing it closer to the progress hook with
which it's used.
prior to this change some spurious lossless data could be reported in
the lossy (sans alpha) encoding case. additionally user_data could be
lost during lossless encoding.
Change-Id: I929fae3dfde4d445ff81bbaad51445ea586dd80b
* Extend AuxStats with new fields
it's slightly ABI-incompatible, but I guess it's ok for 0.1.99+
I expect to add more stats later, possibly (predictor stats, etc.)
* Have cwebp report the features used by lossless
compression (either for alpha or full lossless coding)
* Print the PSNR for alpha (useful in case of -alpha_q)
* clean-up alpha.c signatures
+ misc cleanup (added const '* const ptr', etc.)
Change-Id: I157a21581f1793cb0c6cc0882e7b0a2dde68a970
Evaluated the impact of this change over 1000 image corpus.
The compression density is up (on average) by 1.2% and encoding time has
gone down considerably from 716 ms (per file) to 146 ms (per file)
(4.9X improvement in encoding time).
Change-Id: Ida562cc0bfe18c9d6f5f00873c95f8396b480eab
Adds new methods WebPPictureARGBToYUVA() and WebPPictureYUVAToARGB()
Depending on the value of picture->use_argb_input,
the main call WebPEncode() will convert appropriately.
Note that both conversions are lossy, so it's recommended to:
* use YUVA input for lossy compression (picture->use_argb_input=0)
* use ARGB input for lossless compression (picture->use_argb_input=1)
Change-Id: I8269d607723ee8a1136b9f4999f7ff4e657bbb04
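A minimal sketch of calling the new conversions explicitly (illustrative; error
reporting goes through picture.error_code as noted above):

  #include <stdio.h>
  #include "webp/encode.h"

  /* Convert an ARGB picture to YUVA (e.g. before a lossy encode),
   * then back, checking the per-picture error code. */
  static int RoundTripColorspace(WebPPicture* const pic) {
    if (!WebPPictureARGBToYUVA(pic, WEBP_YUV420)) {
      fprintf(stderr, "ARGB->YUVA failed: error %d\n", pic->error_code);
      return 0;
    }
    if (!WebPPictureYUVAToARGB(pic)) {
      fprintf(stderr, "YUVA->ARGB failed: error %d\n", pic->error_code);
      return 0;
    }
    return 1;
  }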
* add a real, proper pointer for holding the memory chunk
(instead of using the y and argb fields)
* polish the doc with details
* add a WebPPictureView() that extracts a view from a picture
without any copy (kind of a fast-crop).
* properly snap the top-left corner for Crop/View. Previously,
the luma position was not snapped, and was off compared to the chroma.
Change-Id: I8a3620c7f5fc6f7d1f8dd89d9da167c91e237439
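A minimal sketch of the fast-crop usage (illustrative):

  #include "webp/encode.h"

  /* Extract a 100x100 view at (left, top) without copying pixels:
   * 'view' shares the memory owned by 'pic'. */
  static int MakeView(const WebPPicture* const pic, int left, int top,
                      WebPPicture* const view) {
    if (!WebPPictureView(pic, left, top, 100, 100, view)) return 0;
    /* ... encode or inspect 'view'; keep 'pic' alive while 'view' is in use ... */
    return 1;
  }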
Adds a test of enc->mb_h_ to VP8IteratorProgress() to avoid a division
by zero. Fixes issue #121.
Original patch from Even Rouault (even dot rouault at gmail dot com).
Change-Id: Ie5fcc1821860c0a9366d5aa11f3aded4f5b98ed7
This limit corresponds to the default histo_bits=3 for images up to size 400x400.
Any image larger than this will bump histo_bits up to 4 internally.
Change-Id: Ic8ba3dcd50e9c588cbbc4a0457289086498ff4ee
Fix inequality assertion on number of palette colors.
Fix inequality assertion test in BackwardReferencesHashChainFollowChosenPath.
Change-Id: Ie3242f1bbeaf96db91b839b6732ccce2634cebf3
Used when we don't code a Huffman Image.
-> Simplifies the code quite a bit because we don't have to
deal with special cases of histo bits
Change-Id: I0c3f46cbf3b501e021c093e07253e7404c01ff4f
Updated histo_bits to 3 from 4 and changed the quality threshold for the inner loop of HashChainFindCopy.
Impact: 0.5%-0.8% better bpp with 15%-20% hit on encoding throughput
at default encoding settings.
Change-Id: I316ef88403148b1e19036fa0817d944eb0301255
Ensure that the lossless bit-stream doesn't allow for such cases and
safeguard the decoder against indefinite recursion.
Change-Id: Ia6d7f519291de8739f79a977a5800982872aae71
When importing BGRA or RGBA data for encoding, provide variants of
the WEBPImportPicture API for RGBX and BGRX data, meaning the alpha
channel should be ignored.
Author: noel@chromium.org
from Chromium patch: https://chromiumcodereview.appspot.com/10496016/
Change-Id: I15fcaa4160c69a2b5549394204b6e6d7a1c5d333
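A minimal sketch of the import variant that ignores the alpha byte (illustrative):

  #include <stdint.h>
  #include "webp/encode.h"

  /* Import RGBX data: the 4th byte of each pixel is ignored. */
  static int ImportOpaque(WebPPicture* const pic,
                          const uint8_t* rgbx, int stride, int w, int h) {
    if (!WebPPictureInit(pic)) return 0;
    pic->width = w;
    pic->height = h;
    return WebPPictureImportRGBX(pic, rgbx, stride);
  }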
VP8-lossy will now avoid writing an ALPH chunk if the
alpha values are trivial.
+ changed DumpPicture() accordingly in cwebp
+ prevented the -d option from being active with lossless
(DumpPicture wouldn't work).
Change-Id: I34fdb108a2b6207e93fa6cd00b1d2509a8e1dc4b
Change the lossless signature to 0x2f
Add a 1-bit indicator for 'droppable (or trivial) alpha'.
Add a 3-bit lossless version (for future extensions like yuv support).
Change the sub-resolution information to 3 bits, implying the range [2 .. 9]
Change-Id: Ic7b8c069240bbcd326cf5d5d4cd2dde8667851e2
Take picture and percent value storage location instead of VP8Encoder.
This will allow reuse by the lossless encoder.
Change-Id: Ic49dbc800cc3e2df60d20f4ebac277f68ed6031b
The CostModelBuild() and TrackBackwards() return values weren't checked
+ code clean-up
+ de-inline VP8LBackwardRefs non-critical methods
+ shuffle the .h around to group things together
+ extract some constants as #define's
+ fixed the "if (!(cc_init = ...)) {...}" constructs
+ removed some unneeded VP8L prefixes
Change-Id: Ic634cb87bc6b2033242d3e8e8731fab4c134f327
we vary the lossless method linearly between 0 and 6,
and the lossless quality between 50 and 100, so that the encoding
speed can go from 'quite fast' to 'rather slow'.
The impact on size is moderate, but visible.
Change-Id: I0b7917e7170eb50258afb1a4e248028cd9e9207d
This saves ~26 bytes of headers.
* introduce new VP8LDecodeAlphaImageStream() for decoding
* use VP8LEncodeStream() for encoding
* refactor code a bit
still TODO: make the alpha-quality/enc-method user-configurable
Change-Id: I23e599bebe335cfb5868e746e076c3358ef12e71
* RIFF header is omitted
* rename NewVP8LEncoder and DeleteVP8Encoder
* change the signature to take a "const WebPPicture*"
(it was non-const just because we were setting some error potentially)
* made the pic_ field const in VP8LEncoder too.
* trap the bitwriter::error_ too
* simplify some signatures to take WebPPicture* instead
of unneeded VP8LEncoder*
VP8LEncodeStream() will be called directly to compress alpha channel
header-less.
Change-Id: Ibceef63d2b3fbc412f0dffc38dc05c2dee6b6bbf
This is a border-case situation: the picture is not const, because
we're changing its error status. But taking it non-const forces
the caller to carry a non-const picture all around the code just
in case (0.00001% of the time?) something bad happens.
This is pretty much the same as making all objects non-const because
we'll eventually call delete or free() on them, which is quite a
non-const operation. Well... Better to allow constness enforcement for
the remaining 99.9999% of the code.
Change-Id: I9b93892a189a50feaec1a3a518ebf488eb6ff22f
now, we only use 2 bits for the filtering method, and 2 bits
for the compression method.
There are two additional bits which are INFORMATIVE, to specify
whether the source has been pre-processed (level reduction)
during compression. This can be used at decompression time
for some post-processing (see DequantizeLevels()).
New relevant spec excerpt:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ChunkHeader('ALPH') |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Rsv| P | F | C | Alpha Bitstream... |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Compression method (C): 2 bits
: The compression method used:
* `0`: No compression.
* `1`: Backward reference counts encoded with arithmetic encoder.
Filtering method (F): 2 bits
: The filtering method used:
* `0`: None.
* `1`: Horizontal filter.
* `2`: Vertical filter.
* `3`: Gradient filter.
Pre-processing (P): 2 bits
: These INFORMATIVE bits are used to signal the pre-processing that has
been performed during compression. The decoder can use this information to
e.g. dither the values or smooth the gradients prior to display.
* `0`: no pre-processing
* `1`: level reduction
Decoders are not required to use this information in any specified way.
Reserved (Rsv): 2 bits
: SHOULD be `0`.
Alpha bitstream: _Chunk Size_ - `1` bytes
: Encoded alpha bitstream.
This optional chunk contains encoded alpha data for a single tile.
Either **ALL or NONE** of the tiles must contain this chunk.
The alpha channel data is losslessly stored as raw data (when
compression method is '0') or compressed using the lossless format
(when the compression method is '1').
Change-Id: Ied8f5fb922707a953e6a2b601c69c73e552dda6b
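A sketch of reading the one-byte pre-header per the diagram above (illustrative;
the field order follows the diagram, with Rsv in the two most significant bits
and C in the two least significant bits):

  #include <stdint.h>

  /* Split the first ALPH payload byte into its four 2-bit fields. */
  static void ParseAlphaPreHeader(uint8_t b, int* const rsv, int* const pre,
                                  int* const filter, int* const comp) {
    *comp   = (b >> 0) & 0x03;  /* C: compression method */
    *filter = (b >> 2) & 0x03;  /* F: filtering method   */
    *pre    = (b >> 4) & 0x03;  /* P: pre-processing     */
    *rsv    = (b >> 6) & 0x03;  /* Rsv: SHOULD be 0      */
  }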
* Method #1 is now calling the lossless encoder on the alpha plane.
Format is not final, it's just a first draft. We need ad-hoc functions.
* removed now useless utils/alpha.*
* added utils/quant_levels.h instead
* removed the TCoder code altogether
Change-Id: I636840b6129a43171b74860e0a0fc5bb1bcffc6a
-> a lot of simplifications ensue and we should be able to get rid of
ClearHuffmanTreeIfOnlyOneSymbol() too, in a subsequent patch.
Change-Id: Ic4c51d05e4b1970e37f94ffd85fae6a02e4a6422
Allocate one big chunk of memory in GetHuffBitLengthsAndCodes instead of
allocating in a loop. Also fixed a potential memory leak.
Change-Id: Idc23ffa306f76100217304444191a8d2fef9c44a
The size written in VP8L header should be without padding.
(Also clarified this code using consts).
Change-Id: Ic6583d760c0f52ef61924ab0330c65c668a12fdc
if histogram_image_size is reduced when writing the histogram_image,
the bit arrays would leak any remaining elements. Store their element
count separately.
Change-Id: I710142a11ebd4325faec7bd65c2d2572aae19307
The current implementation doesn't take care of the one-byte
signature and the associated one-byte padding (for odd-sized chunks).
Change-Id: I35b81d0644818cdba38189aa48c75db5f92e68f4
The new method VP8LGetBackwardReferences hides internal
heuristics used for choosing RLE or LZ77 based refs.
- Tuned VP8LHashChainFindCopy for better compression at higher Q.
- Refactored code.
- Removed the unused method VP8LVerifyBackwardReferences.
Change-Id: Ibb7bb072bab5a49a001577a20d88226f52e6c663
arrays can be passed directly as only their members are being modified.
this also reduces the allocation for bit_codes[] by taking the
sizeof(type)=2 rather than sizeof(ptr)=4/8 in one case.
Change-Id: Idad20cead58c218b58d90b71699374fefd01cad9
* lossless_encoder: (46 commits)
split StoreHuffmanCode() into smaller functions
more consolidation: introduce VP8LHistogramSet
big code clean-up and refactoring and optimization
Some cosmetics in histogram.c
Approximate FastLog between value range [256, 8192]
Forgot to update out_bit_costs to symbol_bit_costs at one instance.
Evaluate output cluster's bit_costs once in HistogramRefine.
Simple Huffman code changes.
Lossless decoder: remove an unneeded param in ReadHuffmanCodeLengths().
Reducing emerging palette size from 11 to 9 bits.
Move GetHistImageSymbols to histogram.c
Improve predict vs no-predict heuristic.
code-moving and clean-up
reduce memory usage by allocating only one histo
Restrict histo_bits to ensure histo_image size is under 32MB
further simplification for the meta-Huffman coding
A quick pass of cleanup in backward reference code
Make transform bits a function of encode method (-m).
introduce -lossless option, protected by USE_LOSSLESS_ENCODER
Run TraceBackwards for higher qualities.
...
Conflicts:
src/enc/webpenc.c
Change-Id: I9a5d98cba0889ea91d10699466939cc283da345a
VP8LHistogramSet is a container for pointers to histograms that
we can shuffle around. The allocation is one big chunk of memory.
The downside is that we don't de-allocate memory on the go during
HistogramRefine().
+ renamed HistogramRefine() into HistogramRemap(), so we don't
confuse it with "HistogramCombine"
+ made VP8LHistogramClear() static.
Change-Id: Idf1a748a871c3b942cca5c8050072ccd82c7511d
* de-inline some functions
* make VP8LBackwardRefs be more like a vector with max capacity
* add a bit_cost_ field to VP8LHistogram
* general code simplifications
* remove some memmove() from HistogramRefine
* simplify HistogramDistance()
...
Change-Id: I16904d9fa2380e1cf4a3fdddf56ed1fcadfa25dc
Avoid evaluating bit_costs every time in the HistogramDistance function. Also moved
VP8LInitBackwardRefs and VP8LClearBackwardRefs to backward_references.h
Change-Id: Id507f164d0fc64480aebc4a3ea3e6950ed377a60
No empty trees are coded with the simple Huffman code. The simple Huffman
code is simplified to be either a 1-bit code or 8-bit code for symbols.
Change-Id: I3e2813027b5a643862729339303d80197c497aff
This improves compression density. For example, at quality 95 on 1000 PNGs:
bpp(before) = 2.447 and bpp(after) = 2.412
Change-Id: I19c343ba05cca48a6940293721066502a5c3d693
* removed use_lz77_ field, and added cache_bits_ one.
* use more BackwardRefs params
* move code around to organize more logically
* reduce memory use on histo
...
Change-Id: I833217a1b950189cf486704049e3fe28382ce335
instead of lz77+rle
* introduce VP8LBackwardRefs structure and simplify the code by not passing
around {PixOrCopy/int} pairs.
More functions should be converted to use this struct (TODO(later)).
Change-Id: I69c5c9fa61dddd61a2abc2824d70b8606a1c55b6
* don't transmit the number of Huffman tree group explicitly
* move color-cache information before the meta-Huffman block
* also add a check that color_cache_bits is in [1..11] range, as per spec.
Change-Id: I81d7711068653b509cdbc1151d93e229c4254580
(Essentially, there was no need for a separate 'argb_palette' array. And
argb_palette[0] was never being set).
Change-Id: Id0a8c7e063d3af41e39fc9b8661611b51ccc55cd
so that it uses the original values of left, top, etc. for prediction rather than
the predicted values of the same. Also, did some renaming there to make it
more readable.
Change-Id: I2fe94e35a6700bd437f5c601e2af12323bf32445
- VP8LEncAnalyze, EvalAndApplySubtractGreen, ApplyPredictFilter,
ApplyCrossColorFilter
- Added palette handling and transform buffer management in VP8LEncodeImage()
- Add Transforms (subtract Green, Predict, cross_color) to dsp/lossless.c.
These are more-or-less copied from src/lossless code.
After this Change, will implement the EncodeImageInternal() method.
Change-Id: Idf71f803c24b3b5ae3b5079b15e019721784611d
src/dec/Makefile.am: add missing reference to vp8li.h
src/{dec,dsp,enc}/Makefile.am: move some headers to noinst_
Change-Id: I0e2bc69980bd8175d99ad0ab63f537ef9e425b77
- use common file organization across subdir makefiles
- append lib/source/header list variables and sort
Change-Id: I0653e1c73a4552b0c43d21f321b22b4972d6e87b
- remove some unused functions
- move global arrays from data to read only section
- explicitly cast malloc returns; not specifically necessary, but helps
show intent
- miscellaneous formatting
Change-Id: Ib15fe5b37fe6c29c369ad928bdc3a7290cd13c84
Add a dirty_ flag to keep track of updated probabilities and the need to
recompute the level costs.
This only makes a difference for "-m 2" method which was sub-optimal.
But it's overall cleaner to have this flag.
Change-Id: I21c71201e1d07a923d97a3adf2fbbd7d67d35433
This proved to be ok, even for large pictures, provided one
takes care of overflow. When an overflow is bound to occur, the
counters are renormalized.
Overall, shaves ~12k of memory.
Change-Id: I2ba21a407964fe1a34c352371cba15166e0c4548
the p0 proba was incorrectly accumulated. Merging its contribution into
the LevelCost[] was creating more problems than anything (esp. with trellis)
so let's just not.
Change-Id: I4c07bfee471085df901228d97b20a4d9606ba44e
These will report the 7x7-averaged PSNR or SSIM, using the
new internal function WebPPictureDistortion().
This is for information only. These flags have no encoding impact.
+misc opportunistic cosmetics
Change-Id: I64c0a7eca679134d39062e438886274b22bb643f
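A minimal sketch of the underlying API call (illustrative; the result array is
assumed to end with the overall value):

  #include <stdio.h>
  #include "webp/encode.h"

  /* Compare two pictures of the same size; type 0 = PSNR, 1 = SSIM. */
  static void PrintDistortion(const WebPPicture* const src,
                              const WebPPicture* const ref, int type) {
    float result[5];
    if (WebPPictureDistortion(src, ref, type, result)) {
      printf("overall %s: %f\n", (type == 0) ? "PSNR" : "SSIM", result[4]);
    }
  }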
to 'clean up' the fully-transparent area and make it more compressible
new cwebp flag: -alpha_cleanup (off by default, since the gain is not 100% guaranteed)
Change-Id: I74d77e1915eee146584cd61c9c1132a41db922eb
.. where only 2 filtering modes are potentially
tried, instead of all of them. This is faster than the exhaustive 'best'
mode, and not much worse.
Options for cwebp are:
-alpha_filter none
-alpha_filter fast (<- default)
-alpha_filter best (<- slow)
Change-Id: I8cb90ee11b8f981811e013ea4ad5bf72ba3ea7d4
Add predictive filtering option for Alpha plane.
Valid range for filter option is [0, 5] corresponding to prediction
methods none, horizontal, vertical, gradient & paeth filter.
The prediction method 5 will try all the prediction methods (0 to 4)
and pick the prediction method that gives the best compression.
Change-Id: I9244d4a9c5017501a9696c7cec5045f04c16d49b
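A sketch of the gradient predictor used by such filters (illustrative; the other
modes simply predict from the left or top neighbor):

  #include <stdint.h>

  /* Gradient filter: predict a pixel from its left, top and top-left
   * neighbors, clamped to [0, 255]; the filter stores value - prediction. */
  static uint8_t GradientPredictor(uint8_t left, uint8_t top, uint8_t top_left) {
    const int g = (int)left + top - top_left;
    return (uint8_t)((g < 0) ? 0 : (g > 255) ? 255 : g);
  }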
Extend WebP Encode functionality to encode Alpha data and produce
bit-stream (RIFF+VP8X+ALPH+VP8) corresponding to WebP-Alpha.
Change-Id: I983b4cd97be94a86a8e6d03b3b9c728db851bf48
This has been pointed out as useful information to have in the header (for
those who are not VP8-spec savvy)
Change-Id: I494b1da41dfafce882a94e3677d1cd6206bc504b
Was getting a compiler error when I included bit_writer.h from a non-libwebp
directory. bit_writer.h includes vp8enci.h and that uses VP8BitWriter without
having its definition.
Change-Id: I1ca82594292979b9eb7e60e2fffb22c16768dd30
Gathers all DSP-related functions (and SSE2 implementations).
Clean-up some unwanted symbolic dependencies so that webp_encode,
webp_decode and webp_dsp are truly independent libraries.
+ opportunistic clean-up:
* remove unneeded VP8DspInitTables(), now integrated in VP8DspInit()
* make consistent use of VP8GetCPUInfo() in the various DspInit() funcs
* change OUT macro to DST
using uint8_t for a bitfield is a gcc extension which msvc similarly supports,
but for greater compatibility, and to match the change already made in
dec/vp8i.h, update the remaining bitfield to use unsigned int.
Change-Id: Id9dca470345871e00e82893255a306dfe5d3fa29
Although it degrades quality, this option is useful to avoid the 512k
limit for partition #0.
If that is not enough to reach the lower bound of 4 bits per macroblock header,
one should also limit the number of segments used (down to -segments 1).
See the man file for extra details.
Change-Id: Ia59ffac13176c85b809ddd6340d37b54ee9487ea
was missing from the RD-computation of intra-4x4 score.
Doesn't change anything significantly, it's just More Correct.
Change-Id: I25c5b53a810d97e6fb7f98c549fd23bbe55e1bf4
~5-10% faster.
Heavy 8bit arithmetic trickery!
Patch by Somnath Banerjee (somnath at google dot com)
Change-Id: I9fd2c511d9f631e9cf4b008c46127b49fb527b47
Use _MSC_VER as the intrinsics compile without /arch:SSE2 on x86.
Also avoids applying the same flag to all files which defeated the
purpose of the runtime cpu-detection.
Thanks to Frank B. for the suggestion!
Change-Id: Iae9933a3cee704e663d9bbd53d0fa68e8c025425
picture->error_code can be looked up for finer error diagnose.
Added readable error messages to cwebp too.
Should close bug #75 (http://code.google.com/p/webp/issues/detail?id=75)
Change-Id: I8889d06642d90702f698cd5c27441a058ddb3636
sometimes, gcc inserts sse2 storeu instructions (like in VP8InitFilter())
with alignment requirements.
The bug was visible 'sometimes' in non-debug mode, when trying to use -af.
Change-Id: If3ec282bbbb9f9d0d33ca4b2c4bed46cd26fe495
we were reading past the end of the dqs[] array.
reported by Mathias Schindler (on cygwin only)
http://code.google.com/p/webp/issues/detail?id=71
Change-Id: Ib38c4c139e3cac3e8915626d63e16b403d6bbd63
+ add a simple rescaling function: WebPPictureRescale() for encoding
+ clean up the memory management around the alpha plane
+ fix some include paths by using "../webp/xxx.h" instead of "webp/xxx.h"
New flags for 'cwebp':
-resize <width> <height>
-444 (no effect)
-422 (no effect)
-400
Change-Id: I25a95f901493f939c2dd789e658493b83bd1abfa
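A minimal sketch of the new call (illustrative; mirrors 'cwebp -resize'):

  #include "webp/encode.h"

  /* Rescale an already-imported picture in place before encoding. */
  static int ResizeForEncode(WebPPicture* const pic, int new_width, int new_height) {
    return WebPPictureRescale(pic, new_width, new_height);  /* returns 1 on success */
  }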
This is a (minor) bitstream change: if the 'color_space' bit is set to '1'
(which is normally an undefined/invalid behaviour), we add extra data at the
end of partition #0 (so-called 'extensions')
Namely, we add the size of the extension data as 3 bytes (little-endian),
followed by a set of bits telling which extensions we're incorporating.
The data then _precedes_ these trailing tags.
This is all experimental, and you'll need to have
'#define WEBP_EXPERIMENTAL_FEATURES' in webp/types.h to enable this code
(at your own risk! :))
Still, this hack produces an almost-valid WebP file for decoders that don't
check this color_space bit. In particular, previous 'dwebp' (and, for instance,
Chrome) will recognize these files and decode them, but without the alpha
of course. Other decoders will just see random extra stuff at the end of
partition #0.
To experiment with the alpha-channel, you need to compile on Unix platform
and use PNGs for input/output.
If 'alpha.png' is a source with alpha channel, then you can try (on Unix):
cwebp alpha.png -o alpha.webp
dwebp alpha.webp -o test.png
cwebp now has a '-noalpha' flag to ignore any alpha information from the
source, if present.
More hacking and experimenting welcome!
Change-Id: I3c7b1fd8411c9e7a9f77690e898479ad85c52f3e
For now, SSE2 functions are compiled a-minima: only on platforms
where __SSE2__ is defined. Let's later add some autoconf-based
config to enable/disable at will.
One can disable SSE2 at run-time by hooking-up VP8GetInfo.
There is a new option "-noasm" in cwebp for that.
Output should be binary the same between C and SSE2 version. If not,
that's a bug!
patch by Christian Duvivier (cduvivier at google dot com)
Change-Id: Iae006c3cdcb7e8280e846cedb94d239dab1e42ae
to return the sum directly.
output is bitwise the same, speed up 1-2%. This is preparatory to a
more efficient SSE2 implementation.
Change-Id: I0bcdf05808c93420fbe9dcb75e5e7e55a4ae5b89
Makes things lighter at the expense of requiring the user
to be up-to-date for autotools.
patch by Jan Engelhardt (jengelh at medozas dot de)
Change-Id: Icfcab2d899828a213d9fade0dab350dacd0c070a
going down to strict -ansi c89 is quite overkill (no 'inline',
and /* */-style comments).
But with these fixes, the code compiles with the stringent flags:
-Wextra -Wold-style-definition -Wmissing-prototypes
-Wmissing-declarations and -Wdeclaration-after-statement
Change-Id: I36222f8f505bcba3d9d1309ad98b5ccb04ec17e3
WebPGetDecoderVersion() and WebPGetEncoderVersion()
will now return 0.1.2 encoded as 0x000102
dwebp and cwebp also have a new "-version" flag
Change-Id: I4fb4b5a8fc4e53681a386ff4b74fffb639fa237a
use top_srcdir rather than top_builddir for AM_CPPFLAGS
add EXTRA_DIST to man Makefile. fixes distcheck target.
Change-Id: I308dc1c98f096de1efe188f63d040ef953598e78
we'll always encode using absolute values, not relative ones.
Both methods use the same number of bits, so we'll go for the
simpler and most robust one.
+ add some extra checks about pic->u/v being NULL.
Change-Id: I98ea01a1a6b133ab3c816c0fbc50e18269bd2098
converts PNG & JPEG to WebP
This is an experimental early version, with lots of room
for later optimizations in both speed and quality.
Compile with the usual `./configure && make`
Command line example is examples/cwebp
Usage:
cwebp [options] -q quality input.png -o output.webp
where 'quality' is between 0 (poor) and 100 (very good).
Typical value is around 80.
More encoding options with 'cwebp -longhelp'
Change-Id: I577a94f6f622a0c44bdfa9daf1086ace89d45539