-> A lot of simplifications ensue, and we should be able to get rid of
ClearHuffmanTreeIfOnlyOneSymbol() too in a subsequent patch.
Change-Id: Ic4c51d05e4b1970e37f94ffd85fae6a02e4a6422
Allocate one big chunk of memory in GetHuffBitLengthsAndCodes() instead of
allocating in a loop.
Also fixed a potential memory leak.
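A rough sketch of the pattern (hypothetical helper, not the actual
GetHuffBitLengthsAndCodes code):

  #include <stdint.h>
  #include <stdlib.h>

  /* One allocation for all code lengths and codes instead of a pair of
   * mallocs per histogram inside the loop; the 16-bit array comes first
   * so it stays aligned, and free(*codes) releases everything. */
  static int AllocCodeArrays(size_t num_histograms, size_t num_symbols,
                             uint16_t** codes, uint8_t** lengths) {
    const size_t total = num_histograms * num_symbols;
    uint16_t* const mem =
        (uint16_t*)malloc(total * (sizeof(uint16_t) + sizeof(uint8_t)));
    if (mem == NULL) return 0;   /* single failure point, nothing to unwind */
    *codes = mem;
    *lengths = (uint8_t*)(mem + total);
    return 1;
  }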
Change-Id: Idc23ffa306f76100217304444191a8d2fef9c44a
The size written in the VP8L header should not include the padding.
(Also clarified this code using consts.)
Change-Id: Ic6583d760c0f52ef61924ab0330c65c668a12fdc
When we are at end-of-stream, but haven't decoded all pixels, we should
return an error.
Also remove an obsolete TODO.
Change-Id: I3fb1646136e706da536d537a54d1fa487a890630
- Symbols added to the tree are valid inside HuffmanTreeBuildExplicit().
- In HuffmanTreeBuildImplicit(), make sure 'root_symbol' is
valid in case of a single symbol tree.
Change-Id: I7de5de71ff28f41e2d6228b29ed8dd4a20813e99
If histogram_image_size is reduced when writing the histogram_image, the
bit arrays would leak any remaining elements; store their element count
separately.
Change-Id: I710142a11ebd4325faec7bd65c2d2572aae19307
[Basically, the condition "src - dist < data" can be wrongly evaluated
to be false if "src < dist" due to underflow. Instead, "src - data <
dist" is the correct condition, as "src > data" is always true and so
there would never be an underflow].
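A minimal illustration of the two conditions (hypothetical helper, not the
actual decoder code):

  #include <stddef.h>
  #include <stdint.h>

  /* 'data' is the start of the decoded buffer, 'src' the current position,
   * 'dist' the backward-reference distance. */
  static int RefIsOutOfBounds(const uint32_t* data, const uint32_t* src,
                              size_t dist) {
    /* Wrong: when dist exceeds (src - data), 'src - dist' points before the
     * buffer; the subtraction can wrap and the test comes out false. */
    /* return (src - dist < data); */
    /* Correct: 'src >= data' always holds, so 'src - data' is a plain
     * non-negative count that is safe to compare against 'dist'. */
    return ((size_t)(src - data) < dist);
  }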
Change-Id: Ic9f64bfe76a9acae97abc1fb7c1f4868e81f1eb8
CompareHuffmanTrees() and SetBitDepths()):
- Move 'tree_size' initialization and malloc for 'tree + tree_pool'
outside the loop.
- Some renames/tweaks for readability.
Change-Id: I5cb3cc942afac6e9f51a0b97c57ee897677a48a2
The current implementation doesn't take care of the one-byte signature and
the associated one-byte padding (for an odd-sized chunk).
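The accounting, roughly (a hedged sketch with made-up names; the RIFF chunk
header is 8 bytes, the lossless signature is 1 byte, and an odd-sized payload
gets 1 pad byte):

  #include <stdint.h>

  #define CHUNK_HEADER_SIZE 8u   /* 4-byte tag + 4-byte size field */
  #define SIGNATURE_SIZE    1u   /* the one-byte lossless signature */

  static uint64_t ChunkDiskSize(uint64_t coded_size) {
    const uint64_t payload_size = SIGNATURE_SIZE + coded_size;
    const uint64_t padding = payload_size & 1u;   /* 1 byte iff payload is odd */
    return CHUNK_HEADER_SIZE + payload_size + padding;
  }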
Change-Id: I35b81d0644818cdba38189aa48c75db5f92e68f4
The new method VP8LGetBackwardReferences hides the internal heuristics
used for choosing RLE- or LZ77-based refs (see the sketch below).
- Tuned VP8LHashChainFindCopy for better compression at higher Q.
- Refactored code.
- Removed the unused method VP8LVerifyBackwardReferences.
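The heuristic, roughly (hedged sketch with hypothetical types and helpers,
not the real API):

  #include <stdint.h>

  typedef struct BackwardRefs BackwardRefs;   /* hypothetical, defined elsewhere */
  BackwardRefs* BackwardRefsRle(const uint32_t* argb, int w, int h);
  BackwardRefs* BackwardRefsLz77(const uint32_t* argb, int w, int h);
  double EstimateBitCost(const BackwardRefs* refs);
  void BackwardRefsFree(BackwardRefs* refs);

  /* Build both candidate streams, estimate the entropy-coded cost of each
   * and keep the cheaper one; callers no longer see the choice. */
  static BackwardRefs* GetBackwardReferences(const uint32_t* argb,
                                             int w, int h) {
    BackwardRefs* const rle = BackwardRefsRle(argb, w, h);
    BackwardRefs* const lz77 = BackwardRefsLz77(argb, w, h);
    if (EstimateBitCost(lz77) <= EstimateBitCost(rle)) {
      BackwardRefsFree(rle);
      return lz77;
    }
    BackwardRefsFree(lz77);
    return rle;
  }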
Change-Id: Ibb7bb072bab5a49a001577a20d88226f52e6c663
Arrays can be passed directly as only their members are being modified.
This also reduces the allocation for bit_codes[] by using
sizeof(type)=2 rather than sizeof(ptr)=4/8 in one case.
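Illustration of that sizing fix (placeholder names):

  #include <stdint.h>
  #include <stdlib.h>

  static uint16_t* AllocBitCodes(size_t count) {
    /* size by the element type: 2 bytes per entry... */
    return (uint16_t*)malloc(count * sizeof(uint16_t));
    /* ...not by a pointer type: count * sizeof(uint16_t*) = 4/8 per entry */
  }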
Change-Id: Idad20cead58c218b58d90b71699374fefd01cad9
add proper cpu-detection for Android targets
Fixes issue #118 (and is a better solution for #117).
based on a patch by Pepijn Vaneeckhoudt
Change-Id: I6b00ea6d51ca658ccf6a3d55b87b99c01c6805be
* lossless_encoder: (46 commits)
split StoreHuffmanCode() into smaller functions
more consolidation: introduce VP8LHistogramSet
big code clean-up and refactoring and optimization
Some cosmetics in histogram.c
Approximate FastLog between value range [256, 8192]
Forgot to update out_bit_costs to symbol_bit_costs at one instance.
Evaluate output cluster's bit_costs once in HistogramRefine.
Simple Huffman code changes.
Lossless decoder: remove an unneeded param in ReadHuffmanCodeLengths().
Reducing emerging palette size from 11 to 9 bits.
Move GetHistImageSymbols to histogram.c
Improve predict vs no-predict heuristic.
code-moving and clean-up
reduce memory usage by allocating only one histo
Restrict histo_bits to ensure histo_image size is under 32MB
further simplification for the meta-Huffman coding
A quick pass of cleanup in backward reference code
Make transform bits a function of encode method (-m).
introduce -lossless option, protected by USE_LOSSLESS_ENCODER
Run TraceBackwards for higher qualities.
...
Conflicts:
src/enc/webpenc.c
Change-Id: I9a5d98cba0889ea91d10699466939cc283da345a
VP8LHistogramSet is a container for pointers to histograms that
we can shuffle around. Allocation is one big chunk of memory
(a sketch follows below). The downside is that we don't de-allocate
memory on-the-go during HistogramRefine().
+ renamed HistogramRefine() to HistogramRemap(), so we don't
confuse it with "HistogramCombine"
+ made VP8LHistogramClear() static.
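A hedged sketch of the layout with simplified types (the real
VP8LHistogramSet fields differ):

  #include <stdint.h>
  #include <stdlib.h>

  typedef struct { uint32_t counts[256]; double bit_cost; } Histo;  /* simplified */
  typedef struct { int size; Histo** histograms; } HistoSet;

  /* The set header, the pointer array and every histogram live in a single
   * calloc'd block: the pointers can be shuffled freely during combining,
   * and one free(set) releases everything. */
  static HistoSet* HistoSetNew(int size) {
    const size_t total = sizeof(HistoSet) + (size_t)size * sizeof(Histo*)
                       + (size_t)size * sizeof(Histo);
    HistoSet* const set = (HistoSet*)calloc(1, total);
    int i;
    if (set == NULL) return NULL;
    set->size = size;
    set->histograms = (Histo**)(set + 1);
    for (i = 0; i < size; ++i) {
      set->histograms[i] = (Histo*)(set->histograms + size) + i;
    }
    return set;
  }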
Change-Id: Idf1a748a871c3b942cca5c8050072ccd82c7511d
* de-inline some functions
* make VP8LBackwardRefs be more like a vector with a max capacity
  (sketched below)
* add bit_cost_ field to VP8LHistogram
* general code simplifications
* remove some memmove() from HistogramRefine
* simplify HistogramDistance()
...
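The "vector with a max capacity" shape, roughly (hedged sketch, simplified
fields):

  #include <stdint.h>

  typedef struct { uint32_t dist; uint16_t len; } Ref;   /* placeholder entry */
  typedef struct {
    Ref* refs;
    int size;        /* entries currently in use */
    int max_size;    /* fixed capacity, set once at allocation time */
  } BackwardRefs;

  /* Appends never reallocate; the capacity is a hard limit chosen up-front. */
  static int RefsPush(BackwardRefs* const v, Ref r) {
    if (v->size >= v->max_size) return 0;
    v->refs[v->size++] = r;
    return 1;
  }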
Change-Id: I16904d9fa2380e1cf4a3fdddf56ed1fcadfa25dc
Profiled data: Profiled a few images and found that, in the function
VP8LFastLog, 90% of the time a table lookup is performed, while the rest of
the time (10%) a call to the log function is made. A typical lookup accounts
for ~10 CPU instructions and a call to log for ~200, so the weighted average
comes out to about 30 instructions per call. For mid qualities (25-75), this
function (VP8LFastLog) accounts for 30-50% of total CPU cycles (via the call
path: VP8LColorSpaceTransform -> PredictionCostCrossColor -> ShannonEntropy).
After this change, log is called less than 1% of the time, with an average of
15 instructions per call.
Measured the performance over 1000 files for various qualities and found an
overall compression speedup of 10-15% (in the quality range [0, 75]). The
compression density loss is around 0.5% (though at some qualities, compression
is a little better as well).
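A hedged sketch of the scheme (constants and the mid-range approximation are
illustrative, not the exact VP8LFastLog code):

  #include <math.h>
  #include <stdint.h>

  #define LOG_LOOKUP_SIZE 256
  #define LOG_APPROX_MAX  8192
  static float kLogTable[LOG_LOOKUP_SIZE];

  static void InitLogTable(void) {
    int i;
    kLogTable[0] = 0.f;
    for (i = 1; i < LOG_LOOKUP_SIZE; ++i) kLogTable[i] = (float)log((double)i);
  }

  static float FastLog(uint32_t v) {
    if (v < LOG_LOOKUP_SIZE) return kLogTable[v];   /* exact, most calls */
    if (v < LOG_APPROX_MAX) {
      /* approximate by dropping low bits: log(v) ~= log(v >> s) + s*log(2),
       * with a small bounded error in this range */
      int s = 0;
      while ((v >> s) >= LOG_LOOKUP_SIZE) ++s;
      return kLogTable[v >> s] + s * 0.6931472f;
    }
    return (float)log((double)v);                   /* rare slow path */
  }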
Change-Id: I247bc6a8d4351819c871f19d65455dc23aea8650
Avoid evaluating bit_costs every time in the function HistogramDistance
(sketched below). Also moved VP8LInitBackwardRefs and VP8LClearBackwardRefs
to backward_references.h.
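The caching, roughly (hedged sketch with simplified fields):

  #include <stdint.h>

  typedef struct {
    uint32_t counts[256];
    double bit_cost;   /* entropy estimate, updated only when counts change */
  } Histo;

  /* The per-pair distance reuses the cached costs instead of recomputing
   * the entropy of 'a' and 'b' on every call during clustering. */
  static double HistoDistance(const Histo* a, const Histo* b,
                              double combined_bit_cost) {
    return combined_bit_cost - a->bit_cost - b->bit_cost;
  }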
Change-Id: Id507f164d0fc64480aebc4a3ea3e6950ed377a60
No empty trees are coded with the simple Huffman code. The simple Huffman
code is simplified to use either a 1-bit or an 8-bit code per symbol.
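A hedged sketch of storing such a simple code (the bit-writer type and the
WriteBits helper are placeholders):

  #include <stdint.h>

  typedef struct BitWriter BitWriter;                        /* placeholder */
  void WriteBits(BitWriter* bw, int n_bits, uint32_t bits);  /* placeholder */

  static void StoreSimpleCode(BitWriter* bw, int num_symbols /* 1 or 2 */,
                              const int symbols[2]) {
    const int first_is_8bits = (symbols[0] > 1);
    WriteBits(bw, 1, (uint32_t)(num_symbols - 1));  /* how many symbols follow */
    WriteBits(bw, 1, (uint32_t)first_is_8bits);     /* 1-bit or 8-bit first symbol */
    WriteBits(bw, first_is_8bits ? 8 : 1, (uint32_t)symbols[0]);
    if (num_symbols == 2) WriteBits(bw, 8, (uint32_t)symbols[1]);
  }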
Change-Id: I3e2813027b5a643862729339303d80197c497aff