Mostly: avoid doing calculations like ptr + j * stride
when stride is an 'int'. Rather, use size_t or pointer increments
(ptr += stride) when possible.
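For illustration, a hedged sketch of the two patterns (function names are
made up, not from the actual patch):

  #include <stddef.h>
  #include <stdint.h>

  // Risky: with a 32-bit 'int' stride, the product j * stride is computed
  // in 'int' and can overflow for large images before being added to ptr.
  static void FillRowsRisky(uint8_t* ptr, int stride, int height) {
    int j;
    for (j = 0; j < height; ++j) {
      uint8_t* const row = ptr + j * stride;   // int multiply, may overflow
      row[0] = 0;
    }
  }

  // Safer: widen the stride to size_t, or simply step the pointer.
  static void FillRowsSafe(uint8_t* ptr, size_t stride, int height) {
    int j;
    for (j = 0; j < height; ++j) {
      ptr[0] = 0;
      ptr += stride;                           // pointer increment instead
    }
  }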
BUG=webp:314
Change-Id: I81c684b515dd1ec4f601f32d50a6e821c4e46e20
(cherry picked from commit e2affacc35)
DGifGetExtension() may return successfully, but the data pointer should
still be validated.
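A minimal sketch of the check using the standard giflib calls (surrounding
error handling trimmed):

  #include <gif_lib.h>

  // Even when DGifGetExtension() returns GIF_OK, 'data' may legitimately
  // be NULL and must be checked before reading data[0] (the block size).
  static int ReadExtension(GifFileType* const gif) {
    int ext_code;
    GifByteType* data = NULL;
    if (DGifGetExtension(gif, &ext_code, &data) == GIF_ERROR) {
      return 0;   // hard failure
    }
    if (data == NULL) {
      return 1;   // success, but no data to process
    }
    // ... safe to inspect data[0] and the payload here ...
    return 1;
  }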
BUG=webp:310
Change-Id: I6cfe617871fef2fe07887e5f48bb20f7ab7cfb35
(cherry picked from commit 806f6279ae)
This is essentially a revert of a3611513d2
and cfbcc5ece0.
Here is what happened: there was a corruption bug that eventually
got fixed by 0174d18d8b.
But before the root cause was found, a3611513d2
and cfbcc5ece0 hid the bug
by not imposing a length of 1 when it was actually 2 or 3 (which does help
compression, as a literal is more efficient than an offset and a length
of size 2 or 3).
Change-Id: I6f18fc1f583a51ac9d8aab2508458264047cd493
convert the assert() to an error check to avoid crashing when reading
malformed files.
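A hypothetical sketch of the pattern (the names are not from the actual
patch): validate untrusted input and return an error instead of
assert()'ing, so a malformed file fails cleanly rather than aborting.

  #include <stddef.h>
  #include <stdint.h>

  typedef enum { PARSE_OK = 0, PARSE_ERROR = 1 } ParseStatus;

  static ParseStatus CheckChunk(const uint8_t* data, size_t data_size,
                                size_t chunk_size) {
    // before: assert(chunk_size <= data_size);
    if (data == NULL || chunk_size > data_size) {
      return PARSE_ERROR;   // malformed input: report, don't crash
    }
    // ... process the chunk ...
    return PARSE_OK;
  }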
BUG=webp:302
Change-Id: I25eed9cab5c0a439bd3411beacc83f3a27af2bbf
this function can be called not to decode pixels, but simply
to finish processing (through process_func()) the already-decoded
pixels.
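A hedged sketch of that calling pattern (all names hypothetical): the
function may be entered with nothing left to decode, purely so that
process_func() finishes the rows produced by earlier calls.

  typedef void (*ProcessFunc)(int num_rows);

  static void DecodeImageDataSketch(int rows_to_decode, int pending_rows,
                                    ProcessFunc process_func) {
    if (rows_to_decode == 0) {
      process_func(pending_rows);  // decode nothing; just finish processing
      return;
    }
    // ... decode rows_to_decode rows, then hand them to process_func() ...
  }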
Change-Id: I80485e92e3c47f0aa3389476dcb82745a243fc4a
The optimization for (len != MIN_LENGTH) actually only holds for
(len > MIN_LENGTH), but (len < MIN_LENGTH) can now happen since len can
be changed earlier in the loop.
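A sketch of the corrected test (the MIN_LENGTH value is illustrative only):

  #define MIN_LENGTH 4   // illustrative value, not the real constant

  // The shortcut is only valid for strictly longer matches, and len may
  // now drop below MIN_LENGTH earlier in the loop.
  static int TakeShortcut(int len) {
    // before: return len != MIN_LENGTH;  // wrongly true for len < MIN_LENGTH
    return len > MIN_LENGTH;
  }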
Change-Id: I3f9f91a540206c80385c5fba96c3d64ab9536752
On the non-fast path (use_8b_decode_=0) for decoding the alpha-mask,
we could end up requesting ApplyInverseTransform() with more rows
to process than NUM_ARGB_CACHE_ROWS. This could only happen on the
very last bottom rows of the image.
* ProcessRows() doesn't need to be fixed, since we never request more
than NUM_ARGB_CACHE_ROWS rows. Added an assert for that.
* the use_8b_decode_=1 case doesn't use argb_cache_, but rather makes
the palette-decoding call directly. So, no problem here either.
Only the generic (and rather rare) case of calling ExtractAlphaRows()
was affected.
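A hedged sketch of the fix (signature simplified, not the actual code):
hand the inverse transform at most NUM_ARGB_CACHE_ROWS rows per call,
even for the last rows of the image.

  #define NUM_ARGB_CACHE_ROWS 16   // as in the lossless decoder

  static void ExtractAlphaRowsSketch(int num_rows) {
    while (num_rows > 0) {
      const int rows = (num_rows > NUM_ARGB_CACHE_ROWS) ? NUM_ARGB_CACHE_ROWS
                                                        : num_rows;
      // ApplyInverseTransform(rows, ...);  // never sees more than the cache
      num_rows -= rows;
    }
  }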
Change-Id: I58e28d590dcc08c24d237429b79614abcef1db7c
This reverts to the old behavior, which is actually better for
compression and speed with the latest patches.
Change-Id: I35884bab02589297c25d6e1e66dc5f13e05f7aa7
This was defined (slightly differently) in two places. Created a common
method and moved it to utils/utils.[hc].
Change-Id: I66c3ac6dea24e0cd2c0eaa5440f3142b4dbbe23b
we don't need to store the resulting histogram, so there is no need to
call HistogramAddEval().
This allows some signature simplifications.
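A generic sketch of the idea (not the actual libwebp signatures): when only
the cost of a merged histogram is needed, accumulate it on the fly instead
of building and storing the merged histogram first.

  #include <math.h>

  // Stand-in per-symbol cost; a placeholder for the real entropy estimate.
  static double SymbolCost(int count) {
    return (count > 0) ? -(double)count * log2((double)count) : 0.;
  }

  // Evaluate the cost of merging two histograms without materializing
  // the merged histogram itself.
  static double MergedCost(const int* a, const int* b, int size) {
    int i;
    double cost = 0.;
    for (i = 0; i < size; ++i) cost += SymbolCost(a[i] + b[i]);
    return cost;
  }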
Change-Id: I3fff6c45f4a7c6179499c6078ff159df4ca0ac53
In the case where the same offset is found for consecutive pixels,
the cost computation from one pixel can be re-used for the next.
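A hedged sketch of the caching (all names hypothetical): when pixel i
resolves to the same offset as pixel i - 1, the per-offset cost does not
need to be recomputed.

  static void ComputeCosts(const int* offsets, int num_pixels,
                           double (*OffsetCost)(int), double* costs) {
    int i;
    int prev_offset = -1;
    double prev_cost = 0.;
    for (i = 0; i < num_pixels; ++i) {
      const int offset = offsets[i];
      if (offset != prev_offset) {
        prev_cost = OffsetCost(offset);  // recompute only on offset change
        prev_offset = offset;
      }
      costs[i] = prev_cost;              // re-used for repeated offsets
    }
  }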
Change-Id: Ic03c7d4ab95f3612eafc703349cfefd75273c3d7
and also recycle the malloc'd intervals.
This avoids quite a few malloc/free cycles during interval management.
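A hedged free-list sketch (the Interval layout is illustrative): released
intervals are chained for reuse, so steady-state processing stops hitting
malloc()/free() entirely.

  #include <stdlib.h>

  typedef struct Interval {
    struct Interval* next_;
    int start_, end_;
  } Interval;

  static Interval* recycled = NULL;   // single-threaded sketch

  static Interval* IntervalNew(void) {
    if (recycled != NULL) {
      Interval* const iv = recycled;  // pop from the recycle list
      recycled = iv->next_;
      return iv;
    }
    return (Interval*)malloc(sizeof(Interval));
  }

  static void IntervalRelease(Interval* const iv) {
    iv->next_ = recycled;             // push instead of free()
    recycled = iv;
  }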
Change-Id: Ic2892e7c0260d0fca0e455d4728f261fb4c3800e