**What is the goal of this PR?**

Compress reader font bitmaps to reduce flash usage by 30.7%.

**What changes are included?**

- New `EpdFontGroup` struct and extended `EpdFontData` with `groups`/`groupCount` fields
- `--compress` flag in `fontconvert.py`: groups glyphs and compresses each group with raw DEFLATE
- `FontDecompressor` class with a 4-slot LRU cache for on-demand decompression during rendering
- `GfxRenderer` transparently routes bitmap access through `getGlyphBitmap()` (compressed or direct flash)
- Uses `uzlib` for decompression with minimal heap overhead
- 48 reader fonts (Bookerly, NotoSans 12-18pt, OpenDyslexic) regenerated with compression; 5 UI fonts unchanged
- Round-trip verification script (`verify_compression.py`) runs as part of font generation

| | Baseline | font-compression | Difference |
|---|---|---|---|
| Flash (ELF) | 6,302,476 B (96.2%) | 4,365,022 B (66.6%) | -1,937,454 B (-30.7%) |
| firmware.bin | 6,468,192 B | 4,531,008 B | -1,937,184 B (-29.9%) |
| RAM | 101,700 B (31.0%) | 103,076 B (31.5%) | +1,376 B (+0.5%) |

Comparison of the uncompressed baseline vs. script-based group compression (4-slot LRU cache, cleared each page). Glyphs are grouped by Unicode block (ASCII, Latin-1, Latin Extended-A, Combining Marks, Cyrillic, General Punctuation, etc.) rather than in sequential groups of 8.
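The grouping-plus-DEFLATE scheme above can be sketched in Python. This is a hedged illustration, not the project's actual `fontconvert.py`: the block table and helper names here are hypothetical, but a negative `wbits` in Python's `zlib` does produce the raw DEFLATE stream (no zlib header/trailer) that the PR describes and that `uzlib` can inflate.

```python
import zlib

# Illustrative subset of the Unicode blocks named in the PR description.
BLOCKS = [
    (0x0000, 0x007F, "ASCII"),
    (0x0080, 0x00FF, "Latin-1"),
    (0x0100, 0x017F, "Latin Extended-A"),
    (0x0300, 0x036F, "Combining Marks"),
    (0x0400, 0x04FF, "Cyrillic"),
    (0x2000, 0x206F, "General Punctuation"),
]

def block_of(cp):
    """Map a codepoint to its Unicode block name (hypothetical helper)."""
    for lo, hi, name in BLOCKS:
        if lo <= cp <= hi:
            return name
    return "Other"

def group_glyphs(codepoints):
    """Bucket glyph codepoints by Unicode block, as the PR describes."""
    groups = {}
    for cp in sorted(codepoints):
        groups.setdefault(block_of(cp), []).append(cp)
    return groups

def deflate_raw(data):
    """Raw DEFLATE: negative wbits suppresses the zlib header/checksum."""
    c = zlib.compressobj(level=9, wbits=-15)
    return c.compress(data) + c.flush()

def inflate_raw(data):
    return zlib.decompress(data, wbits=-15)

# Round-trip check per group, in the spirit of verify_compression.py.
bitmap = bytes(range(64)) * 16          # stand-in for a group's glyph bitmaps
assert inflate_raw(deflate_raw(bitmap)) == bitmap
```

Keeping the stream headerless minimizes per-group overhead, which matters when many small groups are compressed independently.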
**Render time**

| | Baseline | Compressed (cold cache) | Difference |
|---|---|---|---|
| **Median** | 414.9 ms | 431.6 ms | +16.7 ms (+4.0%) |
| **Pages** | 37 | 37 | |

**Heap**

| | Baseline | Compressed (cold cache) | Difference |
|---|---|---|---|
| **Heap free (median)** | 187.0 KB | 176.3 KB | -10.7 KB |
| **Heap free (min)** | 186.0 KB | 166.5 KB | -19.5 KB |
| **Largest block (median)** | 148.0 KB | 128.0 KB | -20.0 KB |
| **Largest block (min)** | 148.0 KB | 120.0 KB | -28.0 KB |

**Cache behavior**

| | Misses/page | Hit rate |
|---|---|---|
| **Compressed (cold cache)** | 2.1 | 99.85% |

------

While CrossPoint doesn't have restrictions on AI tools in contributing, please be transparent about their usage as it helps set the right context for reviewers. Did you use AI tools to help write this code?

_**YES**_ Implementation was done by Claude Code (Opus 4.6) based on a plan developed collaboratively. All generated font headers were verified with an automated round-trip decompression test. The firmware was compiled successfully but has not yet been tested on-device.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
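For context on the 99.85% hit rate, here is a minimal Python model of a 4-slot LRU cache like the one described for `FontDecompressor`. The names are hypothetical and the real implementation is C++ firmware code (cleared on each page); this sketch only illustrates the eviction policy.

```python
from collections import OrderedDict

class GroupCache:
    """Toy model of a 4-slot LRU cache of decompressed glyph groups."""

    def __init__(self, slots=4):
        self.slots = slots
        self.cache = OrderedDict()       # group_id -> decompressed bytes
        self.hits = self.misses = 0

    def get(self, group_id, decompress):
        if group_id in self.cache:
            self.cache.move_to_end(group_id)    # mark most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.cache) >= self.slots:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[group_id] = decompress(group_id)
        return self.cache[group_id]

# A page render touches a few groups repeatedly, so hits dominate.
cache = GroupCache()
for gid in [0, 0, 1, 0, 2, 3, 4, 0]:
    cache.get(gid, lambda g: b"bitmap-%d" % g)
```

Because running text draws most glyphs from one or two Unicode blocks, a handful of slots captures almost all accesses, which is consistent with the ~2 misses per page measured above.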
```c
/*
 * uzlib - tiny deflate/inflate library (deflate, gzip, zlib)
 *
 * Copyright (c) 2014-2018 by Paul Sokolovsky
 */

#ifndef UZLIB_CONF_H_INCLUDED
#define UZLIB_CONF_H_INCLUDED

#ifndef UZLIB_CONF_DEBUG_LOG
/* Debug logging level 0, 1, 2, etc. */
#define UZLIB_CONF_DEBUG_LOG 0
#endif

#ifndef UZLIB_CONF_PARANOID_CHECKS
/* Perform extra checks on the input stream, even if they aren't proven
   to be strictly required (== lack of them wasn't proven to lead to
   crashes). */
#define UZLIB_CONF_PARANOID_CHECKS 0
#endif

#ifndef UZLIB_CONF_USE_MEMCPY
/* Use memcpy() for copying data out of LZ window or uncompressed blocks,
   instead of doing this byte by byte. For well-compressed data, this
   may noticeably increase decompression speed. But for less compressed,
   it can actually deteriorate it (due to the fact that many memcpy()
   implementations are optimized for large blocks of data, and have
   too much overhead for short strings of just a few bytes). */
#define UZLIB_CONF_USE_MEMCPY 0
#endif

#endif /* UZLIB_CONF_H_INCLUDED */
```