perf: font-compression improvements (#1056)

## Purpose

This PR includes preparatory changes needed for an upcoming high-performance CJK font feature. The changes have no impact on render time or heap allocation for Latin text. **Despite this, I think these changes stand on their own as a better font compression/decompression implementation.**

## Summary

- Font decompressor rewrite: replaced the 4-slot LRU group cache with a two-tier system consisting of a page buffer (glyphs prewarmed before rendering begins) and a hot-group fallback (the last decompressed group, retained for non-prewarmed glyphs).
- Byte-aligned compressed bitmap format: glyph bitmaps within compressed groups are now stored row-padded rather than tightly packed before DEFLATE compression, which improves compression ratios by making identical pixel rows produce identical byte patterns. Glyphs are compacted back to the packed format on demand at render time. This reduces flash size by 155 KB.
- Page prewarm system: added `Page::collectText` and `Page::getDominantStyle` to extract per-style glyph requirements before rendering, and `GfxRenderer::prewarmFontCache` to pre-decompress only the groups needed for the dominant style, eliminating mid-render decompression in the common case.
- UTF-8 robustness fixes: `utf8NextCodepoint` now validates continuation bytes and returns a replacement glyph on malformed input; `ChapterHtmlSlimParser` now preserves incomplete multi-byte sequences across word-buffer flush boundaries rather than splitting them.

---

### AI Usage

While CrossPoint doesn't restrict the use of AI tools in contributions, please be transparent about their usage, as it helps set the right context for reviewers.

Did you use AI tools to help write this code? _**YES**_ Architecture and design were done by me, refined a bit by Claude. Code mostly by Claude, but not entirely.
Commit f1e9dc7f30 (parent b467ea7973) by Adrian Wilkins-Caruana, 2026-03-12 07:05:46 +11:00, committed by GitHub.
70 changed files with 104438 additions and 120059 deletions.


@@ -2,6 +2,7 @@
#include <HalStorage.h>
#include <algorithm>
#include <string>
#include <utility>
#include <vector>


@@ -4,6 +4,7 @@
#include <GfxRenderer.h>
#include <HalStorage.h>
#include <Logging.h>
#include <Utf8.h>
#include <expat.h>
#include "../../Epub.h"
@@ -758,9 +759,30 @@ void XMLCALL ChapterHtmlSlimParser::characterData(void* userData, const XML_Char
       }
     }
-    // If we're about to run out of space, then cut the word off and start a new one
+    // If we're about to run out of space, then cut the word off and start a new one.
+    // For CJK text (no spaces), this is the primary word-breaking mechanism.
+    // We must avoid splitting multi-byte UTF-8 sequences across word boundaries,
+    // otherwise the trailing bytes become orphaned continuation bytes that the
+    // decoder can't interpret.
     if (self->partWordBufferIndex >= MAX_WORD_SIZE) {
-      self->flushPartWordBuffer();
+      int safeLen = utf8SafeTruncateBuffer(self->partWordBuffer, self->partWordBufferIndex);
+      if (safeLen < self->partWordBufferIndex && safeLen > 0) {
+        // Incomplete UTF-8 sequence at the end — save it before flushing
+        int overflow = self->partWordBufferIndex - safeLen;
+        char saved[4];
+        for (int j = 0; j < overflow; j++) {
+          saved[j] = self->partWordBuffer[safeLen + j];
+        }
+        self->partWordBufferIndex = safeLen;
+        self->flushPartWordBuffer();
+        for (int j = 0; j < overflow; j++) {
+          self->partWordBuffer[j] = saved[j];
+        }
+        self->partWordBufferIndex = overflow;
+      } else {
+        self->flushPartWordBuffer();
+      }
     }
     self->partWordBuffer[self->partWordBufferIndex++] = s[i];