// crosspoint-reader-mod/lib/ZipFile/ZipFile.cpp
#include "ZipFile.h"
#include <HalStorage.h>
#include <InflateReader.h>
#include <Logging.h>
#include <algorithm>
struct ZipInflateCtx {
InflateReader reader; // Must be first — callback casts uzlib_uncomp* to ZipInflateCtx*
FsFile* file = nullptr;
size_t fileRemaining = 0;
uint8_t* readBuf = nullptr;
size_t readBufSize = 0;
};
namespace {
constexpr uint16_t ZIP_METHOD_STORED = 0;
constexpr uint16_t ZIP_METHOD_DEFLATED = 8;
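// zipReadCallback follows uzlib's pull-style source callback contract: refill the
// read buffer from the file, return the first byte of the new chunk and point
// source/source_limit at the remaining bytes, or return -1 once the compressed
// stream is exhausted.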
int zipReadCallback(uzlib_uncomp* uncomp) {
auto* ctx = reinterpret_cast<ZipInflateCtx*>(uncomp);
if (ctx->fileRemaining == 0) return -1;
const size_t toRead = ctx->fileRemaining < ctx->readBufSize ? ctx->fileRemaining : ctx->readBufSize;
const size_t bytesRead = ctx->file->read(ctx->readBuf, toRead);
ctx->fileRemaining -= bytesRead;
if (bytesRead == 0) return -1;
uncomp->source = ctx->readBuf + 1;
uncomp->source_limit = ctx->readBuf + bytesRead;
return ctx->readBuf[0];
}
} // namespace
bool ZipFile::loadAllFileStatSlims() {
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return false;
}
if (!loadZipDetails()) {
if (!wasOpen) {
close();
}
return false;
}
file.seek(zipDetails.centralDirOffset);
uint32_t sig;
char itemName[256];
fileStatSlimCache.clear();
fileStatSlimCache.reserve(zipDetails.totalEntries);
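// Walk the central directory. Each entry is a 4-byte signature (0x02014b50)
// followed by fixed-size header fields, then a variable-length file name, extra
// field and comment; we read only the fields we need and seek past the rest.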
while (file.available()) {
file.read(&sig, 4);
if (sig != 0x02014b50) break; // End of list
FileStatSlim fileStat = {};
file.seekCur(6);
file.read(&fileStat.method, 2);
file.seekCur(8);
file.read(&fileStat.compressedSize, 4);
file.read(&fileStat.uncompressedSize, 4);
uint16_t nameLen, m, k;  // file name length, extra field length, file comment length
file.read(&nameLen, 2);
file.read(&m, 2);
file.read(&k, 2);
file.seekCur(8);
file.read(&fileStat.localHeaderOffset, 4);
if (nameLen < sizeof(itemName)) {
file.read(itemName, nameLen);
itemName[nameLen] = '\0';
fileStatSlimCache.emplace(itemName, fileStat);
} else {
// Skip over oversized entry names to avoid writing past fixed buffer.
file.seekCur(nameLen);
}
// Skip the rest of this entry (extra field + comment)
file.seekCur(m + k);
}
// Set cursor to start of central directory for sequential access
lastCentralDirPos = zipDetails.centralDirOffset;
lastCentralDirPosValid = true;
if (!wasOpen) {
close();
}
return true;
}
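// Looks up a single entry's stats. Once loadAllFileStatSlims() has populated the
// cache, the cache is treated as complete and a miss returns false without
// touching the file; otherwise the central directory is scanned on demand.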
bool ZipFile::loadFileStatSlim(const char* filename, FileStatSlim* fileStat) {
if (!fileStatSlimCache.empty()) {
const auto it = fileStatSlimCache.find(filename);
if (it != fileStatSlimCache.end()) {
*fileStat = it->second;
return true;
}
return false;
}
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return false;
}
if (!loadZipDetails()) {
if (!wasOpen) {
close();
}
return false;
}
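// Consecutive lookups tend to arrive roughly in central-directory order (e.g.
// spine items requested one after another), so we resume scanning from where the
// previous search stopped and wrap around to the directory start at most once.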
// Phase 1: Try scanning from cursor position first
uint32_t startPos = lastCentralDirPosValid ? lastCentralDirPos : zipDetails.centralDirOffset;
bool wrapped = false;
bool found = false;
file.seek(startPos);
uint32_t sig;
char itemName[256];
while (true) {
uint32_t entryStart = file.position();
if (file.read(&sig, 4) != 4 || sig != 0x02014b50) {
// End of central directory
if (!wrapped && lastCentralDirPosValid && startPos != zipDetails.centralDirOffset) {
// Wrap around to beginning
file.seek(zipDetails.centralDirOffset);
wrapped = true;
continue;
}
break;
}
// If we've wrapped and reached our start position, stop
if (wrapped && entryStart >= startPos) {
break;
}
file.seekCur(6);
file.read(&fileStat->method, 2);
file.seekCur(8);
file.read(&fileStat->compressedSize, 4);
file.read(&fileStat->uncompressedSize, 4);
uint16_t nameLen, m, k;
file.read(&nameLen, 2);
file.read(&m, 2);
file.read(&k, 2);
file.seekCur(8);
file.read(&fileStat->localHeaderOffset, 4);
if (nameLen < 256) {
file.read(itemName, nameLen);
itemName[nameLen] = '\0';
if (strcmp(itemName, filename) == 0) {
// Found it! Update cursor to next entry
file.seekCur(m + k);
lastCentralDirPos = file.position();
lastCentralDirPosValid = true;
found = true;
break;
}
} else {
// Name too long, skip it
file.seekCur(nameLen);
}
// Skip extra field + comment
file.seekCur(m + k);
}
if (!wasOpen) {
close();
}
return found;
}
long ZipFile::getDataOffset(const FileStatSlim& fileStat) {
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return -1;
}
constexpr auto localHeaderSize = 30;
uint8_t pLocalHeader[localHeaderSize];
const uint64_t fileOffset = fileStat.localHeaderOffset;
file.seek(fileOffset);
const size_t read = file.read(pLocalHeader, localHeaderSize);
if (!wasOpen) {
close();
}
if (read != localHeaderSize) {
LOG_ERR("ZIP", "Something went wrong reading the local header");
return -1;
}
if (pLocalHeader[0] + (pLocalHeader[1] << 8) + (pLocalHeader[2] << 16) + (pLocalHeader[3] << 24) !=
0x04034b50 /* ZIP local file header signature */) {
LOG_ERR("ZIP", "Not a valid zip file header");
return -1;
}
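// The local file header is 30 fixed bytes, with the file name length at offset 26
// and the extra field length at offset 28; the compressed payload starts
// immediately after both variable-length fields.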
const uint16_t filenameLength = pLocalHeader[26] + (pLocalHeader[27] << 8);
const uint16_t extraFieldLength = pLocalHeader[28] + (pLocalHeader[29] << 8);
return fileOffset + localHeaderSize + filenameLength + extraFieldLength;
}
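// Finds the End Of Central Directory (EOCD) record by scanning backwards from the
// end of the file, then caches the entry count and central directory offset.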
bool ZipFile::loadZipDetails() {
if (zipDetails.isSet) {
return true;
}
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return false;
}
const size_t fileSize = file.size();
if (fileSize < 22) {
LOG_ERR("ZIP", "File too small to be a valid zip");
if (!wasOpen) {
close();
}
return false; // Minimum EOCD size is 22 bytes
}
// We scan the last 1KB (or the whole file if smaller) for the EOCD signature
// 0x06054b50 is stored as 0x50, 0x4b, 0x05, 0x06 in little-endian
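// The spec allows a trailing archive comment of up to 64 KiB after the EOCD, so a
// 1KB window is a pragmatic limit; EPUBs normally carry no archive comment.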
const int scanRange = fileSize > 1024 ? 1024 : fileSize;
const auto buffer = static_cast<uint8_t*>(malloc(scanRange));
if (!buffer) {
LOG_ERR("ZIP", "Failed to allocate memory for EOCD scan buffer");
if (!wasOpen) {
close();
}
return false;
}
file.seek(fileSize - scanRange);
file.read(buffer, scanRange);
// Scan backwards for the signature
int foundOffset = -1;
for (int i = scanRange - 22; i >= 0; i--) {
constexpr uint32_t signature = 0x06054b50;
if (*reinterpret_cast<uint32_t*>(&buffer[i]) == signature) {
foundOffset = i;
break;
}
}
if (foundOffset == -1) {
LOG_ERR("ZIP", "EOCD signature not found in zip file");
free(buffer);
if (!wasOpen) {
close();
}
return false;
}
// Now extract the values we need from the EOCD record
// Relative positions within EOCD:
// Offset 10: Total number of entries (2 bytes)
// Offset 16: Offset of start of central directory with respect to the starting disk number (4 bytes)
zipDetails.totalEntries = *reinterpret_cast<uint16_t*>(&buffer[foundOffset + 10]);
zipDetails.centralDirOffset = *reinterpret_cast<uint32_t*>(&buffer[foundOffset + 16]);
zipDetails.isSet = true;
free(buffer);
if (!wasOpen) {
close();
}
return true;
}
bool ZipFile::open() {
if (!Storage.openFileForRead("ZIP", filePath, file)) {
return false;
}
return true;
}
bool ZipFile::close() {
if (file) {
file.close();
}
lastCentralDirPos = 0;
lastCentralDirPosValid = false;
return true;
}
bool ZipFile::getInflatedFileSize(const char* filename, size_t* size) {
FileStatSlim fileStat = {};
if (!loadFileStatSlim(filename, &fileStat)) {
return false;
}
*size = static_cast<size_t>(fileStat.uncompressedSize);
return true;
}
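// Batch size lookup used when indexing large books: `targets` must be sorted by
// (hash, len) and `sizes` must have a slot for every target.index. A single pass
// over the central directory hashes each entry name on the fly with fnvHash64 and
// fills sizes[target.index] for every matching (hash, len) pair. Returns the
// number of directory entries that matched a target.
//
// Caller sketch (illustrative only; `zip` is a ZipFile and `spineHrefs` is a
// hypothetical list of entry names):
//   std::vector<SizeTarget> targets;
//   for (size_t i = 0; i < spineHrefs.size(); i++) {
//     const char* href = spineHrefs[i];
//     const uint16_t len = static_cast<uint16_t>(strlen(href));
//     targets.push_back({fnvHash64(href, len), len, static_cast<uint32_t>(i)});
//   }
//   std::sort(targets.begin(), targets.end(), [](const SizeTarget& a, const SizeTarget& b) {
//     return a.hash < b.hash || (a.hash == b.hash && a.len < b.len);
//   });
//   std::vector<uint32_t> sizes(targets.size(), 0);
//   const int matched = zip.fillUncompressedSizes(targets, sizes);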
int ZipFile::fillUncompressedSizes(std::vector<SizeTarget>& targets, std::vector<uint32_t>& sizes) {
if (targets.empty()) {
return 0;
}
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return 0;
}
if (!loadZipDetails()) {
if (!wasOpen) {
close();
}
return 0;
}
file.seek(zipDetails.centralDirOffset);
int matched = 0;
uint32_t sig;
char itemName[256];
while (file.available()) {
file.read(&sig, 4);
if (sig != 0x02014b50) break;
file.seekCur(6);
uint16_t method;
file.read(&method, 2);
file.seekCur(8);
uint32_t compressedSize, uncompressedSize;
file.read(&compressedSize, 4);
file.read(&uncompressedSize, 4);
uint16_t nameLen, m, k;
file.read(&nameLen, 2);
file.read(&m, 2);
file.read(&k, 2);
file.seekCur(8);
uint32_t localHeaderOffset;
file.read(&localHeaderOffset, 4);
if (nameLen < 256) {
file.read(itemName, nameLen);
itemName[nameLen] = '\0';
uint64_t hash = fnvHash64(itemName, nameLen);
SizeTarget key = {hash, nameLen, 0};
auto it = std::lower_bound(targets.begin(), targets.end(), key, [](const SizeTarget& a, const SizeTarget& b) {
return a.hash < b.hash || (a.hash == b.hash && a.len < b.len);
});
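// Several targets may share the same (hash, len) key (e.g. duplicate entry names
// in the request), so fill every consecutive match, not just the first.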
while (it != targets.end() && it->hash == hash && it->len == nameLen) {
if (it->index < sizes.size()) {
sizes[it->index] = uncompressedSize;
matched++;
}
++it;
}
} else {
file.seekCur(nameLen);
}
file.seekCur(m + k);
}
if (!wasOpen) {
close();
}
return matched;
}
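// Reads an entire archive member into a single malloc'd buffer owned by the
// caller. With trailingNullByte=true the buffer is NUL-terminated so text members
// (OPF, XHTML) can be handed straight to string-based parsers.
//
// Typical use (sketch; path is illustrative):
//   size_t len = 0;
//   uint8_t* data = zip.readFileToMemory("OEBPS/content.opf", &len, true);
//   if (data != nullptr) {
//     // ... parse ...
//     free(data);
//   }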
uint8_t* ZipFile::readFileToMemory(const char* filename, size_t* size, const bool trailingNullByte) {
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return nullptr;
}
FileStatSlim fileStat = {};
if (!loadFileStatSlim(filename, &fileStat)) {
if (!wasOpen) {
close();
}
return nullptr;
}
const long fileOffset = getDataOffset(fileStat);
if (fileOffset < 0) {
if (!wasOpen) {
close();
}
return nullptr;
}
file.seek(fileOffset);
const auto deflatedDataSize = fileStat.compressedSize;
const auto inflatedDataSize = fileStat.uncompressedSize;
const auto dataSize = trailingNullByte ? inflatedDataSize + 1 : inflatedDataSize;
const auto data = static_cast<uint8_t*>(malloc(dataSize));
if (data == nullptr) {
LOG_ERR("ZIP", "Failed to allocate memory for output buffer (%zu bytes)", dataSize);
if (!wasOpen) {
close();
}
return nullptr;
}
if (fileStat.method == ZIP_METHOD_STORED) {
// no deflation, just read content
const size_t dataRead = file.read(data, inflatedDataSize);
if (!wasOpen) {
close();
}
if (dataRead != inflatedDataSize) {
LOG_ERR("ZIP", "Failed to read data");
free(data);
return nullptr;
}
// Continue out of block with data set
} else if (fileStat.method == ZIP_METHOD_DEFLATED) {
// Read out deflated content from file
const auto deflatedData = static_cast<uint8_t*>(malloc(deflatedDataSize));
if (deflatedData == nullptr) {
LOG_ERR("ZIP", "Failed to allocate memory for decompression buffer");
if (!wasOpen) {
close();
}
return nullptr;
}
const size_t dataRead = file.read(deflatedData, deflatedDataSize);
if (!wasOpen) {
close();
}
if (dataRead != deflatedDataSize) {
LOG_ERR("ZIP", "Failed to read data, expected %d got %d", deflatedDataSize, dataRead);
free(deflatedData);
free(data);
return nullptr;
}
bool success = false;
{
InflateReader r;
r.init(false);
r.setSource(deflatedData, deflatedDataSize);
success = r.read(data, inflatedDataSize);
}
free(deflatedData);
if (!success) {
LOG_ERR("ZIP", "Failed to inflate file");
free(data);
return nullptr;
}
// Continue out of block with data set
} else {
LOG_ERR("ZIP", "Unsupported compression method");
if (!wasOpen) {
close();
}
return nullptr;
}
if (trailingNullByte) data[inflatedDataSize] = '\0';
if (size) *size = inflatedDataSize;
return data;
}
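// Streams an archive member to `out` without ever buffering the whole file:
// stored entries are copied chunk by chunk, and deflated entries are inflated
// through InflateReader's pull callback, so peak heap use stays around two
// chunkSize buffers plus InflateReader's internal ring buffer.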
bool ZipFile::readFileToStream(const char* filename, Print& out, const size_t chunkSize) {
const bool wasOpen = isOpen();
if (!wasOpen && !open()) {
return false;
}
FileStatSlim fileStat = {};
if (!loadFileStatSlim(filename, &fileStat)) {
if (!wasOpen) {
close();
}
return false;
}
const long fileOffset = getDataOffset(fileStat);
if (fileOffset < 0) {
if (!wasOpen) {
close();
}
return false;
}
file.seek(fileOffset);
const auto deflatedDataSize = fileStat.compressedSize;
const auto inflatedDataSize = fileStat.uncompressedSize;
if (fileStat.method == ZIP_METHOD_STORED) {
// no deflation, just read content
const auto buffer = static_cast<uint8_t*>(malloc(chunkSize));
if (!buffer) {
LOG_ERR("ZIP", "Failed to allocate memory for buffer");
if (!wasOpen) {
close();
}
return false;
}
size_t remaining = inflatedDataSize;
while (remaining > 0) {
const size_t dataRead = file.read(buffer, remaining < chunkSize ? remaining : chunkSize);
if (dataRead == 0) {
LOG_ERR("ZIP", "Could not read more bytes");
free(buffer);
if (!wasOpen) {
close();
}
return false;
}
out.write(buffer, dataRead);
remaining -= dataRead;
}
if (!wasOpen) {
close();
}
free(buffer);
return true;
}
if (fileStat.method == ZIP_METHOD_DEFLATED) {
auto* fileReadBuffer = static_cast<uint8_t*>(malloc(chunkSize));
if (!fileReadBuffer) {
LOG_ERR("ZIP", "Failed to allocate memory for zip file read buffer");
if (!wasOpen) {
close();
}
return false;
}
auto* outputBuffer = static_cast<uint8_t*>(malloc(chunkSize));
if (!outputBuffer) {
LOG_ERR("ZIP", "Failed to allocate memory for output buffer");
free(fileReadBuffer);
if (!wasOpen) {
close();
}
return false;
}
ZipInflateCtx ctx;
ctx.file = &file;
ctx.fileRemaining = deflatedDataSize;
ctx.readBuf = fileReadBuffer;
ctx.readBufSize = chunkSize;
if (!ctx.reader.init(true)) {
LOG_ERR("ZIP", "Failed to init inflate reader");
free(outputBuffer);
free(fileReadBuffer);
if (!wasOpen) {
close();
}
return false;
}
ctx.reader.setReadCallback(zipReadCallback);
bool success = false;
size_t totalProduced = 0;
while (true) {
size_t produced;
const InflateStatus status = ctx.reader.readAtMost(outputBuffer, chunkSize, &produced);
totalProduced += produced;
if (totalProduced > static_cast<size_t>(inflatedDataSize)) {
LOG_ERR("ZIP", "Decompressed size exceeds expected (%zu > %zu)", totalProduced,
static_cast<size_t>(inflatedDataSize));
break;
}
if (produced > 0) {
if (out.write(outputBuffer, produced) != produced) {
LOG_ERR("ZIP", "Failed to write all output bytes to stream");
break;
}
}
if (status == InflateStatus::Done) {
if (totalProduced != static_cast<size_t>(inflatedDataSize)) {
LOG_ERR("ZIP", "Decompressed size mismatch (expected %zu, got %zu)", static_cast<size_t>(inflatedDataSize),
totalProduced);
break;
}
LOG_DBG("ZIP", "Decompressed %d bytes into %d bytes", deflatedDataSize, inflatedDataSize);
success = true;
break;
}
if (status == InflateStatus::Error) {
LOG_ERR("ZIP", "Decompression failed");
break;
}
// InflateStatus::Ok: output buffer full, continue
}
if (!wasOpen) {
close();
}
free(outputBuffer);
free(fileReadBuffer);
return success; // ctx.reader destructor frees the ring buffer
}
if (!wasOpen) {
close();
}
LOG_ERR("ZIP", "Unsupported compression method");
return false;
}