#pragma once
#include <SdFat.h>
#include <string>
#include <unordered_map>
#include <vector>

perf: optimize large EPUB indexing from O(n^2) to O(n) (#458)
## Summary
Optimizes EPUB metadata indexing for large books (2000+ chapters) from
~30 minutes to ~50 seconds by replacing O(n²) algorithms with O(n log n)
hash-indexed lookups.
Fixes #134
## Problem
Three phases had O(n²) complexity due to nested loops:
| Phase | Operation | Before (2768 chapters) |
|-------|-----------|------------------------|
| OPF Pass | For each spine ref, scan all manifest items | ~25 min |
| TOC Pass | For each TOC entry, scan all spine items | ~5 min |
| buildBookBin | For each spine item, scan ZIP central directory | ~8.4 min |
Total: **~30+ minutes** for first-time indexing of large EPUBs.
## Solution
Replace linear scans with sorted hash indexes + binary search:
- **OPF Pass**: Build `{hash(id), len, offset}` index from manifest,
binary search for each spine ref
- **TOC Pass**: Build `{hash(href), len, spineIndex}` index from spine,
binary search for each TOC entry
- **buildBookBin**: New `ZipFile::fillUncompressedSizes()` API - single
ZIP central directory scan with batch hash matching
All indexes use FNV-1a hashing with the string length as a secondary key to minimize collisions. Indexes are freed immediately after each phase.
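To make the pattern concrete, here is a minimal sketch of the index used by the OPF and TOC passes; the `IndexEntry`/`lookup` names and field widths are illustrative rather than the exact code in `ContentOpfParser` or `BookMetadataCache`, and only the FNV-1a constants match `ZipFile::fnvHash64()`.
```
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// FNV-1a 64-bit, mirroring ZipFile::fnvHash64()
static uint64_t fnv1a64(const char* s, size_t len) {
  uint64_t h = 14695981039346656037ull;
  for (size_t i = 0; i < len; i++) {
    h ^= static_cast<uint8_t>(s[i]);
    h *= 1099511628211ull;
  }
  return h;
}

// Illustrative index entry: no string storage, just hash + length + payload
struct IndexEntry {
  uint64_t hash;     // hash of the manifest id (OPF pass) or spine href (TOC pass)
  uint16_t len;      // key length, used as a secondary discriminator
  uint32_t payload;  // e.g. file offset of the manifest item, or spine index
};

static bool keyLess(const IndexEntry& a, const IndexEntry& b) {
  return a.hash != b.hash ? a.hash < b.hash : a.len < b.len;
}

// Build while parsing, sort once with std::sort(index.begin(), index.end(), keyLess),
// then binary search per lookup instead of rescanning the whole list.
int64_t lookup(const std::vector<IndexEntry>& index, const char* key) {
  const size_t klen = std::strlen(key);
  const IndexEntry probe{fnv1a64(key, klen), static_cast<uint16_t>(klen), 0};
  auto it = std::lower_bound(index.begin(), index.end(), probe, keyLess);
  if (it != index.end() && it->hash == probe.hash && it->len == probe.len) {
    return it->payload;  // caller verifies the actual string to rule out a collision
  }
  return -1;  // not found
}
```
With one sort per phase and one binary search per lookup, each pass is O(n log n) rather than O(n²), which is where the speedups in the table below come from.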
## Results
**Shadow Slave EPUB (2768 chapters):**
| Phase | Before | After | Speedup |
|-------|--------|-------|---------|
| OPF pass | ~25 min | 10.8 sec | ~140x |
| TOC pass | ~5 min | 4.7 sec | ~60x |
| buildBookBin | 506 sec | 34.6 sec | ~15x |
| **Total** | **~30+ min** | **~50 sec** | **~36x** |
**Normal EPUB (87 chapters):** 1.7 sec - no regression.
## Memory
Peak temporary memory during indexing:
- OPF index: ~33KB (2770 items × 12 bytes)
- TOC index: ~33KB (2768 items × 12 bytes)
- ZIP batch: ~44KB (targets + sizes arrays)
All indexes cleared immediately after each phase. No OOM risk on
ESP32-C3.
## Note on Threshold
All optimizations are gated by `LARGE_SPINE_THRESHOLD = 400` to preserve
existing behavior for small books. However, the algorithms work
correctly for any book size and are faster even for small books:
| Book Size | Old O(n²) | New O(n log n) | Improvement |
|-----------|-----------|----------------|-------------|
| 10 ch | 100 ops | 50 ops | 2x |
| 100 ch | 10K ops | 800 ops | 12x |
| 400 ch | 160K ops | 4K ops | 40x |
If preferred, the threshold could be removed to use the optimized path
universally.
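As a sketch of what that gate looks like (the function and variable names here are hypothetical; only the threshold value comes from this PR):
```
// Hypothetical illustration of the gating described above
static constexpr size_t LARGE_SPINE_THRESHOLD = 400;

void resolveSpineItems(size_t spineCount) {
  if (spineCount >= LARGE_SPINE_THRESHOLD) {
    // large book: build the sorted {hash, len, payload} index once,
    // then binary-search it for every spine/TOC reference
  } else {
    // small book: keep the original linear scan, behaviour unchanged
  }
}
```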
## Testing
- [x] Shadow Slave (2768 chapters): 50s first-time indexing, loads and
navigates correctly
- [x] Normal book (87 chapters): 1.7s indexing, no regression
- [x] Build passes
- [x] clang-format passes
## Files Changed
- `lib/Epub/Epub/parsers/ContentOpfParser.h/.cpp` - OPF manifest index
- `lib/Epub/Epub/BookMetadataCache.h/.cpp` - TOC index + batch size
lookup
- `lib/ZipFile/ZipFile.h/.cpp` - New `fillUncompressedSizes()` API
- `lib/Epub/Epub.cpp` - Timing logs
<details>
<summary><b>Algorithm Details</b> (click to expand)</summary>
### Phase 1: OPF Pass - Manifest to Spine Lookup
**Problem**: Each `<itemref idref="ch001">` in spine must find matching
`<item id="ch001" href="...">` in manifest.
```
OLD: For each of 2768 spine refs, scan all 2770 manifest items
= 7.6M string comparisons
NEW: While parsing manifest, build index:
{ hash("ch001"), len=5, file_offset=120 }
Sort index, then binary search for each spine ref:
2768 × log₂(2770) ≈ 2768 × 11 = 30K comparisons
```
### Phase 2: TOC Pass - TOC Entry to Spine Index Lookup
**Problem**: Each TOC entry with `href="chapter0001.xhtml"` must find
its spine index.
```
OLD: For each of 2768 TOC entries, scan all 2768 spine entries
= 7.6M string comparisons
NEW: At beginTocPass(), read spine once and build index:
{ hash("OEBPS/chapter0001.xhtml"), len=25, spineIndex=0 }
Sort index, binary search for each TOC entry:
2768 × log₂(2768) ≈ 30K comparisons
Clear index at endTocPass() to free memory.
```
### Phase 3: buildBookBin - ZIP Size Lookup
**Problem**: Need uncompressed file size for each spine item (for
reading progress). Sizes are in ZIP central directory.
```
OLD: For each of 2768 spine items, scan ZIP central directory (2773 entries)
= 7.6M filename reads + string comparisons
Time: 506 seconds
NEW:
Step 1: Build targets from spine
{ hash("OEBPS/chapter0001.xhtml"), len=25, index=0 }
Sort by (hash, len)
Step 2: Single pass through ZIP central directory
For each entry:
- Compute hash ON THE FLY (no string allocation)
- Binary search targets
- If match: sizes[target.index] = uncompressedSize
Step 3: Use sizes array directly (O(1) per spine item)
Total: 2773 entries × log₂(2768) ≈ 33K comparisons
Time: 35 seconds
```
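From the caller's side, the batch lookup maps onto the new `ZipFile` API roughly as below; `spineHrefs` and the surrounding function are illustrative, while `ZipFile::SizeTarget`, `ZipFile::fnvHash64()` and `fillUncompressedSizes()` are the declarations actually added to `ZipFile.h`. Whether the call opens the archive itself is not shown in the header, so it is opened by hand here.
```
#include <algorithm>
#include <string>
#include <vector>

#include "ZipFile.h"

// Illustrative caller: resolve uncompressed sizes for every spine item in one pass.
// spineHrefs must already be the full paths inside the archive, e.g. "OEBPS/chapter0001.xhtml".
std::vector<uint32_t> lookupSpineSizes(ZipFile& zip, const std::vector<std::string>& spineHrefs) {
  std::vector<ZipFile::SizeTarget> targets;
  targets.reserve(spineHrefs.size());
  for (size_t i = 0; i < spineHrefs.size(); i++) {
    const std::string& href = spineHrefs[i];
    targets.push_back({ZipFile::fnvHash64(href.c_str(), href.size()),
                       static_cast<uint16_t>(href.size()),
                       static_cast<uint16_t>(i)});
  }
  // fillUncompressedSizes() expects the targets sorted by (hash, len)
  std::sort(targets.begin(), targets.end(),
            [](const ZipFile::SizeTarget& a, const ZipFile::SizeTarget& b) {
              return a.hash != b.hash ? a.hash < b.hash : a.len < b.len;
            });

  std::vector<uint32_t> sizes(spineHrefs.size(), 0);  // filled at sizes[target.index]
  if (zip.open()) {
    int matched = zip.fillUncompressedSizes(targets, sizes);  // single central-directory scan
    (void)matched;  // number of spine items found in the archive
    zip.close();
  }
  return sizes;
}
```
The `targets` and `sizes` vectors are the ~44KB of temporary memory mentioned above, released as soon as they go out of scope.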
### Why Hash + Length?
Using 64-bit FNV-1a hash + string length as a composite key:
- Collision probability: roughly 1 in (2⁶⁴ × typical path length), i.e. negligible in practice
- No string storage needed in index (just 12-16 bytes per entry)
- Integer comparisons are faster than string comparisons
- Verification on match handles the rare collision case
</details>
---
_AI-assisted development. All changes tested on hardware._

class ZipFile {
public:
  struct FileStatSlim {
    uint16_t method;             // Compression method
    uint32_t compressedSize;     // Compressed size
    uint32_t uncompressedSize;   // Uncompressed size
    uint32_t localHeaderOffset;  // Offset of local file header
  };

  struct ZipDetails {
    uint32_t centralDirOffset;  // Offset of the central directory in the archive
    uint16_t totalEntries;      // Number of entries in the central directory
    bool isSet;                 // True once the details have been loaded
  };

  // Target for batch uncompressed size lookup (sorted by hash, then len)
  struct SizeTarget {
    uint64_t hash;   // FNV-1a 64-bit hash of normalized path
    uint16_t len;    // Length of path for collision reduction
    uint16_t index;  // Caller's index (e.g. spine index)
  };

  // FNV-1a 64-bit hash computed from a char buffer (no std::string allocation)
  static uint64_t fnvHash64(const char* s, size_t len) {
    uint64_t hash = 14695981039346656037ull;  // FNV offset basis
    for (size_t i = 0; i < len; i++) {
      hash ^= static_cast<uint8_t>(s[i]);
      hash *= 1099511628211ull;  // FNV prime
    }
    return hash;
  }

private:
  const std::string& filePath;
  FsFile file;
  ZipDetails zipDetails = {0, 0, false};
  std::unordered_map<std::string, FileStatSlim> fileStatSlimCache;

  // Cursor for sequential central-dir scanning optimization
  uint32_t lastCentralDirPos = 0;
  bool lastCentralDirPosValid = false;

  bool loadFileStatSlim(const char* filename, FileStatSlim* fileStat);
  long getDataOffset(const FileStatSlim& fileStat);
  bool loadZipDetails();

public:
  explicit ZipFile(const std::string& filePath) : filePath(filePath) {}
  ~ZipFile() = default;

  // The zip file can be opened and closed by hand to allow quick calculation of inflated file sizes.
  // It is NOT recommended to pre-open it for any kind of inflation due to memory constraints.
  bool isOpen() const { return !!file; }
  bool open();
  bool close();

  bool loadAllFileStatSlims();
  bool getInflatedFileSize(const char* filename, size_t* size);

  // Batch lookup: scan the ZIP central directory once and fill sizes for matching targets.
  // targets must be sorted by (hash, len); sizes[target.index] receives the uncompressedSize.
  // Returns the number of targets matched.
  int fillUncompressedSizes(std::vector<SizeTarget>& targets, std::vector<uint32_t>& sizes);

  // Due to the memory required by each of these, it is not recommended to pre-open the zip file for
  // multiple calls; these functions will open and close the zip as needed (a usage sketch follows the class).
  uint8_t* readFileToMemory(const char* filename, size_t* size = nullptr, bool trailingNullByte = false);
  bool readFileToStream(const char* filename, Print& out, size_t chunkSize);
};
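
A minimal caller-side sketch of the single-file API above, assuming an Arduino/ESP32 environment where `Serial` is a `Print`; the EPUB path, entry name and chunk size are placeholders.
```
#include <Arduino.h>
#include <string>

#include "ZipFile.h"

// Illustrative usage only; names inside the archive are placeholders.
void printChapter(const std::string& epubPath) {
  ZipFile zip(epubPath);  // epubPath must outlive zip (stored by reference)

  // Open by hand for the quick size lookup, as recommended above.
  size_t size = 0;
  if (zip.open()) {
    if (zip.getInflatedFileSize("OEBPS/chapter0001.xhtml", &size)) {
      Serial.printf("uncompressed size: %u\n", static_cast<unsigned>(size));
    }
    zip.close();
  }

  // Streams the entry in chunks; opens and closes the archive internally.
  zip.readFileToStream("OEBPS/chapter0001.xhtml", Serial, 1024);
}
```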