fix: prevent OOM crash when loading large EPUBs (2000+ chapters)

Remove the call to loadAllFileStatSlims(), which pre-loads every ZIP central
directory entry into memory. For EPUBs with 2000+ chapters (common for
webnovels), this exhausts the ESP32-C3's ~380KB of RAM and triggers abort().
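
A rough back-of-envelope (the per-entry size here is an assumption, not a
measured figure): if each cached slim stat carries a path string plus a few
32-bit offset/size fields, say ~64 bytes per entry, then 2000+ entries come
to 128KB or more, over a third of available RAM before the rest of the
reader has allocated anything.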

The existing loadFileStatSlim() function already handles individual lookups
by scanning the central directory per file when the cache is empty. Lookups
become O(n*m) (n spine items, each scanning up to m archive entries) instead
of O(n), but memory use stays flat.
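
For illustration, a minimal sketch of the cache-less path (ZipFileStatSlim,
loadFileStatSlim(), and spineItems are stand-ins for the project's actual
types and API; only the shape of the loop is the point):

    // Hypothetical sketch -- names approximate the ZIP wrapper, not its real API.
    // With no pre-built cache, each spine item triggers one linear scan of
    // the ZIP central directory.
    uint32_t cumSize = 0;
    for (const auto& item : spineItems) {            // n spine items
        ZipFileStatSlim stat;
        if (!zip.loadFileStatSlim(item.href.c_str(), stat)) {
            continue;                                // entry not found; skip it
        }
        cumSize += stat.uncompressedSize;            // accumulate chapter sizes
    }
    // n lookups, each scanning up to m central-directory entries: O(n*m)
    // time, but no per-entry cache held in RAM.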

Fixes #134

@@ -143,17 +143,12 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
 tocFile.close();
 return false;
 }
-// TODO: For large ZIPs loading the all localHeaderOffsets will crash.
-// However not having them loaded is extremely slow. Need a better solution here.
-// Perhaps only a cache of spine items or a better way to speedup lookups?
-if (!zip.loadAllFileStatSlims()) {
-    Serial.printf("[%lu] [BMC] Could not load zip local header offsets for size calculations\n", millis());
-    bookFile.close();
-    spineFile.close();
-    tocFile.close();
-    zip.close();
-    return false;
-}
+// NOTE: We intentionally skip calling loadAllFileStatSlims() here.
+// For large EPUBs (2000+ chapters), pre-loading all ZIP central directory entries
+// into memory causes OOM crashes on ESP32-C3's limited ~380KB RAM.
+// Instead, we let loadFileStatSlim() do individual lookups per spine item.
+// This is O(n*m) instead of O(n) for lookups, but avoids memory exhaustion.
+// See: https://github.com/crosspoint-reader/crosspoint-reader/issues/134
 uint32_t cumSize = 0;
 spineFile.seek(0);
 int lastSpineTocIndex = -1;
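
For context, this is roughly the shape of a cache-less loadFileStatSlim()
scan. A sketch only, with hypothetical helper and field names
(seekToCentralDirectory, CentralDirRecord, entryCount_), not the project's
real implementation:

    // Hypothetical sketch of a per-file central directory scan: read one
    // record at a time rather than caching all of them up front.
    bool ZipFile::loadFileStatSlim(const char* path, ZipFileStatSlim& out) {
        if (!seekToCentralDirectory()) return false; // found via end-of-central-dir record
        for (uint16_t i = 0; i < entryCount_; ++i) { // up to m entries
            CentralDirRecord rec;
            if (!readCentralDirRecord(rec)) return false;
            if (rec.nameEquals(path)) {              // requested entry found
                out.localHeaderOffset = rec.localHeaderOffset;
                out.uncompressedSize  = rec.uncompressedSize;
                return true;
            }
        }
        return false;                                // not in this archive
    }

Peak memory is a single directory record regardless of archive size, which is
exactly the tradeoff this commit chooses: repeated I/O instead of a resident
cache.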