perf: optimize large EPUB indexing from O(n²) to O(n log n)
Three optimizations for EPUBs with many chapters (e.g. 2768 chapters):

1. OPF idref→href lookup: build a sorted hash index during manifest
   parsing, then use binary search during spine resolution. Reduces
   ~4 min to ~30-60 s.
2. TOC href→spineIndex lookup: build a sorted hash index in
   beginTocPass(), then use binary search in createTocEntry(). Reduces
   ~4 min to ~30-60 s.
3. ZIP central-dir cursor: resume scanning from the last position
   instead of restarting from the beginning. Reduces ~8 min to ~1-3 min.

All optimizations activate only for large EPUBs (≥400 spine items);
small books use the unchanged code paths. Memory impact: ~33 KB + ~39 KB
temporary during indexing, freed afterwards. Expected total:
~17 min → ~3-5 min for Shadow Slave (2768 chapters).

Also adds phase timing logs for performance measurement.
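Optimizations 1 and 2 share the same technique. Below is a minimal, self-contained C++ sketch of it, assuming the index stores (hash, position) pairs sorted by hash; all names here (IdrefIndexEntry, buildIdrefIndex, findByIdref, fnv1a) are illustrative, not identifiers from this codebase.

// Sketch of the sorted-hash-index + binary-search lookup (hypothetical names).
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct IdrefIndexEntry {
    uint32_t hash;       // hash of the manifest item's idref
    uint32_t itemIndex;  // position of the item in the manifest array
};

// 32-bit FNV-1a string hash.
static uint32_t fnv1a(const std::string& s) {
    uint32_t h = 2166136261u;
    for (unsigned char c : s) { h ^= c; h *= 16777619u; }
    return h;
}

// Build pass, done once while parsing the manifest: O(n log n).
std::vector<IdrefIndexEntry> buildIdrefIndex(const std::vector<std::string>& idrefs) {
    std::vector<IdrefIndexEntry> index;
    index.reserve(idrefs.size());
    for (uint32_t i = 0; i < idrefs.size(); ++i)
        index.push_back({fnv1a(idrefs[i]), i});
    std::sort(index.begin(), index.end(),
              [](const IdrefIndexEntry& a, const IdrefIndexEntry& b) {
                  return a.hash < b.hash;
              });
    return index;
}

// Lookup pass, done once per spine item: O(log n) binary search on the
// hash, then a string compare to guard against hash collisions.
int findByIdref(const std::vector<IdrefIndexEntry>& index,
                const std::vector<std::string>& idrefs,
                const std::string& wanted) {
    uint32_t h = fnv1a(wanted);
    auto it = std::lower_bound(index.begin(), index.end(), h,
                               [](const IdrefIndexEntry& e, uint32_t key) {
                                   return e.hash < key;
                               });
    for (; it != index.end() && it->hash == h; ++it)
        if (idrefs[it->itemIndex] == wanted)
            return static_cast<int>(it->itemIndex);
    return -1;  // idref not present in the manifest
}

Under these assumptions, n lookups cost O(n log n) total instead of the O(n²) of repeated linear scans, matching the commit's stated bound.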
@@ -25,6 +25,10 @@ class ZipFile {
     ZipDetails zipDetails = {0, 0, false};
     std::unordered_map<std::string, FileStatSlim> fileStatSlimCache;
 
+    // Cursor for sequential central-dir scanning optimization
+    uint32_t lastCentralDirPos = 0;
+    bool lastCentralDirPosValid = false;
+
     bool loadFileStatSlim(const char* filename, FileStatSlim* fileStat);
     long getDataOffset(const FileStatSlim& fileStat);
     bool loadZipDetails();
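For context on the two fields added above, here is a minimal, self-contained sketch of the resume-cursor scan; the real loadFileStatSlim() walks raw central-directory records, whereas this stand-in uses a vector of entry names, and CentralDirCursor/findEntry are illustrative names, not this codebase's API.

// Sketch of the cursor optimization (hypothetical names).
#include <cstdint>
#include <string>
#include <vector>

struct CentralDirCursor {
    uint32_t pos = 0;    // mirrors lastCentralDirPos
    bool valid = false;  // mirrors lastCentralDirPosValid
};

// Scan forward from the cursor, wrapping around at most once, so a miss
// ahead of the cursor still falls back to a full scan for correctness.
int findEntry(const std::vector<std::string>& entries,
              CentralDirCursor& cursor, const std::string& filename) {
    uint32_t n = static_cast<uint32_t>(entries.size());
    if (n == 0) return -1;
    uint32_t start = cursor.valid ? cursor.pos : 0;
    for (uint32_t step = 0; step < n; ++step) {
        uint32_t i = (start + step) % n;
        if (entries[i] == filename) {
            cursor.pos = (i + 1) % n;  // resume just past the hit next time
            cursor.valid = true;
            return static_cast<int>(i);
        }
    }
    return -1;  // not found anywhere in the central directory
}

Since chapter files are typically requested in roughly central-directory order, the next match is usually one step past the previous hit, making each lookup amortized O(1) instead of O(n).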