- Add /api/hash endpoint to compute and cache MD5 hashes on demand
- Extend /api/files response with md5 field for EPUBs (null if not cached)
- Compute and cache MD5 automatically after EPUB uploads
- Add flush() before close() in WebSocket and HTTP upload handlers
- New Md5Utils module using ESP32's mbedtls for chunked hash computation
The MD5 hashes enable the companion app to detect file changes without
downloading content. Hashes are cached in each book's .crosspoint cache
directory and invalidated when file size changes.
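A minimal sketch of the kind of chunked MD5 computation Md5Utils performs, assuming a plain stdio read loop; the real module's interface may differ, and older mbedtls 2.x builds expose the same calls as `_ret` variants.

```cpp
// Sketch only: chunked MD5 over a file using the ESP32's bundled mbedtls.
// Md5Utils' actual interface may differ from this free function.
#include <cstdint>
#include <cstdio>
#include <string>
#include "mbedtls/md5.h"

std::string md5HexOfFile(const char* path) {
  FILE* f = fopen(path, "rb");
  if (!f) return "";

  mbedtls_md5_context ctx;
  mbedtls_md5_init(&ctx);
  mbedtls_md5_starts(&ctx);  // mbedtls_md5_starts_ret() on older mbedtls 2.x

  uint8_t buf[1024];  // small chunks keep peak RAM low on the ESP32-C3
  size_t n;
  while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
    mbedtls_md5_update(&ctx, buf, n);
  }
  fclose(f);

  uint8_t digest[16];
  mbedtls_md5_finish(&ctx, digest);
  mbedtls_md5_free(&ctx);

  char hex[33];
  for (int i = 0; i < 16; i++) snprintf(&hex[i * 2], 3, "%02x", digest[i]);
  return std::string(hex, 32);
}
```

The resulting hex string is the kind of value the `md5` field of `/api/files` could surface once cached.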
Merges the progress bar status bar feature from merge-438:
- Added FULL_WITH_PROGRESS_BAR and ONLY_PROGRESS_BAR status bar modes
- Added drawBookProgressBar() to ScreenComponents
Preserves features from current branch:
- Content offset tracking for position restoration (EPUB_PROGRESS_VERSION)
- drawBatteryLarge() function
- Sleep Screen Cover Mode "Actual" option
Adds full support for book lists managed by the Companion App:
- New /list API endpoints (GET/POST) for uploading, retrieving, and deleting lists
- BookListStore for binary serialization of lists to /.lists/ directory
- ListViewActivity for viewing list contents with book thumbnails
- Reading Lists tab in My Library with pin/unpin and delete actions
- Pinnable list shortcut on home screen (split button layout)
- Automatic cleanup of pinned status when lists are deleted
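A rough sketch of the kind of length-prefixed binary layout BookListStore could use for files under `/.lists/`; the struct fields, version byte, and on-disk format here are assumptions, not the actual implementation.

```cpp
// Sketch only: length-prefixed binary serialization for a reading list.
// The real BookListStore format and fields may differ.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct BookList {
  std::string name;
  std::vector<std::string> bookPaths;
};

static void writeString(FILE* f, const std::string& s) {
  uint16_t len = static_cast<uint16_t>(s.size());
  fwrite(&len, sizeof(len), 1, f);
  fwrite(s.data(), 1, len, f);
}

bool saveBookList(const BookList& list, const char* path) {
  FILE* f = fopen(path, "wb");  // e.g. a file under /.lists/ (path layout assumed)
  if (!f) return false;

  const uint8_t version = 1;  // hypothetical format version
  fwrite(&version, 1, 1, f);
  writeString(f, list.name);

  uint16_t count = static_cast<uint16_t>(list.bookPaths.size());
  fwrite(&count, sizeof(count), 1, f);
  for (const auto& p : list.bookPaths) writeString(f, p);

  fclose(f);
  return true;
}
```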
Integrates upstream PR #511 changes while preserving local delete/archive
functionality. Key changes:
- Add RecentBook struct with path, title, author fields
- Update RecentBooksStore to store and serialize metadata
- Implement version migration for existing recent.bin files
- Update MyLibraryActivity to display two-line items (title + author)
- Update EpubReaderActivity and XtcReaderActivity to pass metadata
Maintains backwards compatibility with delete/archive feature from
local commit d5a9873.
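A hedged sketch of the versioned load path: the path/title/author fields come from the PR description, but the on-disk layout, version numbers, and helper names below are assumptions.

```cpp
// Sketch only: version-aware load for recent.bin. The real format and version
// numbers may differ; older files without metadata migrate with empty fields.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct RecentBook {
  std::string path;
  std::string title;   // added by the metadata-aware format
  std::string author;  // added by the metadata-aware format
};

static std::string readString(FILE* f) {
  uint16_t len = 0;
  if (fread(&len, sizeof(len), 1, f) != 1) return "";
  std::string s(len, '\0');
  fread(&s[0], 1, len, f);
  return s;
}

std::vector<RecentBook> loadRecentBooks(FILE* f) {
  std::vector<RecentBook> books;
  uint8_t version = 0;
  if (fread(&version, 1, 1, f) != 1) return books;

  uint16_t count = 0;
  fread(&count, sizeof(count), 1, f);
  for (uint16_t i = 0; i < count; i++) {
    RecentBook b;
    b.path = readString(f);
    if (version >= 2) {  // hypothetical version that introduces metadata
      b.title = readString(f);
      b.author = readString(f);
    }                    // version 1 entries keep empty title/author
    books.push_back(b);
  }
  return books;
}
```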
Add ZipFile::fillUncompressedSizes() for single-pass ZIP central directory
scan with hash-based target matching.
Also apply clang-format fixes for CI.
Shadow Slave results:
- buildBookBin: 506s → 35s
- Total indexing: 8.7min → 50s
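A rough sketch of the hash-based target matching behind fillUncompressedSizes(): hash the wanted names once, sort them, and fill sizes during a single walk of the central directory. The entry struct and FNV-1a hash are assumptions, and the firmware would stream directory records rather than hold them all in a vector as done here for brevity.

```cpp
// Sketch only: resolve many uncompressed sizes in one central-directory pass.
// ZipFile's real record layout and hash choice may differ.
#include <algorithm>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

struct CentralDirEntry {  // hypothetical view of one directory record
  std::string name;
  uint32_t uncompressedSize;
};

static uint32_t fnv1a(const std::string& s) {
  uint32_t h = 2166136261u;
  for (unsigned char c : s) { h ^= c; h *= 16777619u; }
  return h;
}

void fillUncompressedSizes(const std::vector<CentralDirEntry>& centralDir,
                           const std::vector<std::string>& targetNames,
                           std::vector<uint32_t>& sizesOut) {
  // (hash of target name, index into targetNames/sizesOut), sorted for lookup.
  std::vector<std::pair<uint32_t, size_t>> targets;
  targets.reserve(targetNames.size());
  for (size_t i = 0; i < targetNames.size(); i++)
    targets.push_back({fnv1a(targetNames[i]), i});
  std::sort(targets.begin(), targets.end());

  sizesOut.assign(targetNames.size(), 0);
  for (const auto& e : centralDir) {  // single pass over the archive
    uint32_t h = fnv1a(e.name);
    auto it = std::lower_bound(targets.begin(), targets.end(),
                               std::make_pair(h, size_t{0}));
    // Real code should also compare the names to rule out hash collisions.
    for (; it != targets.end() && it->first == h; ++it)
      sizesOut[it->second] = e.uncompressedSize;
  }
}
```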
Three optimizations for EPUBs with many chapters (e.g. 2768 chapters):
1. OPF idref→href lookup: Build sorted hash index during manifest parsing,
use binary search during spine resolution. Reduces ~4min to ~30-60s.
2. TOC href→spineIndex lookup: Build sorted hash index in beginTocPass(),
use binary search in createTocEntry(). Reduces ~4min to ~30-60s.
3. ZIP central-dir cursor: Resume scanning from last position instead of
restarting from beginning. Reduces ~8min to ~1-3min.
All optimizations only activate for large EPUBs (≥400 spine items).
Small books use unchanged code paths.
Memory impact: ~33KB + ~39KB temporary during indexing, freed after.
Expected total: ~17min → ~3-5min for Shadow Slave (2768 chapters).
Also adds phase timing logs for performance measurement.
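Optimizations 1 and 2 follow the same sorted-hash-index-plus-binary-search pattern sketched above for fillUncompressedSizes(). The resumable cursor from optimization 3 could look roughly like the following; the entry-reading and matching callbacks are placeholders for the real ZipFile internals.

```cpp
// Sketch only: remember where the last central-directory lookup succeeded and
// resume from there. Spine files are usually stored in reading order, so the
// next target tends to sit just after the previous one.
#include <cstddef>

class CentralDirCursor {
 public:
  explicit CentralDirCursor(size_t entryCount) : entryCount_(entryCount) {}

  // readEntryAt(idx) parses directory record idx; matches(entry) tests it.
  // Visits at most entryCount_ records, wrapping around once.
  template <typename ReadEntryFn, typename MatchFn>
  bool findEntry(ReadEntryFn readEntryAt, MatchFn matches) {
    for (size_t step = 0; step < entryCount_; step++) {
      size_t idx = (lastIndex_ + step) % entryCount_;
      if (matches(readEntryAt(idx))) {
        lastIndex_ = idx;  // resume here on the next lookup
        return true;
      }
    }
    return false;
  }

 private:
  size_t entryCount_;
  size_t lastIndex_ = 0;
};
```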
The unordered_map with 2768 string keys (~100KB+) was causing OOM crashes
at beginTocPass() on ESP32-C3's limited ~380KB RAM.
Reverted createTocEntry() to use the original O(n) spine file scan instead.
Kept the safe spineToTocIndex vector in buildBookBin() (only ~5.5KB).
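The kept vector stores one small integer per spine item (roughly 2768 × 2 bytes ≈ 5.5KB). A minimal sketch, with the TOC-entry field name assumed:

```cpp
// Sketch only: single pass over the TOC to build spineIndex -> tocIndex.
// TocEntry's layout here is an assumption about the cache format.
#include <cstdint>
#include <vector>

struct TocEntry {
  uint16_t spineIndex;  // which spine item this TOC entry points at
};

// Marks spine items that have no TOC entry of their own.
constexpr uint16_t NO_TOC_ENTRY = 0xFFFF;

std::vector<uint16_t> buildSpineToTocIndex(const std::vector<TocEntry>& toc,
                                           size_t spineCount) {
  std::vector<uint16_t> spineToToc(spineCount, NO_TOC_ENTRY);  // ~2 bytes/item
  for (size_t tocIndex = 0; tocIndex < toc.size(); tocIndex++) {
    uint16_t s = toc[tocIndex].spineIndex;
    if (s < spineCount && spineToToc[s] == NO_TOC_ENTRY)
      spineToToc[s] = static_cast<uint16_t>(tocIndex);  // first TOC hit wins
  }
  return spineToToc;
}
```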
Replace O(n²) lookups with O(n) preprocessing:
1. createTocEntry(): Build href->spineIndex map once in beginTocPass()
instead of scanning spine file for every TOC entry
2. buildBookBin(): Build spineIndex->tocIndex vector in single pass
instead of scanning TOC file for every spine entry
For 2768-chapter EPUBs, this reduces:
- TOC pass: from ~7.6M file reads to ~5.5K reads
- buildBookBin: from ~7.6M file reads to ~5.5K reads
Memory impact: ~80KB for href map (acceptable trade-off for 10x+ speedup)
Remove the call to loadAllFileStatSlims(), which pre-loads all ZIP central
directory entries into memory. For EPUBs with 2000+ chapters (like webnovels),
this exhausts the ESP32-C3's ~380KB RAM and causes abort().
The existing loadFileStatSlim() function already handles individual lookups
by scanning the central directory per-file when the cache is empty. This is
O(n*m) instead of O(n), but prevents memory exhaustion.
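The lazy path kept here amounts to a cache check with a fallback scan; a sketch under assumed names (scanCentralDirFor() stands in for whatever ZipFile actually does per lookup):

```cpp
// Sketch only: per-file lookup with a fallback scan, instead of preloading
// every central-directory record up front. Signatures are assumptions.
#include <cstdint>
#include <string>
#include <unordered_map>

struct FileStatSlim {
  uint32_t uncompressedSize = 0;
  uint32_t localHeaderOffset = 0;
};

bool scanCentralDirFor(const std::string& name, FileStatSlim& out);  // hypothetical

bool loadFileStatSlimLazy(
    const std::unordered_map<std::string, FileStatSlim>& preloadedCache,
    const std::string& name, FileStatSlim& out) {
  auto it = preloadedCache.find(name);
  if (it != preloadedCache.end()) {  // only hits if something was preloaded
    out = it->second;
    return true;
  }
  // Cache empty (the common case now): one O(entries) scan for this file,
  // trading CPU time for a flat, bounded RAM footprint.
  return scanCentralDirFor(name, out);
}
```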
Fixes #134
## Summary
- Rewrite OpdsParser to use stream parsing instead of loading the full content
- Fix OOM caused by large HTTP XML responses
Closes #385
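A sketch of the streaming shape, with expat (available as an ESP-IDF component) standing in as the SAX-style parser; the parser actually used by OpdsParser may differ, and readHttpChunk() is a hypothetical callback supplying the HTTP body.

```cpp
// Sketch only: feed the OPDS HTTP response to an XML parser in small chunks
// instead of buffering the whole document in RAM.
#include <functional>
#include "expat.h"

static void XMLCALL onStartElement(void* userData, const XML_Char* name,
                                   const XML_Char** atts) {
  // e.g. track <entry>/<title>/<link> state for the OPDS feed being parsed
  (void)userData; (void)name; (void)atts;
}

static void XMLCALL onEndElement(void* userData, const XML_Char* name) {
  (void)userData; (void)name;
}

// readHttpChunk(buf, cap) returns bytes read, 0 at end of body (hypothetical).
bool parseOpdsStream(const std::function<int(char*, int)>& readHttpChunk) {
  XML_Parser parser = XML_ParserCreate(nullptr);
  XML_SetElementHandler(parser, onStartElement, onEndElement);

  char buf[512];  // only one small chunk lives in RAM at a time
  bool ok = true;
  int n;
  while ((n = readHttpChunk(buf, sizeof(buf))) > 0) {
    if (XML_Parse(parser, buf, n, XML_FALSE) == XML_STATUS_ERROR) {
      ok = false;
      break;
    }
  }
  if (ok) ok = XML_Parse(parser, "", 0, XML_TRUE) != XML_STATUS_ERROR;

  XML_ParserFree(parser);
  return ok;
}
```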
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? _**NO**_
## More detailed documentation
* **What is the goal of this PR?**
Add more information about the exposed webserver.
* **What changes are included?**
Detailed documentation for the webserver endpoints
(`./docs/webserver-endpoints.md`)
Adds a table of contents so it is easier to navigate directly to the
section you're interested in (applied to almost all `.md` files, or at
least all the relevant ones)
## Additional Context
Not sure if this will get accepted, but I thought it might be useful for
those trying to create separate apps that sync files to the
device. It was useful at least to me when trying to upload files using
Python, as described
[here](https://github.com/crosspoint-reader/crosspoint-reader/discussions/434#discussioncomment-15545349)
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? _**PARTIALLY**_
## Summary
When uploading or downloading an updated ebook from SD/WebUI/OPDS with
the same filename, the `.crosspoint` cache is not cleared. This can lead
to issues with the Table of Contents and hangs when switching between
chapters.
I encountered this issue in two places:
- When I need to do further ePub cleaning in Calibre after loading an
ePub and finding that some of its formatting should be cleaned up. When I
reprocess the same book and want to place it back in the same location, I
need a way to invalidate the cache.
- When syncing RSS-feed-generated ePubs. I generate news ePubs with
filenames like `news-outlet.epub`, so every day when I fetch the latest
news the crosspoint cache needs to be cleared before that file can be loaded.
This change offers the following features:
- On web uploads, if the file already exists, the cache for that file is
cleared
- On OPDS downloads, if the file already exists, the cache for that file
is cleared
- There's now a `Clear Cache` action on the Settings page which clears the
cache for all books
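A minimal sketch of clearing one book's cache directory with plain POSIX calls, assuming a flat per-book cache directory; the actual path layout and helper in the firmware may differ.

```cpp
// Sketch only: remove every file in a book's cache directory, then the
// directory itself. Assumes a flat directory with no subdirectories.
#include <dirent.h>
#include <string>
#include <unistd.h>

bool clearBookCache(const std::string& cacheDir) {
  DIR* dir = opendir(cacheDir.c_str());
  if (!dir) return true;  // nothing cached, nothing to do

  struct dirent* entry;
  while ((entry = readdir(dir)) != nullptr) {
    std::string name = entry->d_name;
    if (name == "." || name == "..") continue;
    unlink((cacheDir + "/" + name).c_str());
  }
  closedir(dir);
  return rmdir(cacheDir.c_str()) == 0;
}
```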
Addresses
https://github.com/crosspoint-reader/crosspoint-reader/issues/281
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? PARTIALLY
---------
Co-authored-by: Dave Allie <dave@daveallie.com>
## Summary
* Disables going to sleep after uploading new firmware
* Improves the developer experience
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? _**NO**_
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
## Summary
Gives more space to the chapter title when the battery percentage is not shown.
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? _**NO**_
---------
Co-authored-by: Dave Allie <dave@daveallie.com>
## Summary
* Include superscripts and subscripts in fonts
## Additional Context
* Original change came from
https://github.com/crosspoint-reader/crosspoint-reader/pull/248
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? No
---------
Co-authored-by: cor <cor@pruijs.dev>