A service that periodically fetches SoundCloud likes from NicktheRat's profile, builds weekly playlists aligned to his Wednesday 22:00 ET show schedule, and exposes them via a JSON API for an IRC bot to query track info by position number (`!1`, `!2`, etc.).
## Architecture
Single Python process with three internal responsibilities:
1. **API server** — FastAPI on a configurable port, serves playlist data as JSON.
2. **Poller** — async background task that fetches Nick's SoundCloud likes every hour.
3. **Supervisor** — monitors the poller task, restarts it on failure without affecting the API.
The poller and API run as independent `asyncio` tasks. If the poller crashes, the supervisor catches the exception, logs it, waits a backoff period, and restarts the poller. The API continues serving from the last-known-good SQLite data.
External process management (systemd with `Restart=on-failure`) handles whole-process crashes. The service does not try to be its own process manager.
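The supervisor loop described above can be sketched as follows. `factory` builds a fresh poller coroutine on each restart; the backoff parameters are illustrative defaults, not settled values:

```python
import asyncio

async def supervise(factory, *, base_delay=1.0, max_delay=60.0):
    """Run the coroutine produced by `factory`; on failure, log, wait an
    exponentially growing delay, and restart it. Cancellation (shutdown)
    propagates instead of being swallowed."""
    delay = base_delay
    while True:
        try:
            await factory()
            return  # poller exited cleanly; nothing left to supervise
        except asyncio.CancelledError:
            raise  # let shutdown cancel the supervisor too
        except Exception as exc:
            # print is a stand-in for real logging
            print(f"poller crashed: {exc!r}; restarting in {delay:.0f}s")
            await asyncio.sleep(delay)
            delay = min(delay * 2, max_delay)
```

Because the supervisor and the API server run as sibling tasks, a poller crash never touches the request path.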
### Startup sequence
1. Open/create SQLite database, run migrations.
2. Check whether the current week's playlist exists. If it is missing or stale, do an immediate fetch.
3. Start the API server.
4. Start the poller on its hourly schedule.
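The startup sequence can be sketched with the steps injected as callables (all names here are illustrative, not settled APIs), which keeps the ordering itself testable:

```python
import asyncio

async def startup(run_migrations, week_is_stale, fetch_now,
                  serve_api, poll_hourly):
    """Boot in the order above: migrate first, catch up if the current
    week is missing or stale, then run API and poller concurrently."""
    await run_migrations()                    # step 1
    if await week_is_stale():                 # step 2
        await fetch_now()
    # steps 3-4: API server and poller as sibling tasks
    await asyncio.gather(serve_api(), poll_hourly())
```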
## Data Model (SQLite)
### `tracks`
Canonical store of every SoundCloud track Nick has liked.
| Column | Type | Notes |
|--------|------|-------|
| `id` | INTEGER PK | Referenced by `show_tracks.track_id` |
| `artist` | TEXT | `track.user.username` from the API |
| `permalink_url` | TEXT | Full SoundCloud URL |
| `artwork_url` | TEXT | Nullable |
| `duration_ms` | INTEGER | Duration in milliseconds |
| `license` | TEXT | e.g. `cc-by-sa` |
| `liked_at` | TEXT | ISO 8601 — when Nick liked it |
| `raw_json` | TEXT | Full track JSON blob |
### `shows`
One row per weekly show.
| Column | Type | Notes |
|--------|------|-------|
| `id` | INTEGER PK | Auto-increment |
| `week_start` | TEXT | ISO 8601 UTC of the Wednesday 22:00 ET boundary that opens this week |
| `week_end` | TEXT | ISO 8601 UTC of the next Wednesday 22:00 ET boundary |
| `created_at` | TEXT | When this row was created |
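Computing these boundaries needs a timezone-aware conversion, since the Wednesday 22:00 ET boundary moves in UTC whenever daylight saving flips. A minimal sketch using the stdlib `zoneinfo`:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")

def week_bounds(now_utc: datetime) -> tuple[str, str]:
    """Return (week_start, week_end) as UTC ISO 8601 strings for the
    show week containing `now_utc`; boundaries are Wednesday 22:00 ET."""
    local = now_utc.astimezone(ET)
    # Wednesday is weekday() == 2; rewind to the most recent one at 22:00.
    start = (local - timedelta(days=(local.weekday() - 2) % 7)).replace(
        hour=22, minute=0, second=0, microsecond=0)
    if start > local:  # it's Wednesday, but before 22:00 ET
        start -= timedelta(days=7)
    end = start + timedelta(days=7)  # wall-clock +7d rides through DST
    return (start.astimezone(timezone.utc).isoformat(),
            end.astimezone(timezone.utc).isoformat())
```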
### `show_tracks`
Join table linking tracks to shows with position.
| Column | Type | Notes |
|--------|------|-------|
| `show_id` | INTEGER FK | References `shows.id` |
| `track_id` | INTEGER FK | References `tracks.id` |
| `position` | INTEGER | 1-indexed — maps to `!1`, `!2`, etc. |
| UNIQUE | | `(show_id, track_id)` |
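The three tables map to a small `sqlite3` schema. This is a sketch of one plausible DDL: the `tracks.id` primary key is implied by the `show_tracks` foreign key, and the column affinities follow the tables above:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS tracks (
    id            INTEGER PRIMARY KEY,  -- implied by show_tracks FK
    artist        TEXT,
    permalink_url TEXT,
    artwork_url   TEXT,
    duration_ms   INTEGER,
    license       TEXT,
    liked_at      TEXT,
    raw_json      TEXT
);
CREATE TABLE IF NOT EXISTS shows (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    week_start TEXT,
    week_end   TEXT,
    created_at TEXT
);
CREATE TABLE IF NOT EXISTS show_tracks (
    show_id  INTEGER REFERENCES shows(id),
    track_id INTEGER REFERENCES tracks(id),
    position INTEGER,  -- 1-indexed, maps to !1, !2, ...
    UNIQUE (show_id, track_id)
);
"""

def open_db(path: str) -> sqlite3.Connection:
    """Open (or create) the database and apply the schema."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```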
Position assignment: likes are sorted by `liked_at` ascending (oldest first) and numbered 1, 2, 3... New likes mid-week take the next free position; adding a track never shifts existing positions.
Removal is the exception: if Nick unlikes a track, the poller deletes its `show_tracks` link and re-compacts positions (e.g. if the track at position 2 is removed, position 3 becomes 2). The `tracks` row is retained for historical reference.
## Polling strategy
The poller does not re-fetch all likes every hour. It uses cursor-seeking:
- **First fetch for a new week**: craft a synthetic cursor at the week's end boundary, paginate backward until hitting the week's start boundary.
- **Subsequent fetches**: craft a cursor at "now", paginate backward until hitting a track already in the database. Most hourly polls fetch a single page or zero pages.
- **Full refresh** (`POST /admin/refresh` with `{"full": true}`): re-fetches the entire week from scratch, same as the first-fetch path.
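The subsequent-fetch path reduces to a short loop. `fetch_page` and its `(items, next_cursor)` return shape are illustrative stand-ins for the real SoundCloud pagination, not its actual API surface:

```python
def seek_new_likes(fetch_page, known_ids, start_cursor):
    """Paginate backward from `start_cursor` (pages are newest-first),
    stopping at the first track already in the database. Most hourly
    polls therefore touch one page or none."""
    new, cursor = [], start_cursor
    while cursor is not None:
        items, cursor = fetch_page(cursor)
        for item in items:
            if item["id"] in known_ids:
                return new  # caught up: everything older is known
            new.append(item)
    return new  # hit the end of the likes list (first-fetch path)
```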
### `client_id` management
- Extract from `soundcloud.com` HTML (`__sc_hydration` -> `apiClient` -> `id`) on startup.
- Cache in memory (not persisted — rotates too frequently).
- On any 401 response, re-extract and retry.
- If re-extraction fails, log the error and let the next tick retry.
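The extraction step can be sketched with a regex over the page HTML. The exact `__sc_hydration` markup is an assumption and may change; a `None` return means "re-extraction failed, retry next tick":

```python
import json
import re

def extract_client_id(html: str):
    """Pull the anonymous client_id from soundcloud.com HTML by reading
    the __sc_hydration blob and finding its apiClient entry."""
    m = re.search(r"__sc_hydration\s*=\s*(\[.*?\]);", html, re.DOTALL)
    if not m:
        return None
    try:
        hydration = json.loads(m.group(1))
    except json.JSONDecodeError:
        return None
    for entry in hydration:
        if entry.get("hydratable") == "apiClient":
            return entry.get("data", {}).get("id")
    return None
```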
### Retry & backoff
Each SoundCloud HTTP call: 3 attempts, exponential backoff (2s, 4s, 8s). 401s trigger `client_id` refresh before retry (doesn't count against attempts). Request timeout: 15 seconds.
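This policy can be sketched as a wrapper; `do_request`, `refresh_client_id`, and the `Unauthorized` exception type are all illustrative injections, and the backoff simply doubles from the 2 s base between attempts:

```python
import asyncio

class Unauthorized(Exception):
    """Illustrative stand-in for an HTTP 401 from SoundCloud."""

async def call_with_retry(do_request, refresh_client_id,
                          *, attempts=3, base_delay=2.0):
    """Up to `attempts` tries with exponential backoff. A 401 refreshes
    the client_id and retries without consuming an attempt."""
    attempt = 0
    while True:
        try:
            return await do_request()
        except Unauthorized:
            await refresh_client_id()  # free retry after refresh
        except Exception:
            attempt += 1
            if attempt >= attempts:
                raise  # exhausted: caller skips this tick
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
```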
### Error scenarios
| Scenario | Behavior |
|----------|----------|
| SoundCloud 401 | Refresh `client_id`, retry |
| SoundCloud 429 | Back off, retry next tick |
| SoundCloud 5xx | Retry with backoff, skip tick after 3 failures |