Three subsystems were using std::fs (blocking) inside an async context,
which stalls a tokio runtime worker thread for the duration of the I/O:
- DashboardMetrics::save(): now uses tokio::fs::write + rename
- TokenStore::flush_last_used(): now uses tokio::fs for batch updates
- AuditLog::log(): moved file write to spawn_blocking (fire-and-forget)
The background task and shutdown handler now properly .await the
async save/flush methods. The AuditLog writer is wrapped in Arc for
cross-thread access from spawn_blocking.
Token verification previously ran Argon2id + disk read on every
authenticated request. Under load this was the bottleneck
(~100 ms per Argon2 verify on a single core).
Changes:
- Add in-memory cache (SHA256 -> user/role/expiry) with a 5-minute TTL
- Defer last_used timestamp writes to batch flush every 30 seconds
- Invalidate cache entry on token revoke
- Background task flushes pending last_used alongside metrics persist
First verify_token call per token: full Argon2 + disk (unchanged).
Subsequent calls within TTL: HashMap lookup only (sub-microsecond).
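
A minimal sketch of the hot-path lookup, assuming the sha2 and hex crates
and an illustrative CacheEntry; the real entry also tracks the token's own
expiry:

    use std::collections::HashMap;
    use std::sync::Mutex;
    use std::time::{Duration, Instant};
    use sha2::{Digest, Sha256};

    const CACHE_TTL: Duration = Duration::from_secs(300);   // 5-minute TTL

    // Illustrative entry shape, not the exact struct in the codebase.
    struct CacheEntry {
        user: String,
        role: String,
        inserted: Instant,
    }

    // The cache key is the hex SHA-256 of the presented token, so plaintext
    // tokens are never held in memory.
    fn cache_key(token: &str) -> String {
        hex::encode(Sha256::digest(token.as_bytes()))
    }

    fn lookup(cache: &Mutex<HashMap<String, CacheEntry>>, token: &str) -> Option<(String, String)> {
        let key = cache_key(token);
        let mut map = cache.lock().ok()?;
        if let Some(e) = map.get(&key) {
            if e.inserted.elapsed() < CACHE_TTL {
                return Some((e.user.clone(), e.role.clone()));   // hit: skip Argon2 entirely
            }
        }
        map.remove(&key);   // expired or missing: caller falls back to Argon2 + disk
        None
    }
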
* docs: add DCO, governance model, roles, vulnerability credit policy
* security: migrate token hashing from SHA256 to Argon2id
- Replace unsalted SHA256 with Argon2id (salted) for API token hashing
- Fix TOCTOU race: replace exists()+read() with read()+match on error
- Set chmod 600 on token files and 700 on token storage directory
- Auto-migrate legacy SHA256 tokens to Argon2id on first verification (see the verify-and-migrate sketch after this list)
- Add regression tests: argon2 format, legacy migration, file permissions
- Add Role enum to tokens: Read, Write, Admin (default: Read)
- Enforce role-based access in auth middleware (read-only tokens blocked from PUT/POST/DELETE; role-gate sketch after this list)
- Add role field to token create/list/verify API
- Add persistent audit log (append-only JSONL) for all registry operations (record sketch after this list)
- Audit logging across all registries: docker, npm, maven, pypi, cargo, raw
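
Verify-and-migrate sketch, assuming the argon2, sha2, and hex crates and
illustrative helper names; the actual token store API may differ:

    use argon2::{Argon2, PasswordHash, PasswordHasher, PasswordVerifier};
    use argon2::password_hash::{rand_core::OsRng, Error, SaltString};
    use sha2::{Digest, Sha256};

    // Returns Ok(None) when the stored Argon2id hash verifies, Ok(Some(new_phc))
    // when a legacy SHA-256 hash matched and should be rewritten, Err on mismatch.
    fn verify_and_migrate(token: &str, stored: &str) -> Result<Option<String>, Error> {
        let argon2 = Argon2::default();   // Argon2id with default parameters
        if stored.starts_with("$argon2") {
            let parsed = PasswordHash::new(stored)?;
            argon2.verify_password(token.as_bytes(), &parsed)?;
            return Ok(None);
        }
        // Legacy path: stored value is the unsalted hex SHA-256 of the token.
        if hex::encode(Sha256::digest(token.as_bytes())) != stored {
            return Err(Error::Password);
        }
        // Match: re-hash with Argon2id; the caller persists the new PHC string
        // (and sets 600/700 permissions on the token files/directory).
        let salt = SaltString::generate(&mut OsRng);
        Ok(Some(argon2.hash_password(token.as_bytes(), &salt)?.to_string()))
    }
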
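Role-gate sketch referenced above; the Role variants mirror the ones
described in this change, while the function name and method-string
matching are illustrative:

    // Illustrative role gate applied by the auth middleware before the handler runs.
    #[derive(Clone, Copy, PartialEq)]
    enum Role { Read, Write, Admin }

    fn is_allowed(role: Role, method: &str) -> bool {
        match method {
            // Mutating verbs require Write or Admin; read-only tokens are rejected.
            "PUT" | "POST" | "DELETE" => role != Role::Read,
            // Everything else (GET, HEAD, ...) is allowed for any authenticated role.
            _ => true,
        }
    }

The middleware would respond with 403 when this returns false.
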
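Audit record sketch, assuming serde/serde_json and illustrative field
names rather than the exact schema:

    use serde::Serialize;
    use std::fs::OpenOptions;
    use std::io::Write;

    // Illustrative record shape; one JSON object per line (JSONL), never rewritten.
    #[derive(Serialize)]
    struct AuditRecord<'a> {
        timestamp: &'a str,
        user: &'a str,
        registry: &'a str,   // docker, npm, maven, pypi, cargo, raw
        action: &'a str,
        target: &'a str,
    }

    fn append(path: &str, rec: &AuditRecord) -> std::io::Result<()> {
        // Open in append-only mode so existing lines are never modified.
        let mut file = OpenOptions::new().create(true).append(true).open(path)?;
        let line = serde_json::to_string(rec)?;   // serde_json::Error converts into io::Error
        writeln!(file, "{line}")
    }
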