8 Commits

a9455c35b9 chore: bump version to 0.2.27 2026-03-03 22:30:24 +00:00
8278297b4a feat: configurable body limit + Docker delete API
- Add body_limit_mb to ServerConfig (default 2048MB, env NORA_BODY_LIMIT_MB)
- Replace hardcoded 100MB DefaultBodyLimit with config value
- Add DELETE /v2/{name}/manifests/{reference} endpoint (Docker Registry V2 spec)
- Add DELETE /v2/{name}/blobs/{digest} endpoint
- Add namespace-qualified variants for both DELETE endpoints
- Return 202 Accepted on success, 404 with MANIFEST_UNKNOWN/BLOB_UNKNOWN errors
- Audit log integration for delete operations

Fixes: 413 Payload Too Large on Docker push >100MB
2026-03-03 22:25:41 +00:00
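The 404 responses mentioned in the commit above follow the Docker Registry V2 error body shape, `{"errors":[{"code":...,"message":...}]}`. A minimal sketch of building such a body (the helper name is hypothetical; the actual handlers in the codebase may assemble it differently):

```rust
// Hypothetical helper producing a Docker Registry V2 error body.
// The code/message pairs (e.g. MANIFEST_UNKNOWN, BLOB_UNKNOWN) come
// from the Registry V2 spec; the function itself is illustrative only.
fn registry_error_body(code: &str, message: &str) -> String {
    format!(
        r#"{{"errors":[{{"code":"{}","message":"{}"}}]}}"#,
        code, message
    )
}
```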
8da4c4278a style: cargo fmt
DevITWay
2026-03-03 11:03:40 +00:00
99c1f9b5ec docs: add changelog for v0.2.25 and v0.2.26
DevITWay
2026-03-03 11:01:12 +00:00
07de85d4f8 fix: detect OCI manifest media type for Helm chart support
Distinguish OCI vs Docker manifests by checking config.mediaType
instead of assuming all schemaVersion 2 manifests are Docker.
Enables helm push/pull via OCI protocol.

DevITWay
2026-03-03 10:56:52 +00:00
4c3a9f6bd5 chore: bump version to 0.2.26
DevITWay
2026-03-03 10:41:38 +00:00
402d2321ef feat: add RBAC (read/write/admin) and persistent audit log
- Add Role enum to tokens: Read, Write, Admin (default: Read)
- Enforce role-based access in auth middleware (read-only tokens blocked from PUT/POST/DELETE)
- Add role field to token create/list/verify API
- Add persistent audit log (append-only JSONL) for all registry operations
- Audit logging across all registries: docker, npm, maven, pypi, cargo, raw

DevITWay
2026-03-03 10:40:59 +00:00
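The role model described in the commit above can be sketched as follows. This is a simplified, hypothetical reconstruction: the names (`Role`, `can_write`) match the commit message and the diff below, but the real enum lives in the tokens module and carries serde derives:

```rust
// Simplified sketch of the RBAC roles; default is Read.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Role {
    Read,
    Write,
    Admin,
}

impl Role {
    // Read-only tokens must be rejected for PUT/POST/DELETE/PATCH.
    fn can_write(self) -> bool {
        !matches!(self, Role::Read)
    }
}

// Mirrors the create-token handler's string-to-role mapping;
// unknown strings are rejected (the API returns 400 in that case).
fn parse_role(s: &str) -> Option<Role> {
    match s {
        "read" => Some(Role::Read),
        "write" => Some(Role::Write),
        "admin" => Some(Role::Admin),
        _ => None,
    }
}
```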
f560e5f76b feat: add gc command and fix Docker-Content-Digest for Helm OCI
- Add nora gc --dry-run command for orphaned blob cleanup
- Fix Docker-Content-Digest header in blob upload response (enables Helm OCI push)
- Mark-and-sweep GC: list blobs, parse manifests, find/delete orphans

DevITWay
2026-03-03 10:28:39 +00:00
16 changed files with 993 additions and 26 deletions


@@ -4,6 +4,56 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.26] - 2026-03-03
### Added / Добавлено
- **Helm OCI support**: `helm push` / `helm pull` now works out of the box via OCI protocol
- **Поддержка Helm OCI**: `helm push` / `helm pull` теперь работают из коробки через OCI протокол
- **RBAC**: Token-based role system with three roles — `read`, `write`, `admin` (default: `read`)
- **RBAC**: Ролевая система на основе токенов — `read`, `write`, `admin` (по умолчанию: `read`)
- **Audit log**: Persistent append-only JSONL audit trail for all registry operations (`{storage}/audit.jsonl`)
- **Аудит**: Персистентный append-only JSONL лог всех операций реестра (`{storage}/audit.jsonl`)
- **GC command**: `nora gc --dry-run` — garbage collection for orphaned blobs (mark-and-sweep)
- **Команда GC**: `nora gc --dry-run` — сборка мусора для осиротевших блобов (mark-and-sweep)
### Fixed / Исправлено
- **Helm OCI pull**: Fixed OCI manifest media type detection — manifests with non-Docker `config.mediaType` now correctly return `application/vnd.oci.image.manifest.v1+json`
- **Helm OCI pull**: Исправлено определение media type OCI манифестов — манифесты с не-Docker `config.mediaType` теперь корректно возвращают `application/vnd.oci.image.manifest.v1+json`
- **Docker-Content-Digest**: Added missing header in blob upload response (required by Helm OCI client)
- **Docker-Content-Digest**: Добавлен отсутствующий заголовок в ответе на загрузку blob (требуется клиентом Helm OCI)
### Security / Безопасность
- Read-only tokens (`role: read`) are now blocked from PUT/POST/DELETE/PATCH operations with HTTP 403
- Токены только для чтения (`role: read`) теперь блокируются при PUT/POST/DELETE/PATCH с HTTP 403
---
## [0.2.25] - 2026-03-03
### Fixed / Исправлено
- **Rate limiter fix**: Added `NORA_RATE_LIMIT_ENABLED` env var (default: `true`); set it to `false` to disable rate limiting on internal deployments
- **Исправление rate limiter**: Добавлена переменная `NORA_RATE_LIMIT_ENABLED` (по умолчанию: `true`); установите `false`, чтобы отключить rate limiting на внутренних инсталляциях
- **SmartIpKeyExtractor**: Upload and general routes now use `SmartIpKeyExtractor` (reads `X-Forwarded-For`) instead of `PeerIpKeyExtractor` — fixes 429 errors behind reverse proxy / Docker bridge
- **SmartIpKeyExtractor**: Маршруты upload и general теперь используют `SmartIpKeyExtractor` (читает `X-Forwarded-For`) вместо `PeerIpKeyExtractor` — устраняет ошибки 429 за reverse proxy / Docker bridge
### Dependencies / Зависимости
- `clap` 4.5.56 → 4.5.60
- `uuid` 1.20.0 → 1.21.0
- `tempfile` 3.24.0 → 3.26.0
- `bcrypt` 0.17.1 → 0.18.0
- `indicatif` 0.17.11 → 0.18.4
### CI/CD
- `actions/checkout` 4 → 6
- `actions/upload-artifact` 4 → 7
- `softprops/action-gh-release` 1 → 2
- `aquasecurity/trivy-action` 0.30.0 → 0.34.2
- `docker/build-push-action` 5 → 6
- Move scan/release to self-hosted runner with NORA cache
- Сканирование/релиз перенесены на self-hosted runner с кэшем через NORA
---
## [0.2.24] - 2026-02-24
### Added / Добавлено

CHANGELOG.md.bak Normal file

@@ -0,0 +1,414 @@
# Changelog
All notable changes to NORA will be documented in this file.
---
## [0.2.18] - 2026-01-31
### Changed
- Logo styling refinements
---
## [0.2.17] - 2026-01-31
### Added
- Copyright headers to all source files (Volkov Pavel | DevITWay)
- SPDX-License-Identifier: MIT in all .rs files
---
## [0.2.16] - 2026-01-31
### Changed
- N○RA branding: stylized O logo across dashboard
- Fixed O letter alignment in logo
---
## [0.2.15] - 2026-01-31
### Fixed
- Code formatting (cargo fmt)
---
## [0.2.14] - 2026-01-31
### Fixed
- Docker dashboard now shows actual image size from manifest layers (config + layers sum)
- Previously showed only manifest file size (~500 B instead of actual image size)
---
## [0.2.13] - 2026-01-31
### Fixed
- npm dashboard now shows correct version count and package sizes
- Parses metadata.json for versions, dist.unpackedSize, and time.modified
- Previously showed 0 versions / 0 B for all packages
---
## [0.2.12] - 2026-01-30
### Added
#### Configurable Rate Limiting
- Rate limits now configurable via `config.toml` and environment variables
- New config section `[rate_limit]` with parameters: `auth_rps`, `auth_burst`, `upload_rps`, `upload_burst`, `general_rps`, `general_burst`
- Environment variables: `NORA_RATE_LIMIT_{AUTH|UPLOAD|GENERAL}_{RPS|BURST}`
#### Secrets Provider Architecture
- Trait-based secrets management (`SecretsProvider` trait)
- ENV provider as default (12-Factor App pattern)
- Protected secrets with `zeroize` (memory zeroed on drop)
- Redacted Debug impl prevents secret leakage in logs
- New config section `[secrets]` with `provider` and `clear_env` options
#### Docker Image Metadata
- Support for image metadata retrieval
#### Documentation
- Bilingual onboarding guide (EN/RU)
---
## [0.2.11] - 2026-01-26
### Added
- Internationalization (i18n) support
- PyPI registry proxy
- UI improvements
---
## [0.2.10] - 2026-01-26
### Changed
- Dark theme applied to all UI pages
---
## [0.2.9] - 2026-01-26
### Changed
- Version bump release
---
## [0.2.8] - 2026-01-26
### Added
- Dashboard endpoint added to OpenAPI documentation
---
## [0.2.7] - 2026-01-26
### Added
- Dynamic version display in UI sidebar
---
## [0.2.6] - 2026-01-26
### Added
#### Dashboard Metrics
- Global stats panel: downloads, uploads, artifacts, cache hit rate, storage
- Extended registry cards with artifact count, size, counters
- Activity log (last 20 events)
#### UI
- Dark theme (bg: #0f172a, cards: #1e293b)
---
## [0.2.5] - 2026-01-26
### Fixed
- Docker push/pull: added PATCH endpoint for chunked uploads
---
## [0.2.4] - 2026-01-26
### Fixed
- Rate limiting: health/metrics endpoints now exempt
- Increased upload rate limits for Docker parallel requests
---
## [0.2.0] - 2026-01-25
### Added
#### UI: SVG Brand Icons
- Replaced emoji icons with proper SVG brand icons (Simple Icons style)
- Docker, Maven, npm, Cargo, PyPI icons now render as scalable vector graphics
- Consistent icon styling across dashboard, sidebar, and detail pages
#### Testing Infrastructure
- Unit tests for LocalStorage (8 tests): put/get, list, stat, health_check
- Unit tests for S3Storage with wiremock HTTP mocking (11 tests)
- Integration tests for auth/htpasswd (7 tests)
- Token lifecycle tests (11 tests)
- Validation tests (21 tests)
- **Total: 75 tests passing**
#### Security: Input Validation (`validation.rs`)
- Path traversal protection: rejects `../`, `..\\`, null bytes, absolute paths
- Docker image name validation per OCI distribution spec
- Content digest validation (`sha256:[64 hex]`, `sha512:[128 hex]`)
- Docker tag/reference validation
- Storage key length limits (max 1024 chars)
#### Security: Rate Limiting (`rate_limit.rs`)
- Auth endpoints: 1 req/sec, burst 5 (brute-force protection)
- Upload endpoints: 10 req/sec, burst 20
- General endpoints: 100 req/sec, burst 200
- Uses `tower_governor` 0.8 with `PeerIpKeyExtractor`
#### Observability: Request ID Tracking (`request_id.rs`)
- `X-Request-ID` header added to all responses
- Accepts upstream request ID or generates UUID v4
- Tracing spans include request_id for log correlation
#### CLI: Migrate Command (`migrate.rs`)
- `nora migrate --from local --to s3` - migrate between storage backends
- `--dry-run` flag for preview without copying
- Progress bar with indicatif
- Skips existing files in destination
- Summary statistics (migrated, skipped, failed, bytes)
#### Error Handling (`error.rs`)
- `AppError` enum with `IntoResponse` for Axum
- Automatic conversion from `StorageError` and `ValidationError`
- JSON error responses with request_id support
### Changed
- `StorageError` now uses `thiserror` derive macro
- `TokenError` now uses `thiserror` derive macro
- Storage wrapper validates keys before delegating to backend
- Docker registry handlers validate name, digest, reference inputs
- Body size limit set to 100MB default via `DefaultBodyLimit`
### Dependencies Added
- `thiserror = "2"` - typed error handling
- `tower_governor = "0.8"` - rate limiting
- `governor = "0.10"` - rate limiting backend
- `tempfile = "3"` (dev) - temporary directories for tests
- `wiremock = "0.6"` (dev) - HTTP mocking for S3 tests
### Files Added
- `src/validation.rs` - input validation module
- `src/migrate.rs` - storage migration module
- `src/error.rs` - application error types
- `src/request_id.rs` - request ID middleware
- `src/rate_limit.rs` - rate limiting configuration
---
## [0.1.0] - 2026-01-24
### Added
- Multi-protocol support: Docker Registry v2, Maven, npm, Cargo, PyPI
- Web UI dashboard
- Swagger UI (`/api-docs`)
- Storage backends: Local filesystem, S3-compatible
- Smart proxy/cache for Maven and npm
- Health checks (`/health`, `/ready`)
- Basic authentication (htpasswd with bcrypt)
- API tokens (revocable, per-user)
- Prometheus metrics (`/metrics`)
- JSON structured logging
- Environment variable configuration
- Graceful shutdown (SIGTERM/SIGINT)
- Backup/restore commands
---
# Журнал изменений (RU)
Все значимые изменения NORA документируются в этом файле.
---
## [0.2.12] - 2026-01-30
### Добавлено
#### Настраиваемый Rate Limiting
- Rate limits настраиваются через `config.toml` и переменные окружения
- Новая секция `[rate_limit]` с параметрами: `auth_rps`, `auth_burst`, `upload_rps`, `upload_burst`, `general_rps`, `general_burst`
- Переменные окружения: `NORA_RATE_LIMIT_{AUTH|UPLOAD|GENERAL}_{RPS|BURST}`
#### Архитектура Secrets Provider
- Trait-based управление секретами (`SecretsProvider` trait)
- ENV provider по умолчанию (12-Factor App паттерн)
- Защищённые секреты с `zeroize` (память обнуляется при drop)
- Redacted Debug impl предотвращает утечку секретов в логи
- Новая секция `[secrets]` с опциями `provider` и `clear_env`
#### Docker Image Metadata
- Поддержка получения метаданных образов
#### Документация
- Двуязычный onboarding guide (EN/RU)
---
## [0.2.11] - 2026-01-26
### Добавлено
- Поддержка интернационализации (i18n)
- PyPI registry proxy
- Улучшения UI
---
## [0.2.10] - 2026-01-26
### Изменено
- Тёмная тема применена ко всем страницам UI
---
## [0.2.9] - 2026-01-26
### Изменено
- Релиз с обновлением версии
---
## [0.2.8] - 2026-01-26
### Добавлено
- Dashboard endpoint добавлен в OpenAPI документацию
---
## [0.2.7] - 2026-01-26
### Добавлено
- Динамическое отображение версии в сайдбаре UI
---
## [0.2.6] - 2026-01-26
### Добавлено
#### Dashboard Metrics
- Глобальная панель статистики: downloads, uploads, artifacts, cache hit rate, storage
- Расширенные карточки реестров с количеством артефактов, размером, счётчиками
- Лог активности (последние 20 событий)
#### UI
- Тёмная тема (bg: #0f172a, cards: #1e293b)
---
## [0.2.5] - 2026-01-26
### Исправлено
- Docker push/pull: добавлен PATCH endpoint для chunked uploads
---
## [0.2.4] - 2026-01-26
### Исправлено
- Rate limiting: health/metrics endpoints теперь исключены
- Увеличены лимиты upload для параллельных Docker запросов
---
## [0.2.0] - 2026-01-25
### Добавлено
#### UI: SVG иконки брендов
- Эмоджи заменены на SVG иконки брендов (стиль Simple Icons)
- Docker, Maven, npm, Cargo, PyPI теперь отображаются как векторная графика
- Единый стиль иконок на дашборде, сайдбаре и страницах деталей
#### Тестовая инфраструктура
- Unit-тесты для LocalStorage (8 тестов): put/get, list, stat, health_check
- Unit-тесты для S3Storage с HTTP-мокированием wiremock (11 тестов)
- Интеграционные тесты auth/htpasswd (7 тестов)
- Тесты жизненного цикла токенов (11 тестов)
- Тесты валидации (21 тест)
- **Всего: 75 тестов проходят**
#### Безопасность: Валидация ввода (`validation.rs`)
- Защита от path traversal: отклоняет `../`, `..\\`, null-байты, абсолютные пути
- Валидация имён Docker-образов по спецификации OCI distribution
- Валидация дайджестов (`sha256:[64 hex]`, `sha512:[128 hex]`)
- Валидация тегов и ссылок Docker
- Ограничение длины ключей хранилища (макс. 1024 символа)
#### Безопасность: Rate Limiting (`rate_limit.rs`)
- Auth endpoints: 1 req/sec, burst 5 (защита от брутфорса)
- Upload endpoints: 10 req/sec, burst 20
- Общие endpoints: 100 req/sec, burst 200
- Использует `tower_governor` 0.8 с `PeerIpKeyExtractor`
#### Наблюдаемость: Отслеживание Request ID (`request_id.rs`)
- Заголовок `X-Request-ID` добавляется ко всем ответам
- Принимает upstream request ID или генерирует UUID v4
- Tracing spans включают request_id для корреляции логов
#### CLI: Команда миграции (`migrate.rs`)
- `nora migrate --from local --to s3` - миграция между storage backends
- Флаг `--dry-run` для предпросмотра без копирования
- Прогресс-бар с indicatif
- Пропуск существующих файлов в destination
- Итоговая статистика (migrated, skipped, failed, bytes)
#### Обработка ошибок (`error.rs`)
- Enum `AppError` с `IntoResponse` для Axum
- Автоматическая конверсия из `StorageError` и `ValidationError`
- JSON-ответы об ошибках с поддержкой request_id
### Изменено
- `StorageError` теперь использует макрос `thiserror`
- `TokenError` теперь использует макрос `thiserror`
- Storage wrapper валидирует ключи перед делегированием backend
- Docker registry handlers валидируют name, digest, reference
- Лимит размера body установлен в 100MB через `DefaultBodyLimit`
### Добавлены зависимости
- `thiserror = "2"` - типизированная обработка ошибок
- `tower_governor = "0.8"` - rate limiting
- `governor = "0.10"` - backend для rate limiting
- `tempfile = "3"` (dev) - временные директории для тестов
- `wiremock = "0.6"` (dev) - HTTP-мокирование для S3 тестов
### Добавлены файлы
- `src/validation.rs` - модуль валидации ввода
- `src/migrate.rs` - модуль миграции хранилища
- `src/error.rs` - типы ошибок приложения
- `src/request_id.rs` - middleware для request ID
- `src/rate_limit.rs` - конфигурация rate limiting
---
## [0.1.0] - 2026-01-24
### Добавлено
- Мульти-протокольная поддержка: Docker Registry v2, Maven, npm, Cargo, PyPI
- Web UI дашборд
- Swagger UI (`/api-docs`)
- Storage backends: локальная файловая система, S3-совместимое хранилище
- Умный прокси/кэш для Maven и npm
- Health checks (`/health`, `/ready`)
- Базовая аутентификация (htpasswd с bcrypt)
- API токены (отзываемые, per-user)
- Prometheus метрики (`/metrics`)
- JSON структурированное логирование
- Конфигурация через переменные окружения
- Graceful shutdown (SIGTERM/SIGINT)
- Команды backup/restore

Cargo.lock generated

@@ -1247,7 +1247,7 @@ checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
[[package]]
name = "nora-cli"
-version = "0.2.25"
version = "0.2.27"
dependencies = [
"clap",
"flate2",
@@ -1261,7 +1261,7 @@ dependencies = [
[[package]]
name = "nora-registry"
-version = "0.2.25"
version = "0.2.27"
dependencies = [
"async-trait",
"axum",
@@ -1299,7 +1299,7 @@ dependencies = [
[[package]]
name = "nora-storage"
-version = "0.2.25"
version = "0.2.27"
dependencies = [
"axum",
"base64",


@@ -7,7 +7,7 @@ members = [
]
[workspace.package]
-version = "0.2.25"
version = "0.2.27"
edition = "2021"
license = "MIT"
authors = ["DevITWay <devitway@gmail.com>"]


@@ -0,0 +1,73 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! Persistent audit log — append-only JSONL file
//!
//! Records who/when/what for every registry operation.
//! File: {storage_path}/audit.jsonl
use chrono::{DateTime, Utc};
use parking_lot::Mutex;
use serde::Serialize;
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::path::PathBuf;
use tracing::{info, warn};
#[derive(Debug, Clone, Serialize)]
pub struct AuditEntry {
pub ts: DateTime<Utc>,
pub action: String,
pub actor: String,
pub artifact: String,
pub registry: String,
pub detail: String,
}
impl AuditEntry {
pub fn new(action: &str, actor: &str, artifact: &str, registry: &str, detail: &str) -> Self {
Self {
ts: Utc::now(),
action: action.to_string(),
actor: actor.to_string(),
artifact: artifact.to_string(),
registry: registry.to_string(),
detail: detail.to_string(),
}
}
}
pub struct AuditLog {
path: PathBuf,
writer: Mutex<Option<fs::File>>,
}
impl AuditLog {
pub fn new(storage_path: &str) -> Self {
let path = PathBuf::from(storage_path).join("audit.jsonl");
let writer = match OpenOptions::new().create(true).append(true).open(&path) {
Ok(f) => {
info!(path = %path.display(), "Audit log initialized");
Mutex::new(Some(f))
}
Err(e) => {
warn!(path = %path.display(), error = %e, "Failed to open audit log, auditing disabled");
Mutex::new(None)
}
};
Self { path, writer }
}
pub fn log(&self, entry: AuditEntry) {
if let Some(ref mut file) = *self.writer.lock() {
if let Ok(json) = serde_json::to_string(&entry) {
let _ = writeln!(file, "{}", json);
let _ = file.flush();
}
}
}
pub fn path(&self) -> &PathBuf {
&self.path
}
}
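The append-only JSONL pattern above can be illustrated with a dependency-free sketch. The real `AuditLog` serializes an `AuditEntry` with `serde_json` and a `chrono` timestamp; here the JSON line is assembled by hand purely for illustration, and the function name is hypothetical:

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Minimal sketch of append-only JSONL auditing: open in append mode,
// write one JSON object per line, mirroring AuditEntry's fields.
fn append_audit_line(path: &str, action: &str, actor: &str, registry: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(
        file,
        r#"{{"action":"{}","actor":"{}","registry":"{}"}}"#,
        action, actor, registry
    )
}
```

One object per line keeps the log greppable and lets a crash at worst truncate a single trailing entry rather than corrupt the whole file.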


@@ -13,6 +13,7 @@ use std::collections::HashMap;
use std::path::Path;
use std::sync::Arc;
use crate::tokens::Role;
use crate::AppState;
/// Htpasswd-based authentication
@@ -108,7 +109,18 @@ pub async fn auth_middleware(
if let Some(token) = auth_header.strip_prefix("Bearer ") {
if let Some(ref token_store) = state.tokens {
match token_store.verify_token(token) {
-Ok(_user) => return next.run(request).await,
Ok((_user, role)) => {
let method = request.method().clone();
if (method == axum::http::Method::PUT
|| method == axum::http::Method::POST
|| method == axum::http::Method::DELETE
|| method == axum::http::Method::PATCH)
&& !role.can_write()
{
return (StatusCode::FORBIDDEN, "Read-only token").into_response();
}
return next.run(request).await;
}
Err(_) => return unauthorized_response("Invalid or expired token"),
}
} else {
@@ -175,6 +187,12 @@ pub struct CreateTokenRequest {
#[serde(default = "default_ttl")]
pub ttl_days: u64,
pub description: Option<String>,
#[serde(default = "default_role_str")]
pub role: String,
}
fn default_role_str() -> String {
"read".to_string()
}
fn default_ttl() -> u64 {
@@ -194,6 +212,7 @@ pub struct TokenListItem {
pub expires_at: u64,
pub last_used: Option<u64>,
pub description: Option<String>,
pub role: String,
}
#[derive(Serialize)]
@@ -227,7 +246,19 @@ async fn create_token(
}
};
-match token_store.create_token(&req.username, req.ttl_days, req.description) {
let role = match req.role.as_str() {
"read" => Role::Read,
"write" => Role::Write,
"admin" => Role::Admin,
_ => {
return (
StatusCode::BAD_REQUEST,
"Invalid role. Use: read, write, admin",
)
.into_response()
}
};
match token_store.create_token(&req.username, req.ttl_days, req.description, role) {
Ok(token) => Json(CreateTokenResponse {
token,
expires_in_days: req.ttl_days,
@@ -271,6 +302,7 @@ async fn list_tokens(
expires_at: t.expires_at,
last_used: t.last_used,
description: t.description,
role: t.role.to_string(),
})
.collect();

@@ -36,6 +36,13 @@ pub struct ServerConfig {
/// Public URL for generating pull commands (e.g., "registry.example.com")
#[serde(default)]
pub public_url: Option<String>,
/// Maximum request body size in MB (default: 2048 = 2GB)
#[serde(default = "default_body_limit_mb")]
pub body_limit_mb: usize,
}
fn default_body_limit_mb() -> usize {
2048 // 2GB - enough for any Docker image
}
#[derive(Debug, Clone, Default, Serialize, Deserialize, PartialEq)]
@@ -330,6 +337,11 @@ impl Config {
if let Ok(val) = env::var("NORA_PUBLIC_URL") {
self.server.public_url = if val.is_empty() { None } else { Some(val) };
}
if let Ok(val) = env::var("NORA_BODY_LIMIT_MB") {
if let Ok(mb) = val.parse() {
self.server.body_limit_mb = mb;
}
}
// Storage config
if let Ok(val) = env::var("NORA_STORAGE_MODE") {
@@ -483,6 +495,7 @@ impl Default for Config {
host: String::from("127.0.0.1"),
port: 4000,
public_url: None,
body_limit_mb: 2048,
},
storage: StorageConfig {
mode: StorageMode::Local,
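The env-override logic above silently keeps the configured default when `NORA_BODY_LIMIT_MB` is unset or unparsable. That behavior can be sketched as a pure function (the function name is illustrative, not part of the codebase):

```rust
// Sketch of the NORA_BODY_LIMIT_MB override: an unset or unparsable
// value leaves the configured default (2048 MB) untouched.
fn effective_body_limit_mb(env_value: Option<&str>, default_mb: usize) -> usize {
    env_value
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or(default_mb)
}
```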

nora-registry/src/gc.rs Normal file

@@ -0,0 +1,121 @@
//! Garbage Collection for orphaned blobs
//!
//! Mark-and-sweep approach:
//! 1. List all blobs across registries
//! 2. Parse all manifests to find referenced blobs
//! 3. Blobs not referenced by any manifest = orphans
//! 4. Delete orphans (with --dry-run support)
use std::collections::HashSet;
use tracing::info;
use crate::storage::Storage;
pub struct GcResult {
pub total_blobs: usize,
pub referenced_blobs: usize,
pub orphaned_blobs: usize,
pub deleted_blobs: usize,
pub orphan_keys: Vec<String>,
}
pub async fn run_gc(storage: &Storage, dry_run: bool) -> GcResult {
info!("Starting garbage collection (dry_run={})", dry_run);
// 1. Collect all blob keys
let all_blobs = collect_all_blobs(storage).await;
info!("Found {} total blobs", all_blobs.len());
// 2. Collect all referenced digests from manifests
let referenced = collect_referenced_digests(storage).await;
info!(
"Found {} referenced digests from manifests",
referenced.len()
);
// 3. Find orphans
let mut orphan_keys: Vec<String> = Vec::new();
for key in &all_blobs {
if let Some(digest) = key.rsplit('/').next() {
if !referenced.contains(digest) {
orphan_keys.push(key.clone());
}
}
}
info!("Found {} orphaned blobs", orphan_keys.len());
let mut deleted = 0;
if !dry_run {
for key in &orphan_keys {
if storage.delete(key).await.is_ok() {
deleted += 1;
info!("Deleted: {}", key);
}
}
info!("Deleted {} orphaned blobs", deleted);
} else {
for key in &orphan_keys {
info!("[dry-run] Would delete: {}", key);
}
}
GcResult {
total_blobs: all_blobs.len(),
referenced_blobs: referenced.len(),
orphaned_blobs: orphan_keys.len(),
deleted_blobs: deleted,
orphan_keys,
}
}
async fn collect_all_blobs(storage: &Storage) -> Vec<String> {
let mut blobs = Vec::new();
let docker_blobs = storage.list("docker/").await;
for key in docker_blobs {
if key.contains("/blobs/") {
blobs.push(key);
}
}
blobs
}
async fn collect_referenced_digests(storage: &Storage) -> HashSet<String> {
let mut referenced = HashSet::new();
let all_keys = storage.list("docker/").await;
for key in &all_keys {
if !key.contains("/manifests/") || !key.ends_with(".json") || key.ends_with(".meta.json") {
continue;
}
if let Ok(data) = storage.get(key).await {
if let Ok(json) = serde_json::from_slice::<serde_json::Value>(&data) {
if let Some(config) = json.get("config") {
if let Some(digest) = config.get("digest").and_then(|v| v.as_str()) {
referenced.insert(digest.to_string());
}
}
if let Some(layers) = json.get("layers").and_then(|v| v.as_array()) {
for layer in layers {
if let Some(digest) = layer.get("digest").and_then(|v| v.as_str()) {
referenced.insert(digest.to_string());
}
}
}
if let Some(manifests) = json.get("manifests").and_then(|v| v.as_array()) {
for m in manifests {
if let Some(digest) = m.get("digest").and_then(|v| v.as_str()) {
referenced.insert(digest.to_string());
}
}
}
}
}
}
referenced
}
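The orphan-detection core of the mark-and-sweep pass above — the last path segment of a blob key is its digest, and any key whose digest no manifest references is an orphan — can be isolated as a pure function for clarity (this sketch omits the storage I/O that `run_gc` performs around it):

```rust
use std::collections::HashSet;

// Given every blob key and the set of digests referenced by manifests,
// return the keys whose trailing digest segment is unreferenced.
fn find_orphans(all_blobs: &[String], referenced: &HashSet<String>) -> Vec<String> {
    all_blobs
        .iter()
        .filter(|key| match key.rsplit('/').next() {
            Some(digest) => !referenced.contains(digest),
            None => false,
        })
        .cloned()
        .collect()
}
```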


@@ -2,11 +2,13 @@
// SPDX-License-Identifier: MIT
mod activity_log;
mod audit;
mod auth;
mod backup;
mod config;
mod dashboard_metrics;
mod error;
mod gc;
mod health;
mod metrics;
mod migrate;
@@ -31,6 +33,7 @@ use tracing::{error, info, warn};
use tracing_subscriber::{fmt, prelude::*, EnvFilter};
use activity_log::ActivityLog;
use audit::AuditLog;
use auth::HtpasswdAuth;
use config::{Config, StorageMode};
use dashboard_metrics::DashboardMetrics;
@@ -61,6 +64,12 @@ enum Commands {
#[arg(short, long)]
input: PathBuf,
},
/// Garbage collect orphaned blobs
Gc {
/// Dry run - show what would be deleted without deleting
#[arg(long, default_value = "false")]
dry_run: bool,
},
/// Migrate artifacts between storage backends
Migrate {
/// Source storage: local or s3
@@ -83,6 +92,7 @@ pub struct AppState {
pub tokens: Option<TokenStore>,
pub metrics: DashboardMetrics,
pub activity: ActivityLog,
pub audit: AuditLog,
pub docker_auth: registry::DockerAuth,
pub repo_index: RepoIndex,
pub http_client: reqwest::Client,
@@ -143,6 +153,17 @@ async fn main() {
std::process::exit(1);
}
}
Some(Commands::Gc { dry_run }) => {
let result = gc::run_gc(&storage, dry_run).await;
println!("GC Summary:");
println!(" Total blobs: {}", result.total_blobs);
println!(" Referenced: {}", result.referenced_blobs);
println!(" Orphaned: {}", result.orphaned_blobs);
println!(" Deleted: {}", result.deleted_blobs);
if dry_run && !result.orphan_keys.is_empty() {
println!("\nRun without --dry-run to delete orphaned blobs.");
}
}
Some(Commands::Migrate { from, to, dry_run }) => {
let source = match from.as_str() {
"local" => Storage::new_local(&config.storage.path),
@@ -265,6 +286,7 @@ async fn run_server(config: Config, storage: Storage) {
None
};
let storage_path = config.storage.path.clone();
let rate_limit_enabled = config.rate_limit.enabled;
// Initialize Docker auth with proxy timeout
@@ -316,6 +338,7 @@ async fn run_server(config: Config, storage: Storage) {
tokens,
metrics: DashboardMetrics::new(),
activity: ActivityLog::new(50),
audit: AuditLog::new(&storage_path),
docker_auth,
repo_index: RepoIndex::new(),
http_client,
@@ -324,7 +347,9 @@ async fn run_server(config: Config, storage: Storage) {
let app = Router::new()
.merge(public_routes)
.merge(app_routes)
-.layer(DefaultBodyLimit::max(100 * 1024 * 1024)) // 100MB default body limit
.layer(DefaultBodyLimit::max(
state.config.server.body_limit_mb * 1024 * 1024,
))
.layer(middleware::from_fn(request_id::request_id_middleware))
.layer(middleware::from_fn(metrics::metrics_middleware))
.layer(middleware::from_fn_with_state(
@@ -343,6 +368,7 @@ async fn run_server(config: Config, storage: Storage) {
version = env!("CARGO_PKG_VERSION"),
storage = state.storage.backend_name(),
auth_enabled = state.auth.is_some(),
body_limit_mb = state.config.server.body_limit_mb,
"Nora started"
);


@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::AppState;
use axum::{
extract::{Path, State},
@@ -50,6 +51,9 @@ async fn download(
"cargo",
"LOCAL",
));
state
.audit
.log(AuditEntry::new("pull", "api", "", "cargo", ""));
(StatusCode::OK, data).into_response()
}
Err(_) => StatusCode::NOT_FOUND.into_response(),


@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::docker_auth::DockerAuth;
use crate::storage::Storage;
use crate::validation::{validate_digest, validate_docker_name, validate_docker_reference};
@@ -11,7 +12,7 @@ use axum::{
extract::{Path, State},
http::{header, HeaderName, StatusCode},
response::{IntoResponse, Response},
routing::{delete, get, head, patch, put},
Json, Router,
};
use parking_lot::RwLock;
@@ -64,6 +65,8 @@ pub fn routes() -> Router<Arc<AppState>> {
)
.route("/v2/{name}/manifests/{reference}", get(get_manifest))
.route("/v2/{name}/manifests/{reference}", put(put_manifest))
.route("/v2/{name}/manifests/{reference}", delete(delete_manifest))
.route("/v2/{name}/blobs/{digest}", delete(delete_blob))
.route("/v2/{name}/tags/list", get(list_tags))
// Two-segment name routes (e.g., /v2/library/alpine/...)
.route("/v2/{ns}/{name}/blobs/{digest}", head(check_blob_ns))
@@ -84,6 +87,11 @@ pub fn routes() -> Router<Arc<AppState>> {
"/v2/{ns}/{name}/manifests/{reference}",
put(put_manifest_ns),
)
.route(
"/v2/{ns}/{name}/manifests/{reference}",
delete(delete_manifest_ns),
)
.route("/v2/{ns}/{name}/blobs/{digest}", delete(delete_blob_ns))
.route("/v2/{ns}/{name}/tags/list", get(list_tags_ns))
}
@@ -307,7 +315,17 @@ async fn upload_blob(
));
state.repo_index.invalidate("docker");
let location = format!("/v2/{}/blobs/{}", name, digest);
(
StatusCode::CREATED,
[
(header::LOCATION, location),
(
HeaderName::from_static("docker-content-digest"),
digest.to_string(),
),
],
)
.into_response()
}
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
@@ -481,6 +499,13 @@ async fn put_manifest(
"docker",
"LOCAL",
));
state.audit.log(AuditEntry::new(
"push",
"api",
&format!("{}:{}", name, reference),
"docker",
"manifest",
));
state.repo_index.invalidate("docker");
let location = format!("/v2/{}/manifests/{}", name, reference);
@@ -512,6 +537,109 @@ async fn list_tags(State(state): State<Arc<AppState>>, Path(name): Path<String>)
(StatusCode::OK, Json(json!({"name": name, "tags": tags}))).into_response()
}
// ============================================================================
// Delete handlers (Docker Registry V2 spec)
// ============================================================================
async fn delete_manifest(
State(state): State<Arc<AppState>>,
Path((name, reference)): Path<(String, String)>,
) -> Response {
if let Err(e) = validate_docker_name(&name) {
return (StatusCode::BAD_REQUEST, e.to_string()).into_response();
}
if let Err(e) = validate_docker_reference(&reference) {
return (StatusCode::BAD_REQUEST, e.to_string()).into_response();
}
let key = format!("docker/{}/manifests/{}.json", name, reference);
// If reference is a tag, also delete digest-keyed copy
let is_tag = !reference.starts_with("sha256:");
if is_tag {
if let Ok(data) = state.storage.get(&key).await {
use sha2::Digest;
let digest = format!("sha256:{:x}", sha2::Sha256::digest(&data));
let digest_key = format!("docker/{}/manifests/{}.json", name, digest);
let _ = state.storage.delete(&digest_key).await;
let digest_meta = format!("docker/{}/manifests/{}.meta.json", name, digest);
let _ = state.storage.delete(&digest_meta).await;
}
}
// Delete manifest
match state.storage.delete(&key).await {
Ok(()) => {
// Delete associated metadata
let meta_key = format!("docker/{}/manifests/{}.meta.json", name, reference);
let _ = state.storage.delete(&meta_key).await;
state.audit.log(AuditEntry::new(
"delete",
"api",
&format!("{}:{}", name, reference),
"docker",
"manifest",
));
state.repo_index.invalidate("docker");
tracing::info!(name = %name, reference = %reference, "Docker manifest deleted");
StatusCode::ACCEPTED.into_response()
}
Err(crate::storage::StorageError::NotFound) => (
StatusCode::NOT_FOUND,
Json(json!({
"errors": [{
"code": "MANIFEST_UNKNOWN",
"message": "manifest unknown",
"detail": { "name": name, "reference": reference }
}]
})),
)
.into_response(),
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
}
async fn delete_blob(
State(state): State<Arc<AppState>>,
Path((name, digest)): Path<(String, String)>,
) -> Response {
if let Err(e) = validate_docker_name(&name) {
return (StatusCode::BAD_REQUEST, e.to_string()).into_response();
}
if let Err(e) = validate_digest(&digest) {
return (StatusCode::BAD_REQUEST, e.to_string()).into_response();
}
let key = format!("docker/{}/blobs/{}", name, digest);
match state.storage.delete(&key).await {
Ok(()) => {
state.audit.log(AuditEntry::new(
"delete",
"api",
&format!("{}@{}", name, &digest[..19.min(digest.len())]),
"docker",
"blob",
));
state.repo_index.invalidate("docker");
tracing::info!(name = %name, digest = %digest, "Docker blob deleted");
StatusCode::ACCEPTED.into_response()
}
Err(crate::storage::StorageError::NotFound) => (
StatusCode::NOT_FOUND,
Json(json!({
"errors": [{
"code": "BLOB_UNKNOWN",
"message": "blob unknown to registry",
"detail": { "digest": digest }
}]
})),
)
.into_response(),
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
}
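The subtle part of `delete_manifest` is the tag branch: a tag has both a tag-keyed and a digest-keyed copy in storage, and the digest is recovered by hashing the stored manifest bytes. A Python sketch of that key derivation — `manifest_keys` is a hypothetical helper mirroring the handler's key layout, not a function in the codebase:

```python
import hashlib

def manifest_keys(name, reference, manifest_bytes=None):
    """Return every storage key a DELETE of `reference` should remove,
    mirroring the handler's docker/{name}/manifests/... layout."""
    keys = [
        f"docker/{name}/manifests/{reference}.json",
        f"docker/{name}/manifests/{reference}.meta.json",
    ]
    # A tag (anything not already a digest) also has a digest-keyed copy,
    # addressed by the sha256 of the manifest body.
    if not reference.startswith("sha256:") and manifest_bytes is not None:
        digest = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
        keys += [
            f"docker/{name}/manifests/{digest}.json",
            f"docker/{name}/manifests/{digest}.meta.json",
        ]
    return keys

keys = manifest_keys("library/alpine", "latest", b"{}")
```

Deleting by digest skips the extra hashing pass, which is why the handler gates it on `!reference.starts_with("sha256:")`.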
// ============================================================================
// Namespace handlers (for two-segment names like library/alpine)
// These combine ns/name into a single name and delegate to the main handlers
@@ -581,6 +709,22 @@ async fn list_tags_ns(
list_tags(state, Path(full_name)).await
}
async fn delete_manifest_ns(
state: State<Arc<AppState>>,
Path((ns, name, reference)): Path<(String, String, String)>,
) -> Response {
let full_name = format!("{}/{}", ns, name);
delete_manifest(state, Path((full_name, reference))).await
}
async fn delete_blob_ns(
state: State<Arc<AppState>>,
Path((ns, name, digest)): Path<(String, String, String)>,
) -> Response {
let full_name = format!("{}/{}", ns, name);
delete_blob(state, Path((full_name, digest))).await
}
/// Fetch a blob from an upstream Docker registry
async fn fetch_blob_from_upstream(
client: &reqwest::Client,
@@ -739,8 +883,16 @@ fn detect_manifest_media_type(data: &[u8]) -> String {
if schema_version == 1 {
return "application/vnd.docker.distribution.manifest.v1+json".to_string();
}
// schemaVersion 2 without mediaType - check config.mediaType to distinguish OCI vs Docker
if let Some(config) = json.get("config") {
if let Some(config_mt) = config.get("mediaType").and_then(|v| v.as_str()) {
if config_mt.starts_with("application/vnd.docker.") {
return "application/vnd.docker.distribution.manifest.v2+json".to_string();
}
// OCI or Helm or any non-docker config mediaType
return "application/vnd.oci.image.manifest.v1+json".to_string();
}
// No config.mediaType - assume docker v2
return "application/vnd.docker.distribution.manifest.v2+json".to_string();
}
// If it has "manifests" array, it's an index/list

View File

@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::AppState;
use axum::{
body::Bytes,
@@ -42,6 +43,9 @@ async fn download(State(state): State<Arc<AppState>>, Path(path): Path<String>)
"maven",
"CACHE",
));
state
.audit
.log(AuditEntry::new("cache_hit", "api", "", "maven", ""));
return with_content_type(&path, data).into_response();
}
@@ -58,6 +62,9 @@ async fn download(State(state): State<Arc<AppState>>, Path(path): Path<String>)
"maven",
"PROXY",
));
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "maven", ""));
let storage = state.storage.clone();
let key_clone = key.clone();
@@ -103,6 +110,9 @@ async fn upload(
"maven",
"LOCAL",
));
state
.audit
.log(AuditEntry::new("push", "api", "", "maven", ""));
state.repo_index.invalidate("maven");
StatusCode::CREATED
}

View File

@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::AppState;
use axum::{
body::Bytes,
@@ -48,6 +49,9 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
"npm",
"CACHE",
));
state
.audit
.log(AuditEntry::new("cache_hit", "api", "", "npm", ""));
}
return with_content_type(is_tarball, data).into_response();
}
@@ -67,6 +71,9 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
"npm",
"PROXY",
));
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "npm", ""));
}
let storage = state.storage.clone();

View File

@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::AppState;
use axum::{
extract::{Path, State},
@@ -115,6 +116,9 @@ async fn download_file(
"pypi",
"CACHE",
));
state
.audit
.log(AuditEntry::new("cache_hit", "api", "", "pypi", ""));
let content_type = if filename.ends_with(".whl") {
"application/zip"
@@ -156,6 +160,9 @@ async fn download_file(
"pypi",
"PROXY",
));
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "pypi", ""));
// Cache in local storage
let storage = state.storage.clone();

View File

@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::AppState;
use axum::{
body::Bytes,
@@ -35,6 +36,9 @@ async fn download(State(state): State<Arc<AppState>>, Path(path): Path<String>)
state
.activity
.push(ActivityEntry::new(ActionType::Pull, path, "raw", "LOCAL"));
state
.audit
.log(AuditEntry::new("pull", "api", "", "raw", ""));
// Guess content type from extension
let content_type = guess_content_type(&key);
@@ -72,6 +76,9 @@ async fn upload(
state
.activity
.push(ActivityEntry::new(ActionType::Push, path, "raw", "LOCAL"));
state
.audit
.log(AuditEntry::new("push", "api", "", "raw", ""));
StatusCode::CREATED.into_response()
}
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),

View File

@@ -11,6 +11,35 @@ use uuid::Uuid;
const TOKEN_PREFIX: &str = "nra_";
/// Access role for API tokens
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum Role {
Read,
Write,
Admin,
}
impl std::fmt::Display for Role {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Role::Read => write!(f, "read"),
Role::Write => write!(f, "write"),
Role::Admin => write!(f, "admin"),
}
}
}
impl Role {
pub fn can_write(&self) -> bool {
matches!(self, Role::Write | Role::Admin)
}
pub fn can_admin(&self) -> bool {
matches!(self, Role::Admin)
}
}
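The commit message says read-only tokens are blocked from PUT/POST/DELETE in the auth middleware. That enforcement rule can be sketched in Python; `is_allowed` is a hypothetical helper illustrating the check, not the middleware's actual API:

```python
from enum import Enum

class Role(str, Enum):
    READ = "read"
    WRITE = "write"
    ADMIN = "admin"

    def can_write(self) -> bool:
        # Mirrors Role::can_write: Write and Admin may mutate
        return self in (Role.WRITE, Role.ADMIN)

    def can_admin(self) -> bool:
        return self is Role.ADMIN

# Methods named in the commit message as blocked for read-only tokens
MUTATING = {"PUT", "POST", "DELETE"}

def is_allowed(role: Role, method: str) -> bool:
    """Reads are always allowed; mutations require can_write()."""
    return method not in MUTATING or role.can_write()
```

With `#[serde(default = "default_role")]` falling back to `Read`, tokens minted before this change deserialize as read-only rather than silently gaining write access — a fail-closed default.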
/// API Token metadata stored on disk
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TokenInfo {
@@ -20,6 +49,12 @@ pub struct TokenInfo {
pub expires_at: u64,
pub last_used: Option<u64>,
pub description: Option<String>,
#[serde(default = "default_role")]
pub role: Role,
}
fn default_role() -> Role {
Role::Read
}
/// Token store for managing API tokens
@@ -44,6 +79,7 @@ impl TokenStore {
user: &str,
ttl_days: u64,
description: Option<String>,
role: Role,
) -> Result<String, TokenError> {
// Generate random token
let raw_token = format!(
@@ -67,6 +103,7 @@ impl TokenStore {
expires_at,
last_used: None,
description,
role,
};
// Save to file
@@ -81,7 +118,7 @@ impl TokenStore {
}
/// Verify a token and return user info if valid
pub fn verify_token(&self, token: &str) -> Result<(String, Role), TokenError> {
if !token.starts_with(TOKEN_PREFIX) {
return Err(TokenError::InvalidFormat);
}
@@ -121,7 +158,7 @@ impl TokenStore {
let _ = fs::write(&file_path, json);
}
Ok((info.user, info.role))
}
/// List all tokens for a user
@@ -210,7 +247,7 @@ mod tests {
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, Some("Test token".to_string()), Role::Write)
.unwrap();
assert!(token.starts_with("nra_"));
@@ -222,10 +259,13 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let (user, role) = store.verify_token(&token).unwrap();
assert_eq!(user, "testuser");
assert_eq!(role, Role::Write);
}
#[test]
@@ -252,7 +292,9 @@ mod tests {
let store = TokenStore::new(temp_dir.path());
// Create token and manually set it as expired
let token = store
.create_token("testuser", 1, None, Role::Write)
.unwrap();
let token_hash = hash_token(&token);
let file_path = temp_dir.path().join(format!("{}.json", &token_hash[..16]));
@@ -272,9 +314,9 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
store.create_token("user1", 30, None, Role::Write).unwrap();
store.create_token("user1", 30, None, Role::Write).unwrap();
store.create_token("user2", 30, None, Role::Read).unwrap();
let user1_tokens = store.list_tokens("user1");
assert_eq!(user1_tokens.len(), 2);
@@ -291,7 +333,9 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let token_hash = hash_token(&token);
let hash_prefix = &token_hash[..16];
@@ -320,9 +364,9 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
store.create_token("user1", 30, None, Role::Write).unwrap();
store.create_token("user1", 30, None, Role::Write).unwrap();
store.create_token("user2", 30, None, Role::Read).unwrap();
let revoked = store.revoke_all_for_user("user1");
assert_eq!(revoked, 2);
@@ -336,7 +380,9 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
// First verification
store.verify_token(&token).unwrap();
@@ -352,7 +398,12 @@ mod tests {
let store = TokenStore::new(temp_dir.path());
store
.create_token(
"testuser",
30,
Some("CI/CD Pipeline".to_string()),
Role::Admin,
)
.unwrap();
let tokens = store.list_tokens("testuser");