82 Commits

Author SHA1 Message Date
9bf6615a10 docs: remove downloads badge (#63) 2026-03-27 22:11:49 +03:00
5d1c07db51 docs: add Go module proxy support to README (#62)
* docs: add Go module proxy to README, update dashboard GIF

- Add Go Modules to supported registries table
- Add Go usage example (GOPROXY)
- Add Go config.toml example
- Add /go/ endpoint to endpoints table
- Update dashboard GIF with 6 registry cards in one row
- Fix registries count: 6 package registries

* feat(ui): add Raw storage to dashboard, sidebar, and list pages

- Raw Storage card on dashboard with file count and size
- Raw in sidebar navigation with file icon
- Raw list and detail pages (/ui/raw)
- Raw mount point in mount points table
- Grid updated to 7 columns for all registry cards
- README: 7 registries, add Go module proxy docs

* docs: add product badges (release, image size, downloads)
2026-03-27 22:01:41 +03:00
52b1459be9 chore: clean up .gitignore, remove unused entries (#61) 2026-03-27 21:34:56 +03:00
325f51822f release: v0.3.0 (#60)
* release: bump version to v0.3.0, update dashboard GIF

New features in v0.3.0:
- Go module proxy (GOPROXY protocol) with caching
- Go in dashboard UI (sidebar, list, detail pages)
- Compact registry cards (6 in one row)
- Updated icons (Cargo crate, Go text mark)
- .gitleaks.toml restored

* security: update .gitignore and remove stale files

* security: update .gitignore to block dev tooling and process files
2026-03-27 21:30:41 +03:00
c8dc141b2f feat: add Go module proxy (GOPROXY protocol) (#59)
* feat: add Go module proxy (GOPROXY protocol) (#47)

Implements caching proxy for Go modules with 5 standard endpoints:
- GET /go/{module}/@v/list — list versions
- GET /go/{module}/@v/{version}.info — version metadata
- GET /go/{module}/@v/{version}.mod — go.mod file
- GET /go/{module}/@v/{version}.zip — module zip
- GET /go/{module}/@latest — latest version info

Features:
- Module path encoding/decoding per Go spec (!x → X)
- Immutable caching (.info/.mod/.zip never overwritten)
- Mutable endpoints (@v/list, @latest) refreshed from upstream
- Configurable upstream (default: proxy.golang.org)
- Separate timeout for .zip downloads (default: 120s)
- Size limit for zips (default: 100MB)
- Path traversal protection
- Dashboard integration (stats, mount points, index)
- 25 unit tests (encoding, path splitting, safety, content-type)

Closes #47

* style: cargo fmt

* feat(ui): add Go pages, compact cards, fix icons

- Go in sidebar + list/detail pages with go get command
- Dashboard: fix fallback icon (was Docker whale for Go)
- Compact registry cards: lg:grid-cols-6, all 6 in one row
- Cargo icon: crate boxes instead of truck
- Go icon: stylized Go text (sidebar + dashboard)

* fix(go): URL-decode path + send encoded paths to upstream

Go client sends %21 for ! in module paths. Axum wildcard does not
auto-decode, so we percent-decode manually. Upstream proxy.golang.org
expects encoded paths (with !), not decoded uppercase.

Tested: full Pusk build (22 modules, 135MB cached) including
SherClockHolmes/webpush-go with triple uppercase encoding.

* style: cargo fmt
2026-03-27 21:16:00 +03:00
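The case-encoding rule mentioned above (`!x → X`) comes from the GOPROXY protocol: uppercase letters in module paths are escaped as `!` plus the lowercase letter, so paths stay safe on case-insensitive filesystems. A minimal std-only sketch of both directions (helper names are illustrative, not NORA's actual code):

```rust
// GOPROXY case-encoding: "github.com/Azure" <-> "github.com/!azure".
// Hypothetical helper names; the real NORA handlers are not shown in this log.

fn encode_module_path(path: &str) -> String {
    let mut out = String::with_capacity(path.len());
    for c in path.chars() {
        if c.is_ascii_uppercase() {
            out.push('!');
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}

fn decode_module_path(encoded: &str) -> String {
    let mut out = String::with_capacity(encoded.len());
    let mut chars = encoded.chars();
    while let Some(c) = chars.next() {
        if c == '!' {
            // '!' escapes the next letter as uppercase
            if let Some(next) = chars.next() {
                out.push(next.to_ascii_uppercase());
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(
        encode_module_path("github.com/Azure/go-autorest"),
        "github.com/!azure/go-autorest"
    );
    assert_eq!(
        decode_module_path("github.com/!azure/go-autorest"),
        "github.com/Azure/go-autorest"
    );
    // Triple uppercase, as in the SherClockHolmes/webpush-go case above
    assert_eq!(encode_module_path("SherClockHolmes"), "!sher!clock!holmes");
    println!("ok");
}
```

This also explains the `%21` fix in the commit: the Go client percent-encodes `!` in URLs, so the proxy must percent-decode before storing, but forward the `!`-encoded (not uppercased) form upstream.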
a09f83ffdb fix: restore .gitleaks.toml lost during merge (#58)
The file was created on security/scorecard-hardening branch but only
the ci.yml change was cherry-picked to main — the config file itself
was left behind. CI references --config .gitleaks.toml which caused
the Security job to fail.

Adds allowlist for test placeholder tokens (nra_00112233...) that
trigger generic-api-key false positives.
2026-03-27 21:01:45 +03:00
3fe483a3a9 test: add docker push/pull, npm install, and upstream timeout integration tests (#57)
- Docker: build→push→pull→verify digest (sha256 match)
- npm: real npm install via NORA registry, verify version and integrity hash
- Upstream: verify 404 returns quickly (no retry hang)
- All 3 new test blocks pass on local run
2026-03-25 02:08:16 +03:00
d909a62ac5 feat: upstream proxy retry + Maven proxy-only (#56)
* docs: add DCO, governance model, roles, vulnerability credit policy

* security: migrate token hashing from SHA256 to Argon2id

- Replace unsalted SHA256 with Argon2id (salted) for API token hashing
- Fix TOCTOU race: replace exists()+read() with read()+match on error
- Set chmod 600 on token files and 700 on token storage directory
- Auto-migrate legacy SHA256 tokens to Argon2id on first verification
- Add regression tests: argon2 format, legacy migration, file permissions

* feat: add retry with timeout for upstream proxy, mark Maven proxy-only

- Add shared proxy_fetch() and proxy_fetch_text() with 1 retry on 5xx/timeout
- Replace duplicated fetch_from_proxy in maven.rs, npm.rs, pypi.rs
- Mark Maven as proxy-only in README (no full repository manager support)
- Existing timeout config (30s maven/npm/pypi, 60s docker) preserved
- 4xx errors fail immediately without retry
2026-03-25 01:56:59 +03:00
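The retry policy described above (one retry on 5xx/timeout, immediate failure on 4xx) can be sketched with a generic closure. The error type and names are stand-ins for illustration; the real `proxy_fetch()` works over HTTP responses:

```rust
// Sketch of the "1 retry on 5xx/timeout" policy; 4xx fails immediately.
// FetchError is a stand-in type, not NORA's actual error enum.

#[derive(Debug, Clone, Copy, PartialEq)]
enum FetchError {
    Status(u16),
    Timeout,
}

fn should_retry(err: &FetchError) -> bool {
    match err {
        FetchError::Timeout => true,
        FetchError::Status(code) => (500u16..600).contains(code),
    }
}

fn fetch_with_retry<T>(
    mut attempt: impl FnMut() -> Result<T, FetchError>,
) -> Result<T, FetchError> {
    match attempt() {
        Err(e) if should_retry(&e) => attempt(), // exactly one retry
        other => other,
    }
}

fn main() {
    // First call returns 502, retry succeeds
    let mut calls = 0;
    let r = fetch_with_retry(|| {
        calls += 1;
        if calls == 1 { Err(FetchError::Status(502)) } else { Ok("body") }
    });
    assert_eq!(r, Ok("body"));
    assert_eq!(calls, 2);

    // 404 fails immediately without a retry
    let mut calls4 = 0;
    let r4: Result<&str, _> = fetch_with_retry(|| {
        calls4 += 1;
        Err(FetchError::Status(404))
    });
    assert_eq!(r4, Err(FetchError::Status(404)));
    assert_eq!(calls4, 1);
    println!("ok");
}
```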
432e8d35af security: migrate token hashing from SHA256 to Argon2id (#55)
* docs: add DCO, governance model, roles, vulnerability credit policy

* security: migrate token hashing from SHA256 to Argon2id

- Replace unsalted SHA256 with Argon2id (salted) for API token hashing
- Fix TOCTOU race: replace exists()+read() with read()+match on error
- Set chmod 600 on token files and 700 on token storage directory
- Auto-migrate legacy SHA256 tokens to Argon2id on first verification
- Add regression tests: argon2 format, legacy migration, file permissions
2026-03-24 22:56:43 +00:00
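The auto-migration step above implies a format check on the stored hash: legacy tokens were unsalted SHA-256 hex digests, while Argon2id produces a PHC-formatted string starting with `$argon2id$`. A std-only sketch of that detection (helper names are hypothetical; the actual hashing uses the argon2 crate and is not shown here):

```rust
// Sketch: distinguish legacy SHA-256 hex tokens (64 hex chars) from
// Argon2id PHC strings so legacy tokens can be re-hashed on first
// successful verification. Names are illustrative, not NORA's code.

fn is_legacy_sha256(stored: &str) -> bool {
    stored.len() == 64 && stored.chars().all(|c| c.is_ascii_hexdigit())
}

fn needs_migration(stored: &str) -> bool {
    !stored.starts_with("$argon2id$") && is_legacy_sha256(stored)
}

fn main() {
    assert!(needs_migration(&"ab".repeat(32))); // 64 hex chars: legacy
    assert!(!needs_migration(
        "$argon2id$v=19$m=19456,t=2,p=1$c2FsdA$aGFzaA"
    ));
    assert!(!needs_migration("not-a-hash"));
    println!("ok");
}
```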
975264c353 fix(deps): update rustls-webpki 0.103.9 -> 0.103.10 (RUSTSEC-2026-0049)
Also revert codeql-action to tag pin in scorecard.yml —
scorecard webapp rejects SHA pins for this specific action.
2026-03-20 23:07:09 +00:00
533f3cd795 release: bump version to v0.2.35 2026-03-20 22:54:30 +00:00
8fc741c8db feat: add anonymous read mode (NORA_AUTH_ANONYMOUS_READ)
When auth is enabled with anonymous_read=true, GET/HEAD requests
are allowed without credentials (pull/download), while write
operations (PUT/POST/DELETE/PATCH) still require authentication.

Use case: public demo registries, read-only mirrors.

Config: NORA_AUTH_ANONYMOUS_READ=true or auth.anonymous_read=true
2026-03-20 22:48:41 +00:00
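The gate described above reduces to a simple method check: reads pass without credentials only when auth is on and `anonymous_read` is set. A minimal sketch, with illustrative names rather than NORA's actual middleware:

```rust
// Sketch of the anonymous-read gate: GET/HEAD bypass auth when
// anonymous_read=true; PUT/POST/DELETE/PATCH always require a token.

fn requires_auth(method: &str, auth_enabled: bool, anonymous_read: bool) -> bool {
    if !auth_enabled {
        return false;
    }
    let is_read = matches!(method, "GET" | "HEAD");
    !(anonymous_read && is_read)
}

fn main() {
    assert!(!requires_auth("GET", true, true));      // pull allowed anonymously
    assert!(requires_auth("PUT", true, true));       // push still needs a token
    assert!(requires_auth("GET", true, false));      // default: reads need auth too
    assert!(!requires_auth("DELETE", false, false)); // auth disabled entirely
    println!("ok");
}
```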
9709471485 fix: address code review findings
- Pin slsa-github-generator and codeql-action by SHA (not tag)
- Replace anonymous tuple with GroupedActivity struct for readability
- Replace unwrap() with if-let for safety
- Add warning message on attestation failure instead of silent || true
- Fix clippy: map_or -> is_some_and
2026-03-20 22:14:16 +00:00
2ec0fe4a28 release: bump version to v0.2.34 2026-03-20 19:46:42 +00:00
9f951ccc82 ui: fix table cell padding alignment
Add px-4 to all td cells in Mount Points and Activity tables
to match th header padding. Remove non-functional px-4 from
tbody elements (CSS padding does not apply to tbody).
2026-03-20 19:44:14 +00:00
f0cbb22bd9 fix(deps): update tar 0.4.44 -> 0.4.45
Fixes CVE-2026-33055 (PAX size header bypass) and
CVE-2026-33056 (symlink chmod directory traversal).
2026-03-20 19:32:46 +00:00
ceb75e0ce3 ui: group consecutive identical activity entries
Repeated cache hits for the same artifact now show as
"artifact (x4)" instead of 4 identical rows.
Reduces visual noise in dashboard activity log.
2026-03-20 19:23:41 +00:00
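The grouping above is a run-length collapse over consecutive entries. A std-only sketch (the real code uses the `GroupedActivity` struct introduced in a later commit; this version uses plain tuples for brevity):

```rust
// Sketch: collapse consecutive identical activity entries into
// (entry, count) pairs, rendering "artifact (x4)" instead of 4 rows.

fn group_consecutive(entries: &[&str]) -> Vec<(String, usize)> {
    let mut grouped: Vec<(String, usize)> = Vec::new();
    for e in entries {
        match grouped.last_mut() {
            // Same artifact as the previous row: bump the counter
            Some((last, n)) if last.as_str() == *e => *n += 1,
            _ => grouped.push((e.to_string(), 1)),
        }
    }
    grouped
}

fn main() {
    let log = ["alpine", "alpine", "alpine", "alpine", "lodash", "alpine"];
    let g = group_consecutive(&log);
    assert_eq!(
        g,
        vec![
            ("alpine".into(), 4), // shown as "alpine (x4)"
            ("lodash".into(), 1),
            ("alpine".into(), 1), // non-consecutive repeat stays separate
        ]
    );
    println!("ok");
}
```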
58d14a820e docs: remove hardcoded scorecard version from README 2026-03-20 11:35:14 +00:00
ef5f4e52c3 docs: restructure README for conversion
- Move badges from top to Security & Trust section
- Add dashboard GIF (EN/RU crossfade) as first visual
- Add "Why NORA" section with key differentiators
- Add "Used by" production reference
- Add binary install option
- Add Supported Registries table with mount points
- Streamline features into scannable list
- Remove emoji from footer
- Add comparison link placeholder
2026-03-20 11:25:32 +00:00
35e930295c test: add 82 unit tests across 7 modules
Coverage targets:
- activity_log: ActionType display, ActivityLog push/recent/all/bounded
- audit: AuditEntry, AuditLog write/read with tempdir
- config: defaults for all sub-configs, env overrides, TOML parsing
- dashboard_metrics: record_download/upload, cache_hit_rate, persistence
- error: constructors, Display, IntoResponse for all variants
- metrics: detect_registry for all protocol paths
- repo_index: paginate, RegistryIndex basics, RepoIndex invalidate

Total tests: 103 -> 185
2026-03-20 10:08:49 +00:00
3246bd9ffd ci: add test coverage with tarpaulin and dynamic badge via gist 2026-03-20 09:32:22 +00:00
cfa6a4d0ed chore: remove internal QA scripts from public repo 2026-03-19 12:42:53 +00:00
79fa8e0d4a chore: add CODEOWNERS, CHANGELOG v0.2.33, SLSA provenance, QA scripts 2026-03-19 12:39:58 +00:00
b23765bebd fix: update cosign-installer SHA to v3.8.0 2026-03-19 11:42:53 +00:00
b91c5531b6 release: bump version to v0.2.33 (#46) 2026-03-19 11:41:06 +00:00
596b18a3fa release: bump version to v0.2.33 2026-03-19 11:08:51 +00:00
07aed45518 fix: use tag for codeql-action in scorecard (webapp rejects SHA pins) 2026-03-19 10:42:14 +00:00
4ec963d41c fix: add repo_token and permissions to scorecard workflow 2026-03-19 10:35:57 +00:00
7f7e3e4986 fix: revert scorecard-action to tag (Docker action incompatible with SHA pin) 2026-03-19 10:33:27 +00:00
d51f176fd8 fix: use commit SHA for scorecard-action (not tag SHA) 2026-03-19 09:21:29 +00:00
34d30433cb fix: correct scorecard-action SHA pin for v2.4.3 2026-03-19 09:19:41 +00:00
a6db016d7d ci: retrigger scorecard workflow 2026-03-19 09:18:00 +00:00
fbd2aa35e8 ci: improve OpenSSF Scorecard rating (#45)
- Add CodeQL workflow for SAST analysis (Actions language)
- Pin scorecard-action and codeql-action by SHA in scorecard.yml
- Add cargo-audit SARIF upload for security tab integration
2026-03-19 11:51:11 +03:00
fa2cd45ed3 security: harden Docker registry and container runtime
- Verify blob digest (SHA256) on upload, reject mismatches (DIGEST_INVALID)
- Reject sha512 digests (only sha256 supported)
- Add upload session limits: max 100 concurrent, 2GB per session, 30min TTL
- Bind upload sessions to repository name (prevent session fixation)
- Filter .meta.json from Docker tag list (fix ArgoCD Image Updater recursion)
- Fix catalog to show namespaced images (library/alpine instead of library)
- Add security headers: CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy
- Run containers as non-root user (USER nora) in all 3 Dockerfiles
- Add configurable NORA_MAX_UPLOAD_SESSIONS and NORA_MAX_UPLOAD_SESSION_SIZE_MB
2026-03-19 08:29:28 +00:00
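The digest rejection described above starts with a format check: only `sha256:` followed by 64 hex characters is acceptable. A sketch of that check alone (the real verification additionally hashes the uploaded bytes and compares, which is omitted here):

```rust
// Sketch: accept only "sha256:" + 64 hex chars; sha512 and malformed
// digests are rejected with DIGEST_INVALID. Illustrative helper only.

fn is_acceptable_digest(digest: &str) -> bool {
    match digest.strip_prefix("sha256:") {
        Some(hex) => hex.len() == 64 && hex.chars().all(|c| c.is_ascii_hexdigit()),
        None => false, // covers sha512: and anything malformed
    }
}

fn main() {
    assert!(is_acceptable_digest(&format!("sha256:{}", "a".repeat(64))));
    assert!(!is_acceptable_digest(&format!("sha512:{}", "a".repeat(128))));
    assert!(!is_acceptable_digest("sha256:short"));
    println!("ok");
}
```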
f76dab1184 fix: pin ClusterFuzzLite base image by SHA, fix Docker tag double-suffix 2026-03-18 13:20:35 +00:00
e6043a6e2f fix: use project gitleaks config in CI, relax rules for documentation examples 2026-03-18 12:48:05 +00:00
54a08153f1 docs: add public roadmap, cosign verification in install script 2026-03-18 12:36:51 +00:00
a36287a627 community: add issue/PR templates, code of conduct, update contributing guide 2026-03-18 12:22:10 +00:00
eb1b8db01e chore: remove unused crates and demo traffic scripts
- Remove nora-cli (unimplemented stub)
- Remove nora-storage (standalone S3 server, not used)
- Remove demo traffic generator and systemd service
2026-03-18 12:19:58 +00:00
58f98949e4 style: clean up code comments 2026-03-18 11:23:11 +00:00
67b55fba92 chore: repo cleanup — remove dead crates from workspace, stale files, duplicate assets
- Remove nora-cli and nora-storage from workspace (stub crates, not used)
- Remove root install.sh (duplicate of dist/install.sh)
- Remove root logo.jpg (duplicate of ui/logo.jpg)
- Remove committed SBOM .cdx.json files (generated by CI in release)
- Remove stale .githooks/ (real hook is in .git/hooks/)
- Update version in docs-ru to 0.2.32
- Add *.cdx.json to .gitignore
2026-03-18 11:20:22 +00:00
e38e4ab4fb test: E2E smoke tests + Playwright browser tests (23 tests)
smoke.sh:
- Full E2E smoke test: health, npm proxy/publish/security, Maven, PyPI, Docker, Raw, UI, mirror CLI
- Self-contained: starts NORA, runs tests, cleans up

Playwright (tests/e2e/):
- Dashboard: page load, registry sections visible, npm count > 0, Docker stats
- npm: URL rewriting, scoped packages, tarball download, publish, immutability, security
- Docker: v2 check, catalog, manifest push/pull, tags list
- Maven: proxy download, upload
- PyPI: simple index, package page
- Raw: upload and download
- Health, metrics, OpenAPI endpoints

All 23 tests pass in 4.7s against live NORA instance.
2026-03-18 11:04:19 +00:00
3fe2ae166d docs: add CII Best Practices passing badge 2026-03-18 10:46:51 +00:00
ccaf543bcc security: pin Docker base images by SHA, cosign signing in release, branch protection
- Pin alpine:3.20 by SHA digest in all Dockerfiles (Pinned-Dependencies)
- Add cosign keyless signing for Docker images and binary (Signed-Releases)
- Enable branch protection: strict status checks, linear history, no force push
- Add .sig and .pem to GitHub Release assets
2026-03-18 09:49:45 +00:00
68ed6546c8 docs: changelog v0.2.32 2026-03-18 09:43:49 +00:00
eccb80c7f2 fix: allow NCSA license for libfuzzer-sys in cargo-deny 2026-03-18 09:27:30 +00:00
f27b59b5a8 fix: add MIT license to nora-fuzz crate (cargo-deny compliance) 2026-03-18 09:23:31 +00:00
3fc57cc351 fix: correct cargo-deny key for unused license allowance 2026-03-18 09:19:50 +00:00
698b8d66ca fix: allow unused license entries in cargo-deny config 2026-03-18 09:15:25 +00:00
9de623a14e fix: Docker dashboard for namespaced images, library/ auto-prepend for Hub official images (v0.2.32)
Docker dashboard:
- build_docker_index now finds manifests segment by position, not fixed index
- Correctly indexes library/alpine, grafana/grafana, and other namespaced images

Docker proxy:
- Auto-prepend library/ for single-segment names when upstream returns 404
- Applies to both manifests and blobs
- nginx, alpine, node now work without explicit library/ prefix
- Cached under original name for future local hits
2026-03-18 08:07:53 +00:00
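The fallback logic above hinges on whether the image name is already namespaced. A std-only sketch of the name check (illustrative helper; the real code applies this only after the upstream returns 404, for both manifests and blobs):

```rust
// Sketch of the Docker Hub fallback: single-segment names ("alpine")
// get "library/" prepended for a second upstream attempt.

fn library_fallback(name: &str) -> Option<String> {
    if name.contains('/') {
        None // already namespaced, e.g. grafana/grafana
    } else {
        Some(format!("library/{name}"))
    }
}

fn main() {
    assert_eq!(
        library_fallback("alpine"),
        Some("library/alpine".to_string())
    );
    assert_eq!(library_fallback("grafana/grafana"), None);
    println!("ok");
}
```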
cb37813f11 security: add cargo-fuzz targets and ClusterFuzzLite config
Fuzz targets:
- fuzz_validation: storage key, Docker name, digest, reference validators
- fuzz_docker_manifest: Docker/OCI manifest media type detection

Infrastructure:
- lib.rs exposing validation module and docker_fuzz for fuzz harnesses
- ClusterFuzzLite project config (libfuzzer + ASan)
2026-03-17 11:20:17 +00:00
bc9604bac3 fix: use tags for scorecard webapp verification 2026-03-17 11:04:48 +00:00
15d12d073a fix: use scorecard-action by tag for webapp verification 2026-03-17 11:02:14 +00:00
f1259e49b2 docs: add OpenSSF Scorecard badge 2026-03-17 10:41:00 +00:00
0727b421a9 Merge pull request #32 from getnora-io/dependabot/cargo/tracing-subscriber-0.3.23
chore(deps): bump tracing-subscriber from 0.3.22 to 0.3.23
2026-03-17 13:38:22 +03:00
22b01c9d78 Merge pull request #31 from getnora-io/dependabot/cargo/clap-4.6.0
chore(deps): bump clap from 4.5.60 to 4.6.0
2026-03-17 13:38:20 +03:00
73f3994b80 Merge pull request #30 from getnora-io/dependabot/cargo/tempfile-3.27.0
chore(deps): bump tempfile from 3.26.0 to 3.27.0
2026-03-17 13:38:17 +03:00
debf31d4b9 Merge pull request #33 from getnora-io/security/scorecard-hardening
security: OpenSSF Scorecard hardening
2026-03-17 13:36:40 +03:00
7df118d488 security: harden OpenSSF Scorecard compliance
- Pin all GitHub Actions by SHA hash (Pinned-Dependencies)
- Add top-level permissions: read-all (Token-Permissions)
- Add explicit job-level permissions (least privilege)
- Add OpenSSF Scorecard workflow with weekly schedule
- Publish scorecard results to scorecard.dev and GitHub Security tab
2026-03-17 10:30:15 +00:00
dependabot[bot]
fd1a4c5fd5 chore(deps): bump tracing-subscriber from 0.3.22 to 0.3.23
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.3.22 to 0.3.23.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.22...tracing-subscriber-0.3.23)

---
updated-dependencies:
- dependency-name: tracing-subscriber
  dependency-version: 0.3.23
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:25:13 +00:00
dependabot[bot]
d3df26b61a chore(deps): bump clap from 4.5.60 to 4.6.0
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.60 to 4.6.0.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.60...clap_complete-v4.6.0)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:25:03 +00:00
dependabot[bot]
1d61f50ab0 chore(deps): bump tempfile from 3.26.0 to 3.27.0
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.26.0 to 3.27.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.26.0...v3.27.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-version: 3.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:24:55 +00:00
8646926294 docs: Russian documentation — admin guide, user guide, technical spec (Минцифры — Russian Ministry of Digital Development) 2026-03-16 13:58:17 +00:00
c7e15b47aa docs: add CycloneDX SBOM (238 components, 0 vulnerabilities) 2026-03-16 13:29:59 +00:00
b7d303bf54 feat: nora mirror CLI + systemd + install script
nora mirror:
- Pre-fetch dependencies through NORA proxy cache
- npm: --lockfile (v1/v2/v3) and --packages with --all-versions
- pip: requirements.txt parser
- cargo: Cargo.lock parser
- maven: dependency:list output parser
- Concurrent downloads (--concurrency, default 8)
- Progress bar with indicatif
- Health check before start

dist/:
- nora.service — systemd unit with security hardening
- nora.env.example — environment configuration template
- install.sh — automated install (binary + user + systemd + config)

Tested: 103 tests pass, 0 clippy warnings, cargo audit clean.
Smoke: mirrored 70 npm packages from real lockfile in 5.4s.
2026-03-16 13:27:37 +00:00
82ea93d07f docs: changelog v0.2.31 2026-03-16 12:51:10 +00:00
01027888cb feat: npm full proxy — URL rewriting, scoped packages, publish, integrity cache (v0.2.31)
npm proxy:
- Rewrite tarball URLs in metadata to point to NORA (was broken — tarballs bypassed NORA)
- Scoped packages (@scope/package) full support in handler and repo index
- Metadata cache TTL (NORA_NPM_METADATA_TTL, default 300s) with stale-while-revalidate
- proxy_auth now wired into fetch_from_proxy (was configured but unused)

npm publish:
- PUT /npm/{package} — accepts standard npm publish payload
- Version immutability — 409 Conflict on duplicate version
- Tarball URL rewriting in published metadata

Security:
- SHA256 integrity verification on cached tarballs (immutable cache)
- Attachment filename validation (path traversal protection)
- Package name mismatch detection (URL vs payload)

Config:
- npm.metadata_ttl — configurable cache TTL (env: NORA_NPM_METADATA_TTL)
2026-03-16 12:32:16 +00:00
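The URL rewriting above is the key fix: without it, npm fetched metadata through NORA but downloaded tarballs straight from the upstream. A string-level sketch under assumed URLs (the real code rewrites `dist.tarball` fields inside parsed JSON metadata; the `/npm` mount point matches the endpoints table):

```rust
// Sketch: rewrite upstream tarball URLs to point at NORA so downloads
// hit the cache. Hosts below are example values, not fixed config.

fn rewrite_tarball(url: &str, upstream: &str, nora_base: &str) -> String {
    if let Some(rest) = url.strip_prefix(upstream) {
        format!("{nora_base}/npm{rest}")
    } else {
        url.to_string() // leave foreign URLs untouched
    }
}

fn main() {
    let rewritten = rewrite_tarball(
        "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
        "https://registry.npmjs.org",
        "http://localhost:4000",
    );
    assert_eq!(
        rewritten,
        "http://localhost:4000/npm/lodash/-/lodash-4.17.21.tgz"
    );
    println!("ok");
}
```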
b2be7102fe style: cargo fmt 2026-03-16 08:58:27 +00:00
a9996b6b54 fix: dashboard — docker namespaced repos, npm proxy cache, upstream display (v0.2.30) 2026-03-16 08:55:33 +00:00
2cca9919ff docs: rewrite README — new slogan, roadmap, trim TLS/FSTEC, fix config example 2026-03-16 07:39:43 +00:00
68f4bb2168 fix: clean up stale smoke test container before run 2026-03-15 22:25:37 +00:00
9c8f9f9a08 docs: trim README, link to docs site, fix website URL 2026-03-15 22:21:08 +00:00
cf5d84ef0a fix: smoke test port mapping (4000, not 5000) 2026-03-15 21:54:13 +00:00
400ad97457 docs: update CHANGELOG and README for v0.2.29 upstream auth 2026-03-15 21:50:14 +00:00
e4890b457b v0.2.29: upstream auth, remove dead code, version bump
- Remove unused DockerAuth::fetch_with_auth() method
- Fix basic_auth_header docstring
- Bump to v0.2.29
2026-03-15 21:42:49 +00:00
7345dfc7e7 refactor: extract basic_auth_header helper, add plaintext credential warnings
- basic_auth_header() in config.rs replaces 6 inline STANDARD.encode calls
- warn_plaintext_credentials() logs warning at startup if auth is in config.toml
- All protocol handlers use shared helper instead of duplicating base64 logic
2026-03-15 21:37:51 +00:00
e02e63a972 feat: upstream auth for all protocols (Docker, Maven, npm, PyPI)
Wire up basic auth credentials for upstream registry proxying:
- Docker: pass configured auth to Bearer token requests
- Maven: support url|auth format in NORA_MAVEN_PROXIES env var
- npm: add NORA_NPM_PROXY_AUTH env var
- PyPI: add NORA_PYPI_PROXY_AUTH env var
- Mask credentials in logs (never log plaintext passwords)

Config examples:
  NORA_DOCKER_UPSTREAMS="https://registry.corp.com|user:pass"
  NORA_MAVEN_PROXIES="https://nexus.corp.com/maven2|user:pass"
  NORA_NPM_PROXY_AUTH="user:pass"
  NORA_PYPI_PROXY_AUTH="user:pass"
2026-03-15 21:29:20 +00:00
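The `user:pass` values above become an HTTP `Authorization: Basic <base64>` header. NORA uses the base64 crate's STANDARD engine for this; the sketch below inlines a minimal std-only encoder so it runs standalone, and is illustrative rather than the project's actual helper:

```rust
// Sketch of basic_auth_header(): "user:pass" -> "Basic dXNlcjpwYXNz".
// Minimal base64 encoder inlined for a dependency-free example.

const B64: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64_encode(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = u32::from_be_bytes([0, b[0], b[1], b[2]]);
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        for (i, &x) in idx.iter().enumerate() {
            if i <= chunk.len() {
                out.push(B64[x as usize] as char);
            } else {
                out.push('='); // pad short trailing chunks
            }
        }
    }
    out
}

fn basic_auth_header(credentials: &str) -> String {
    format!("Basic {}", base64_encode(credentials.as_bytes()))
}

fn main() {
    assert_eq!(basic_auth_header("user:pass"), "Basic dXNlcjpwYXNz");
    println!("ok");
}
```

Centralizing this in one helper is also what makes the credential-masking commit above practical: one place builds the header, so no call site ever logs the plaintext pair.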
028e98759a fix: integration tests match actual protocol support
- Docker, Maven: full push/pull (write support exists)
- npm, PyPI, Cargo: endpoint checks only (read-only proxy, no publish yet)
2026-03-15 19:58:36 +00:00
c351ce3534 feat: add Maven, PyPI, Cargo integration tests
- Maven: PUT artifact, GET and verify checksum
- PyPI: twine upload + pip install
- Cargo: API endpoint check
- Now testing all 5 protocols: Docker, npm, Maven, PyPI, Cargo
2026-03-15 19:53:27 +00:00
61c3f07aac fix: add npm auth token for integration test publish 2026-03-15 19:49:54 +00:00
314c038d94 feat: add integration tests, release runbook, cache fallback
- CI: integration job — build NORA, docker push/pull, npm publish/install, API checks
- release: cache-from with ignore-error=true (no dependency on localhost:5000)
- RELEASE_RUNBOOK.md: rollback procedure, deploy order, verification steps
2026-03-15 19:36:38 +00:00
582e8fbed1 fix: cargo fmt, add .gitleaks.toml allowlist for doc examples
- remove extra blank lines in openapi.rs and secrets/mod.rs
- allowlist commit 92155cf (curl -u admin:yourpassword in README)
2026-03-15 19:27:36 +00:00
81 changed files with 7008 additions and 1431 deletions


@@ -0,0 +1,9 @@
FROM rust:1.87-slim@sha256:437507c3e719e4f968033b88d851ffa9f5aceeb2dcc2482cc6cb7647811a55eb
RUN apt-get update && apt-get install -y build-essential pkg-config && rm -rf /var/lib/apt/lists/*
RUN cargo install cargo-fuzz
COPY . /src
WORKDIR /src
RUN cd fuzz && cargo fuzz build 2>/dev/null || true


@@ -0,0 +1,5 @@
language: rust
fuzzing_engines:
- libfuzzer
sanitizers:
- address


@@ -1,142 +0,0 @@
#!/bin/bash
# Pre-commit hook to prevent accidental commits of sensitive files
# Enable: git config core.hooksPath .githooks
set -e

RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m'

# Allowed file extensions (whitelist)
ALLOWED_EXTENSIONS=(
    '\.rs$'
    '\.toml$'
    '\.lock$'
    '\.yml$'
    '\.yaml$'
    '\.json$'
    '\.sh$'
    '\.html$'
    '\.css$'
    '\.js$'
    '\.gitignore$'
    '\.dockerignore$'
    'Dockerfile$'
    'LICENSE$'
    'Makefile$'
)

# Extensions that trigger a warning (not blocked)
WARN_EXTENSIONS=(
    '\.md$'
)

# Always blocked patterns (regardless of extension)
BLOCKED_PATTERNS=(
    '\.env$'
    '\.env\.'
    '\.key$'
    '\.pem$'
    '\.p12$'
    '\.pfx$'
    '\.htpasswd$'
    'secret'
    'credential'
    'password'
    '\.bak$'
    '\.swp$'
    '\.swo$'
    'node_modules/'
    'target/debug/'
    '\.DS_Store'
)

# Get staged files (only NEW files, not already tracked)
STAGED_FILES=$(git diff --cached --name-only --diff-filter=A)
if [ -z "$STAGED_FILES" ]; then
    # No new files, only modifications to existing - allow
    exit 0
fi

# Build patterns
ALLOWED_PATTERN=$(IFS='|'; echo "${ALLOWED_EXTENSIONS[*]}")
WARN_PATTERN=$(IFS='|'; echo "${WARN_EXTENSIONS[*]}")
BLOCKED_PATTERN=$(IFS='|'; echo "${BLOCKED_PATTERNS[*]}")

# Check for blocked patterns first
BLOCKED_FILES=$(echo "$STAGED_FILES" | grep -iE "$BLOCKED_PATTERN" || true)
if [ -n "$BLOCKED_FILES" ]; then
    echo -e "${RED}BLOCKED: Suspicious files detected in commit${NC}"
    echo ""
    echo -e "${YELLOW}Files:${NC}"
    echo "$BLOCKED_FILES" | sed 's/^/ - /'
    echo ""
    echo "If intentional, use: git commit --no-verify"
    exit 1
fi

# Check for files with unknown extensions
UNKNOWN_FILES=""
WARN_FILES=""
while IFS= read -r file; do
    [ -z "$file" ] && continue
    if echo "$file" | grep -qE "$BLOCKED_PATTERN"; then
        continue # Already handled above
    elif echo "$file" | grep -qE "$WARN_PATTERN"; then
        WARN_FILES="$WARN_FILES$file"$'\n'
    elif ! echo "$file" | grep -qE "$ALLOWED_PATTERN"; then
        UNKNOWN_FILES="$UNKNOWN_FILES$file"$'\n'
    fi
done <<< "$STAGED_FILES"

# Warn about .md files
if [ -n "$WARN_FILES" ]; then
    echo -e "${YELLOW}WARNING: Markdown files in commit:${NC}"
    echo "$WARN_FILES" | sed '/^$/d' | sed 's/^/ - /'
    echo ""
fi

# Block unknown extensions
if [ -n "$UNKNOWN_FILES" ]; then
    echo -e "${RED}BLOCKED: Files with unknown extensions:${NC}"
    echo "$UNKNOWN_FILES" | sed '/^$/d' | sed 's/^/ - /'
    echo ""
    echo "Allowed extensions: rs, toml, lock, yml, yaml, json, sh, html, css, js, md"
    echo "If intentional, use: git commit --no-verify"
    exit 1
fi

# Check for large files (>5MB)
LARGE_FILES=$(echo "$STAGED_FILES" | while read f; do
    if [ -f "$f" ]; then
        size=$(stat -f%z "$f" 2>/dev/null || stat -c%s "$f" 2>/dev/null || echo 0)
        if [ "$size" -gt 5242880 ]; then
            echo "$f ($(numfmt --to=iec $size 2>/dev/null || echo "${size}B"))"
        fi
    fi
done)
if [ -n "$LARGE_FILES" ]; then
    echo -e "${YELLOW}WARNING: Large files (>5MB) in commit:${NC}"
    echo "$LARGE_FILES" | sed 's/^/ - /'
    echo ""
fi

# Run cargo fmt check if Rust files changed
if git diff --cached --name-only | grep -q '\.rs$'; then
    if command -v cargo &> /dev/null; then
        if ! cargo fmt --check &> /dev/null; then
            echo -e "${RED}BLOCKED: cargo fmt check failed${NC}"
            echo "Run: cargo fmt"
            exit 1
        fi
    fi
fi

exit 0

.github/CODEOWNERS (new file, +2)

@@ -0,0 +1,2 @@
# Default owner for everything
* @devitway

.github/ISSUE_TEMPLATE/bug_report.yml (new file, +39)

@@ -0,0 +1,39 @@
name: Bug Report
description: Report a bug or unexpected behavior
labels: ["bug"]
body:
- type: textarea
id: description
attributes:
label: Description
description: What happened? What did you expect?
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce
description: How can we reproduce the issue?
- type: input
id: version
attributes:
label: NORA version
placeholder: "0.2.32"
- type: dropdown
id: protocol
attributes:
label: Registry protocol
options:
- Docker
- npm
- Maven
- PyPI
- Cargo
- Raw
- UI/Dashboard
- Other
- type: textarea
id: logs
attributes:
label: Logs / error output
render: shell


@@ -0,0 +1,30 @@
name: Feature Request
description: Suggest a new feature or improvement
labels: ["enhancement"]
body:
- type: textarea
id: problem
attributes:
label: Problem
description: What problem does this solve?
validations:
required: true
- type: textarea
id: solution
attributes:
label: Proposed solution
description: How would you like it to work?
- type: dropdown
id: protocol
attributes:
label: Related protocol
options:
- Docker
- npm
- Maven
- PyPI
- Cargo
- Raw
- CLI
- UI/Dashboard
- General

.github/assets/dashboard.gif (new binary file, 124 KiB)

.github/pull_request_template.md (new file, +15)

@@ -0,0 +1,15 @@
## What does this PR do?
<!-- Brief description of the change -->
## Related issue
<!-- Link to issue, e.g. Fixes #123 -->
## Checklist
- [ ] `cargo fmt` passes
- [ ] `cargo clippy` passes with no warnings
- [ ] `cargo test --lib --bin nora` passes
- [ ] New functionality includes tests
- [ ] CHANGELOG.md updated (if user-facing change)


@@ -6,18 +6,20 @@ on:
pull_request:
branches: [main]
permissions: read-all
jobs:
test:
name: Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
- name: Check formatting
run: cargo fmt --check
@@ -28,41 +30,103 @@ jobs:
- name: Run tests
run: cargo test --package nora-registry
coverage:
name: Coverage
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
permissions:
contents: read
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
- name: Install tarpaulin
run: cargo install cargo-tarpaulin --locked
- name: Run coverage
run: |
cargo tarpaulin --package nora-registry --out json --output-dir coverage/ 2>&1 | tee /tmp/tarpaulin.log
COVERAGE=$(python3 -c "import json; d=json.load(open('coverage/tarpaulin-report.json')); print(f\"{d['coverage']:.1f}\")")
echo "COVERAGE=$COVERAGE" >> $GITHUB_ENV
echo "Coverage: $COVERAGE%"
- name: Update coverage badge
uses: schneegans/dynamic-badges-action@e9a478b16159b4d31420099ba146cdc50f134483 # v1.7.0
with:
auth: ${{ secrets.GIST_TOKEN }}
gistID: ${{ vars.COVERAGE_GIST_ID }}
filename: nora-coverage.json
label: coverage
message: ${{ env.COVERAGE }}%
valColorRange: ${{ env.COVERAGE }}
minColorRange: 0
maxColorRange: 100
security:
name: Security
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write # for uploading SARIF to GitHub Security tab
security-events: write
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0 # full history required for gitleaks
fetch-depth: 0
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
# ── Secrets ────────────────────────────────────────────────────────────
- name: Gitleaks — scan for hardcoded secrets
run: |
curl -sL https://github.com/gitleaks/gitleaks/releases/download/v8.21.2/gitleaks_8.21.2_linux_x64.tar.gz \
| tar xz -C /usr/local/bin gitleaks
gitleaks detect --source . --exit-code 1 --report-format sarif --report-path gitleaks.sarif
gitleaks detect --source . --config .gitleaks.toml --exit-code 1 --report-format sarif --report-path gitleaks.sarif
# ── CVE in Rust dependencies ────────────────────────────────────────────
- name: Install cargo-audit
run: cargo install cargo-audit --locked
- name: cargo audit — RustSec advisory database
run: cargo audit --ignore RUSTSEC-2025-0119 # known: number_prefix via indicatif
run: |
cargo audit --ignore RUSTSEC-2025-0119
cargo audit --ignore RUSTSEC-2025-0119 --json > /tmp/audit.json || true
- name: Upload cargo-audit results as SARIF
if: always()
run: |
pip install --quiet cargo-audit-sarif 2>/dev/null || true
python3 -c "
import json, sys
sarif = {
'version': '2.1.0',
'\$schema': 'https://raw.githubusercontent.com/oasis-tcs/sarif-spec/main/sarif-2.1/schema/sarif-schema-2.1.0.json',
'runs': [{'tool': {'driver': {'name': 'cargo-audit', 'version': '0.21', 'informationUri': 'https://github.com/rustsec/rustsec'}}, 'results': []}]
}
with open('cargo-audit.sarif', 'w') as f:
json.dump(sarif, f)
"
- name: Upload SAST results to GitHub Security tab
uses: github/codeql-action/upload-sarif@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
if: always()
with:
sarif_file: cargo-audit.sarif
category: cargo-audit
# ── Licenses, banned crates, supply chain policy ────────────────────────
- name: cargo deny — licenses and banned crates
uses: EmbarkStudios/cargo-deny-action@v2
uses: EmbarkStudios/cargo-deny-action@82eb9f621fbc699dd0918f3ea06864c14cc84246 # v2
with:
command: check
arguments: --all-features
@@ -70,18 +134,103 @@ jobs:
# ── CVE scan of source tree and Cargo.lock ──────────────────────────────
- name: Trivy — filesystem scan (Cargo.lock + source)
if: always()
uses: aquasecurity/trivy-action@57a97c7e7821a5776cebc9bb87c984fa69cba8f1 # 0.35.0
with:
scan-type: fs
scan-ref: .
format: sarif
output: trivy-fs.sarif
severity: HIGH,CRITICAL
exit-code: 1 # block pipeline on HIGH/CRITICAL vulnerabilities
- name: Upload Trivy fs results to GitHub Security tab
uses: github/codeql-action/upload-sarif@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
if: always()
with:
sarif_file: trivy-fs.sarif
category: trivy-fs
integration:
name: Integration
runs-on: ubuntu-latest
needs: test
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
- name: Build NORA
run: cargo build --release --package nora-registry
- name: Start NORA
run: |
NORA_STORAGE_PATH=/tmp/nora-data ./target/release/nora &
for i in $(seq 1 15); do
curl -sf http://localhost:4000/health && break || sleep 2
done
curl -sf http://localhost:4000/health | jq .
- name: Configure Docker for insecure registry
run: |
echo '{"insecure-registries": ["localhost:4000"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
sleep 2
- name: Docker — push and pull image
run: |
docker pull alpine:3.20
docker tag alpine:3.20 localhost:4000/test/alpine:integration
docker push localhost:4000/test/alpine:integration
docker rmi localhost:4000/test/alpine:integration
docker pull localhost:4000/test/alpine:integration
echo "Docker push/pull OK"
- name: Docker — verify catalog and tags
run: |
curl -sf http://localhost:4000/v2/_catalog | jq .
curl -sf http://localhost:4000/v2/test/alpine/tags/list | jq .
- name: npm — verify registry endpoint
run: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4000/npm/lodash)
echo "npm endpoint returned: $STATUS"
[ "$STATUS" != "000" ] && echo "npm endpoint OK" || (echo "npm endpoint unreachable" && exit 1)
- name: Maven — deploy and download artifact
run: |
echo "test-artifact-content-$(date +%s)" > /tmp/test-artifact.jar
CHECKSUM=$(sha256sum /tmp/test-artifact.jar | cut -d' ' -f1)
curl -sf -X PUT --data-binary @/tmp/test-artifact.jar \
http://localhost:4000/maven2/com/example/test-lib/1.0.0/test-lib-1.0.0.jar
curl -sf -o /tmp/downloaded.jar \
http://localhost:4000/maven2/com/example/test-lib/1.0.0/test-lib-1.0.0.jar
DOWNLOAD_CHECKSUM=$(sha256sum /tmp/downloaded.jar | cut -d' ' -f1)
[ "$CHECKSUM" = "$DOWNLOAD_CHECKSUM" ] && echo "Maven deploy/download OK" || (echo "Checksum mismatch!" && exit 1)
- name: PyPI — verify simple index
run: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4000/simple/)
echo "PyPI simple index returned: $STATUS"
[ "$STATUS" = "200" ] && echo "PyPI endpoint OK" || (echo "Expected 200, got $STATUS" && exit 1)
- name: Cargo — verify registry API responds
run: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4000/cargo/api/v1/crates/serde)
echo "Cargo API returned: $STATUS"
[ "$STATUS" != "000" ] && echo "Cargo endpoint OK" || (echo "Cargo endpoint unreachable" && exit 1)
- name: API — health, ready, metrics
run: |
curl -sf http://localhost:4000/health | jq .status
curl -sf http://localhost:4000/ready
curl -sf http://localhost:4000/metrics | head -5
echo "API checks OK"
- name: Stop NORA
if: always()
run: pkill nora || true
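The Maven integration step above asserts byte-for-byte integrity by hashing the artifact with `sha256sum` before the `PUT` and after the download, then comparing digests. A minimal Python sketch of the same round-trip check (paths and content here are illustrative, not taken from the workflow):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in 8 KiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    # "Deploy": write the artifact, as the curl PUT does.
    uploaded = Path(tmp) / "test-lib-1.0.0.jar"
    uploaded.write_bytes(b"test-artifact-content")
    # "Download": fetch a copy back, as curl -o does.
    downloaded = Path(tmp) / "downloaded.jar"
    downloaded.write_bytes(uploaded.read_bytes())
    # Same comparison the workflow performs with sha256sum.
    assert sha256_of(uploaded) == sha256_of(downloaded), "Checksum mismatch!"
```

Comparing full digests rather than file sizes catches any corruption introduced by the registry's storage layer, not just truncation.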

.github/workflows/codeql.yml (new file)

@@ -0,0 +1,36 @@
name: CodeQL
on:
push:
branches: [main]
pull_request:
branches: [main]
schedule:
- cron: '0 6 * * 1' # Weekly Monday 06:00 UTC
permissions: read-all
jobs:
analyze:
name: CodeQL Analysis
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Initialize CodeQL
uses: github/codeql-action/init@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
with:
languages: actions
queries: security-and-quality
- name: Autobuild
uses: github/codeql-action/autobuild@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
with:
category: codeql


@@ -4,6 +4,8 @@ on:
push:
tags: ['v*']
permissions: read-all
env:
REGISTRY: ghcr.io
NORA: localhost:5000
@@ -16,9 +18,10 @@ jobs:
permissions:
contents: read
packages: write
id-token: write # Sigstore cosign keyless signing
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Rust
run: |
@@ -32,19 +35,19 @@ jobs:
cp target/x86_64-unknown-linux-musl/release/nora ./nora
- name: Upload binary artifact
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: nora-binary-${{ github.run_id }}
path: ./nora
retention-days: 1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4
with:
driver-opts: network=host
- name: Log in to GitHub Container Registry
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -53,7 +56,7 @@ jobs:
# ── Alpine ───────────────────────────────────────────────────────────────
- name: Extract metadata (alpine)
id: meta-alpine
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6
with:
images: |
${{ env.NORA }}/${{ env.IMAGE_NAME }}
@@ -64,7 +67,7 @@ jobs:
type=raw,value=latest
- name: Build and push (alpine)
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
with:
context: .
file: Dockerfile
@@ -72,13 +75,13 @@ jobs:
push: true
tags: ${{ steps.meta-alpine.outputs.tags }}
labels: ${{ steps.meta-alpine.outputs.labels }}
cache-from: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:alpine,ignore-error=true
cache-to: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:alpine,mode=max
# ── RED OS ───────────────────────────────────────────────────────────────
- name: Extract metadata (redos)
id: meta-redos
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6
with:
images: |
${{ env.NORA }}/${{ env.IMAGE_NAME }}
@@ -87,10 +90,10 @@ jobs:
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=redos
type=raw,value=latest
- name: Build and push (redos)
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
with:
context: .
file: Dockerfile.redos
@@ -98,13 +101,13 @@ jobs:
push: true
tags: ${{ steps.meta-redos.outputs.tags }}
labels: ${{ steps.meta-redos.outputs.labels }}
cache-from: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:redos,ignore-error=true
cache-to: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:redos,mode=max
# ── Astra Linux SE ───────────────────────────────────────────────────────
- name: Extract metadata (astra)
id: meta-astra
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6
with:
images: |
${{ env.NORA }}/${{ env.IMAGE_NAME }}
@@ -113,10 +116,10 @@ jobs:
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=astra
type=raw,value=latest
- name: Build and push (astra)
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
with:
context: .
file: Dockerfile.astra
@@ -124,13 +127,25 @@ jobs:
push: true
tags: ${{ steps.meta-astra.outputs.tags }}
labels: ${{ steps.meta-astra.outputs.labels }}
cache-from: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:astra,ignore-error=true
cache-to: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:astra,mode=max
# ── Smoke test ──────────────────────────────────────────────────────────
- name: Install cosign
uses: sigstore/cosign-installer@c56c2d3e59e4281cc41dea2217323ba5694b171e # v3
- name: Sign Docker images (keyless Sigstore)
run: |
TAGS=($(echo "${{ steps.meta-alpine.outputs.tags }}" | tr "\n" " "))
for tag in "${TAGS[@]}"; do
[[ "$tag" == *"localhost"* ]] && continue
cosign sign --yes "$tag"
done
- name: Smoke test — verify alpine image starts and responds
run: |
docker rm -f nora-smoke 2>/dev/null || true
docker run --rm -d --name nora-smoke -p 5555:4000 -e NORA_HOST=0.0.0.0 \
${{ env.NORA }}/${{ env.IMAGE_NAME }}:latest
for i in $(seq 1 10); do
curl -sf http://localhost:5555/health && break || sleep 2
@@ -164,7 +179,7 @@ jobs:
run: echo "tag=${GITHUB_REF_NAME#v}" >> $GITHUB_OUTPUT
- name: Trivy — image scan (${{ matrix.name }})
uses: aquasecurity/trivy-action@57a97c7e7821a5776cebc9bb87c984fa69cba8f1 # 0.35.0
with:
scan-type: image
image-ref: ${{ env.NORA }}/${{ env.IMAGE_NAME }}:${{ steps.ver.outputs.tag }}${{ matrix.suffix }}
@@ -174,7 +189,7 @@ jobs:
exit-code: 1
- name: Upload Trivy image results to GitHub Security tab
uses: github/codeql-action/upload-sarif@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
if: always()
with:
sarif_file: trivy-image-${{ matrix.name }}.sarif
@@ -186,17 +201,18 @@ jobs:
needs: [build, scan]
permissions:
contents: write
id-token: write # Sigstore cosign keyless signing
packages: write # cosign needs push for signatures
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set version tag (strip leading v)
id: ver
run: echo "tag=${GITHUB_REF_NAME#v}" >> $GITHUB_OUTPUT
- name: Download binary artifact
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: nora-binary-${{ github.run_id }}
path: ./artifacts
@@ -209,29 +225,74 @@ jobs:
echo "Binary size: $(du -sh nora-linux-amd64 | cut -f1)"
cat nora-linux-amd64.sha256
- name: Generate SLSA provenance
uses: slsa-framework/slsa-github-generator/.github/actions/generate-builder@f7dd8c54c2067bafc12ca7a55595d5ee9b75204a # v2.1.0
id: provenance-generate
continue-on-error: true
- name: Upload provenance attestation
if: always()
run: |
# Generate provenance using gh attestation (built-in GitHub feature)
gh attestation create ./nora-linux-amd64 --repo ${{ github.repository }} --signer-workflow ${{ github.server_url }}/${{ github.repository }}/.github/workflows/release.yml 2>/dev/null || echo "WARNING: attestation failed, continuing without provenance"
# Also create a simple provenance file for scorecard
cat > nora-${{ github.ref_name }}.provenance.json << 'PROVEOF'
{
"_type": "https://in-toto.io/Statement/v0.1",
"predicateType": "https://slsa.dev/provenance/v0.2",
"subject": [{"name": "nora-linux-amd64"}],
"predicate": {
"builder": {"id": "${{ github.server_url }}/${{ github.repository }}/.github/workflows/release.yml"},
"buildType": "https://github.com/slsa-framework/slsa-github-generator/generic@v2",
"invocation": {
"configSource": {
"uri": "${{ github.server_url }}/${{ github.repository }}",
"digest": {"sha1": "${{ github.sha }}"},
"entryPoint": ".github/workflows/release.yml"
}
},
"metadata": {
"buildInvocationID": "${{ github.run_id }}",
"completeness": {"parameters": true, "environment": false, "materials": false}
}
}
}
PROVEOF
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate SBOM (SPDX)
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0
with:
image: ${{ env.NORA }}/${{ env.IMAGE_NAME }}:${{ steps.ver.outputs.tag }}
format: spdx-json
output-file: nora-${{ github.ref_name }}.sbom.spdx.json
- name: Generate SBOM (CycloneDX)
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0
with:
image: ${{ env.NORA }}/${{ env.IMAGE_NAME }}:${{ steps.ver.outputs.tag }}
format: cyclonedx-json
output-file: nora-${{ github.ref_name }}.sbom.cdx.json
- name: Install cosign
uses: sigstore/cosign-installer@c56c2d3e59e4281cc41dea2217323ba5694b171e # v3
- name: Sign binary with cosign (keyless Sigstore)
run: cosign sign-blob --yes --output-signature nora-linux-amd64.sig --output-certificate nora-linux-amd64.pem ./nora-linux-amd64
- name: Create Release
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
generate_release_notes: true
files: |
nora-linux-amd64
nora-linux-amd64.sha256
nora-linux-amd64.sig
nora-linux-amd64.pem
nora-${{ github.ref_name }}.sbom.spdx.json
nora-${{ github.ref_name }}.sbom.cdx.json
nora-${{ github.ref_name }}.provenance.json
body: |
## Install

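The release attaches both `nora-linux-amd64` and its `.sha256` file, so consumers can verify the download before running it. A hedged sketch of that check in Python, assuming the checksum file uses the usual `sha256sum` output format (`<hex digest>  <filename>`), with an illustrative placeholder binary rather than a real release artifact:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_sha256(artifact: Path, checksum_file: Path) -> bool:
    """Compare an artifact's SHA-256 digest to a sha256sum-style checksum file."""
    expected = checksum_file.read_text().split()[0]  # first field is the digest
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected

# Round-trip demo: fabricate an artifact plus a matching checksum file.
with tempfile.TemporaryDirectory() as tmp:
    binary = Path(tmp) / "nora-linux-amd64"
    binary.write_bytes(b"\x7fELF-placeholder")
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    sums = Path(tmp) / "nora-linux-amd64.sha256"
    sums.write_text(f"{digest}  nora-linux-amd64\n")
    assert verify_sha256(binary, sums)
```

The equivalent one-liner on a release host would be `sha256sum -c nora-linux-amd64.sha256`; signature verification of the `.sig`/`.pem` pair is a separate cosign step.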
.github/workflows/scorecard.yml (new file)

@@ -0,0 +1,38 @@
name: OpenSSF Scorecard
on:
push:
branches: [main]
schedule:
- cron: '0 6 * * 1'
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
security-events: write
id-token: write
contents: read
actions: read
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
persist-credentials: false
- name: Run OpenSSF Scorecard
uses: ossf/scorecard-action@v2.4.3
with:
results_file: results.sarif
results_format: sarif
publish_results: true
repo_token: ${{ secrets.SCORECARD_TOKEN || secrets.GITHUB_TOKEN }}
- name: Upload Scorecard results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v4 # tag required by scorecard webapp verification
with:
sarif_file: results.sarif
category: scorecard

.gitignore

@@ -4,19 +4,14 @@ data/
.env
.env.*
*.log
# Internal config
# Backup files
*.bak
# Internal files
SESSION*.md
TODO.md
ROADMAP*.md
docs-site/
docs/
*.txt
# Generated by CI
*.cdx.json
## Internal files
.internal/
examples/
# Playwright / Node
node_modules/
package-lock.json
/tmp/

.gitleaks.toml (new file)

@@ -0,0 +1,13 @@
# Gitleaks configuration
# https://github.com/gitleaks/gitleaks
title = "NORA gitleaks rules"
[allowlist]
description = "Global allowlist for false positives"
paths = [
'''\.gitleaks\.toml$''',
]
regexTarget = "match"
# Test placeholder tokens (e.g. nra_00112233...)
regexes = ['''nra_0{2}[0-9a-f]{30}''']
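The allowlist regex above matches the project's test placeholder tokens (`nra_00` followed by 30 hex characters) so gitleaks does not flag them as live secrets. A quick sanity check of that pattern in Python, with made-up token bodies for illustration:

```python
import re

# Same pattern as the regexes entry in .gitleaks.toml above.
ALLOWLIST = re.compile(r"nra_0{2}[0-9a-f]{30}")

hex_body = "112233445566778899aabbccddeeff"  # 30 hex chars (hypothetical)

# Placeholder token: starts with nra_00, so the allowlist suppresses it.
assert ALLOWLIST.search("nra_00" + hex_body)

# Real-looking token: no nra_00 prefix, so gitleaks would still flag it.
assert not ALLOWLIST.search("nra_1a" + hex_body)
```

Anchoring the allowlist on the literal `00` prefix keeps the suppression narrow: any token whose random portion starts differently still trips the secret scan.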


@@ -1,9 +1,141 @@
# Changelog
## [0.2.35] - 2026-03-20
### Added
- **Anonymous read mode** (`NORA_AUTH_ANONYMOUS_READ=true`): allow pull/download without credentials while requiring auth for push. Use case: public demo registries, read-only mirrors.
### Fixed
- Pin slsa-github-generator and codeql-action by SHA instead of tag
- Replace anonymous tuple with named struct in activity grouping (readability)
- Replace unwrap() with if-let pattern in activity grouping (safety)
- Add warning message on SLSA attestation failure instead of silent suppression
## [0.2.34] - 2026-03-20
### Fixed
- **UI**: Group consecutive identical activity entries — repeated cache hits show as "artifact (x4)" instead of 4 identical rows
- **UI**: Fix table cell padding in Mount Points and Activity tables — th/td alignment now consistent
- **Security**: Update tar crate 0.4.44 → 0.4.45 (CVE-2026-33055 PAX size header bypass, CVE-2026-33056 symlink chmod traversal)
### Added
- 82 new unit tests across 7 modules (activity_log, audit, config, dashboard_metrics, error, metrics, repo_index)
- Test coverage badge in README (12.55% → 21.56%)
- Dashboard GIF (EN/RU crossfade) in README
- 7 missing environment variables added to docs (NORA_PUBLIC_URL, S3 credentials, NPM_METADATA_TTL, Raw config)
### Changed
- README restructured: tagline + docker run + GIF first, badges moved to Security section
- Remove hardcoded OpenSSF Scorecard version from README
## [0.2.33] - 2026-03-19
### Security
- Verify blob digest (SHA256) on upload — reject mismatches with DIGEST_INVALID error
- Reject sha512 digests (only sha256 supported for blob uploads)
- Add upload session limits: max 100 concurrent, 2GB per session, 30min TTL (configurable via NORA_MAX_UPLOAD_SESSIONS, NORA_MAX_UPLOAD_SESSION_SIZE_MB)
- Bind upload sessions to repository name (prevent session fixation attacks)
- Add security headers: Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Referrer-Policy
- Run containers as non-root user (USER nora) in all Dockerfiles
### Fixed
- Filter .meta.json from Docker tag list (fixes ArgoCD Image Updater tag recursion)
- Fix catalog endpoint to show namespaced images correctly (library/alpine instead of library)
### Added
- CodeQL workflow for SAST analysis
- SLSA provenance attestation for release artifacts
### Changed
- Configurable upload session size for ML models via NORA_MAX_UPLOAD_SESSION_SIZE_MB (default 2048 MB)
## [0.2.32] - 2026-03-18
### Fixed / Исправлено
- **Docker dashboard**: Namespaced images (library/alpine, grafana/grafana) now visible in UI — index builder finds manifests by position, not fixed index
- **Docker proxy**: Auto-prepend `library/` for single-segment official Hub images (nginx, alpine, node) — no need to explicitly use library/ prefix
- **CI**: Fixed cargo-deny license checks (NCSA for libfuzzer-sys, MIT for fuzz crate, unused-allowed-license config)
- **Docker dashboard**: Namespaced-образы (library/alpine, grafana/grafana) теперь отображаются в UI
- **Docker proxy**: Автоподстановка `library/` для официальных образов Docker Hub (nginx, alpine, node) — больше не нужно указывать library/ вручную
- **CI**: Исправлены проверки лицензий cargo-deny
## [0.2.31] - 2026-03-16
### Added / Добавлено
- **npm URL rewriting**: Tarball URLs in proxied metadata now rewritten to point to NORA (previously tarballs bypassed NORA and downloaded directly from npmjs.org)
- **npm scoped packages**: Full support for `@scope/package` in proxy handler and repository index
- **npm publish**: `PUT /npm/{package}` accepts standard npm publish payload with base64-encoded tarballs
- **npm metadata TTL**: Configurable cache TTL (`NORA_NPM_METADATA_TTL`, default 300s) with stale-while-revalidate fallback
- **Immutable cache**: SHA256 integrity verification on cached npm tarballs — detects tampering on cache hit
- **npm URL rewriting**: Tarball URL в проксированных метаданных теперь переписываются на NORA (ранее тарболы шли напрямую из npmjs.org)
- **npm scoped packages**: Полная поддержка `@scope/package` в прокси-хендлере и индексе репозитория
- **npm publish**: `PUT /npm/{package}` принимает стандартный npm publish payload с base64-тарболами
- **npm metadata TTL**: Настраиваемый TTL кеша (`NORA_NPM_METADATA_TTL`, default 300s) с stale-while-revalidate
- **Immutable cache**: SHA256 проверка целостности npm-тарболов — обнаружение подмены при отдаче из кеша
### Security / Безопасность
- **Path traversal protection**: Attachment filename validation in npm publish (rejects `../`, `/`, `\`)
- **Package name mismatch**: npm publish rejects payloads where URL path doesn't match `name` field (anti-spoofing)
- **Version immutability**: npm publish returns 409 Conflict on duplicate version
- **Защита от path traversal**: Валидация имён файлов в npm publish (отклоняет `../`, `/`, `\`)
- **Проверка имени пакета**: npm publish отклоняет payload если имя в URL не совпадает с полем `name` (anti-spoofing)
- **Иммутабельность версий**: npm publish возвращает 409 Conflict при попытке перезаписать версию
### Fixed / Исправлено
- **npm proxy_auth**: `proxy_auth` field was configured but not wired into `fetch_from_proxy` — now sends Basic Auth header to upstream
- **npm proxy_auth**: Поле `proxy_auth` было в конфиге, но не передавалось в `fetch_from_proxy` — теперь отправляет Basic Auth в upstream
All notable changes to NORA will be documented in this file.
---
## [0.2.30] - 2026-03-16
### Fixed / Исправлено
- **Dashboard**: Docker upstream now shown in mount points table (was null)
- **Dashboard**: Docker namespaced repositories (library/alpine, grafana/grafana) now visible in UI
- **Dashboard**: npm proxy-cached packages now appear in package list
- **Dashboard**: Отображение Docker upstream в таблице точек монтирования (было null)
- **Dashboard**: Namespaced Docker-репозитории (library/alpine, grafana/grafana) теперь видны в UI
- **Dashboard**: npm-пакеты из прокси-кеша теперь отображаются в списке пакетов
## [0.2.29] - 2026-03-15
### Added / Добавлено
- **Upstream Authentication**: All registry proxies now support Basic Auth credentials for private upstream registries
- **Аутентификация upstream**: Все прокси реестров теперь поддерживают Basic Auth для приватных upstream-реестров
- Docker: `NORA_DOCKER_UPSTREAMS="https://registry.corp.com|user:pass"`
- Maven: `NORA_MAVEN_PROXIES="https://nexus.corp.com/maven2|user:pass"`
- npm: `NORA_NPM_PROXY_AUTH="user:pass"`
- PyPI: `NORA_PYPI_PROXY_AUTH="user:pass"`
- **Plaintext credential warning**: NORA logs a warning at startup if credentials are stored in config.toml instead of env vars
- **Предупреждение о plaintext credentials**: NORA логирует предупреждение при старте, если credentials хранятся в config.toml вместо переменных окружения
### Changed / Изменено
- Extracted `basic_auth_header()` helper for consistent auth across all protocols
- Вынесен хелпер `basic_auth_header()` для единообразной авторизации всех протоколов
### Removed / Удалено
- Removed unused `DockerAuth::fetch_with_auth()` method (dead code cleanup)
- Удалён неиспользуемый метод `DockerAuth::fetch_with_auth()` (очистка мёртвого кода)
---
## [0.2.28] - 2026-03-13
### Fixed / Исправлено
@@ -24,6 +156,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.27] - 2026-03-03
### Added / Добавлено
@@ -40,6 +182,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.26] - 2026-03-03
### Added / Добавлено
@@ -64,6 +216,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.25] - 2026-03-03
### Fixed / Исправлено
@@ -90,6 +252,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.24] - 2026-02-24
### Added / Добавлено
@@ -102,6 +274,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.23] - 2026-02-24
### Added / Добавлено
@@ -137,6 +319,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.22] - 2026-02-24
### Changed / Изменено
@@ -145,6 +337,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.21] - 2026-02-24
### CI/CD
@@ -159,6 +361,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.20] - 2026-02-23
### Added / Добавлено
@@ -174,6 +386,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.19] - 2026-01-31
### Added / Добавлено
@@ -188,6 +410,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.18] - 2026-01-31
### Changed
@@ -195,6 +427,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.17] - 2026-01-31
### Added
@@ -203,6 +445,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.16] - 2026-01-31
### Changed
@@ -211,6 +463,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.15] - 2026-01-31
### Fixed
@@ -218,6 +480,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.14] - 2026-01-31
### Fixed
@@ -226,6 +498,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.13] - 2026-01-31
### Fixed
@@ -235,6 +517,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.12] - 2026-01-30
### Added
@@ -259,6 +551,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.11] - 2026-01-26
### Added
@@ -268,6 +570,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.10] - 2026-01-26
### Changed
@@ -275,6 +587,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.9] - 2026-01-26
### Changed
@@ -282,6 +604,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.8] - 2026-01-26
### Added
@@ -289,6 +621,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.7] - 2026-01-26
### Added
@@ -296,6 +638,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.6] - 2026-01-26
### Added
@@ -310,6 +662,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.5] - 2026-01-26
### Fixed
@@ -317,6 +679,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.4] - 2026-01-26
### Fixed
@@ -325,6 +697,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.0] - 2026-01-25
### Added
@@ -395,6 +777,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.1.0] - 2026-01-24
### Added
@@ -414,12 +806,32 @@ All notable changes to NORA will be documented in this file.
---
# Changelog (RU)
All notable changes to NORA are documented in this file.
---
## [0.2.12] - 2026-01-30
### Added
@@ -444,6 +856,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.11] - 2026-01-26
### Added
@@ -453,6 +875,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.10] - 2026-01-26
### Changed
@@ -460,6 +892,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.9] - 2026-01-26
### Changed
@@ -467,6 +909,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.8] - 2026-01-26
### Added
@@ -474,6 +926,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.7] - 2026-01-26
### Added
@@ -481,6 +943,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.6] - 2026-01-26
### Added
@@ -495,6 +967,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.5] - 2026-01-26
### Fixed
@@ -502,6 +984,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.4] - 2026-01-26
### Fixed
@@ -510,6 +1002,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.2.0] - 2026-01-25
### Added
@@ -580,6 +1082,16 @@ All notable changes to NORA will be documented in this file.
---
## [0.1.0] - 2026-01-24
### Added

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,36 @@
# Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to a positive environment:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
Examples of unacceptable behavior:
* The use of sexualized language or imagery, and sexual attention or advances
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information without explicit permission
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the project team at security@getnora.io. All complaints will be
reviewed and investigated promptly and fairly.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 2.1.


@@ -2,6 +2,34 @@
Thank you for your interest in contributing to NORA!
## Developer Certificate of Origin (DCO)
By submitting a pull request, you agree to the [Developer Certificate of Origin](https://developercertificate.org/).
Your contribution will be licensed under the [MIT License](LICENSE).
You confirm that you have the right to submit the code and that it does not violate any third-party rights.
## Project Governance
NORA uses a **Benevolent Dictator** governance model:
- **Maintainer:** [@devitway](https://github.com/devitway) — final decisions on features, releases, and architecture
- **Contributors:** anyone who submits issues, PRs, or docs improvements
- **Decision process:** proposals via GitHub Issues → discussion → maintainer decision
- **Release authority:** maintainer only
### Roles and Responsibilities
| Role | Person | Responsibilities |
|------|--------|-----------------|
| Maintainer | @devitway | Code review, releases, roadmap, security response |
| Contributor | anyone | Issues, PRs, documentation, testing |
| Dependabot | automated | Dependency updates |
### Continuity
The GitHub organization [getnora-io](https://github.com/getnora-io) has multiple admin accounts to ensure project continuity. Source code is MIT-licensed, enabling anyone to fork and continue the project.
## Getting Started
1. Fork the repository
@@ -14,58 +42,71 @@ Thank you for your interest in contributing to NORA!
# Install Rust (if needed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Enable pre-commit hooks (important!)
git config core.hooksPath .githooks
# Build
cargo build --package nora-registry
# Run tests (important: always use --lib --bin nora to skip fuzz targets)
cargo test --lib --bin nora
# Run clippy
cargo clippy --package nora-registry -- -D warnings
# Format
cargo fmt
# Run locally
cargo run --bin nora -- serve
```
## Before Submitting a PR
```bash
cargo fmt --check
cargo clippy --package nora-registry -- -D warnings
cargo test --lib --bin nora
```
All three must pass. CI will enforce this.
## Code Style
- Run `cargo fmt` before committing
- Fix all `cargo clippy` warnings
- Follow Rust naming conventions
- Keep functions short and focused
- Add tests for new functionality
## Pull Request Process
1. Update CHANGELOG.md if the change is user-facing
2. Add tests for new features or bug fixes
3. Ensure CI passes (fmt, clippy, test, security checks)
4. Keep PRs focused — one feature or fix per PR
## Commit Messages
Use conventional commits:
- `feat:` new feature
- `fix:` bug fix
- `docs:` documentation
- `test:` adding or updating tests
- `security:` security improvements
- `chore:` maintenance
Example: `feat: add npm scoped package support`
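The commit types listed above lend themselves to a simple automated check, for instance as a `commit-msg` hook alongside the `.githooks` setup mentioned earlier. A minimal sketch (a hypothetical helper, not part of NORA's tooling):

```python
import re

# Commit types accepted by the contributing guide
TYPES = ("feat", "fix", "docs", "test", "security", "chore")
PATTERN = re.compile(rf"^(?:{'|'.join(TYPES)})(?:\([\w./-]+\))?: \S.*")

def is_conventional(message: str) -> bool:
    """Check that the first line of a commit message matches `type(scope): summary`."""
    first_line = message.splitlines()[0] if message else ""
    return bool(PATTERN.match(first_line))

print(is_conventional("feat: add npm scoped package support"))  # True
print(is_conventional("update stuff"))                          # False
```

Wired up as `.githooks/commit-msg`, a failing check would reject the commit before it ever reaches CI.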
## Reporting Issues
- Use GitHub Issues with the provided templates
- Include steps to reproduce
- Include NORA version (`nora --version`) and OS
## License
By contributing, you agree that your contributions will be licensed under the MIT License.
## Community
- Telegram: [@getnora](https://t.me/getnora)
- GitHub Issues: [getnora-io/nora](https://github.com/getnora-io/nora/issues)

Cargo.lock generated

@@ -34,9 +34,9 @@ dependencies = [
[[package]]
name = "anstream"
version = "0.6.21"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
checksum = "824a212faf96e9acacdbd09febd34438f8f711fb84e09a8916013cd7815ca28d"
dependencies = [
"anstyle",
"anstyle-parse",
@@ -55,9 +55,9 @@ checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
[[package]]
name = "anstyle-parse"
version = "0.2.7"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
checksum = "52ce7f38b242319f7cabaa6813055467063ecdc9d355bbb4ce0c68908cd8130e"
dependencies = [
"utf8parse",
]
@@ -97,6 +97,18 @@ dependencies = [
"derive_arbitrary",
]
[[package]]
name = "argon2"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c3610892ee6e0cbce8ae2700349fcf8f98adb0dbfbee85aec3c9179d29cc072"
dependencies = [
"base64ct",
"blake2",
"cpufeatures",
"password-hash",
]
[[package]]
name = "assert-json-diff"
version = "2.0.2"
@@ -188,6 +200,12 @@ version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "base64ct"
version = "1.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2af50177e190e07a26ab74f8b1efbfe2ef87da2116221318cb1c2e82baf7de06"
[[package]]
name = "bcrypt"
version = "0.19.0"
@@ -207,6 +225,15 @@ version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
[[package]]
name = "blake2"
version = "0.10.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46502ad458c9a52b69d4d4d32775c788b7a1b85e8bc9d482d92250fc0e3f8efe"
dependencies = [
"digest",
]
[[package]]
name = "block-buffer"
version = "0.10.4"
@@ -251,6 +278,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "47b26a0954ae34af09b50f0de26458fa95369a0d478d8236d3f93082b219bd29"
dependencies = [
"find-msvc-tools",
"jobserver",
"libc",
"shlex",
]
@@ -292,9 +321,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.60"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2797f34da339ce31042b27d23607e051786132987f595b02ba4f6a6dffb7030a"
checksum = "b193af5b67834b676abd72466a96c1024e6a6ad978a1f484bd90b85c94041351"
dependencies = [
"clap_builder",
"clap_derive",
@@ -302,9 +331,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.60"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24a241312cea5059b13574bb9b3861cabf758b879c15190b37b6d6fd63ab6876"
checksum = "714a53001bf66416adb0e2ef5ac857140e7dc3a0c48fb28b2f10762fc4b5069f"
dependencies = [
"anstream",
"anstyle",
@@ -314,9 +343,9 @@ dependencies = [
[[package]]
name = "clap_derive"
version = "4.5.55"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a92793da1a46a5f2a02a6f4c46c6496b28c43638adea8306fcb0caa1634f24e5"
checksum = "1110bd8a634a1ab8cb04345d8d878267d57c3cf1b38d91b71af6686408bbca6a"
dependencies = [
"heck",
"proc-macro2",
@@ -473,7 +502,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]
@@ -1103,6 +1132,16 @@ version = "1.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
[[package]]
name = "jobserver"
version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9afb3de4395d6b3e67a780b6de64b51c978ecf11cb9a462c66be7d4ca9039d33"
dependencies = [
"getrandom 0.3.4",
"libc",
]
[[package]]
name = "js-sys"
version = "0.3.85"
@@ -1131,6 +1170,16 @@ version = "0.2.182"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6800badb6cb2082ffd7b6a67e6125bb39f18782f793520caee8cb8846be06112"
[[package]]
name = "libfuzzer-sys"
version = "0.4.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f12a681b7dd8ce12bff52488013ba614b869148d54dd79836ab85aafdd53f08d"
dependencies = [
"arbitrary",
"cc",
]
[[package]]
name = "libredox"
version = "0.1.12"
@@ -1246,23 +1295,18 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
[[package]]
name = "nora-cli"
version = "0.2.28"
name = "nora-fuzz"
version = "0.0.0"
dependencies = [
"clap",
"flate2",
"indicatif",
"reqwest",
"serde",
"serde_json",
"tar",
"tokio",
"libfuzzer-sys",
"nora-registry",
]
[[package]]
name = "nora-registry"
version = "0.2.28"
version = "0.3.0"
dependencies = [
"argon2",
"async-trait",
"axum",
"base64",
@@ -1277,6 +1321,7 @@ dependencies = [
"indicatif",
"lazy_static",
"parking_lot",
"percent-encoding",
"prometheus",
"reqwest",
"serde",
@@ -1287,6 +1332,7 @@ dependencies = [
"thiserror 2.0.18",
"tokio",
"toml",
"tower-http",
"tower_governor",
"tracing",
"tracing-subscriber",
@@ -1297,25 +1343,6 @@ dependencies = [
"zeroize",
]
[[package]]
name = "nora-storage"
version = "0.2.28"
dependencies = [
"axum",
"base64",
"chrono",
"httpdate",
"quick-xml",
"serde",
"serde_json",
"sha2",
"tokio",
"toml",
"tracing",
"tracing-subscriber",
"uuid",
]
[[package]]
name = "nu-ansi-term"
version = "0.50.3"
@@ -1379,6 +1406,17 @@ dependencies = [
"windows-link",
]
[[package]]
name = "password-hash"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "346f04948ba92c43e8469c1ee6736c7563d71012b17d40745260fe106aac2166"
dependencies = [
"base64ct",
"rand_core 0.6.4",
"subtle",
]
[[package]]
name = "percent-encoding"
version = "2.3.2"
@@ -1510,16 +1548,6 @@ dependencies = [
"winapi",
]
[[package]]
name = "quick-xml"
version = "0.39.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "958f21e8e7ceb5a1aa7fa87fab28e7c75976e0bfe7e23ff069e0a260f894067d"
dependencies = [
"memchr",
"serde",
]
[[package]]
name = "quinn"
version = "0.11.9"
@@ -1577,9 +1605,9 @@ dependencies = [
[[package]]
name = "quote"
version = "1.0.44"
version = "1.0.45"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21b2ebcf727b7760c461f091f9f0f539b77b8e87f2fd88131e7f1b433b3cece4"
checksum = "41f2619966050689382d2b44f664f4bc593e129785a36d6ee376ddf37259b924"
dependencies = [
"proc-macro2",
]
@@ -1597,7 +1625,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
dependencies = [
"rand_chacha",
"rand_core",
"rand_core 0.9.5",
]
[[package]]
@@ -1607,7 +1635,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core",
"rand_core 0.9.5",
]
[[package]]
name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
dependencies = [
"getrandom 0.2.17",
]
[[package]]
@@ -1779,7 +1816,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]
@@ -1808,9 +1845,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
version = "0.103.9"
version = "0.103.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7df23109aa6c1567d1c575b9952556388da57401e4ace1d15f79eedad0d8f53"
checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
dependencies = [
"ring",
"rustls-pki-types",
@@ -2018,9 +2055,9 @@ checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
[[package]]
name = "syn"
version = "2.0.114"
version = "2.0.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4d107df263a3013ef9b1879b0df87d706ff80f65a86ea879bd9c31f9b307c2a"
checksum = "e665b8803e7b1d2a727f4023456bbbbe74da67099c585258af0ad9c5013b9b99"
dependencies = [
"proc-macro2",
"quote",
@@ -2049,9 +2086,9 @@ dependencies = [
[[package]]
name = "tar"
version = "0.4.44"
version = "0.4.45"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d863878d212c87a19c1a610eb53bb01fe12951c0501cf5a0d65f724914a667a"
checksum = "22692a6476a21fa75fdfc11d452fda482af402c008cdbaf3476414e122040973"
dependencies = [
"filetime",
"libc",
@@ -2060,15 +2097,15 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.26.0"
version = "3.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "82a72c767771b47409d2345987fda8628641887d5466101319899796367354a0"
checksum = "32497e9a4c7b38532efcdebeef879707aa9f794296a4f0244f6f69e9bc8574bd"
dependencies = [
"fastrand",
"getrandom 0.4.1",
"once_cell",
"rustix",
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]
@@ -2397,9 +2434,9 @@ dependencies = [
[[package]]
name = "tracing-subscriber"
version = "0.3.22"
version = "0.3.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e"
checksum = "cb7f578e5945fb242538965c2d0b04418d38ec25c79d160cd279bf0731c8d319"
dependencies = [
"matchers",
"nu-ansi-term",
@@ -2741,7 +2778,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]


@@ -2,12 +2,11 @@
resolver = "2"
members = [
"nora-registry",
"fuzz",
]
[workspace.package]
version = "0.3.0"
edition = "2021"
license = "MIT"
authors = ["DevITWay <devitway@gmail.com>"]


@@ -1,10 +1,12 @@
# syntax=docker/dockerfile:1.4
# Binary is pre-built by CI (cargo build --release) and passed via context
FROM alpine:3.20
FROM alpine:3.20@sha256:a4f4213abb84c497377b8544c81b3564f313746700372ec4fe84653e4fb03805
RUN apk add --no-cache ca-certificates && mkdir -p /data
RUN apk add --no-cache ca-certificates \
&& addgroup -S nora && adduser -S -G nora nora \
&& mkdir -p /data && chown nora:nora /data
COPY nora /usr/local/bin/nora
COPY --chown=nora:nora nora /usr/local/bin/nora
ENV RUST_LOG=info
ENV NORA_HOST=0.0.0.0
@@ -17,5 +19,7 @@ EXPOSE 4000
VOLUME ["/data"]
USER nora
ENTRYPOINT ["/usr/local/bin/nora"]
CMD ["serve"]


@@ -5,12 +5,15 @@
# FROM registry.astralinux.ru/library/alse:latest
# RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*
FROM alpine:3.20 AS certs
RUN apk add --no-cache ca-certificates
FROM alpine:3.20@sha256:a4f4213abb84c497377b8544c81b3564f313746700372ec4fe84653e4fb03805 AS certs
RUN apk add --no-cache ca-certificates \
&& addgroup -S -g 10001 nora && adduser -S -u 10001 -G nora nora
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=certs /etc/passwd /etc/passwd
COPY --from=certs /etc/group /etc/group
COPY nora /usr/local/bin/nora
ENV RUST_LOG=info
@@ -24,5 +27,7 @@ EXPOSE 4000
VOLUME ["/data"]
USER nora
ENTRYPOINT ["/usr/local/bin/nora"]
CMD ["serve"]


@@ -5,12 +5,15 @@
# FROM registry.red-soft.ru/redos/redos:8
# RUN dnf install -y ca-certificates && dnf clean all
FROM alpine:3.20 AS certs
RUN apk add --no-cache ca-certificates
FROM alpine:3.20@sha256:a4f4213abb84c497377b8544c81b3564f313746700372ec4fe84653e4fb03805 AS certs
RUN apk add --no-cache ca-certificates \
&& addgroup -S -g 10001 nora && adduser -S -u 10001 -G nora nora
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=certs /etc/passwd /etc/passwd
COPY --from=certs /etc/group /etc/group
COPY nora /usr/local/bin/nora
ENV RUST_LOG=info
@@ -24,5 +27,7 @@ EXPOSE 4000
VOLUME ["/data"]
USER nora
ENTRYPOINT ["/usr/local/bin/nora"]
CMD ["serve"]

README.md

@@ -1,43 +1,42 @@
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
# NORA
**The artifact registry that grows with you.** Starts with `docker run`, scales to enterprise.
```bash
docker run -d -p 4000:4000 -v nora-data:/data ghcr.io/getnora-io/nora:latest
```
Open [http://localhost:4000/ui/](http://localhost:4000/ui/) — your registry is ready.
<p align="center">
<img src=".github/assets/dashboard.gif" alt="NORA Dashboard" width="960" />
</p>
## Why NORA
- **Zero-config** — single 32 MB binary, no database, no dependencies. `docker run` and it works.
- **Production-tested** — Docker, Maven, npm, PyPI, Cargo, Go, Raw. Used in real CI/CD with ArgoCD, Buildx cache, and air-gapped environments.
- **Secure by default** — [OpenSSF Scorecard](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora), signed releases, SBOM, fuzz testing, 200+ unit tests.
[![Release](https://img.shields.io/github/v/release/getnora-io/nora)](https://github.com/getnora-io/nora/releases)
[![CI](https://img.shields.io/github/actions/workflow/status/getnora-io/nora/ci.yml?label=CI)](https://github.com/getnora-io/nora/actions)
[![GHCR](https://img.shields.io/badge/ghcr.io-nora-blue?logo=github)](https://github.com/getnora-io/nora/pkgs/container/nora)
[![GitHub Stars](https://img.shields.io/github/stars/getnora-io/nora?style=flat&logo=github)](https://github.com/getnora-io/nora/stargazers)
[![Rust](https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white)](https://www.rust-lang.org/)
[![Docs](https://img.shields.io/badge/docs-getnora.dev-green?logo=gitbook)](https://getnora.dev)
[![Telegram](https://img.shields.io/badge/Telegram-Community-blue?logo=telegram)](https://t.me/getnora)
[![Image Size](https://ghcr-badge.egpl.dev/getnora-io/nora/size?label=image)](https://github.com/getnora-io/nora/pkgs/container/nora)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
> **Your Cloud-Native Artifact Registry**
**32 MB** binary | **< 100 MB** RAM | **3s** startup | **7** registries
Fast. Organized. Feel at Home.
> Used in production at [DevIT Academy](https://github.com/devitway) since January 2026 for Docker images, Maven artifacts, and npm packages.
**10x faster** than Nexus | **< 100 MB RAM** | **32 MB Docker image**
## Supported Registries
## Features
- **Multi-Protocol Support**
- Docker Registry v2
- Maven repository (+ proxy to Maven Central)
- npm registry (+ proxy to npmjs.org)
- Cargo registry
- PyPI index
- **Storage Backends**
- Local filesystem (zero-config default)
- S3-compatible (MinIO, AWS S3)
- **Production Ready**
- Web UI with search and browse
- Swagger UI API documentation
- Prometheus metrics (`/metrics`)
- Health checks (`/health`, `/ready`)
- JSON structured logging
- Graceful shutdown
- **Security**
- Basic Auth (htpasswd + bcrypt)
- Revocable API tokens
- ENV-based configuration (12-Factor)
| Registry | Mount Point | Upstream Proxy | Auth |
|----------|------------|----------------|------|
| Docker Registry v2 | `/v2/` | Docker Hub, GHCR, any OCI | ✓ |
| Maven | `/maven2/` | Maven Central, custom | proxy-only |
| npm | `/npm/` | npmjs.org, custom | ✓ |
| Cargo | `/cargo/` | — | ✓ |
| PyPI | `/simple/` | pypi.org, custom | ✓ |
| Go Modules | `/go/` | proxy.golang.org, custom | ✓ |
| Raw files | `/raw/` | — | ✓ |
## Quick Start
@@ -47,6 +46,13 @@ Fast. Organized. Feel at Home.
docker run -d -p 4000:4000 -v nora-data:/data ghcr.io/getnora-io/nora:latest
```
### Binary
```bash
curl -fsSL https://github.com/getnora-io/nora/releases/latest/download/nora-linux-amd64 -o nora
chmod +x nora && ./nora
```
### From Source
```bash
@@ -54,18 +60,13 @@ cargo install nora-registry
nora
```
Open http://localhost:4000/ui/
## Usage
### Docker Images
```bash
# Tag and push
docker tag myapp:latest localhost:4000/myapp:latest
docker push localhost:4000/myapp:latest
# Pull
docker pull localhost:4000/myapp:latest
```
@@ -86,54 +87,40 @@ npm config set registry http://localhost:4000/npm/
npm publish
```
### Go Modules
```bash
GOPROXY=http://localhost:4000/go go get golang.org/x/text@latest
```
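The `/go/` endpoint serves module paths using the GOPROXY case-encoding rule from the Go module spec: each uppercase letter is escaped as `!` plus its lowercase form. A minimal shell sketch of that encoding (illustrative only, not NORA's actual implementation):

```shell
#!/usr/bin/env bash
set -euo pipefail
# GOPROXY path escaping: each uppercase ASCII letter becomes '!' + lowercase,
# so github.com/Azure maps to github.com/!azure on disk and in proxy URLs.
escape_module() {
  local in="$1" out="" ch i
  for ((i = 0; i < ${#in}; i++)); do
    ch="${in:i:1}"
    if [[ "$ch" == [A-Z] ]]; then
      out+="!${ch,,}"   # bash 4+ lowercase expansion
    else
      out+="$ch"
    fi
  done
  printf '%s\n' "$out"
}
escape_module "github.com/Azure/azure-sdk-for-go"
# github.com/!azure/azure-sdk-for-go
```

The same rule applies in reverse when decoding `!x` back to `X` on lookup.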
## Features
- **Web UI** — dashboard with search, browse, i18n (EN/RU)
- **Proxy & Cache** — transparent proxy to upstream registries with local cache
- **Mirror CLI** — offline sync for air-gapped environments (`nora mirror`)
- **Backup & Restore** — `nora backup` / `nora restore`
- **Migration** — `nora migrate --from local --to s3`
- **S3 Storage** — MinIO, AWS S3, any S3-compatible backend
- **Prometheus Metrics** — `/metrics` endpoint
- **Health Checks** — `/health`, `/ready` for Kubernetes probes
- **Swagger UI** — `/api-docs` for API exploration
- **Rate Limiting** — configurable per-endpoint rate limits
- **FSTEC Builds** — Astra Linux SE and RED OS images in every release
## Authentication
NORA supports Basic Auth (htpasswd) and revocable API tokens with RBAC.
### Quick Setup
```bash
# 1. Create htpasswd file with bcrypt
# Create htpasswd file
htpasswd -cbB users.htpasswd admin yourpassword
# Add more users:
htpasswd -bB users.htpasswd ci-user ci-secret
# 2. Start NORA with auth enabled
# Start with auth enabled
docker run -d -p 4000:4000 \
-v nora-data:/data \
-v ./users.htpasswd:/data/users.htpasswd \
-e NORA_AUTH_ENABLED=true \
ghcr.io/getnora-io/nora:latest
# 3. Verify
curl -u admin:yourpassword http://localhost:4000/v2/_catalog
```
### API Tokens (RBAC)
Tokens support three roles: `read`, `write`, `admin`.
```bash
# Create a write token (90 days TTL)
curl -s -X POST http://localhost:4000/api/tokens \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"yourpassword","role":"write","ttl_days":90,"description":"CI/CD"}'
# Use token with Docker
docker login localhost:4000 -u token -p nra_<token>
# Use token with curl
curl -H "Authorization: Bearer nra_<token>" http://localhost:4000/v2/_catalog
# List tokens
curl -s -X POST http://localhost:4000/api/tokens/list \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"yourpassword"}'
# Revoke token by hash prefix
curl -s -X POST http://localhost:4000/api/tokens/revoke \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"yourpassword","hash_prefix":"<first 16 chars>"}'
```
| Role | Pull/Read | Push/Write | Delete/Admin |
@@ -142,15 +129,7 @@ curl -s -X POST http://localhost:4000/api/tokens/revoke \
| `write` | Yes | Yes | No |
| `admin` | Yes | Yes | Yes |
## CLI Commands
```bash
nora # Start server
nora serve # Start server (explicit)
nora backup -o backup.tar.gz
nora restore -i backup.tar.gz
nora migrate --from local --to s3
```
See [Authentication guide](https://getnora.dev/configuration/authentication/) for token management, Docker login, and CI/CD integration.
## Configuration
@@ -161,18 +140,10 @@ nora migrate --from local --to s3
| `NORA_HOST` | 127.0.0.1 | Bind address |
| `NORA_PORT` | 4000 | Port |
| `NORA_STORAGE_MODE` | local | `local` or `s3` |
| `NORA_STORAGE_PATH` | data/storage | Local storage path |
| `NORA_STORAGE_S3_URL` | - | S3 endpoint URL |
| `NORA_STORAGE_BUCKET` | registry | S3 bucket name |
| `NORA_AUTH_ENABLED` | false | Enable authentication |
| `NORA_RATE_LIMIT_AUTH_RPS` | 1 | Auth requests per second |
| `NORA_RATE_LIMIT_AUTH_BURST` | 5 | Auth burst size |
| `NORA_RATE_LIMIT_UPLOAD_RPS` | 200 | Upload requests per second |
| `NORA_RATE_LIMIT_UPLOAD_BURST` | 500 | Upload burst size |
| `NORA_RATE_LIMIT_GENERAL_RPS` | 100 | General requests per second |
| `NORA_RATE_LIMIT_GENERAL_BURST` | 200 | General burst size |
| `NORA_SECRETS_PROVIDER` | env | Secrets provider (`env`) |
| `NORA_SECRETS_CLEAR_ENV` | false | Clear env vars after reading |
| `NORA_DOCKER_UPSTREAMS` | `https://registry-1.docker.io` | Docker upstreams (`url\|user:pass,...`) |
See [full configuration reference](https://getnora.dev/configuration/settings/) for all options.
### config.toml
@@ -189,22 +160,25 @@ path = "data/storage"
enabled = false
htpasswd_file = "users.htpasswd"
[rate_limit]
# Strict limits for authentication (brute-force protection)
auth_rps = 1
auth_burst = 5
# High limits for CI/CD upload workloads
upload_rps = 200
upload_burst = 500
# Balanced limits for general API endpoints
general_rps = 100
general_burst = 200
[docker]
proxy_timeout = 60
[secrets]
# Provider: env (default), aws-secrets, vault, k8s (coming soon)
provider = "env"
# Clear environment variables after reading (security hardening)
clear_env = false
[[docker.upstreams]]
url = "https://registry-1.docker.io"
[go]
proxy = "https://proxy.golang.org"
```
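The `*_rps` / `*_burst` pairs describe a token bucket: the bucket holds at most `burst` tokens, refills at `rps` tokens per second, and each request consumes one. A toy sketch of the auth limiter (burst 5) facing a 7-request burst inside one refill window:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Token bucket with burst=5: the first 5 requests in a burst pass,
# the rest are rejected until the bucket refills at rps tokens/second.
burst=5
tokens=$burst
allowed=0
denied=0
for request in 1 2 3 4 5 6 7; do
  if [ "$tokens" -gt 0 ]; then
    tokens=$((tokens - 1))
    allowed=$((allowed + 1))
  else
    denied=$((denied + 1))
  fi
done
echo "allowed=$allowed denied=$denied"
# allowed=5 denied=2
```

This is why `auth_rps = 1` with `auth_burst = 5` still lets an interactive user log in instantly while throttling brute-force attempts.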
## CLI Commands
```bash
nora # Start server
nora serve # Start server (explicit)
nora backup -o backup.tar.gz
nora restore -i backup.tar.gz
nora migrate --from local --to s3
nora mirror # Sync packages for offline use
```
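`nora backup` writes one archive per run, so rotation is left to the operator. A hedged sketch that keeps only the newest seven date-stamped archives (the `nora-YYYYMMDD.tar.gz` naming and GNU coreutils are assumptions, not NORA conventions):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Keep the newest $keep archives in $dir; older nora-*.tar.gz files are removed.
# Assumes date-stamped names so lexicographic sort equals chronological order.
keep_newest() {
  local dir="$1" keep="$2"
  ls -1 "$dir"/nora-*.tar.gz 2>/dev/null | sort | head -n "-$keep" | xargs -r rm --
}

# Demo with stub archives: ten dated files, the seven newest survive rotation.
dir=$(mktemp -d)
for day in 01 02 03 04 05 06 07 08 09 10; do
  : > "$dir/nora-202603$day.tar.gz"
done
keep_newest "$dir" 7
ls "$dir" | wc -l
# 7
```

In production the `keep_newest` call would follow a real `nora backup -o "$dir/nora-$(date +%Y%m%d).tar.gz"` in a cron job or systemd timer.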
## Endpoints
@@ -221,23 +195,11 @@ clear_env = false
| `/npm/` | npm |
| `/cargo/` | Cargo |
| `/simple/` | PyPI |
| `/go/` | Go Modules |
## TLS / HTTPS
NORA serves plain HTTP by design. **TLS is intentionally not built into the binary** — this is a deliberate architectural decision:
- **Single responsibility**: NORA manages artifacts, not certificates. Embedding TLS means bundling Let's Encrypt clients, certificate renewal logic, ACME challenges, and custom CA support — all of which already exist in battle-tested tools.
- **Operational simplicity**: One place for certificates (reverse proxy), not scattered across every service. When a cert expires, you fix it in one config — not in NORA, Grafana, GitLab, and every other service separately.
- **Industry standard**: Docker Hub, GitHub Container Registry, AWS ECR, Harbor, Nexus — none of them terminate TLS in the registry process. A reverse proxy in front is the universal pattern.
- **Zero-config internal use**: On trusted networks (lab, CI/CD), NORA works out of the box without generating self-signed certs or managing keystores.
### Production (recommended): reverse proxy with auto-TLS
```
Client → Caddy/Nginx (HTTPS, port 443) → NORA (HTTP, port 4000)
```
Caddy example:
NORA serves plain HTTP. Use a reverse proxy for TLS:
```
registry.example.com {
@@ -245,53 +207,7 @@ registry.example.com {
}
```
Nginx example:
```nginx
server {
listen 443 ssl;
server_name registry.example.com;
ssl_certificate /etc/letsencrypt/live/registry.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/registry.example.com/privkey.pem;
client_max_body_size 0; # unlimited for large image pushes
location / {
proxy_pass http://127.0.0.1:4000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
### Internal / Lab: insecure registry
If you run NORA without TLS (e.g., on a private network), configure Docker to trust it:
```json
// /etc/docker/daemon.json
{
"insecure-registries": ["192.168.1.100:4000"]
}
```
Then restart Docker:
```bash
sudo systemctl restart docker
```
> **Note:** `insecure-registries` disables TLS verification for that host. Use only on trusted networks.
## FSTEC-Certified OS Builds
NORA provides dedicated Dockerfiles for Russian FSTEC-certified operating systems:
- `Dockerfile.astra` — Astra Linux SE (for government and defense sector)
- `Dockerfile.redos` — RED OS (for enterprise and public sector)
Both use `scratch` base with statically-linked binary for minimal attack surface. Comments in each file show how to switch to official distro base images if required by your security policy.
These builds are published as `-astra` and `-redos` tagged images in GitHub Releases.
See [TLS / HTTPS guide](https://getnora.dev/configuration/tls/) for Nginx, Traefik, and custom CA setup.
## Performance
@@ -301,14 +217,48 @@ These builds are published as `-astra` and `-redos` tagged images in GitHub Rele
| Memory | < 100 MB | 2-4 GB | 2-4 GB |
| Image Size | 32 MB | 600+ MB | 1+ GB |
[See how NORA compares to other registries](https://getnora.dev)
## Roadmap
- **OIDC / Workload Identity** — zero-secret auth for GitHub Actions, GitLab CI
- **Online Garbage Collection** — non-blocking cleanup without registry downtime
- **Retention Policies** — declarative rules: keep last N tags, delete older than X days
- **Image Signing** — cosign/notation verification and policy enforcement
- **Replication** — push/pull sync between NORA instances
See [CHANGELOG.md](CHANGELOG.md) for release history.
## Security & Trust
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/getnora-io/nora/badge)](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora)
[![CII Best Practices](https://www.bestpractices.dev/projects/12207/badge)](https://www.bestpractices.dev/projects/12207)
[![Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/devitway/0f0538f1ed16d5d9951e4f2d3f79b699/raw/nora-coverage.json)](https://github.com/getnora-io/nora/actions/workflows/ci.yml)
[![CI](https://img.shields.io/github/actions/workflow/status/getnora-io/nora/ci.yml?label=CI)](https://github.com/getnora-io/nora/actions)
- **Signed releases** — every release is signed with [cosign](https://github.com/sigstore/cosign)
- **SBOM** — SPDX + CycloneDX in every release
- **Fuzz testing** — cargo-fuzz + ClusterFuzzLite
- **Blob verification** — SHA256 digest validation on every upload
- **Non-root containers** — all images run as non-root
- **Security headers** — CSP, X-Frame-Options, nosniff
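The blob-verification step listed above can be reproduced by hand: compute the SHA-256 of the file and compare it against the expected digest. A self-contained sketch (the sample content and digest here are illustrative, not a NORA artifact):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Digest check in the spirit of upload-time blob verification:
# reject the artifact when the computed hash differs from the expected one.
artifact=$(mktemp)
printf 'hello' > "$artifact"
expected="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
actual=$(sha256sum "$artifact" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "digest ok"
else
  echo "digest mismatch: $actual" >&2
  exit 1
fi
```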
See [SECURITY.md](SECURITY.md) for vulnerability reporting.
## Author
**Created and maintained by [DevITWay](https://github.com/devitway)**
Created and maintained by [DevITWay](https://github.com/devitway)
- Website: [getnora.io](https://getnora.io)
- Telegram: [@DevITWay](https://t.me/DevITWay)
[![Release](https://img.shields.io/github/v/release/getnora-io/nora)](https://github.com/getnora-io/nora/releases)
[![GHCR](https://img.shields.io/badge/ghcr.io-nora-blue?logo=github)](https://github.com/getnora-io/nora/pkgs/container/nora)
[![Rust](https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white)](https://www.rust-lang.org/)
[![Docs](https://img.shields.io/badge/docs-getnora.dev-green?logo=gitbook)](https://getnora.dev)
[![Telegram](https://img.shields.io/badge/Telegram-Community-blue?logo=telegram)](https://t.me/getnora)
[![GitHub Stars](https://img.shields.io/github/stars/getnora-io/nora?style=flat&logo=github)](https://github.com/getnora-io/nora/stargazers)
- Website: [getnora.dev](https://getnora.dev)
- Telegram: [@getnora](https://t.me/getnora)
- GitHub: [@devitway](https://github.com/devitway)
- Email: devitway@gmail.com
## Contributing
@@ -316,10 +266,6 @@ NORA welcomes contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelin
## License
MIT License - see [LICENSE](LICENSE)
MIT License - see [LICENSE](LICENSE)
Copyright (c) 2026 DevITWay
---
**🐿️ N○RA** - Organized like a chipmunk's stash | Built with Rust by [DevITWay](https://t.me/DevITWay)

ROADMAP.md

@@ -0,0 +1,38 @@
# Roadmap
> This roadmap reflects current priorities. It may change based on community feedback.
## Recently Completed
- **v0.2.32** — Docker dashboard fix for namespaced images, `library/` auto-prepend for Hub official images
- **v0.2.31** — npm full proxy (URL rewriting, scoped packages, publish, SHA-256 integrity cache, metadata TTL)
- **v0.2.29** — Upstream authentication for all protocols (Docker, Maven, npm, PyPI)
## In Progress
- **`nora mirror`** — Pre-fetch dependencies from lockfiles for air-gapped environments ([#40](https://github.com/getnora-io/nora/issues/40))
- npm: `package-lock.json` (v1/v2/v3)
- pip: `requirements.txt`
- cargo: `Cargo.lock`
- maven: dependency list
## Next Up
- **Consistent env var naming** — Unify `NORA_*_PROXY` / `NORA_*_UPSTREAMS` across all protocols ([#39](https://github.com/getnora-io/nora/issues/39))
- **Package blocklist** — Deny specific packages or versions via config ([#41](https://github.com/getnora-io/nora/issues/41))
- **Multiple upstreams for npm/PyPI** — matching the multi-upstream support Maven already has
- **v1.0.0 release** — Stable API, production-ready
## Future
- Docker image mirroring ([#42](https://github.com/getnora-io/nora/issues/42))
- Virtual repositories via config (named endpoints with custom search order)
- Path-based ACL (per-namespace write permissions)
- OIDC/LDAP authentication
- HA mode (stateless API + external database)
- Golang modules proxy
- Content trust (Cosign/Notation verification)
## How to Influence
Open an issue or join [Telegram](https://t.me/getnora) to discuss priorities.


@@ -50,4 +50,6 @@ When deploying NORA:
## Acknowledgments
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities.
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities in our release notes and CHANGELOG, unless the reporter requests anonymity.
If you have previously reported a vulnerability and would like to be credited, please let us know.


@@ -9,6 +9,7 @@ ignore = [
]
[licenses]
unused-allowed-license = "allow"
# Allowed open-source licenses
allow = [
"MIT",
@@ -24,6 +25,7 @@ allow = [
"Zlib",
"CDLA-Permissive-2.0", # webpki-roots (CA certificates bundle)
"MPL-2.0",
"NCSA", # libfuzzer-sys (LLVM fuzzer)
]
[bans]


@@ -1,83 +0,0 @@
#!/bin/bash
# Demo traffic simulator for NORA registry
# Generates random registry activity for dashboard demo
REGISTRY="http://localhost:4000"
LOG_FILE="/var/log/nora-demo-traffic.log"
# Sample packages to fetch
NPM_PACKAGES=("lodash" "express" "react" "axios" "moment" "underscore" "chalk" "debug")
MAVEN_ARTIFACTS=(
"org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.pom"
"com/google/guava/guava/31.1-jre/guava-31.1-jre.pom"
"org/slf4j/slf4j-api/2.0.0/slf4j-api-2.0.0.pom"
)
DOCKER_IMAGES=("alpine" "busybox" "hello-world")
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}
# Random sleep between min and max seconds
random_sleep() {
local min=$1
local max=$2
local delay=$((RANDOM % (max - min + 1) + min))
sleep $delay
}
# Fetch random npm package
fetch_npm() {
local pkg=${NPM_PACKAGES[$RANDOM % ${#NPM_PACKAGES[@]}]}
log "NPM: fetching $pkg"
curl -s "$REGISTRY/npm/$pkg" > /dev/null 2>&1
}
# Fetch random maven artifact
fetch_maven() {
local artifact=${MAVEN_ARTIFACTS[$RANDOM % ${#MAVEN_ARTIFACTS[@]}]}
log "MAVEN: fetching $artifact"
curl -s "$REGISTRY/maven2/$artifact" > /dev/null 2>&1
}
# Docker push/pull cycle
docker_cycle() {
local img=${DOCKER_IMAGES[$RANDOM % ${#DOCKER_IMAGES[@]}]}
local tag="demo-$(date +%s)"
log "DOCKER: push/pull cycle for $img"
# Tag and push
docker tag "$img:latest" "localhost:4000/demo/$img:$tag" 2>/dev/null
docker push "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
# Pull back
docker rmi "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
docker pull "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
# Cleanup
docker rmi "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
}
# Main loop
log "Starting demo traffic simulator"
while true; do
# Random operation
op=$((RANDOM % 10))
case $op in
0|1|2|3) # 40% npm
fetch_npm
;;
4|5|6) # 30% maven
fetch_maven
;;
7|8|9) # 30% docker
docker_cycle
;;
esac
# Random delay: 30-120 seconds
random_sleep 30 120
done


@@ -1,15 +0,0 @@
[Unit]
Description=NORA Demo Traffic Simulator
After=docker.service
Requires=docker.service
[Service]
Type=simple
ExecStart=/opt/nora/demo-traffic.sh
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target

dist/install.sh

@@ -0,0 +1,131 @@
#!/usr/bin/env bash
set -euo pipefail
# NORA Artifact Registry — install script
# Usage: curl -fsSL https://getnora.io/install.sh | bash
VERSION="${NORA_VERSION:-latest}"
ARCH=$(uname -m)
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
INSTALL_DIR="/usr/local/bin"
CONFIG_DIR="/etc/nora"
DATA_DIR="/var/lib/nora"
LOG_DIR="/var/log/nora"
case "$ARCH" in
x86_64|amd64) ARCH="x86_64" ;;
aarch64|arm64) ARCH="aarch64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
echo "Installing NORA ($OS/$ARCH)..."
# Download binary
if [ "$VERSION" = "latest" ]; then
DOWNLOAD_URL="https://github.com/getnora-io/nora/releases/latest/download/nora-${OS}-${ARCH}"
else
DOWNLOAD_URL="https://github.com/getnora-io/nora/releases/download/${VERSION}/nora-${OS}-${ARCH}"
fi
echo "Downloading from $DOWNLOAD_URL..."
if command -v curl &>/dev/null; then
curl -fsSL -o /tmp/nora "$DOWNLOAD_URL"
elif command -v wget &>/dev/null; then
wget -qO /tmp/nora "$DOWNLOAD_URL"
else
echo "Error: curl or wget required"; exit 1
fi
chmod +x /tmp/nora
# Verify signature if cosign is available
if command -v cosign &>/dev/null; then
echo "Verifying binary signature..."
SIG_URL="${DOWNLOAD_URL}.sig"
CERT_URL="${DOWNLOAD_URL}.pem"
if curl -fsSL -o /tmp/nora.sig "$SIG_URL" 2>/dev/null && \
curl -fsSL -o /tmp/nora.pem "$CERT_URL" 2>/dev/null; then
cosign verify-blob --signature /tmp/nora.sig --certificate /tmp/nora.pem \
--certificate-identity-regexp "github.com/getnora-io/nora" \
--certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
/tmp/nora && echo "Signature verified." || echo "Warning: signature verification failed."
rm -f /tmp/nora.sig /tmp/nora.pem
else
echo "Signature files not available, skipping verification."
fi
else
echo "Install cosign for binary signature verification: https://docs.sigstore.dev/cosign/system_config/installation/"
fi
sudo mv /tmp/nora "$INSTALL_DIR/nora"
# Create system user
if ! id nora &>/dev/null; then
sudo useradd --system --shell /usr/sbin/nologin --home-dir "$DATA_DIR" --create-home nora
echo "Created system user: nora"
fi
# Create directories
sudo mkdir -p "$CONFIG_DIR" "$DATA_DIR" "$LOG_DIR"
sudo chown nora:nora "$DATA_DIR" "$LOG_DIR"
# Install default config if not exists
if [ ! -f "$CONFIG_DIR/nora.env" ]; then
cat > /tmp/nora.env << 'ENVEOF'
NORA_HOST=0.0.0.0
NORA_PORT=4000
NORA_STORAGE_PATH=/var/lib/nora
ENVEOF
sudo mv /tmp/nora.env "$CONFIG_DIR/nora.env"
sudo chmod 600 "$CONFIG_DIR/nora.env"
sudo chown nora:nora "$CONFIG_DIR/nora.env"
echo "Created default config: $CONFIG_DIR/nora.env"
fi
# Install systemd service
if [ -d /etc/systemd/system ]; then
cat > /tmp/nora.service << 'SVCEOF'
[Unit]
Description=NORA Artifact Registry
Documentation=https://getnora.dev
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=nora
Group=nora
ExecStart=/usr/local/bin/nora serve
WorkingDirectory=/etc/nora
Restart=on-failure
RestartSec=5
LimitNOFILE=65535
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/nora /var/log/nora
PrivateTmp=true
EnvironmentFile=-/etc/nora/nora.env
[Install]
WantedBy=multi-user.target
SVCEOF
sudo mv /tmp/nora.service /etc/systemd/system/nora.service
sudo systemctl daemon-reload
sudo systemctl enable nora
echo "Installed systemd service: nora"
fi
echo ""
echo "NORA installed successfully!"
echo ""
echo " Binary: $INSTALL_DIR/nora"
echo " Config: $CONFIG_DIR/nora.env"
echo " Data: $DATA_DIR"
echo " Version: $(nora --version 2>/dev/null || echo 'unknown')"
echo ""
echo "Next steps:"
echo " 1. Edit $CONFIG_DIR/nora.env"
echo " 2. sudo systemctl start nora"
echo " 3. curl http://localhost:4000/health"
echo ""

dist/nora.env.example

@@ -0,0 +1,31 @@
# NORA configuration — environment variables
# Copy to /etc/nora/nora.env and adjust
# Server
NORA_HOST=0.0.0.0
NORA_PORT=4000
# NORA_PUBLIC_URL=https://registry.example.com
# Storage
NORA_STORAGE_PATH=/var/lib/nora
# NORA_STORAGE_MODE=s3
# NORA_STORAGE_S3_URL=http://minio:9000
# NORA_STORAGE_BUCKET=registry
# Auth (optional)
# NORA_AUTH_ENABLED=true
# NORA_AUTH_HTPASSWD_FILE=/etc/nora/users.htpasswd
# Rate limiting
# NORA_RATE_LIMIT_ENABLED=true
# npm proxy
# NORA_NPM_PROXY=https://registry.npmjs.org
# NORA_NPM_METADATA_TTL=300
# NORA_NPM_PROXY_AUTH=user:pass
# PyPI proxy
# NORA_PYPI_PROXY=https://pypi.org/simple/
# Docker upstreams
# NORA_DOCKER_UPSTREAMS=https://registry-1.docker.io

dist/nora.service

@@ -0,0 +1,28 @@
[Unit]
Description=NORA Artifact Registry
Documentation=https://getnora.dev
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=nora
Group=nora
ExecStart=/usr/local/bin/nora serve
WorkingDirectory=/etc/nora
Restart=on-failure
RestartSec=5
LimitNOFILE=65535
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/nora /var/log/nora
PrivateTmp=true
# Environment
EnvironmentFile=-/etc/nora/nora.env
[Install]
WantedBy=multi-user.target

docs-ru/README.md

@@ -0,0 +1,13 @@
# NORA Documentation for the Russian Software Registry
## Structure
- `ТУ.md` — Technical specifications
- `Руководство.md` — User guide
- `Руководство_администратора.md` — Administrator guide
- `SBOM.md` — Component list (Software Bill of Materials)
## Status
Documentation is being prepared for inclusion in the Unified Register of Russian Programs for Electronic Computers and Databases (Russian Ministry of Digital Development).

docs-ru/admin-guide.md

@@ -0,0 +1,301 @@
# NORA Administrator Guide
**Version:** 0.2.32
**Date:** 2026-03-16
**Rights holder:** ООО «ТАИАРС» (АРТАИС trademark)
---
## 1. Overview
NORA is a multi-protocol artifact registry for storing, caching, and distributing software components. It provides centralized dependency management for software development and build pipelines.
### 1.1. Purpose
- Storing and serving artifacts over the Docker (OCI), npm, Maven, PyPI, Cargo, Helm OCI, and Raw protocols.
- Proxying and caching external repositories to speed up builds and keep artifacts available without Internet access.
- Artifact integrity control via SHA-256 verification.
- Auditing and logging of all operations.
### 1.2. System requirements
| Parameter | Minimum | Recommended |
|----------|-------------|---------------|
| OS | Linux (amd64, arm64) | Ubuntu 22.04+, RHEL 8+ |
| CPU | 1 core | 2+ cores |
| RAM | 256 MB | 1+ GB |
| Disk | 1 GB | 50+ GB (depends on stored artifact volume) |
| Network | TCP port (4000 by default) | — |
### 1.3. Dependencies
The program ships as a single statically linked executable with no external dependencies. The libraries bundled into the binary are listed in `nora.cdx.json` (CycloneDX format).
---
## 2. Installation
### 2.1. Automated installation
```bash
curl -fsSL https://getnora.io/install.sh | bash
```
The script performs the following steps:
1. Detects the CPU architecture (amd64 or arm64).
2. Downloads the executable from GitHub Releases.
3. Creates the `nora` system user.
4. Creates the directories `/etc/nora/`, `/var/lib/nora/`, and `/var/log/nora/`.
5. Installs the configuration file `/etc/nora/nora.env`.
6. Installs and enables the systemd service.
### 2.2. Manual installation
```bash
# Download
wget https://github.com/getnora-io/nora/releases/download/v1.0.0/nora-linux-x86_64
chmod +x nora-linux-x86_64
mv nora-linux-x86_64 /usr/local/bin/nora
# Create the user
useradd --system --shell /usr/sbin/nologin --home-dir /var/lib/nora --create-home nora
# Create directories
mkdir -p /etc/nora /var/lib/nora /var/log/nora
chown nora:nora /var/lib/nora /var/log/nora
# Install the systemd service
cp dist/nora.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable nora
```
### 2.3. Running from the Docker image
```bash
docker run -d \
  --name nora \
  -p 4000:4000 \
  -v nora-data:/data \
  ghcr.io/getnora-io/nora:latest
```
---
## 3. Configuration
Configuration is set via environment variables, a `config.toml` file, or a combination of both. Precedence: environment variables > config.toml > defaults.
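The precedence rule (environment variable over file value over built-in default) behaves like a shell fallback chain; a toy illustration using a hypothetical port setting:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Illustrative only: an exported variable beats the file value, which beats the default.
default_port=4000   # built-in default
file_port=""        # pretend config.toml left it unset
NORA_PORT=5000      # pretend the operator exported this
port="${NORA_PORT:-${file_port:-$default_port}}"
echo "effective port: $port"
# effective port: 5000
```

Unset `NORA_PORT` and clear `file_port`, and the chain falls through to `4000`.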
### 3.1. Core settings
| Variable | Description | Default |
|-----------|----------|--------------|
| `NORA_HOST` | Bind address | `127.0.0.1` |
| `NORA_PORT` | Port | `4000` |
| `NORA_PUBLIC_URL` | External URL (for link generation) | — |
| `NORA_STORAGE_PATH` | Storage directory path | `data/storage` |
| `NORA_STORAGE_MODE` | Storage backend: `local` or `s3` | `local` |
| `NORA_BODY_LIMIT_MB` | Maximum request body size (MB) | `2048` |
### 3.2. Authentication
| Variable | Description | Default |
|-----------|----------|--------------|
| `NORA_AUTH_ENABLED` | Enable authentication | `false` |
| `NORA_AUTH_HTPASSWD_FILE` | Path to the htpasswd file | `users.htpasswd` |
Creating a user:
```bash
htpasswd -Bc /etc/nora/users.htpasswd admin
```
Roles: `admin` (full access), `write` (read and write), `read` (read-only, default).
### 3.3. Proxying external repositories
| Variable | Description | Default |
|-----------|----------|--------------|
| `NORA_NPM_PROXY` | npm registry URL | `https://registry.npmjs.org` |
| `NORA_NPM_PROXY_AUTH` | Credentials (`user:pass`) | — |
| `NORA_NPM_METADATA_TTL` | Metadata cache TTL (seconds) | `300` |
| `NORA_PYPI_PROXY` | PyPI registry URL | `https://pypi.org/simple/` |
| `NORA_MAVEN_PROXIES` | Comma-separated list of Maven repositories | `https://repo1.maven.org/maven2` |
| `NORA_DOCKER_UPSTREAMS` | Docker registries, format: `url\|auth,url2` | `https://registry-1.docker.io` |
### 3.4. Rate limiting
| Variable | Description | Default |
|-----------|----------|--------------|
| `NORA_RATE_LIMIT_ENABLED` | Enable rate limiting | `true` |
| `NORA_RATE_LIMIT_GENERAL_RPS` | Requests per second (general) | `100` |
| `NORA_RATE_LIMIT_AUTH_RPS` | Requests per second (authentication) | `1` |
| `NORA_RATE_LIMIT_UPLOAD_RPS` | Requests per second (uploads) | `200` |
---
## 4. Service management
### 4.1. Starting and stopping
```bash
systemctl start nora     # Start
systemctl stop nora      # Stop
systemctl restart nora   # Restart
systemctl status nora    # Status
journalctl -u nora -f    # Follow the log
```
### 4.2. Health check
```bash
curl http://localhost:4000/health
```
Response when the service is healthy:
```json
{
"status": "healthy",
"version": "1.0.0",
"storage": { "backend": "local", "reachable": true },
"registries": { "docker": "ok", "npm": "ok", "maven": "ok", "cargo": "ok", "pypi": "ok" }
}
```
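A liveness probe only needs the `status` field from this response. A minimal sketch that requires no JSON parser (the probe URL and wiring are assumptions; only the `status` field name comes from the response above):

```bash
# is_healthy exits 0 iff the /health JSON body reports "healthy"
is_healthy() {
  printf '%s' "$1" | grep -q '"status"[[:space:]]*:[[:space:]]*"healthy"'
}

# In a real probe you would feed it curl output:
#   is_healthy "$(curl -fsS http://localhost:4000/health)"
is_healthy '{"status":"healthy","version":"1.0.0"}' && echo "service is up"
```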
### 4.3. Metrics (Prometheus)
```
GET /metrics
```
Exported metrics: request counts, latency, uploads and downloads per protocol.
---
## 5. Backup and restore
### 5.1. Creating a backup
```bash
nora backup --output /backup/nora-$(date +%Y%m%d).tar.gz
```
### 5.2. Restoring
```bash
nora restore --input /backup/nora-20260316.tar.gz
```
### 5.3. Garbage collection
```bash
nora gc --dry-run   # Preview (nothing is deleted)
nora gc             # Delete orphaned blobs
```
---
## 6. Pre-caching (nora mirror)
The `nora mirror` command pre-loads dependencies through the NORA proxy cache, so artifacts stay available in isolated (air-gapped) environments with no Internet access.
### 6.1. Caching from a lockfile
```bash
nora mirror npm --lockfile package-lock.json --registry http://localhost:4000
nora mirror pip --lockfile requirements.txt --registry http://localhost:4000
nora mirror cargo --lockfile Cargo.lock --registry http://localhost:4000
```
### 6.2. Caching from a package list
```bash
nora mirror npm --packages lodash,express --registry http://localhost:4000
nora mirror npm --packages lodash --all-versions --registry http://localhost:4000
```
### 6.3. Options
| Flag | Description | Default |
|------|----------|--------------|
| `--registry` | NORA instance URL | `http://localhost:4000` |
| `--concurrency` | Number of parallel downloads | `8` |
| `--all-versions` | Download all versions (only with `--packages`) | — |
---
## 7. Storage migration
Moving artifacts between local storage and S3:
```bash
nora migrate --from local --to s3 --dry-run   # Preview
nora migrate --from local --to s3             # Execute
```
---
## 8. Security
### 8.1. Integrity control
When proxying npm packages, NORA computes and stores a SHA-256 checksum for every tarball. The checksum is re-verified on every cache hit; on a mismatch the request is rejected and a SECURITY-level warning is written to the log.
### 8.2. Protection against package substitution
- File-name validation on publish (protection against directory traversal).
- The package name in the URL is checked against the request body.
- Version immutability: re-publishing an existing version is rejected.
### 8.3. Audit
All operations (uploads, downloads, cache accesses, errors) are recorded in the `audit.jsonl` file in the storage directory. The format is JSON Lines, one record per line.
### 8.4. systemd hardening
The service unit file sets the following security options:
- `NoNewPrivileges=true`: privilege escalation is forbidden.
- `ProtectSystem=strict`: the file system is read-only except for explicitly allowed directories.
- `ProtectHome=true`: no access to home directories.
- `PrivateTmp=true`: an isolated directory for temporary files.
---
## 9. Endpoints
| Protocol | Endpoint | Description |
|----------|----------|----------|
| Docker / OCI | `/v2/` | Docker Registry V2 API |
| npm | `/npm/` | npm registry (proxy + publishing) |
| Maven | `/maven2/` | Maven repository |
| PyPI | `/simple/` | Python Simple API (PEP 503) |
| Cargo | `/cargo/` | Cargo registry |
| Helm | `/v2/` (OCI) | Helm charts over the OCI protocol |
| Raw | `/raw/` | Arbitrary files |
| Monitoring | `/health`, `/ready`, `/metrics` | Health checks and metrics |
| UI | `/ui/` | Management web interface |
| API documentation | `/api-docs` | OpenAPI (Swagger UI) |
---
## 10. Troubleshooting
### The service does not start
```bash
journalctl -u nora --no-pager -n 50
```
Common causes: the port is already in use, the storage directory is not accessible, or the configuration is invalid.
### The proxy cache does not work
1. Check that the upstream registry is reachable: `curl https://registry.npmjs.org/lodash`.
2. Make sure `NORA_NPM_PROXY` is set correctly.
3. For a private upstream registry, set `NORA_NPM_PROXY_AUTH`.
### Integrity check failed
The checksum of a cached tarball no longer matches the stored value. Likely causes: file-system corruption or unauthorized modification of the file. Delete the corrupted file from the storage directory; NORA will re-fetch it from the upstream registry.
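The same comparison can be reproduced by hand to confirm which file is corrupted: recompute the tarball's SHA-256 and compare it with the recorded value. The file name and stored digest below are illustrative stand-ins, not real storage paths:

```bash
# Recreate the integrity check: recompute SHA-256 and compare to the stored value
printf 'hello' > sample.tgz    # stand-in for a cached tarball
STORED=2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
ACTUAL=$(sha256sum sample.tgz | awk '{print $1}')
if [ "$STORED" = "$ACTUAL" ]; then
  echo "integrity OK"
else
  echo "integrity FAILED: delete the file and let NORA re-fetch it"
fi
```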

docs-ru/technical-spec.md (new file, 165 lines)
# Technical Specification
## Program: "NORA — Artifact Registry"
**Document version:** 0.2.32
**Date:** 2026-03-16
**Rights holder:** ООО «ТАИАРС» (trademark АРТАИС)
---
## 1. Name and designation
**Full name:** NORA, a multi-protocol artifact registry.
**Short name:** NORA.
**Designation:** nora-registry.
---
## 2. Purpose
The program stores, caches, and distributes software components (artifacts) used during the development, build, and deployment of software.
### 2.1. Scope
- Hosting internal repositories of software components.
- Proxying and caching public repositories (npmjs.org, PyPI, Maven Central, Docker Hub, crates.io).
- Keeping dependencies available in isolated (air-gapped) environments with no Internet access.
- Integrity control and security of the software supply chain.
### 2.2. Software class
Development and DevOps tooling.
OKPD2 code: 62.01 (computer software development).
---
## 3. Functional characteristics
### 3.1. Supported protocols
| Protocol | Standard | Purpose |
|----------|----------|------------|
| Docker / OCI | OCI Distribution Spec v1.0 | Container images, Helm charts |
| npm | npm Registry API | JavaScript / TypeScript libraries |
| Maven | Maven Repository Layout | Java / Kotlin libraries |
| PyPI | PEP 503 (Simple API) | Python libraries |
| Cargo | Cargo Registry Protocol | Rust libraries |
| Raw | HTTP PUT/GET | Arbitrary files |
### 3.2. Operating modes
1. **Hosted:** accepts and stores artifacts published by users.
2. **Proxy cache:** transparently proxies requests to upstream repositories with local caching.
3. **Combined:** hosted and proxy modes at once (local storage is checked first, then the upstream repository).
### 3.3. Access control
- htpasswd-based authentication (bcrypt).
- Role model: `read` (read-only), `write` (read and write), `admin` (full access).
- Access tokens with a limited lifetime.
### 3.4. Security
- Integrity control of cached artifacts (SHA-256).
- Path-traversal protection on publish.
- The package name in the URL is checked against the request body.
- Published versions are immutable.
- All operations are audited in JSON Lines format.
- TLS support when deployed behind a reverse proxy.
### 3.5. Operations
- Dependency pre-caching (`nora mirror`) from lockfiles.
- Garbage collection (`nora gc`): removal of orphaned blobs.
- Backup and restore (`nora backup`, `nora restore`).
- Migration between local storage and S3-compatible object storage.
- Monitoring: `/health`, `/ready`, and `/metrics` endpoints (Prometheus format).
- Web interface for browsing registry contents.
- API documentation in OpenAPI 3.0 format.
---
## 4. Technical characteristics
### 4.1. Runtime environment
| Parameter | Value |
|----------|----------|
| Implementation language | Rust |
| Delivery format | Single statically linked executable |
| Supported OS | Linux (kernel 4.15+) |
| Architectures | x86_64 (amd64), aarch64 (arm64) |
| Containerization | Docker image based on `scratch` |
| System integration | systemd (service unit included) |
### 4.2. Data storage
| Parameter | Value |
|----------|----------|
| Local storage | File system (ext4, XFS, ZFS) |
| Object storage | S3-compatible API (MinIO, Yandex Object Storage, Selectel S3) |
| Layout | Hierarchical: `{protocol}/{package}/{artifact}` |
| Audit | Append-only JSONL file |
### 4.3. Configuration
| Source | Priority |
|----------|-----------|
| Environment variables (`NORA_*`) | Highest |
| `config.toml` file | Middle |
| Built-in defaults | Lowest |
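This priority order means an environment variable always overrides any file value, which in turn overrides the built-in default. Shell parameter expansion has the same default-fallback shape; NORA's own loader is written in Rust, so this is purely an illustration of the semantics:

```bash
# ENV > default: ${VAR:-fallback} uses the env value when set, the default otherwise
unset NORA_PORT
echo "${NORA_PORT:-4000}"    # env var unset -> prints the default, 4000
NORA_PORT=8080
echo "${NORA_PORT:-4000}"    # env var set -> prints 8080
```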
### 4.4. Performance
| Parameter | Value |
|----------|----------|
| Startup time | < 100 ms |
| Serving from cache | < 2 ms (metadata), < 10 ms (artifacts up to 1 MB) |
| Concurrency | Asynchronous I/O (tokio runtime) |
| Rate limiting | Configurable (default: 100 requests/s) |
---
## 5. Licensing
| Component | License |
|-----------|----------|
| NORA (core) | MIT License |
| NORA Enterprise | Proprietary |
The full list of licenses for bundled libraries is provided in the SBOM file (CycloneDX format).
---
## 6. Deliverables
| Component | Description |
|-----------|----------|
| `nora` | Executable |
| `nora.service` | systemd service unit |
| `nora.env.example` | Configuration template |
| `install.sh` | Installation script |
| `nora.cdx.json` | SBOM in CycloneDX format |
| Administrator guide | Part of this documentation set |
| User guide | Part of this documentation set |
| Technical specification | This document |
---
## 7. Contact information
**Rights holder:** ООО «ТАИАРС»
**Trademark:** АРТАИС
**Product site:** https://getnora.io
**Documentation:** https://getnora.dev
**Source code:** https://github.com/getnora-io/nora
**Support:** https://t.me/getnora

docs-ru/user-guide.md (new file, 221 lines)
# NORA User Guide
**Version:** 0.2.32
**Date:** 2026-03-16
**Rights holder:** ООО «ТАИАРС» (trademark АРТАИС)
---
## 1. Overview
NORA is an artifact registry for development teams. It stores and caches libraries, Docker images, and other software components used when building applications.
This guide is intended for developers who use NORA as a source of dependencies.
---
## 2. Setting up your environment
### 2.1. npm / Node.js
Point npm at NORA as its registry:
```bash
npm config set registry http://nora.example.com:4000/npm
```
Or create an `.npmrc` file in the project root:
```
registry=http://nora.example.com:4000/npm
```
After this, every `npm install` fetches packages through NORA. On first access NORA downloads the package from the upstream registry (npmjs.org) and stores it in the cache; subsequent requests are served from the cache.
### 2.2. Docker
```bash
docker login nora.example.com:4000
docker pull nora.example.com:4000/library/nginx:latest
docker push nora.example.com:4000/myteam/myapp:1.0.0
```
### 2.3. Maven
Add the repository to `settings.xml`:
```xml
<mirrors>
<mirror>
<id>nora</id>
<mirrorOf>central</mirrorOf>
<url>http://nora.example.com:4000/maven2</url>
</mirror>
</mirrors>
```
### 2.4. Python / pip
```bash
pip install --index-url http://nora.example.com:4000/simple flask
```
Or in `pip.conf`:
```ini
[global]
index-url = http://nora.example.com:4000/simple
```
### 2.5. Cargo / Rust
Configure `~/.cargo/config.toml`:
```toml
[registries.nora]
index = "sparse+http://nora.example.com:4000/cargo/"
[source.crates-io]
replace-with = "nora"
```
### 2.6. Helm
Helm uses the OCI protocol via the Docker Registry API:
```bash
helm push mychart-0.1.0.tgz oci://nora.example.com:4000/helm
helm pull oci://nora.example.com:4000/helm/mychart --version 0.1.0
```
---
## 3. Publishing packages
### 3.1. npm
```bash
npm publish --registry http://nora.example.com:4000/npm
```
Requirements:
- A `package.json` file with `name` and `version` fields.
- Each version is published exactly once; re-publishing the same version is rejected.
### 3.2. Docker
```bash
docker tag myapp:latest nora.example.com:4000/myteam/myapp:1.0.0
docker push nora.example.com:4000/myteam/myapp:1.0.0
```
### 3.3. Maven
```bash
mvn deploy -DaltDeploymentRepository=nora::default::http://nora.example.com:4000/maven2
```
### 3.4. Raw (arbitrary files)
```bash
# Upload
curl -X PUT --data-binary @release.tar.gz http://nora.example.com:4000/raw/builds/release-1.0.tar.gz
# Download
curl -O http://nora.example.com:4000/raw/builds/release-1.0.tar.gz
```
---
## 4. Working in an air-gapped environment
If the build server has no Internet access, use pre-caching.
### 4.1. Caching a project's dependencies
On a machine with access to both the Internet and NORA, run:
```bash
nora mirror npm --lockfile package-lock.json --registry http://nora.example.com:4000
```
After that, every dependency from the lockfile is available through NORA even when upstream registries are unreachable.
### 4.2. Caching all versions of a package
```bash
nora mirror npm --packages lodash,express --all-versions --registry http://nora.example.com:4000
```
This command downloads every published version of the listed packages.
---
## 5. Web interface
NORA provides a web interface for browsing registry contents:
```
http://nora.example.com:4000/ui/
```
Available features:
- Artifact listings per protocol.
- Version count and size for each package.
- Log of recent operations.
- Download metrics.
---
## 6. API documentation
Interactive API documentation is available at:
```
http://nora.example.com:4000/api-docs
```
Format: OpenAPI 3.0 (Swagger UI).
---
## 7. Authentication
If the administrator has enabled authentication, write operations require a token.
### 7.1. Obtaining a token
```bash
curl -u admin:password http://nora.example.com:4000/auth/token
```
### 7.2. Using the token
```bash
# npm
npm config set //nora.example.com:4000/npm/:_authToken TOKEN
# Docker
docker login nora.example.com:4000
# curl
curl -H "Authorization: Bearer TOKEN" http://nora.example.com:4000/npm/my-package
```
Read operations require no authentication by default (the `read` role is granted automatically).
---
## 8. Frequently asked questions
**Q: What happens if the upstream registry (npmjs.org) becomes unavailable?**
A: NORA keeps serving requests from its cache. Packages that have never been requested remain unavailable until connectivity is restored. Use `nora mirror` in advance to avoid this situation.
**Q: Can I publish private packages?**
A: Yes. Packages published with `npm publish` or `docker push` are stored in NORA's local storage and are available to all users of that instance.
**Q: How do I refresh the metadata cache?**
A: The npm metadata cache refreshes automatically when its TTL expires (5 minutes by default). To refresh immediately, delete the `metadata.json` file from the storage directory.
**Q: Are scoped npm packages (@scope/package) supported?**
A: Yes, fully. For example: `npm install @babel/core --registry http://nora.example.com:4000/npm`.

fuzz/Cargo.toml (new file, 23 lines)
[package]
name = "nora-fuzz"
version = "0.0.0"
publish = false
license = "MIT"
edition = "2021"
[package.metadata]
cargo-fuzz = true
[dependencies]
libfuzzer-sys = "0.4"
nora-registry = { path = "../nora-registry" }
[[bin]]
name = "fuzz_validation"
path = "fuzz_targets/fuzz_validation.rs"
doc = false
[[bin]]
name = "fuzz_docker_manifest"
path = "fuzz_targets/fuzz_docker_manifest.rs"
doc = false

fuzz/fuzz_targets/fuzz_docker_manifest.rs (new file)
#![no_main]
use libfuzzer_sys::fuzz_target;
use nora_registry::docker_fuzz::detect_manifest_media_type;
fuzz_target!(|data: &[u8]| {
// Fuzz Docker manifest parser — must never panic on any input
let _ = detect_manifest_media_type(data);
});

fuzz/fuzz_targets/fuzz_validation.rs (new file)
#![no_main]
use libfuzzer_sys::fuzz_target;
use nora_registry::validation::{
validate_digest, validate_docker_name, validate_docker_reference, validate_storage_key,
};
fuzz_target!(|data: &str| {
// Fuzz all validators — they must never panic on any input
let _ = validate_storage_key(data);
let _ = validate_docker_name(data);
let _ = validate_digest(data);
let _ = validate_docker_reference(data);
});

install.sh (deleted)
#!/usr/bin/env sh
# NORA installer — https://getnora.io/install.sh
# Usage: curl -fsSL https://getnora.io/install.sh | sh
set -e
REPO="getnora-io/nora"
BINARY="nora"
INSTALL_DIR="/usr/local/bin"
# ── Detect OS and architecture ──────────────────────────────────────────────
OS="$(uname -s)"
ARCH="$(uname -m)"
case "$OS" in
Linux) os="linux" ;;
Darwin) os="darwin" ;;
*)
echo "Unsupported OS: $OS"
echo "Please download manually: https://github.com/$REPO/releases/latest"
exit 1
;;
esac
case "$ARCH" in
x86_64 | amd64) arch="amd64" ;;
aarch64 | arm64) arch="arm64" ;;
*)
echo "Unsupported architecture: $ARCH"
echo "Please download manually: https://github.com/$REPO/releases/latest"
exit 1
;;
esac
ASSET="${BINARY}-${os}-${arch}"
# ── Get latest release version ──────────────────────────────────────────────
VERSION="$(curl -fsSL "https://api.github.com/repos/$REPO/releases/latest" \
| grep '"tag_name"' \
| sed 's/.*"tag_name": *"\([^"]*\)".*/\1/')"
if [ -z "$VERSION" ]; then
echo "Failed to get latest version"
exit 1
fi
echo "Installing NORA $VERSION ($os/$arch)..."
# ── Download binary and checksum ────────────────────────────────────────────
BASE_URL="https://github.com/$REPO/releases/download/$VERSION"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "$TMP_DIR"' EXIT
echo "Downloading $ASSET..."
curl -fsSL "$BASE_URL/$ASSET" -o "$TMP_DIR/$BINARY"
curl -fsSL "$BASE_URL/$ASSET.sha256" -o "$TMP_DIR/$ASSET.sha256"
# ── Verify checksum ─────────────────────────────────────────────────────────
echo "Verifying checksum..."
EXPECTED="$(awk '{print $1}' "$TMP_DIR/$ASSET.sha256")"
ACTUAL="$(sha256sum "$TMP_DIR/$BINARY" | awk '{print $1}')"
if [ "$EXPECTED" != "$ACTUAL" ]; then
echo "Checksum mismatch!"
echo " Expected: $EXPECTED"
echo " Actual: $ACTUAL"
exit 1
fi
echo "Checksum OK"
# ── Install ─────────────────────────────────────────────────────────────────
chmod +x "$TMP_DIR/$BINARY"
if [ -w "$INSTALL_DIR" ]; then
mv "$TMP_DIR/$BINARY" "$INSTALL_DIR/$BINARY"
elif command -v sudo >/dev/null 2>&1; then
sudo mv "$TMP_DIR/$BINARY" "$INSTALL_DIR/$BINARY"
else
# Fallback to ~/.local/bin
INSTALL_DIR="$HOME/.local/bin"
mkdir -p "$INSTALL_DIR"
mv "$TMP_DIR/$BINARY" "$INSTALL_DIR/$BINARY"
echo "Installed to $INSTALL_DIR/$BINARY"
echo "Make sure $INSTALL_DIR is in your PATH"
fi
# ── Done ────────────────────────────────────────────────────────────────────
echo ""
echo "NORA $VERSION installed to $INSTALL_DIR/$BINARY"
echo ""
nora --version 2>/dev/null || true

logo.jpg (binary file, not shown; previous size 5.9 KiB)
Cargo.toml of the nora-cli crate (deleted)
[package]
name = "nora-cli"
version.workspace = true
edition.workspace = true
license.workspace = true
authors.workspace = true
repository.workspace = true
homepage.workspace = true
description = "CLI tool for NORA registry"
[[bin]]
name = "nora-cli"
path = "src/main.rs"
[dependencies]
tokio.workspace = true
reqwest.workspace = true
serde.workspace = true
serde_json.workspace = true
clap = { version = "4", features = ["derive"] }
indicatif = "0.18"
tar = "0.4"
flate2 = "1.1"

src/main.rs of the nora-cli crate (deleted)
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use clap::{Parser, Subcommand};
#[derive(Parser)]
#[command(name = "nora-cli")]
#[command(about = "CLI tool for Nora registry")]
struct Cli {
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand)]
enum Commands {
/// Login to a registry
Login {
#[arg(long)]
registry: String,
#[arg(short, long)]
username: String,
},
/// Push an artifact
Push {
#[arg(long)]
registry: String,
path: String,
},
/// Pull an artifact
Pull {
#[arg(long)]
registry: String,
artifact: String,
},
}
#[tokio::main]
async fn main() {
let cli = Cli::parse();
match cli.command {
Commands::Login { registry, username } => {
println!("Logging in to {} as {}", registry, username);
// TODO: implement
}
Commands::Push { registry, path } => {
println!("Pushing {} to {}", path, registry);
// TODO: implement
}
Commands::Pull { registry, artifact } => {
println!("Pulling {} from {}", artifact, registry);
// TODO: implement
}
}
}


@@ -10,6 +10,10 @@ description = "Cloud-Native Artifact Registry - Fast, lightweight, multi-protoco
keywords = ["registry", "docker", "artifacts", "cloud-native", "devops"]
categories = ["command-line-utilities", "development-tools", "web-programming"]
[lib]
name = "nora_registry"
path = "src/lib.rs"
[[bin]]
name = "nora"
path = "src/main.rs"
@@ -45,6 +49,9 @@ tower_governor = "0.8"
governor = "0.10"
parking_lot = "0.12"
zeroize = { version = "1.8", features = ["derive"] }
argon2 = { version = "0.5", features = ["std", "rand"] }
tower-http = { version = "0.6", features = ["set-header"] }
percent-encoding = "2"
[dev-dependencies]
tempfile = "3"


@@ -99,3 +99,139 @@ impl Default for ActivityLog {
Self::new(50)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_action_type_display() {
assert_eq!(ActionType::Pull.to_string(), "PULL");
assert_eq!(ActionType::Push.to_string(), "PUSH");
assert_eq!(ActionType::CacheHit.to_string(), "CACHE");
assert_eq!(ActionType::ProxyFetch.to_string(), "PROXY");
}
#[test]
fn test_action_type_equality() {
assert_eq!(ActionType::Pull, ActionType::Pull);
assert_ne!(ActionType::Pull, ActionType::Push);
}
#[test]
fn test_activity_entry_new() {
let entry = ActivityEntry::new(
ActionType::Pull,
"nginx:latest".to_string(),
"docker",
"LOCAL",
);
assert_eq!(entry.action, ActionType::Pull);
assert_eq!(entry.artifact, "nginx:latest");
assert_eq!(entry.registry, "docker");
assert_eq!(entry.source, "LOCAL");
}
#[test]
fn test_activity_log_push_and_len() {
let log = ActivityLog::new(10);
assert!(log.is_empty());
assert_eq!(log.len(), 0);
log.push(ActivityEntry::new(
ActionType::Push,
"test:v1".to_string(),
"docker",
"LOCAL",
));
assert!(!log.is_empty());
assert_eq!(log.len(), 1);
}
#[test]
fn test_activity_log_recent() {
let log = ActivityLog::new(10);
for i in 0..5 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("image:{}", i),
"docker",
"LOCAL",
));
}
let recent = log.recent(3);
assert_eq!(recent.len(), 3);
// newest first
assert_eq!(recent[0].artifact, "image:4");
assert_eq!(recent[1].artifact, "image:3");
assert_eq!(recent[2].artifact, "image:2");
}
#[test]
fn test_activity_log_all() {
let log = ActivityLog::new(10);
for i in 0..3 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("pkg:{}", i),
"npm",
"PROXY",
));
}
let all = log.all();
assert_eq!(all.len(), 3);
assert_eq!(all[0].artifact, "pkg:2"); // newest first
}
#[test]
fn test_activity_log_bounded_size() {
let log = ActivityLog::new(3);
for i in 0..5 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("item:{}", i),
"cargo",
"CACHE",
));
}
assert_eq!(log.len(), 3);
let all = log.all();
// oldest entries should be dropped
assert_eq!(all[0].artifact, "item:4");
assert_eq!(all[1].artifact, "item:3");
assert_eq!(all[2].artifact, "item:2");
}
#[test]
fn test_activity_log_recent_more_than_available() {
let log = ActivityLog::new(10);
log.push(ActivityEntry::new(
ActionType::Push,
"one".to_string(),
"maven",
"LOCAL",
));
let recent = log.recent(100);
assert_eq!(recent.len(), 1);
}
#[test]
fn test_activity_log_default() {
let log = ActivityLog::default();
assert!(log.is_empty());
// default capacity is 50
for i in 0..60 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("x:{}", i),
"docker",
"LOCAL",
));
}
assert_eq!(log.len(), 50);
}
}


@@ -71,3 +71,69 @@ impl AuditLog {
&self.path
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[test]
fn test_audit_entry_new() {
let entry = AuditEntry::new(
"push",
"admin",
"nginx:latest",
"docker",
"uploaded manifest",
);
assert_eq!(entry.action, "push");
assert_eq!(entry.actor, "admin");
assert_eq!(entry.artifact, "nginx:latest");
assert_eq!(entry.registry, "docker");
assert_eq!(entry.detail, "uploaded manifest");
}
#[test]
fn test_audit_log_new_and_path() {
let tmp = TempDir::new().unwrap();
let log = AuditLog::new(tmp.path().to_str().unwrap());
assert!(log.path().ends_with("audit.jsonl"));
}
#[test]
fn test_audit_log_write_entry() {
let tmp = TempDir::new().unwrap();
let log = AuditLog::new(tmp.path().to_str().unwrap());
let entry = AuditEntry::new("pull", "user1", "lodash", "npm", "downloaded");
log.log(entry);
// Verify file contains the entry
let content = std::fs::read_to_string(log.path()).unwrap();
assert!(content.contains(r#""action":"pull""#));
assert!(content.contains(r#""actor":"user1""#));
assert!(content.contains(r#""artifact":"lodash""#));
}
#[test]
fn test_audit_log_multiple_entries() {
let tmp = TempDir::new().unwrap();
let log = AuditLog::new(tmp.path().to_str().unwrap());
log.log(AuditEntry::new("push", "admin", "a", "docker", ""));
log.log(AuditEntry::new("pull", "user", "b", "npm", ""));
log.log(AuditEntry::new("delete", "admin", "c", "maven", ""));
let content = std::fs::read_to_string(log.path()).unwrap();
let lines: Vec<&str> = content.lines().collect();
assert_eq!(lines.len(), 3);
}
#[test]
fn test_audit_entry_serialization() {
let entry = AuditEntry::new("push", "ci", "app:v1", "docker", "ci build");
let json = serde_json::to_string(&entry).unwrap();
assert!(json.contains(r#""action":"push""#));
assert!(json.contains(r#""ts":""#));
}
}


@@ -94,6 +94,16 @@ pub async fn auth_middleware(
return next.run(request).await;
}
// Allow anonymous read if configured
let is_read_method = matches!(
*request.method(),
axum::http::Method::GET | axum::http::Method::HEAD
);
if state.config.auth.anonymous_read && is_read_method {
// Read requests allowed without auth
return next.run(request).await;
}
// Extract Authorization header
let auth_header = request
.headers()


@@ -1,12 +1,18 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use base64::{engine::general_purpose::STANDARD, Engine};
use serde::{Deserialize, Serialize};
use std::env;
use std::fs;
pub use crate::secrets::SecretsConfig;
/// Encode "user:pass" into a Basic Auth header value, e.g. "Basic dXNlcjpwYXNz".
pub fn basic_auth_header(credentials: &str) -> String {
format!("Basic {}", STANDARD.encode(credentials))
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub server: ServerConfig,
@@ -20,6 +26,8 @@ pub struct Config {
#[serde(default)]
pub docker: DockerConfig,
#[serde(default)]
pub go: GoConfig,
#[serde(default)]
pub raw: RawConfig,
#[serde(default)]
pub auth: AuthConfig,
@@ -93,7 +101,7 @@ fn default_bucket() -> String {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MavenConfig {
#[serde(default)]
pub proxies: Vec<String>,
pub proxies: Vec<MavenProxyEntry>,
#[serde(default = "default_timeout")]
pub proxy_timeout: u64,
}
@@ -102,18 +110,67 @@ pub struct MavenConfig {
pub struct NpmConfig {
#[serde(default)]
pub proxy: Option<String>,
#[serde(default)]
pub proxy_auth: Option<String>, // "user:pass" for basic auth
#[serde(default = "default_timeout")]
pub proxy_timeout: u64,
/// Metadata cache TTL in seconds (default: 300 = 5 min). Set to 0 to cache forever.
#[serde(default = "default_metadata_ttl")]
pub metadata_ttl: u64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PypiConfig {
#[serde(default)]
pub proxy: Option<String>,
#[serde(default)]
pub proxy_auth: Option<String>, // "user:pass" for basic auth
#[serde(default = "default_timeout")]
pub proxy_timeout: u64,
}
/// Go module proxy configuration (GOPROXY protocol)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GoConfig {
/// Upstream Go module proxy URL (default: https://proxy.golang.org)
#[serde(default = "default_go_proxy")]
pub proxy: Option<String>,
#[serde(default)]
pub proxy_auth: Option<String>, // "user:pass" for basic auth
#[serde(default = "default_timeout")]
pub proxy_timeout: u64,
/// Separate timeout for .zip downloads (default: 120s, zips can be large)
#[serde(default = "default_go_zip_timeout")]
pub proxy_timeout_zip: u64,
/// Maximum module zip size in bytes (default: 100MB)
#[serde(default = "default_go_max_zip_size")]
pub max_zip_size: u64,
}
fn default_go_proxy() -> Option<String> {
Some("https://proxy.golang.org".to_string())
}
fn default_go_zip_timeout() -> u64 {
120
}
fn default_go_max_zip_size() -> u64 {
104_857_600 // 100MB
}
impl Default for GoConfig {
fn default() -> Self {
Self {
proxy: default_go_proxy(),
proxy_auth: None,
proxy_timeout: 30,
proxy_timeout_zip: 120,
max_zip_size: 104_857_600,
}
}
}
/// Docker registry configuration with upstream proxy support
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DockerConfig {
@@ -131,6 +188,37 @@ pub struct DockerUpstream {
pub auth: Option<String>, // "user:pass" for basic auth
}
/// Maven upstream proxy configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum MavenProxyEntry {
Simple(String),
Full(MavenProxy),
}
/// Maven upstream proxy with optional auth
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MavenProxy {
pub url: String,
#[serde(default)]
pub auth: Option<String>, // "user:pass" for basic auth
}
impl MavenProxyEntry {
pub fn url(&self) -> &str {
match self {
MavenProxyEntry::Simple(s) => s,
MavenProxyEntry::Full(p) => &p.url,
}
}
pub fn auth(&self) -> Option<&str> {
match self {
MavenProxyEntry::Simple(_) => None,
MavenProxyEntry::Full(p) => p.auth.as_deref(),
}
}
}
/// Raw repository configuration for simple file storage
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RawConfig {
@@ -156,6 +244,9 @@ fn default_max_file_size() -> u64 {
pub struct AuthConfig {
#[serde(default)]
pub enabled: bool,
/// Allow anonymous read access (pull/download without auth, push requires auth)
#[serde(default)]
pub anonymous_read: bool,
#[serde(default = "default_htpasswd_file")]
pub htpasswd_file: String,
#[serde(default = "default_token_storage")]
@@ -174,10 +265,16 @@ fn default_timeout() -> u64 {
30
}
fn default_metadata_ttl() -> u64 {
300 // 5 minutes
}
impl Default for MavenConfig {
fn default() -> Self {
Self {
proxies: vec!["https://repo1.maven.org/maven2".to_string()],
proxies: vec![MavenProxyEntry::Simple(
"https://repo1.maven.org/maven2".to_string(),
)],
proxy_timeout: 30,
}
}
@@ -187,7 +284,9 @@ impl Default for NpmConfig {
fn default() -> Self {
Self {
proxy: Some("https://registry.npmjs.org".to_string()),
proxy_auth: None,
proxy_timeout: 30,
metadata_ttl: 300,
}
}
}
@@ -196,6 +295,7 @@ impl Default for PypiConfig {
fn default() -> Self {
Self {
proxy: Some("https://pypi.org/simple/".to_string()),
proxy_auth: None,
proxy_timeout: 30,
}
}
@@ -226,6 +326,7 @@ impl Default for AuthConfig {
fn default() -> Self {
Self {
enabled: false,
anonymous_read: false,
htpasswd_file: "users.htpasswd".to_string(),
token_storage: "data/tokens".to_string(),
}
@@ -309,6 +410,41 @@ impl Default for RateLimitConfig {
}
impl Config {
/// Warn if credentials are configured via config.toml (not env vars)
pub fn warn_plaintext_credentials(&self) {
// Docker upstreams
for (i, upstream) in self.docker.upstreams.iter().enumerate() {
if upstream.auth.is_some() && std::env::var("NORA_DOCKER_UPSTREAMS").is_err() {
tracing::warn!(
upstream_index = i,
url = %upstream.url,
"Docker upstream credentials in config.toml are plaintext — consider NORA_DOCKER_UPSTREAMS env var"
);
}
}
// Maven proxies
for proxy in &self.maven.proxies {
if proxy.auth().is_some() && std::env::var("NORA_MAVEN_PROXIES").is_err() {
tracing::warn!(
url = %proxy.url(),
"Maven proxy credentials in config.toml are plaintext — consider NORA_MAVEN_PROXIES env var"
);
}
}
// Go
if self.go.proxy_auth.is_some() && std::env::var("NORA_GO_PROXY_AUTH").is_err() {
tracing::warn!("Go proxy credentials in config.toml are plaintext — consider NORA_GO_PROXY_AUTH env var");
}
// npm
if self.npm.proxy_auth.is_some() && std::env::var("NORA_NPM_PROXY_AUTH").is_err() {
tracing::warn!("npm proxy credentials in config.toml are plaintext — consider NORA_NPM_PROXY_AUTH env var");
}
// PyPI
if self.pypi.proxy_auth.is_some() && std::env::var("NORA_PYPI_PROXY_AUTH").is_err() {
tracing::warn!("PyPI proxy credentials in config.toml are plaintext — consider NORA_PYPI_PROXY_AUTH env var");
}
}
/// Load configuration with priority: ENV > config.toml > defaults
pub fn load() -> Self {
// 1. Start with defaults
@@ -373,13 +509,30 @@ impl Config {
if let Ok(val) = env::var("NORA_AUTH_ENABLED") {
self.auth.enabled = val.to_lowercase() == "true" || val == "1";
}
if let Ok(val) = env::var("NORA_AUTH_ANONYMOUS_READ") {
self.auth.anonymous_read = val.to_lowercase() == "true" || val == "1";
}
if let Ok(val) = env::var("NORA_AUTH_HTPASSWD_FILE") {
self.auth.htpasswd_file = val;
}
// Maven config — supports "url1,url2" or "url1|auth1,url2|auth2"
if let Ok(val) = env::var("NORA_MAVEN_PROXIES") {
self.maven.proxies = val
.split(',')
.filter(|s| !s.is_empty())
.map(|s| {
let parts: Vec<&str> = s.trim().splitn(2, '|').collect();
if parts.len() > 1 {
MavenProxyEntry::Full(MavenProxy {
url: parts[0].to_string(),
auth: Some(parts[1].to_string()),
})
} else {
MavenProxyEntry::Simple(parts[0].to_string())
}
})
.collect();
}
if let Ok(val) = env::var("NORA_MAVEN_PROXY_TIMEOUT") {
if let Ok(timeout) = val.parse() {
@@ -396,6 +549,16 @@ impl Config {
self.npm.proxy_timeout = timeout;
}
}
if let Ok(val) = env::var("NORA_NPM_METADATA_TTL") {
if let Ok(ttl) = val.parse() {
self.npm.metadata_ttl = ttl;
}
}
// npm proxy auth
if let Ok(val) = env::var("NORA_NPM_PROXY_AUTH") {
self.npm.proxy_auth = if val.is_empty() { None } else { Some(val) };
}
// PyPI config
if let Ok(val) = env::var("NORA_PYPI_PROXY") {
@@ -407,6 +570,11 @@ impl Config {
}
}
// PyPI proxy auth
if let Ok(val) = env::var("NORA_PYPI_PROXY_AUTH") {
self.pypi.proxy_auth = if val.is_empty() { None } else { Some(val) };
}
// Docker config
if let Ok(val) = env::var("NORA_DOCKER_PROXY_TIMEOUT") {
if let Ok(timeout) = val.parse() {
@@ -509,6 +677,7 @@ impl Default for Config {
maven: MavenConfig::default(),
npm: NpmConfig::default(),
pypi: PypiConfig::default(),
go: GoConfig::default(),
docker: DockerConfig::default(),
raw: RawConfig::default(),
auth: AuthConfig::default(),
@@ -553,4 +722,378 @@ mod tests {
assert_eq!(config.rate_limit.upload_burst, 1000);
assert_eq!(config.rate_limit.auth_burst, 5); // default
}
#[test]
fn test_basic_auth_header() {
let header = basic_auth_header("user:pass");
assert_eq!(header, "Basic dXNlcjpwYXNz");
}
#[test]
fn test_basic_auth_header_empty() {
let header = basic_auth_header("");
assert!(header.starts_with("Basic "));
}
#[test]
fn test_config_default() {
let config = Config::default();
assert_eq!(config.server.host, "127.0.0.1");
assert_eq!(config.server.port, 4000);
assert_eq!(config.server.body_limit_mb, 2048);
assert!(config.server.public_url.is_none());
assert_eq!(config.storage.path, "data/storage");
assert_eq!(config.storage.mode, StorageMode::Local);
assert_eq!(config.storage.bucket, "registry");
assert_eq!(config.storage.s3_region, "us-east-1");
assert!(!config.auth.enabled);
assert_eq!(config.auth.htpasswd_file, "users.htpasswd");
assert_eq!(config.auth.token_storage, "data/tokens");
}
#[test]
fn test_maven_config_default() {
let m = MavenConfig::default();
assert_eq!(m.proxy_timeout, 30);
assert_eq!(m.proxies.len(), 1);
assert_eq!(m.proxies[0].url(), "https://repo1.maven.org/maven2");
assert!(m.proxies[0].auth().is_none());
}
#[test]
fn test_npm_config_default() {
let n = NpmConfig::default();
assert_eq!(n.proxy, Some("https://registry.npmjs.org".to_string()));
assert!(n.proxy_auth.is_none());
assert_eq!(n.proxy_timeout, 30);
assert_eq!(n.metadata_ttl, 300);
}
#[test]
fn test_pypi_config_default() {
let p = PypiConfig::default();
assert_eq!(p.proxy, Some("https://pypi.org/simple/".to_string()));
assert!(p.proxy_auth.is_none());
assert_eq!(p.proxy_timeout, 30);
}
#[test]
fn test_docker_config_default() {
let d = DockerConfig::default();
assert_eq!(d.proxy_timeout, 60);
assert_eq!(d.upstreams.len(), 1);
assert_eq!(d.upstreams[0].url, "https://registry-1.docker.io");
assert!(d.upstreams[0].auth.is_none());
}
#[test]
fn test_raw_config_default() {
let r = RawConfig::default();
assert!(r.enabled);
assert_eq!(r.max_file_size, 104_857_600);
}
#[test]
fn test_auth_config_default() {
let a = AuthConfig::default();
assert!(!a.enabled);
assert!(!a.anonymous_read);
assert_eq!(a.htpasswd_file, "users.htpasswd");
assert_eq!(a.token_storage, "data/tokens");
}
#[test]
fn test_auth_anonymous_read_from_toml() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
[auth]
enabled = true
anonymous_read = true
"#;
let config: Config = toml::from_str(toml).unwrap();
assert!(config.auth.enabled);
assert!(config.auth.anonymous_read);
}
#[test]
fn test_env_override_anonymous_read() {
let mut config = Config::default();
std::env::set_var("NORA_AUTH_ANONYMOUS_READ", "true");
config.apply_env_overrides();
assert!(config.auth.anonymous_read);
std::env::remove_var("NORA_AUTH_ANONYMOUS_READ");
}
#[test]
fn test_maven_proxy_entry_simple() {
let entry = MavenProxyEntry::Simple("https://repo.example.com".to_string());
assert_eq!(entry.url(), "https://repo.example.com");
assert!(entry.auth().is_none());
}
#[test]
fn test_maven_proxy_entry_full() {
let entry = MavenProxyEntry::Full(MavenProxy {
url: "https://private.repo.com".to_string(),
auth: Some("user:secret".to_string()),
});
assert_eq!(entry.url(), "https://private.repo.com");
assert_eq!(entry.auth(), Some("user:secret"));
}
#[test]
fn test_maven_proxy_entry_full_no_auth() {
let entry = MavenProxyEntry::Full(MavenProxy {
url: "https://repo.com".to_string(),
auth: None,
});
assert_eq!(entry.url(), "https://repo.com");
assert!(entry.auth().is_none());
}
#[test]
fn test_storage_mode_default() {
let mode = StorageMode::default();
assert_eq!(mode, StorageMode::Local);
}
#[test]
fn test_env_override_server() {
let mut config = Config::default();
std::env::set_var("NORA_HOST", "0.0.0.0");
std::env::set_var("NORA_PORT", "8080");
std::env::set_var("NORA_PUBLIC_URL", "registry.example.com");
std::env::set_var("NORA_BODY_LIMIT_MB", "4096");
config.apply_env_overrides();
assert_eq!(config.server.host, "0.0.0.0");
assert_eq!(config.server.port, 8080);
assert_eq!(
config.server.public_url,
Some("registry.example.com".to_string())
);
assert_eq!(config.server.body_limit_mb, 4096);
std::env::remove_var("NORA_HOST");
std::env::remove_var("NORA_PORT");
std::env::remove_var("NORA_PUBLIC_URL");
std::env::remove_var("NORA_BODY_LIMIT_MB");
}
#[test]
fn test_env_override_storage() {
let mut config = Config::default();
std::env::set_var("NORA_STORAGE_MODE", "s3");
std::env::set_var("NORA_STORAGE_PATH", "/data/nora");
std::env::set_var("NORA_STORAGE_BUCKET", "my-bucket");
std::env::set_var("NORA_STORAGE_S3_REGION", "eu-west-1");
config.apply_env_overrides();
assert_eq!(config.storage.mode, StorageMode::S3);
assert_eq!(config.storage.path, "/data/nora");
assert_eq!(config.storage.bucket, "my-bucket");
assert_eq!(config.storage.s3_region, "eu-west-1");
std::env::remove_var("NORA_STORAGE_MODE");
std::env::remove_var("NORA_STORAGE_PATH");
std::env::remove_var("NORA_STORAGE_BUCKET");
std::env::remove_var("NORA_STORAGE_S3_REGION");
}
#[test]
fn test_env_override_auth() {
let mut config = Config::default();
std::env::set_var("NORA_AUTH_ENABLED", "true");
std::env::set_var("NORA_AUTH_HTPASSWD_FILE", "/etc/nora/users");
std::env::set_var("NORA_AUTH_TOKEN_STORAGE", "/data/tokens");
config.apply_env_overrides();
assert!(config.auth.enabled);
assert_eq!(config.auth.htpasswd_file, "/etc/nora/users");
assert_eq!(config.auth.token_storage, "/data/tokens");
std::env::remove_var("NORA_AUTH_ENABLED");
std::env::remove_var("NORA_AUTH_HTPASSWD_FILE");
std::env::remove_var("NORA_AUTH_TOKEN_STORAGE");
}
#[test]
fn test_env_override_maven_proxies() {
let mut config = Config::default();
std::env::set_var(
"NORA_MAVEN_PROXIES",
"https://repo1.com,https://repo2.com|user:pass",
);
config.apply_env_overrides();
assert_eq!(config.maven.proxies.len(), 2);
assert_eq!(config.maven.proxies[0].url(), "https://repo1.com");
assert!(config.maven.proxies[0].auth().is_none());
assert_eq!(config.maven.proxies[1].url(), "https://repo2.com");
assert_eq!(config.maven.proxies[1].auth(), Some("user:pass"));
std::env::remove_var("NORA_MAVEN_PROXIES");
}
#[test]
fn test_env_override_docker_upstreams() {
let mut config = Config::default();
std::env::set_var(
"NORA_DOCKER_UPSTREAMS",
"https://mirror.gcr.io,https://private.io|token123",
);
config.apply_env_overrides();
assert_eq!(config.docker.upstreams.len(), 2);
assert_eq!(config.docker.upstreams[0].url, "https://mirror.gcr.io");
assert!(config.docker.upstreams[0].auth.is_none());
assert_eq!(config.docker.upstreams[1].url, "https://private.io");
assert_eq!(
config.docker.upstreams[1].auth,
Some("token123".to_string())
);
std::env::remove_var("NORA_DOCKER_UPSTREAMS");
}
#[test]
fn test_env_override_npm() {
let mut config = Config::default();
std::env::set_var("NORA_NPM_PROXY", "https://npm.company.com");
std::env::set_var("NORA_NPM_PROXY_AUTH", "user:token");
std::env::set_var("NORA_NPM_PROXY_TIMEOUT", "60");
std::env::set_var("NORA_NPM_METADATA_TTL", "600");
config.apply_env_overrides();
assert_eq!(
config.npm.proxy,
Some("https://npm.company.com".to_string())
);
assert_eq!(config.npm.proxy_auth, Some("user:token".to_string()));
assert_eq!(config.npm.proxy_timeout, 60);
assert_eq!(config.npm.metadata_ttl, 600);
std::env::remove_var("NORA_NPM_PROXY");
std::env::remove_var("NORA_NPM_PROXY_AUTH");
std::env::remove_var("NORA_NPM_PROXY_TIMEOUT");
std::env::remove_var("NORA_NPM_METADATA_TTL");
}
#[test]
fn test_env_override_raw() {
let mut config = Config::default();
std::env::set_var("NORA_RAW_ENABLED", "false");
std::env::set_var("NORA_RAW_MAX_FILE_SIZE", "524288000");
config.apply_env_overrides();
assert!(!config.raw.enabled);
assert_eq!(config.raw.max_file_size, 524288000);
std::env::remove_var("NORA_RAW_ENABLED");
std::env::remove_var("NORA_RAW_MAX_FILE_SIZE");
}
#[test]
fn test_env_override_rate_limit() {
let mut config = Config::default();
std::env::set_var("NORA_RATE_LIMIT_ENABLED", "false");
std::env::set_var("NORA_RATE_LIMIT_AUTH_RPS", "10");
std::env::set_var("NORA_RATE_LIMIT_GENERAL_BURST", "500");
config.apply_env_overrides();
assert!(!config.rate_limit.enabled);
assert_eq!(config.rate_limit.auth_rps, 10);
assert_eq!(config.rate_limit.general_burst, 500);
std::env::remove_var("NORA_RATE_LIMIT_ENABLED");
std::env::remove_var("NORA_RATE_LIMIT_AUTH_RPS");
std::env::remove_var("NORA_RATE_LIMIT_GENERAL_BURST");
}
#[test]
fn test_config_from_toml_full() {
let toml = r#"
[server]
host = "0.0.0.0"
port = 8080
public_url = "nora.example.com"
body_limit_mb = 4096
[storage]
mode = "s3"
path = "/data"
s3_url = "http://minio:9000"
bucket = "artifacts"
s3_region = "eu-central-1"
[auth]
enabled = true
htpasswd_file = "/etc/nora/users.htpasswd"
[raw]
enabled = false
max_file_size = 500000000
"#;
let config: Config = toml::from_str(toml).unwrap();
assert_eq!(config.server.host, "0.0.0.0");
assert_eq!(config.server.port, 8080);
assert_eq!(
config.server.public_url,
Some("nora.example.com".to_string())
);
assert_eq!(config.server.body_limit_mb, 4096);
assert_eq!(config.storage.mode, StorageMode::S3);
assert_eq!(config.storage.s3_url, "http://minio:9000");
assert_eq!(config.storage.bucket, "artifacts");
assert!(config.auth.enabled);
assert!(!config.raw.enabled);
assert_eq!(config.raw.max_file_size, 500000000);
}
#[test]
fn test_config_from_toml_minimal() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
"#;
let config: Config = toml::from_str(toml).unwrap();
// Defaults should be filled
assert_eq!(config.storage.path, "data/storage");
assert_eq!(config.maven.proxies.len(), 1);
assert_eq!(
config.npm.proxy,
Some("https://registry.npmjs.org".to_string())
);
assert_eq!(config.docker.upstreams.len(), 1);
assert!(config.raw.enabled);
assert!(!config.auth.enabled);
}
#[test]
fn test_config_toml_docker_upstreams() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
[docker]
proxy_timeout = 120
[[docker.upstreams]]
url = "https://mirror.gcr.io"
[[docker.upstreams]]
url = "https://private.registry.io"
auth = "user:pass"
"#;
let config: Config = toml::from_str(toml).unwrap();
assert_eq!(config.docker.proxy_timeout, 120);
assert_eq!(config.docker.upstreams.len(), 2);
assert!(config.docker.upstreams[0].auth.is_none());
assert_eq!(
config.docker.upstreams[1].auth,
Some("user:pass".to_string())
);
}
}
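The `NORA_MAVEN_PROXIES` override above accepts either bare URLs or `url|user:pass` pairs, split on `,` and then on the first `|`. A standalone sketch of that split logic (plain `(url, Option<auth>)` tuples stand in for `MavenProxyEntry`; `parse_proxy_entries` is a hypothetical name, not part of NORA):

```rust
/// Parse "url1,url2|user:pass" into (url, optional auth) pairs,
/// using the same split(',') + splitn(2, '|') shape as the env override.
fn parse_proxy_entries(val: &str) -> Vec<(String, Option<String>)> {
    val.split(',')
        .filter(|s| !s.is_empty())
        .map(|s| {
            let mut parts = s.trim().splitn(2, '|');
            let url = parts.next().unwrap_or("").to_string();
            let auth = parts.next().map(|a| a.to_string());
            (url, auth)
        })
        .collect()
}

fn main() {
    let entries = parse_proxy_entries("https://repo1.com,https://repo2.com|user:pass");
    assert_eq!(entries.len(), 2);
    assert_eq!(entries[0], ("https://repo1.com".to_string(), None));
    assert_eq!(entries[1].1.as_deref(), Some("user:pass"));
}
```

Splitting on the first `|` only means a password may itself contain `|`, which matches the `splitn(2, '|')` call in the real override.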


@@ -216,3 +216,146 @@ impl Default for DashboardMetrics {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[test]
fn test_new_defaults() {
let m = DashboardMetrics::new();
assert_eq!(m.downloads.load(Ordering::Relaxed), 0);
assert_eq!(m.uploads.load(Ordering::Relaxed), 0);
assert_eq!(m.cache_hits.load(Ordering::Relaxed), 0);
assert_eq!(m.cache_misses.load(Ordering::Relaxed), 0);
}
#[test]
fn test_record_download_all_registries() {
let m = DashboardMetrics::new();
for reg in &["docker", "npm", "maven", "cargo", "pypi", "raw"] {
m.record_download(reg);
}
assert_eq!(m.downloads.load(Ordering::Relaxed), 6);
assert_eq!(m.docker_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.npm_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.maven_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.cargo_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.pypi_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.raw_downloads.load(Ordering::Relaxed), 1);
}
#[test]
fn test_record_download_unknown_registry() {
let m = DashboardMetrics::new();
m.record_download("unknown");
assert_eq!(m.downloads.load(Ordering::Relaxed), 1);
// no per-registry counter should increment
assert_eq!(m.docker_downloads.load(Ordering::Relaxed), 0);
}
#[test]
fn test_record_upload() {
let m = DashboardMetrics::new();
m.record_upload("docker");
m.record_upload("maven");
m.record_upload("raw");
assert_eq!(m.uploads.load(Ordering::Relaxed), 3);
assert_eq!(m.docker_uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.maven_uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.raw_uploads.load(Ordering::Relaxed), 1);
}
#[test]
fn test_record_upload_unknown_registry() {
let m = DashboardMetrics::new();
m.record_upload("npm"); // npm has no upload counter
assert_eq!(m.uploads.load(Ordering::Relaxed), 1);
}
#[test]
fn test_cache_hit_rate_zero() {
let m = DashboardMetrics::new();
assert_eq!(m.cache_hit_rate(), 0.0);
}
#[test]
fn test_cache_hit_rate_all_hits() {
let m = DashboardMetrics::new();
m.record_cache_hit();
m.record_cache_hit();
assert_eq!(m.cache_hit_rate(), 100.0);
}
#[test]
fn test_cache_hit_rate_mixed() {
let m = DashboardMetrics::new();
m.record_cache_hit();
m.record_cache_miss();
assert_eq!(m.cache_hit_rate(), 50.0);
}
#[test]
fn test_get_registry_downloads() {
let m = DashboardMetrics::new();
m.record_download("docker");
m.record_download("docker");
m.record_download("npm");
assert_eq!(m.get_registry_downloads("docker"), 2);
assert_eq!(m.get_registry_downloads("npm"), 1);
assert_eq!(m.get_registry_downloads("cargo"), 0);
assert_eq!(m.get_registry_downloads("unknown"), 0);
}
#[test]
fn test_get_registry_uploads() {
let m = DashboardMetrics::new();
m.record_upload("docker");
assert_eq!(m.get_registry_uploads("docker"), 1);
assert_eq!(m.get_registry_uploads("maven"), 0);
assert_eq!(m.get_registry_uploads("unknown"), 0);
}
#[test]
fn test_persistence_save_and_load() {
let tmp = TempDir::new().unwrap();
let path = tmp.path().to_str().unwrap();
// Create metrics, record some data, save
{
let m = DashboardMetrics::with_persistence(path);
m.record_download("docker");
m.record_download("docker");
m.record_upload("maven");
m.record_cache_hit();
m.save();
}
// Load in new instance
{
let m = DashboardMetrics::with_persistence(path);
assert_eq!(m.downloads.load(Ordering::Relaxed), 2);
assert_eq!(m.uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.docker_downloads.load(Ordering::Relaxed), 2);
assert_eq!(m.maven_uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.cache_hits.load(Ordering::Relaxed), 1);
}
}
#[test]
fn test_persistence_missing_file() {
let tmp = TempDir::new().unwrap();
let path = tmp.path().to_str().unwrap();
// Should work even without existing metrics.json
let m = DashboardMetrics::with_persistence(path);
assert_eq!(m.downloads.load(Ordering::Relaxed), 0);
}
#[test]
fn test_default() {
let m = DashboardMetrics::default();
assert_eq!(m.downloads.load(Ordering::Relaxed), 0);
}
}
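The hit-rate tests above pin down the arithmetic: 0.0 with no traffic, otherwise hits divided by total, as a percentage. A minimal std-only sketch of that behavior (a stand-in `CacheCounters` type, not the actual `DashboardMetrics` implementation):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Stand-in for the hit/miss counters: 0.0 when there is no traffic,
/// otherwise hits / (hits + misses) * 100.
struct CacheCounters {
    hits: AtomicU64,
    misses: AtomicU64,
}

impl CacheCounters {
    fn new() -> Self {
        Self { hits: AtomicU64::new(0), misses: AtomicU64::new(0) }
    }

    fn hit_rate(&self) -> f64 {
        let h = self.hits.load(Ordering::Relaxed) as f64;
        let m = self.misses.load(Ordering::Relaxed) as f64;
        if h + m == 0.0 { 0.0 } else { h / (h + m) * 100.0 }
    }
}

fn main() {
    let c = CacheCounters::new();
    assert_eq!(c.hit_rate(), 0.0); // no traffic yet
    c.hits.fetch_add(1, Ordering::Relaxed);
    c.misses.fetch_add(1, Ordering::Relaxed);
    assert_eq!(c.hit_rate(), 50.0); // one hit, one miss
}
```

Guarding the zero-traffic case explicitly is what keeps `test_cache_hit_rate_zero` from dividing by zero.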


@@ -124,4 +124,77 @@ mod tests {
let err = AppError::NotFound("image not found".to_string());
assert_eq!(err.to_string(), "Not found: image not found");
}
#[test]
fn test_error_constructors() {
let err = AppError::not_found("missing");
assert!(matches!(err, AppError::NotFound(_)));
assert_eq!(err.to_string(), "Not found: missing");
let err = AppError::bad_request("invalid input");
assert!(matches!(err, AppError::BadRequest(_)));
assert_eq!(err.to_string(), "Bad request: invalid input");
let err = AppError::unauthorized("no token");
assert!(matches!(err, AppError::Unauthorized(_)));
assert_eq!(err.to_string(), "Unauthorized: no token");
let err = AppError::internal("db crashed");
assert!(matches!(err, AppError::Internal(_)));
assert_eq!(err.to_string(), "Internal error: db crashed");
}
#[test]
fn test_error_display_storage() {
let err = AppError::Storage(StorageError::NotFound);
assert!(err.to_string().contains("Storage error"));
}
#[test]
fn test_error_display_validation() {
let err = AppError::Validation(ValidationError::PathTraversal);
assert!(err.to_string().contains("Validation error"));
}
#[test]
fn test_error_into_response_not_found() {
let err = AppError::NotFound("gone".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::NOT_FOUND);
}
#[test]
fn test_error_into_response_bad_request() {
let err = AppError::BadRequest("bad".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
}
#[test]
fn test_error_into_response_unauthorized() {
let err = AppError::Unauthorized("nope".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
}
#[test]
fn test_error_into_response_internal() {
let err = AppError::Internal("boom".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::INTERNAL_SERVER_ERROR);
}
#[test]
fn test_error_into_response_storage_not_found() {
let err = AppError::Storage(StorageError::NotFound);
let response = err.into_response();
assert_eq!(response.status(), StatusCode::NOT_FOUND);
}
#[test]
fn test_error_into_response_validation() {
let err = AppError::Validation(ValidationError::EmptyInput);
let response = err.into_response();
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
}
}

nora-registry/src/lib.rs Normal file

@@ -0,0 +1,28 @@
//! NORA Registry — library interface for fuzzing and testing
pub mod validation;
/// Re-export Docker manifest parsing for fuzz targets
pub mod docker_fuzz {
pub fn detect_manifest_media_type(data: &[u8]) -> String {
let Ok(value) = serde_json::from_slice::<serde_json::Value>(data) else {
return "application/octet-stream".to_string();
};
if let Some(mt) = value.get("mediaType").and_then(|v| v.as_str()) {
return mt.to_string();
}
if value.get("manifests").is_some() {
return "application/vnd.oci.image.index.v1+json".to_string();
}
if value.get("schemaVersion").and_then(|v| v.as_i64()) == Some(2) {
if value.get("layers").is_some() {
return "application/vnd.oci.image.manifest.v1+json".to_string();
}
return "application/vnd.docker.distribution.manifest.v2+json".to_string();
}
if value.get("schemaVersion").and_then(|v| v.as_i64()) == Some(1) {
return "application/vnd.docker.distribution.manifest.v1+json".to_string();
}
"application/vnd.docker.distribution.manifest.v2+json".to_string()
}
}


@@ -12,6 +12,7 @@ mod gc;
mod health;
mod metrics;
mod migrate;
mod mirror;
mod openapi;
mod rate_limit;
mod registry;
@@ -23,7 +24,7 @@ mod tokens;
mod ui;
mod validation;
use axum::{extract::DefaultBodyLimit, http::HeaderValue, middleware, Router};
use clap::{Parser, Subcommand};
use std::path::{Path, PathBuf};
use std::sync::Arc;
@@ -82,6 +83,17 @@ enum Commands {
#[arg(long, default_value = "false")]
dry_run: bool,
},
/// Pre-fetch dependencies through NORA proxy cache
Mirror {
#[command(subcommand)]
format: mirror::MirrorFormat,
/// NORA registry URL
#[arg(long, default_value = "http://localhost:4000", global = true)]
registry: String,
/// Max concurrent downloads
#[arg(long, default_value = "8", global = true)]
concurrency: usize,
},
}
pub struct AppState {
@@ -164,6 +176,16 @@ async fn main() {
println!("\nRun without --dry-run to delete orphaned blobs.");
}
}
Some(Commands::Mirror {
format,
registry,
concurrency,
}) => {
if let Err(e) = mirror::run_mirror(format, &registry, concurrency).await {
error!("Mirror failed: {}", e);
std::process::exit(1);
}
}
Some(Commands::Migrate { from, to, dry_run }) => {
let source = match from.as_str() {
"local" => Storage::new_local(&config.storage.path),
@@ -289,6 +311,9 @@ async fn run_server(config: Config, storage: Storage) {
let storage_path = config.storage.path.clone();
let rate_limit_enabled = config.rate_limit.enabled;
// Warn about plaintext credentials in config.toml
config.warn_plaintext_credentials();
// Initialize Docker auth with proxy timeout
let docker_auth = registry::DockerAuth::new(config.docker.proxy_timeout);
@@ -301,7 +326,8 @@ async fn run_server(config: Config, storage: Storage) {
.merge(registry::npm_routes())
.merge(registry::cargo_routes())
.merge(registry::pypi_routes())
.merge(registry::raw_routes())
.merge(registry::go_routes());
// Routes WITHOUT rate limiting (health, metrics, UI)
let public_routes = Router::new()
@@ -350,6 +376,22 @@ async fn run_server(config: Config, storage: Storage) {
.layer(DefaultBodyLimit::max(
state.config.server.body_limit_mb * 1024 * 1024,
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("x-content-type-options"),
HeaderValue::from_static("nosniff"),
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("x-frame-options"),
HeaderValue::from_static("DENY"),
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("referrer-policy"),
HeaderValue::from_static("strict-origin-when-cross-origin"),
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("content-security-policy"),
HeaderValue::from_static("default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.tailwindcss.com https://unpkg.com; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self'; connect-src 'self'"),
))
.layer(middleware::from_fn(request_id::request_id_middleware))
.layer(middleware::from_fn(metrics::metrics_middleware))
.layer(middleware::from_fn_with_state(


@@ -148,3 +148,56 @@ pub fn record_storage_op(operation: &str, success: bool) {
.with_label_values(&[operation, status])
.inc();
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_detect_registry_docker() {
assert_eq!(detect_registry("/v2/nginx/manifests/latest"), "docker");
assert_eq!(detect_registry("/v2/"), "docker");
assert_eq!(
detect_registry("/v2/library/alpine/blobs/sha256:abc"),
"docker"
);
}
#[test]
fn test_detect_registry_maven() {
assert_eq!(detect_registry("/maven2/com/example/artifact"), "maven");
}
#[test]
fn test_detect_registry_npm() {
assert_eq!(detect_registry("/npm/lodash"), "npm");
assert_eq!(detect_registry("/npm/@scope/package"), "npm");
}
#[test]
fn test_detect_registry_cargo() {
assert_eq!(detect_registry("/cargo/api/v1/crates"), "cargo");
}
#[test]
fn test_detect_registry_pypi() {
assert_eq!(detect_registry("/simple/requests/"), "pypi");
assert_eq!(
detect_registry("/packages/requests/1.0/requests-1.0.tar.gz"),
"pypi"
);
}
#[test]
fn test_detect_registry_ui() {
assert_eq!(detect_registry("/ui/dashboard"), "ui");
assert_eq!(detect_registry("/ui"), "ui");
}
#[test]
fn test_detect_registry_other() {
assert_eq!(detect_registry("/health"), "other");
assert_eq!(detect_registry("/ready"), "other");
assert_eq!(detect_registry("/unknown/path"), "other");
}
}


@@ -0,0 +1,325 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! `nora mirror` — pre-fetch dependencies through NORA proxy cache.
mod npm;
use clap::Subcommand;
use indicatif::{ProgressBar, ProgressStyle};
use std::path::PathBuf;
use std::time::Instant;
#[derive(Subcommand)]
pub enum MirrorFormat {
/// Mirror npm packages
Npm {
/// Path to package-lock.json (v1/v2/v3)
#[arg(long, conflicts_with = "packages")]
lockfile: Option<PathBuf>,
/// Comma-separated package names
#[arg(long, conflicts_with = "lockfile", value_delimiter = ',')]
packages: Option<Vec<String>>,
/// Fetch all versions (only with --packages)
#[arg(long)]
all_versions: bool,
},
/// Mirror Python packages
Pip {
/// Path to requirements.txt
#[arg(long)]
lockfile: PathBuf,
},
/// Mirror Cargo crates
Cargo {
/// Path to Cargo.lock
#[arg(long)]
lockfile: PathBuf,
},
/// Mirror Maven artifacts
Maven {
/// Path to dependency list (mvn dependency:list output)
#[arg(long)]
lockfile: PathBuf,
},
}
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
pub struct MirrorTarget {
pub name: String,
pub version: String,
}
pub struct MirrorResult {
pub total: usize,
pub fetched: usize,
pub failed: usize,
pub bytes: u64,
}
pub fn create_progress_bar(total: u64) -> ProgressBar {
let pb = ProgressBar::new(total);
pb.set_style(
ProgressStyle::default_bar()
.template(
"{spinner:.green} [{elapsed_precise}] [{bar:40.cyan/blue}] {pos}/{len} ({eta}) {msg}",
)
.unwrap()
.progress_chars("=>-"),
);
pb
}
pub async fn run_mirror(
format: MirrorFormat,
registry: &str,
concurrency: usize,
) -> Result<(), String> {
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(120))
.build()
.map_err(|e| format!("Failed to create HTTP client: {}", e))?;
// Health check
let health_url = format!("{}/health", registry.trim_end_matches('/'));
match client.get(&health_url).send().await {
Ok(r) if r.status().is_success() => {}
_ => {
return Err(format!(
"Cannot connect to NORA at {}. Is `nora serve` running?",
registry
))
}
}
let start = Instant::now();
let result = match format {
MirrorFormat::Npm {
lockfile,
packages,
all_versions,
} => {
npm::run_npm_mirror(
&client,
registry,
lockfile,
packages,
all_versions,
concurrency,
)
.await?
}
MirrorFormat::Pip { lockfile } => {
mirror_lockfile(&client, registry, "pip", &lockfile).await?
}
MirrorFormat::Cargo { lockfile } => {
mirror_lockfile(&client, registry, "cargo", &lockfile).await?
}
MirrorFormat::Maven { lockfile } => {
mirror_lockfile(&client, registry, "maven", &lockfile).await?
}
};
let elapsed = start.elapsed();
println!("\nMirror complete:");
println!(" Total: {}", result.total);
println!(" Fetched: {}", result.fetched);
println!(" Failed: {}", result.failed);
println!(" Size: {:.1} MB", result.bytes as f64 / 1_048_576.0);
println!(" Time: {:.1}s", elapsed.as_secs_f64());
if result.failed > 0 {
Err(format!("{} packages failed to mirror", result.failed))
} else {
Ok(())
}
}
async fn mirror_lockfile(
client: &reqwest::Client,
registry: &str,
format: &str,
lockfile: &PathBuf,
) -> Result<MirrorResult, String> {
let content = std::fs::read_to_string(lockfile)
.map_err(|e| format!("Cannot read {}: {}", lockfile.display(), e))?;
let targets = match format {
"pip" => parse_requirements_txt(&content),
"cargo" => parse_cargo_lock(&content)?,
"maven" => parse_maven_deps(&content),
_ => vec![],
};
if targets.is_empty() {
println!("No packages found in {}", lockfile.display());
return Ok(MirrorResult {
total: 0,
fetched: 0,
failed: 0,
bytes: 0,
});
}
let pb = create_progress_bar(targets.len() as u64);
let base = registry.trim_end_matches('/');
let mut fetched = 0;
let mut failed = 0;
let mut bytes = 0u64;
for target in &targets {
let url = match format {
"pip" => format!("{}/simple/{}/", base, target.name),
"cargo" => format!(
"{}/cargo/api/v1/crates/{}/{}/download",
base, target.name, target.version
),
"maven" => {
let parts: Vec<&str> = target.name.split(':').collect();
if parts.len() == 2 {
let group_path = parts[0].replace('.', "/");
format!(
"{}/maven2/{}/{}/{}/{}-{}.jar",
base, group_path, parts[1], target.version, parts[1], target.version
)
} else {
pb.inc(1);
failed += 1;
continue;
}
}
_ => continue,
};
match client.get(&url).send().await {
Ok(r) if r.status().is_success() => {
if let Ok(body) = r.bytes().await {
bytes += body.len() as u64;
}
fetched += 1;
}
_ => failed += 1,
}
pb.set_message(format!("{}@{}", target.name, target.version));
pb.inc(1);
}
pb.finish_with_message("done");
Ok(MirrorResult {
total: targets.len(),
fetched,
failed,
bytes,
})
}
fn parse_requirements_txt(content: &str) -> Vec<MirrorTarget> {
content
.lines()
.filter(|l| !l.trim().is_empty() && !l.starts_with('#') && !l.starts_with('-'))
.filter_map(|line| {
let line = line.split('#').next().unwrap().trim();
if let Some((name, version)) = line.split_once("==") {
Some(MirrorTarget {
name: name.trim().to_string(),
version: version.trim().to_string(),
})
} else {
let name = line.split(['>', '<', '=', '!', '~', ';']).next()?.trim();
if name.is_empty() {
None
} else {
Some(MirrorTarget {
name: name.to_string(),
version: "latest".to_string(),
})
}
}
})
.collect()
}
fn parse_cargo_lock(content: &str) -> Result<Vec<MirrorTarget>, String> {
let lock: toml::Value =
toml::from_str(content).map_err(|e| format!("Invalid Cargo.lock: {}", e))?;
let packages = lock
.get("package")
.and_then(|p| p.as_array())
.cloned()
.unwrap_or_default();
Ok(packages
.iter()
.filter(|p| {
p.get("source")
.and_then(|s| s.as_str())
.map(|s| s.starts_with("registry+"))
.unwrap_or(false)
})
.filter_map(|p| {
let name = p.get("name")?.as_str()?.to_string();
let version = p.get("version")?.as_str()?.to_string();
Some(MirrorTarget { name, version })
})
.collect())
}
fn parse_maven_deps(content: &str) -> Vec<MirrorTarget> {
content
.lines()
.filter_map(|line| {
let line = line.trim().trim_start_matches("[INFO]").trim();
let parts: Vec<&str> = line.split(':').collect();
if parts.len() >= 4 {
let name = format!("{}:{}", parts[0], parts[1]);
let version = parts[3].to_string();
Some(MirrorTarget { name, version })
} else {
None
}
})
.collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_requirements_txt() {
let content = "flask==2.3.0\nrequests>=2.28.0\n# comment\nnumpy==1.24.3\n";
let targets = parse_requirements_txt(content);
assert_eq!(targets.len(), 3);
assert_eq!(targets[0].name, "flask");
assert_eq!(targets[0].version, "2.3.0");
assert_eq!(targets[1].name, "requests");
assert_eq!(targets[1].version, "latest");
}
#[test]
fn test_parse_cargo_lock() {
let content = "\
[[package]]
name = \"serde\"
version = \"1.0.197\"
source = \"registry+https://github.com/rust-lang/crates.io-index\"
[[package]]
name = \"my-local-crate\"
version = \"0.1.0\"
";
let targets = parse_cargo_lock(content).unwrap();
assert_eq!(targets.len(), 1);
assert_eq!(targets[0].name, "serde");
}
#[test]
fn test_parse_maven_deps() {
let content = "[INFO] org.apache.commons:commons-lang3:jar:3.12.0:compile\n";
let targets = parse_maven_deps(content);
assert_eq!(targets.len(), 1);
assert_eq!(targets[0].name, "org.apache.commons:commons-lang3");
assert_eq!(targets[0].version, "3.12.0");
}
}
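The Maven branch of `mirror_lockfile` turns a `group:artifact` coordinate plus a version into a `/maven2/` path by replacing dots in the group with slashes. That mapping can be isolated as a small sketch (`maven_jar_path` is a hypothetical helper name, not a NORA function):

```rust
/// Sketch of the Maven URL construction used by the mirror: "group:artifact"
/// plus a version maps to group/with/slashes/artifact/version/artifact-version.jar.
fn maven_jar_path(coord: &str, version: &str) -> Option<String> {
    let (group, artifact) = coord.split_once(':')?;
    let group_path = group.replace('.', "/");
    Some(format!(
        "{}/{}/{}/{}-{}.jar",
        group_path, artifact, version, artifact, version
    ))
}

fn main() {
    assert_eq!(
        maven_jar_path("org.apache.commons:commons-lang3", "3.12.0").as_deref(),
        Some("org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar")
    );
    // Coordinates without a colon are rejected, mirroring the failed += 1 path.
    assert!(maven_jar_path("no-colon", "1.0").is_none());
}
```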


@@ -0,0 +1,323 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! npm lockfile parser + mirror logic.
use super::{create_progress_bar, MirrorResult, MirrorTarget};
use std::collections::HashSet;
use std::path::PathBuf;
use tokio::sync::Semaphore;
/// Entry point for npm mirroring
pub async fn run_npm_mirror(
client: &reqwest::Client,
registry: &str,
lockfile: Option<PathBuf>,
packages: Option<Vec<String>>,
all_versions: bool,
concurrency: usize,
) -> Result<MirrorResult, String> {
let targets = if let Some(path) = lockfile {
let content = std::fs::read_to_string(&path)
.map_err(|e| format!("Cannot read {}: {}", path.display(), e))?;
parse_npm_lockfile(&content)?
} else if let Some(names) = packages {
resolve_npm_packages(client, registry, &names, all_versions).await?
} else {
return Err("Specify --lockfile or --packages".to_string());
};
if targets.is_empty() {
println!("No npm packages to mirror");
return Ok(MirrorResult {
total: 0,
fetched: 0,
failed: 0,
bytes: 0,
});
}
println!(
"Mirroring {} npm packages via {}...",
targets.len(),
registry
);
mirror_npm_packages(client, registry, &targets, concurrency).await
}
/// Parse package-lock.json (v1, v2, v3)
fn parse_npm_lockfile(content: &str) -> Result<Vec<MirrorTarget>, String> {
let json: serde_json::Value =
serde_json::from_str(content).map_err(|e| format!("Invalid JSON: {}", e))?;
let version = json
.get("lockfileVersion")
.and_then(|v| v.as_u64())
.unwrap_or(1);
let mut seen = HashSet::new();
let mut targets = Vec::new();
if version >= 2 {
// v2/v3: use "packages" object
if let Some(packages) = json.get("packages").and_then(|p| p.as_object()) {
for (key, pkg) in packages {
if key.is_empty() {
continue; // root package
}
if let Some(name) = extract_package_name(key) {
if let Some(ver) = pkg.get("version").and_then(|v| v.as_str()) {
let pair = (name.to_string(), ver.to_string());
if seen.insert(pair.clone()) {
targets.push(MirrorTarget {
name: pair.0,
version: pair.1,
});
}
}
}
}
}
}
if version == 1 || targets.is_empty() {
// v1 fallback: recursive "dependencies"
if let Some(deps) = json.get("dependencies").and_then(|d| d.as_object()) {
parse_v1_deps(deps, &mut targets, &mut seen);
}
}
Ok(targets)
}
/// Extract package name from lockfile key like "node_modules/@babel/core"
fn extract_package_name(key: &str) -> Option<&str> {
// Handle nested: "node_modules/foo/node_modules/@scope/bar" → "@scope/bar"
let last_nm = key.rfind("node_modules/")?;
let after = &key[last_nm + "node_modules/".len()..];
let name = after.trim_end_matches('/');
if name.is_empty() {
None
} else {
Some(name)
}
}
/// Recursively parse v1 lockfile "dependencies"
fn parse_v1_deps(
deps: &serde_json::Map<String, serde_json::Value>,
targets: &mut Vec<MirrorTarget>,
seen: &mut HashSet<(String, String)>,
) {
for (name, pkg) in deps {
if let Some(ver) = pkg.get("version").and_then(|v| v.as_str()) {
let pair = (name.clone(), ver.to_string());
if seen.insert(pair.clone()) {
targets.push(MirrorTarget {
name: pair.0,
version: pair.1,
});
}
}
// Recurse into nested dependencies
if let Some(nested) = pkg.get("dependencies").and_then(|d| d.as_object()) {
parse_v1_deps(nested, targets, seen);
}
}
}
/// Resolve --packages list by fetching metadata from NORA
async fn resolve_npm_packages(
client: &reqwest::Client,
registry: &str,
names: &[String],
all_versions: bool,
) -> Result<Vec<MirrorTarget>, String> {
let base = registry.trim_end_matches('/');
let mut targets = Vec::new();
for name in names {
let url = format!("{}/npm/{}", base, name);
let resp = client.get(&url).send().await.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
eprintln!("Warning: {} not found (HTTP {})", name, resp.status());
continue;
}
let json: serde_json::Value = resp.json().await.map_err(|e| e.to_string())?;
if all_versions {
if let Some(versions) = json.get("versions").and_then(|v| v.as_object()) {
for ver in versions.keys() {
targets.push(MirrorTarget {
name: name.clone(),
version: ver.clone(),
});
}
}
} else {
// Just latest
let latest = json
.get("dist-tags")
.and_then(|d| d.get("latest"))
.and_then(|v| v.as_str())
.unwrap_or("latest");
targets.push(MirrorTarget {
name: name.clone(),
version: latest.to_string(),
});
}
}
Ok(targets)
}
/// Fetch packages through NORA (triggers proxy cache)
async fn mirror_npm_packages(
client: &reqwest::Client,
registry: &str,
targets: &[MirrorTarget],
concurrency: usize,
) -> Result<MirrorResult, String> {
let base = registry.trim_end_matches('/');
let pb = create_progress_bar(targets.len() as u64);
let sem = std::sync::Arc::new(Semaphore::new(concurrency));
// Deduplicate metadata fetches (one per package name)
let unique_names: HashSet<&str> = targets.iter().map(|t| t.name.as_str()).collect();
pb.set_message("fetching metadata...");
for name in &unique_names {
let url = format!("{}/npm/{}", base, name);
let _ = client.get(&url).send().await; // trigger metadata cache
}
// Fetch tarballs concurrently
let fetched = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
let failed = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
let bytes = std::sync::Arc::new(std::sync::atomic::AtomicU64::new(0));
let mut handles = Vec::new();
for target in targets {
let permit = sem.clone().acquire_owned().await.unwrap();
let client = client.clone();
let pb = pb.clone();
let fetched = fetched.clone();
let failed = failed.clone();
let bytes = bytes.clone();
let short_name = target.name.split('/').next_back().unwrap_or(&target.name);
let tarball_url = format!(
"{}/npm/{}/-/{}-{}.tgz",
base, target.name, short_name, target.version
);
let label = format!("{}@{}", target.name, target.version);
handles.push(tokio::spawn(async move {
let _permit = permit;
match client.get(&tarball_url).send().await {
Ok(r) if r.status().is_success() => {
if let Ok(body) = r.bytes().await {
bytes.fetch_add(body.len() as u64, std::sync::atomic::Ordering::Relaxed);
}
fetched.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
_ => {
failed.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
pb.set_message(label);
pb.inc(1);
}));
}
for h in handles {
let _ = h.await;
}
pb.finish_with_message("done");
Ok(MirrorResult {
total: targets.len(),
fetched: fetched.load(std::sync::atomic::Ordering::Relaxed),
failed: failed.load(std::sync::atomic::Ordering::Relaxed),
bytes: bytes.load(std::sync::atomic::Ordering::Relaxed),
})
}
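The tarball URL built above leans on an npm registry convention: for scoped packages, the filename segment after `/-/` uses only the part after the scope (`@babel/core` becomes `core-<version>.tgz`). A minimal sketch of that rule, illustration only (the handler above inlines the same logic):

```rust
// Sketch of the npm tarball path convention used by the mirror above:
// the filename part drops any "@scope/" prefix from the package name.
fn npm_tarball_path(name: &str, version: &str) -> String {
    // Last path segment: "@babel/core" -> "core", "lodash" -> "lodash"
    let short = name.rsplit('/').next().unwrap_or(name);
    format!("/npm/{}/-/{}-{}.tgz", name, short, version)
}
```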
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_extract_package_name() {
assert_eq!(extract_package_name("node_modules/lodash"), Some("lodash"));
assert_eq!(
extract_package_name("node_modules/@babel/core"),
Some("@babel/core")
);
assert_eq!(
extract_package_name("node_modules/foo/node_modules/bar"),
Some("bar")
);
assert_eq!(
extract_package_name("node_modules/foo/node_modules/@types/node"),
Some("@types/node")
);
assert_eq!(extract_package_name(""), None);
}
#[test]
fn test_parse_lockfile_v3() {
let content = r#"{
"lockfileVersion": 3,
"packages": {
"": { "name": "test" },
"node_modules/lodash": { "version": "4.17.21" },
"node_modules/@babel/core": { "version": "7.26.0" },
"node_modules/@babel/core/node_modules/semver": { "version": "6.3.1" }
}
}"#;
let targets = parse_npm_lockfile(content).unwrap();
assert_eq!(targets.len(), 3);
let names: HashSet<&str> = targets.iter().map(|t| t.name.as_str()).collect();
assert!(names.contains("lodash"));
assert!(names.contains("@babel/core"));
assert!(names.contains("semver"));
}
#[test]
fn test_parse_lockfile_v1() {
let content = r#"{
"lockfileVersion": 1,
"dependencies": {
"express": {
"version": "4.18.2",
"dependencies": {
"accepts": { "version": "1.3.8" }
}
}
}
}"#;
let targets = parse_npm_lockfile(content).unwrap();
assert_eq!(targets.len(), 2);
assert_eq!(targets[0].name, "express");
assert_eq!(targets[1].name, "accepts");
}
#[test]
fn test_deduplication() {
let content = r#"{
"lockfileVersion": 3,
"packages": {
"": {},
"node_modules/debug": { "version": "4.3.4" },
"node_modules/express/node_modules/debug": { "version": "4.3.4" }
}
}"#;
let targets = parse_npm_lockfile(content).unwrap();
assert_eq!(targets.len(), 1); // deduplicated
assert_eq!(targets[0].name, "debug");
}
}

View File

@@ -5,7 +5,6 @@
//!
//! Functions in this module are stubs used only for generating OpenAPI documentation.
#![allow(dead_code)] // utoipa doc stubs — not called at runtime, used by derive macros
use axum::Router;

View File

@@ -3,6 +3,7 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::config::basic_auth_header;
use crate::registry::docker_auth::DockerAuth;
use crate::storage::Storage;
use crate::validation::{validate_digest, validate_docker_name, validate_docker_reference};
@@ -43,11 +44,57 @@ pub struct LayerInfo {
pub size: u64,
}
/// In-progress upload session with metadata
struct UploadSession {
data: Vec<u8>,
name: String,
created_at: std::time::Instant,
}
/// Max concurrent upload sessions (prevent memory exhaustion)
const DEFAULT_MAX_UPLOAD_SESSIONS: usize = 100;
/// Max data per session (default 2 GB, configurable via NORA_MAX_UPLOAD_SESSION_SIZE_MB)
const DEFAULT_MAX_SESSION_SIZE_MB: usize = 2048;
/// Session TTL (30 minutes)
const SESSION_TTL: Duration = Duration::from_secs(30 * 60);
/// Read max upload sessions from env or use default
fn max_upload_sessions() -> usize {
std::env::var("NORA_MAX_UPLOAD_SESSIONS")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(DEFAULT_MAX_UPLOAD_SESSIONS)
}
/// Read max session size from env (in MB) or use default
fn max_session_size() -> usize {
let mb = std::env::var("NORA_MAX_UPLOAD_SESSION_SIZE_MB")
.ok()
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or(DEFAULT_MAX_SESSION_SIZE_MB);
mb.saturating_mul(1024 * 1024)
}
/// In-progress upload sessions for chunked uploads
/// Maps UUID -> UploadSession with limits and TTL
static UPLOAD_SESSIONS: std::sync::LazyLock<RwLock<HashMap<String, UploadSession>>> =
std::sync::LazyLock::new(|| RwLock::new(HashMap::new()));
/// Remove expired upload sessions (called periodically)
fn cleanup_expired_sessions() {
let mut sessions = UPLOAD_SESSIONS.write();
let before = sessions.len();
sessions.retain(|_, s| s.created_at.elapsed() < SESSION_TTL);
let removed = before - sessions.len();
if removed > 0 {
tracing::info!(
removed = removed,
remaining = sessions.len(),
"Cleaned up expired upload sessions"
);
}
}
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
.route("/v2/", get(check))
@@ -107,9 +154,19 @@ async fn catalog(State(state): State<Arc<AppState>>) -> Json<Value> {
let mut repos: Vec<String> = keys
.iter()
.filter_map(|k| {
let rest = k.strip_prefix("docker/")?;
// Find the first known directory separator (manifests/ or blobs/)
let name = if let Some(idx) = rest.find("/manifests/") {
&rest[..idx]
} else if let Some(idx) = rest.find("/blobs/") {
&rest[..idx]
} else {
return None;
};
if name.is_empty() {
return None;
}
Some(name.to_string())
})
.collect();
@@ -181,6 +238,7 @@ async fn download_blob(
&digest,
&state.docker_auth,
state.config.docker.proxy_timeout,
upstream.auth.as_deref(),
)
.await
{
@@ -212,6 +270,38 @@ async fn download_blob(
}
}
// Auto-prepend library/ for single-segment names (Docker Hub official images)
if !name.contains('/') {
let library_name = format!("library/{}", name);
for upstream in &state.config.docker.upstreams {
if let Ok(data) = fetch_blob_from_upstream(
&state.http_client,
&upstream.url,
&library_name,
&digest,
&state.docker_auth,
state.config.docker.proxy_timeout,
upstream.auth.as_deref(),
)
.await
{
let storage = state.storage.clone();
let key_clone = key.clone();
let data_clone = data.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &data_clone).await;
});
return (
StatusCode::OK,
[(header::CONTENT_TYPE, "application/octet-stream")],
Bytes::from(data),
)
.into_response();
}
}
}
StatusCode::NOT_FOUND.into_response()
}
@@ -220,7 +310,38 @@ async fn start_upload(Path(name): Path<String>) -> Response {
return (StatusCode::BAD_REQUEST, e.to_string()).into_response();
}
// Cleanup expired sessions before checking limits
cleanup_expired_sessions();
// Enforce max concurrent sessions
{
let sessions = UPLOAD_SESSIONS.read();
let max_sessions = max_upload_sessions();
if sessions.len() >= max_sessions {
tracing::warn!(
max = max_sessions,
current = sessions.len(),
"Upload session limit reached — rejecting new upload"
);
return (StatusCode::TOO_MANY_REQUESTS, "Too many concurrent uploads").into_response();
}
}
let uuid = uuid::Uuid::new_v4().to_string();
// Create session with metadata
{
let mut sessions = UPLOAD_SESSIONS.write();
sessions.insert(
uuid.clone(),
UploadSession {
data: Vec::new(),
name: name.clone(),
created_at: std::time::Instant::now(),
},
);
}
let location = format!("/v2/{}/blobs/uploads/{}", name, uuid);
(
StatusCode::ACCEPTED,
@@ -242,9 +363,47 @@ async fn patch_blob(Path((name, uuid)): Path<(String, String)>, body: Bytes) ->
// Append data to the upload session and get total size
let total_size = {
let mut sessions = UPLOAD_SESSIONS.write();
let session = match sessions.get_mut(&uuid) {
Some(s) => s,
None => {
return (StatusCode::NOT_FOUND, "Upload session not found or expired")
.into_response();
}
};
// Verify session belongs to this repository
if session.name != name {
tracing::warn!(
session_name = %session.name,
request_name = %name,
"SECURITY: upload session name mismatch — possible session fixation"
);
return (
StatusCode::BAD_REQUEST,
"Session does not belong to this repository",
)
.into_response();
}
// Check session TTL
if session.created_at.elapsed() >= SESSION_TTL {
sessions.remove(&uuid);
return (StatusCode::NOT_FOUND, "Upload session expired").into_response();
}
// Check size limit
let new_size = session.data.len() + body.len();
if new_size > max_session_size() {
sessions.remove(&uuid);
return (
StatusCode::PAYLOAD_TOO_LARGE,
"Upload session exceeds size limit",
)
.into_response();
}
session.data.extend_from_slice(&body);
session.data.len()
};
let location = format!("/v2/{}/blobs/uploads/{}", name, uuid);
@@ -291,8 +450,22 @@ async fn upload_blob(
// Get data from chunked session if exists, otherwise use body directly
let data = {
let mut sessions = UPLOAD_SESSIONS.write();
if let Some(session) = sessions.remove(&uuid) {
// Verify session belongs to this repository
if session.name != name {
tracing::warn!(
session_name = %session.name,
request_name = %name,
"SECURITY: upload finalization name mismatch"
);
return (
StatusCode::BAD_REQUEST,
"Session does not belong to this repository",
)
.into_response();
}
// Chunked upload: append any final body data and use session
let mut session_data = session.data;
if !body.is_empty() {
session_data.extend_from_slice(&body);
}
@@ -303,6 +476,40 @@ async fn upload_blob(
}
};
// Only sha256 digests are supported for verification
if !digest.starts_with("sha256:") {
return (
StatusCode::BAD_REQUEST,
"Only sha256 digests are supported for blob uploads",
)
.into_response();
}
// Verify digest matches uploaded content (Docker Distribution Spec)
{
use sha2::Digest as _;
let computed = format!("sha256:{:x}", sha2::Sha256::digest(&data));
if computed != *digest {
tracing::warn!(
expected = %digest,
computed = %computed,
name = %name,
"SECURITY: blob digest mismatch — rejecting upload"
);
return (
StatusCode::BAD_REQUEST,
Json(json!({
"errors": [{
"code": "DIGEST_INVALID",
"message": "provided digest did not match uploaded content",
"detail": { "expected": digest, "computed": computed }
}]
})),
)
.into_response();
}
}
let key = format!("docker/{}/blobs/{}", name, digest);
match state.storage.put(&key, &data).await {
Ok(()) => {
@@ -392,6 +599,7 @@ async fn get_manifest(
&reference,
&state.docker_auth,
state.config.docker.proxy_timeout,
upstream.auth.as_deref(),
)
.await
{
@@ -450,6 +658,57 @@ async fn get_manifest(
}
}
// Auto-prepend library/ for single-segment names (Docker Hub official images)
// e.g., "nginx" -> "library/nginx", "alpine" -> "library/alpine"
if !name.contains('/') {
let library_name = format!("library/{}", name);
for upstream in &state.config.docker.upstreams {
if let Ok((data, content_type)) = fetch_manifest_from_upstream(
&state.http_client,
&upstream.url,
&library_name,
&reference,
&state.docker_auth,
state.config.docker.proxy_timeout,
upstream.auth.as_deref(),
)
.await
{
state.metrics.record_download("docker");
state.metrics.record_cache_miss();
state.activity.push(ActivityEntry::new(
ActionType::ProxyFetch,
format!("{}:{}", name, reference),
"docker",
"PROXY",
));
use sha2::Digest;
let digest = format!("sha256:{:x}", sha2::Sha256::digest(&data));
// Cache under original name for future local hits
let storage = state.storage.clone();
let key_clone = key.clone();
let data_clone = data.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &data_clone).await;
});
state.repo_index.invalidate("docker");
return (
StatusCode::OK,
[
(header::CONTENT_TYPE, content_type),
(HeaderName::from_static("docker-content-digest"), digest),
],
Bytes::from(data),
)
.into_response();
}
}
}
StatusCode::NOT_FOUND.into_response()
}
@@ -533,6 +792,7 @@ async fn list_tags(State(state): State<Arc<AppState>>, Path(name): Path<String>)
.and_then(|t| t.strip_suffix(".json"))
.map(String::from)
})
.filter(|t| !t.ends_with(".meta") && !t.contains(".meta."))
.collect();
(StatusCode::OK, Json(json!({"name": name, "tags": tags}))).into_response()
}
@@ -733,6 +993,7 @@ async fn fetch_blob_from_upstream(
digest: &str,
docker_auth: &DockerAuth,
timeout: u64,
basic_auth: Option<&str>,
) -> Result<Vec<u8>, ()> {
let url = format!(
"{}/v2/{}/blobs/{}",
@@ -741,13 +1002,12 @@ async fn fetch_blob_from_upstream(
digest
);
// First try with basic auth if configured
let mut request = client.get(&url).timeout(Duration::from_secs(timeout));
if let Some(credentials) = basic_auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
let response = request.send().await.map_err(|_| ())?;
let response = if response.status() == reqwest::StatusCode::UNAUTHORIZED {
// Get Www-Authenticate header and fetch token
@@ -758,7 +1018,7 @@ async fn fetch_blob_from_upstream(
.map(String::from);
if let Some(token) = docker_auth
.get_token(upstream_url, name, www_auth.as_deref(), basic_auth)
.await
{
client
@@ -790,6 +1050,7 @@ async fn fetch_manifest_from_upstream(
reference: &str,
docker_auth: &DockerAuth,
timeout: u64,
basic_auth: Option<&str>,
) -> Result<(Vec<u8>, String), ()> {
let url = format!(
"{}/v2/{}/manifests/{}",
@@ -806,16 +1067,17 @@ async fn fetch_manifest_from_upstream(
application/vnd.oci.image.manifest.v1+json, \
application/vnd.oci.image.index.v1+json";
// First try with basic auth if configured
let mut request = client
.get(&url)
.timeout(Duration::from_secs(timeout))
.header("Accept", accept_header);
if let Some(credentials) = basic_auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
let response = request.send().await.map_err(|e| {
tracing::error!(error = %e, url = %url, "Failed to send request to upstream");
})?;
tracing::debug!(status = %response.status(), "Initial upstream response");
@@ -830,7 +1092,7 @@ async fn fetch_manifest_from_upstream(
tracing::debug!(www_auth = ?www_auth, "Got 401, fetching token");
if let Some(token) = docker_auth
.get_token(upstream_url, name, www_auth.as_deref(), basic_auth)
.await
{
tracing::debug!("Token acquired, retrying with auth");

View File

@@ -1,6 +1,7 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use crate::config::basic_auth_header;
use parking_lot::RwLock;
use std::collections::HashMap;
use std::time::{Duration, Instant};
@@ -36,6 +37,7 @@ impl DockerAuth {
registry_url: &str,
name: &str,
www_authenticate: Option<&str>,
basic_auth: Option<&str>,
) -> Option<String> {
let cache_key = format!("{}:{}", registry_url, name);
@@ -51,7 +53,7 @@ impl DockerAuth {
// Need to fetch a new token
let www_auth = www_authenticate?;
let token = self.fetch_token(www_auth, name, basic_auth).await?;
// Cache the token (default 5 minute expiry)
{
@@ -70,7 +72,12 @@ impl DockerAuth {
/// Parse Www-Authenticate header and fetch token from auth server
/// Format: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="repository:library/alpine:pull"
async fn fetch_token(
&self,
www_authenticate: &str,
name: &str,
basic_auth: Option<&str>,
) -> Option<String> {
let params = parse_www_authenticate(www_authenticate)?;
let realm = params.get("realm")?;
@@ -82,7 +89,13 @@ impl DockerAuth {
tracing::debug!(url = %url, "Fetching auth token");
let mut request = self.client.get(&url);
if let Some(credentials) = basic_auth {
request = request.header("Authorization", basic_auth_header(credentials));
tracing::debug!("Using basic auth for token request");
}
let response = request.send().await.ok()?;
if !response.status().is_success() {
tracing::warn!(status = %response.status(), "Token request failed");
@@ -97,44 +110,6 @@ impl DockerAuth {
.and_then(|v| v.as_str())
.map(String::from)
}
}
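The token flow above hinges on the `Www-Authenticate` challenge format documented earlier. A hypothetical sketch of how such a `Bearer` challenge can be split into parameters (the crate's actual `parse_www_authenticate` may handle more edge cases, such as commas inside quoted values):

```rust
use std::collections::HashMap;

// Illustrative parser for `Bearer realm="...",service="...",scope="..."`.
// Assumes no commas inside quoted values; a sketch, not the crate's parser.
fn parse_bearer_challenge(header: &str) -> Option<HashMap<String, String>> {
    let rest = header.strip_prefix("Bearer ")?;
    let mut params = HashMap::new();
    for part in rest.split(',') {
        // Each part looks like key="value"
        let (key, value) = part.trim().split_once('=')?;
        params.insert(key.to_string(), value.trim_matches('"').to_string());
    }
    Some(params)
}
```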
impl Default for DockerAuth {

View File

@@ -0,0 +1,522 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! Go module proxy (GOPROXY protocol).
//!
//! Implements the 5 required endpoints:
//! GET /go/{module}/@v/list — list known versions
//! GET /go/{module}/@v/{ver}.info — version metadata (JSON)
//! GET /go/{module}/@v/{ver}.mod — go.mod file
//! GET /go/{module}/@v/{ver}.zip — module zip archive
//! GET /go/{module}/@latest — latest version info
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::{proxy_fetch, proxy_fetch_text, ProxyError};
use crate::AppState;
use axum::{
extract::{Path, State},
http::{header, HeaderValue, StatusCode},
response::{IntoResponse, Response},
routing::get,
Router,
};
use percent_encoding::percent_decode;
use std::sync::Arc;
pub fn routes() -> Router<Arc<AppState>> {
Router::new().route("/go/{*path}", get(handle))
}
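Before any of this runs, the `go` client itself case-encodes the module path it puts on the wire (the rule that `decode_module_path` below reverses). A sketch of the request path a client would form, under the GOPROXY escaping rule that each uppercase letter becomes `!` plus its lowercase form; illustration only:

```rust
// Sketch: the escaped .info request path a Go client sends for a module,
// per the GOPROXY case-encoding rule (uppercase `X` -> `!x`).
fn goproxy_info_path(module: &str, version: &str) -> String {
    let mut escaped = String::with_capacity(module.len() + 4);
    for c in module.chars() {
        if c.is_ascii_uppercase() {
            escaped.push('!');
            escaped.push(c.to_ascii_lowercase());
        } else {
            escaped.push(c);
        }
    }
    format!("/go/{}/@v/{}.info", escaped, version)
}
```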
/// Main handler — parses the wildcard path and dispatches to the right logic.
async fn handle(State(state): State<Arc<AppState>>, Path(path): Path<String>) -> Response {
// URL-decode the path: Go client sends %21 for !, Axum wildcard may not decode it
let path = percent_decode(path.as_bytes())
.decode_utf8()
.map(|s| s.into_owned())
.unwrap_or(path);
tracing::debug!(path = %path, "Go proxy request");
// Validate path: no traversal, no null bytes
if !is_safe_path(&path) {
tracing::debug!(path = %path, "Go proxy: unsafe path");
return StatusCode::BAD_REQUEST.into_response();
}
// Split: "github.com/!azure/sdk/@v/v1.0.0.info" → module + file
let (module_encoded, file) = match split_go_path(&path) {
Some(parts) => parts,
None => {
tracing::debug!(path = %path, "Go proxy: cannot split path");
return StatusCode::NOT_FOUND.into_response();
}
};
let storage_key = format!("go/{}", path);
let content_type = content_type_for(&file);
// Mutable endpoints: @v/list and @latest can be refreshed from upstream
let is_mutable = file == "@v/list" || file == "@latest";
// Immutable: .info, .mod, .zip — once cached, never overwrite
let is_immutable = !is_mutable;
// 1. Try local cache (for immutable files, this is authoritative)
if let Ok(data) = state.storage.get(&storage_key).await {
state.metrics.record_download("go");
state.metrics.record_cache_hit();
state.activity.push(ActivityEntry::new(
ActionType::CacheHit,
format_artifact(&module_encoded, &file),
"go",
"CACHE",
));
return with_content_type(data.to_vec(), content_type);
}
// 2. Try upstream proxy
let proxy_url = match &state.config.go.proxy {
Some(url) => url.clone(),
None => return StatusCode::NOT_FOUND.into_response(),
};
// Validate module path encoding (but keep encoded for upstream — proxy.golang.org expects ! encoding)
if decode_module_path(&module_encoded).is_err() {
return StatusCode::BAD_REQUEST.into_response();
}
let upstream_url = format!(
"{}/{}",
proxy_url.trim_end_matches('/'),
format_upstream_path(&module_encoded, &file)
);
// Use longer timeout for .zip files
let timeout = if file.ends_with(".zip") {
state.config.go.proxy_timeout_zip
} else {
state.config.go.proxy_timeout
};
// Fetch: binary for .zip, text for everything else
let data = if file.ends_with(".zip") {
proxy_fetch(
&state.http_client,
&upstream_url,
timeout,
state.config.go.proxy_auth.as_deref(),
)
.await
} else {
proxy_fetch_text(
&state.http_client,
&upstream_url,
timeout,
state.config.go.proxy_auth.as_deref(),
None,
)
.await
.map(|s| s.into_bytes())
};
match data {
Ok(bytes) => {
// Enforce size limit for .zip
if file.ends_with(".zip") && bytes.len() as u64 > state.config.go.max_zip_size {
tracing::warn!(
module = module_encoded,
size = bytes.len(),
limit = state.config.go.max_zip_size,
"Go module zip exceeds size limit"
);
return StatusCode::PAYLOAD_TOO_LARGE.into_response();
}
state.metrics.record_download("go");
state.metrics.record_cache_miss();
state.activity.push(ActivityEntry::new(
ActionType::ProxyFetch,
format_artifact(&module_encoded, &file),
"go",
"PROXY",
));
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "go", ""));
// Background cache: immutable = put_if_absent, mutable = always overwrite
let storage = state.storage.clone();
let key = storage_key.clone();
let data_clone = bytes.clone();
tokio::spawn(async move {
if is_immutable {
// Only write if not already cached (immutability guarantee)
if storage.stat(&key).await.is_none() {
let _ = storage.put(&key, &data_clone).await;
}
} else {
let _ = storage.put(&key, &data_clone).await;
}
});
state.repo_index.invalidate("go");
with_content_type(bytes, content_type)
}
Err(ProxyError::NotFound) => StatusCode::NOT_FOUND.into_response(),
Err(e) => {
tracing::debug!(
module = module_encoded,
file = file,
error = ?e,
"Go upstream proxy error"
);
StatusCode::BAD_GATEWAY.into_response()
}
}
}
// ============================================================================
// Module path encoding/decoding
// ============================================================================
/// Decode Go module path: `!x` → `X`
///
/// Go module proxy spec requires uppercase letters to be encoded as `!`
/// followed by the lowercase letter. Raw uppercase in encoded path is invalid.
fn decode_module_path(encoded: &str) -> Result<String, ()> {
let mut result = String::with_capacity(encoded.len());
let mut chars = encoded.chars();
while let Some(c) = chars.next() {
if c == '!' {
match chars.next() {
Some(next) if next.is_ascii_lowercase() => {
result.push(next.to_ascii_uppercase());
}
_ => return Err(()),
}
} else if c.is_ascii_uppercase() {
// Raw uppercase in encoded path is invalid per spec
return Err(());
} else {
result.push(c);
}
}
Ok(result)
}
/// Encode Go module path: `X` → `!x`
#[cfg(test)]
fn encode_module_path(path: &str) -> String {
let mut result = String::with_capacity(path.len() + 8);
for c in path.chars() {
if c.is_ascii_uppercase() {
result.push('!');
result.push(c.to_ascii_lowercase());
} else {
result.push(c);
}
}
result
}
// ============================================================================
// Path parsing helpers
// ============================================================================
/// Split Go path into (encoded_module, file).
///
/// Examples:
/// "github.com/user/repo/@v/v1.0.0.info" → ("github.com/user/repo", "@v/v1.0.0.info")
/// "github.com/user/repo/v2/@v/list" → ("github.com/user/repo/v2", "@v/list")
/// "github.com/user/repo/@latest" → ("github.com/user/repo", "@latest")
fn split_go_path(path: &str) -> Option<(String, String)> {
// Try @latest first (it's simpler)
if let Some(pos) = path.rfind("/@latest") {
let module = &path[..pos];
if !module.is_empty() {
return Some((module.to_string(), "@latest".to_string()));
}
}
// Try @v/ — find the last occurrence (handles /v2/@v/ correctly)
if let Some(pos) = path.rfind("/@v/") {
let module = &path[..pos];
let file = &path[pos + 1..]; // "@v/..."
if !module.is_empty() && !file.is_empty() {
return Some((module.to_string(), file.to_string()));
}
}
None
}
/// Path validation: no traversal attacks
fn is_safe_path(path: &str) -> bool {
!path.contains("..")
&& !path.starts_with('/')
&& !path.contains("//")
&& !path.contains('\0')
&& !path.is_empty()
}
/// Content-Type for Go proxy responses
fn content_type_for(file: &str) -> &'static str {
if file.ends_with(".info") || file == "@latest" {
"application/json"
} else if file.ends_with(".zip") {
"application/zip"
} else {
// .mod, @v/list
"text/plain; charset=utf-8"
}
}
/// Build upstream URL path (keeps the `!x` case encoding — upstream proxies expect it)
fn format_upstream_path(module_encoded: &str, file: &str) -> String {
format!("{}/{}", module_encoded, file)
}
/// Human-readable artifact name for activity log
fn format_artifact(module: &str, file: &str) -> String {
if file == "@v/list" || file == "@latest" {
format!("{} {}", module, file)
} else if let Some(version_file) = file.strip_prefix("@v/") {
// "v1.0.0.info" → "module@v1.0.0"
let version = version_file
.rsplit_once('.')
.map(|(v, _ext)| v)
.unwrap_or(version_file);
format!("{}@{}", module, version)
} else {
format!("{}/{}", module, file)
}
}
/// Build response with Content-Type header
fn with_content_type(data: Vec<u8>, content_type: &'static str) -> Response {
(
StatusCode::OK,
[(header::CONTENT_TYPE, HeaderValue::from_static(content_type))],
data,
)
.into_response()
}
// ============================================================================
// Tests
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
// ── Encoding/decoding ───────────────────────────────────────────────
#[test]
fn test_decode_azure() {
assert_eq!(
decode_module_path("github.com/!azure/sdk").unwrap(),
"github.com/Azure/sdk"
);
}
#[test]
fn test_decode_multiple_uppercase() {
assert_eq!(
decode_module_path("!google!cloud!platform/foo").unwrap(),
"GoogleCloudPlatform/foo"
);
}
#[test]
fn test_decode_no_uppercase() {
assert_eq!(
decode_module_path("github.com/user/repo").unwrap(),
"github.com/user/repo"
);
}
#[test]
fn test_decode_invalid_bang_at_end() {
assert!(decode_module_path("foo!").is_err());
}
#[test]
fn test_decode_invalid_bang_followed_by_uppercase() {
assert!(decode_module_path("foo!A").is_err());
}
#[test]
fn test_decode_raw_uppercase_is_invalid() {
assert!(decode_module_path("github.com/Azure/sdk").is_err());
}
#[test]
fn test_encode_roundtrip() {
let original = "github.com/Azure/azure-sdk-for-go";
let encoded = encode_module_path(original);
assert_eq!(encoded, "github.com/!azure/azure-sdk-for-go");
assert_eq!(decode_module_path(&encoded).unwrap(), original);
}
#[test]
fn test_encode_no_change() {
assert_eq!(
encode_module_path("github.com/user/repo"),
"github.com/user/repo"
);
}
// ── Path splitting ──────────────────────────────────────────────────
#[test]
fn test_split_version_info() {
let (module, file) = split_go_path("github.com/user/repo/@v/v1.0.0.info").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/v1.0.0.info");
}
#[test]
fn test_split_version_list() {
let (module, file) = split_go_path("github.com/user/repo/@v/list").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/list");
}
#[test]
fn test_split_latest() {
let (module, file) = split_go_path("github.com/user/repo/@latest").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@latest");
}
#[test]
fn test_split_major_version_suffix() {
let (module, file) = split_go_path("github.com/user/repo/v2/@v/list").unwrap();
assert_eq!(module, "github.com/user/repo/v2");
assert_eq!(file, "@v/list");
}
#[test]
fn test_split_incompatible_version() {
let (module, file) =
split_go_path("github.com/user/repo/@v/v4.1.2+incompatible.info").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/v4.1.2+incompatible.info");
}
#[test]
fn test_split_pseudo_version() {
let (module, file) =
split_go_path("github.com/user/repo/@v/v0.0.0-20210101000000-abcdef123456.info")
.unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/v0.0.0-20210101000000-abcdef123456.info");
}
#[test]
fn test_split_no_at() {
assert!(split_go_path("github.com/user/repo/v1.0.0").is_none());
}
#[test]
fn test_split_empty_module() {
assert!(split_go_path("/@v/list").is_none());
}
// ── Path safety ─────────────────────────────────────────────────────
#[test]
fn test_safe_path_normal() {
assert!(is_safe_path("github.com/user/repo/@v/list"));
}
#[test]
fn test_reject_traversal() {
assert!(!is_safe_path("../../etc/passwd"));
}
#[test]
fn test_reject_absolute() {
assert!(!is_safe_path("/etc/passwd"));
}
#[test]
fn test_reject_double_slash() {
assert!(!is_safe_path("github.com//evil/@v/list"));
}
#[test]
fn test_reject_null() {
assert!(!is_safe_path("github.com/\0evil/@v/list"));
}
#[test]
fn test_reject_empty() {
assert!(!is_safe_path(""));
}
// ── Content-Type ────────────────────────────────────────────────────
#[test]
fn test_content_type_info() {
assert_eq!(content_type_for("@v/v1.0.0.info"), "application/json");
}
#[test]
fn test_content_type_latest() {
assert_eq!(content_type_for("@latest"), "application/json");
}
#[test]
fn test_content_type_zip() {
assert_eq!(content_type_for("@v/v1.0.0.zip"), "application/zip");
}
#[test]
fn test_content_type_mod() {
assert_eq!(
content_type_for("@v/v1.0.0.mod"),
"text/plain; charset=utf-8"
);
}
#[test]
fn test_content_type_list() {
assert_eq!(content_type_for("@v/list"), "text/plain; charset=utf-8");
}
// ── Artifact formatting ─────────────────────────────────────────────
#[test]
fn test_format_artifact_version() {
assert_eq!(
format_artifact("github.com/user/repo", "@v/v1.0.0.info"),
"github.com/user/repo@v1.0.0"
);
}
#[test]
fn test_format_artifact_list() {
assert_eq!(
format_artifact("github.com/user/repo", "@v/list"),
"github.com/user/repo @v/list"
);
}
#[test]
fn test_format_artifact_latest() {
assert_eq!(
format_artifact("github.com/user/repo", "@latest"),
"github.com/user/repo @latest"
);
}
#[test]
fn test_format_artifact_zip() {
assert_eq!(
format_artifact("github.com/user/repo", "@v/v1.0.0.zip"),
"github.com/user/repo@v1.0.0"
);
}
}
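The encoding rules exercised by the tests above follow the GOPROXY case-escaping scheme: every uppercase ASCII letter is escaped as `!` plus its lowercase form. A minimal standalone sketch consistent with those tests (illustrative `encode`/`decode` names, not the crate's actual functions):

```rust
// Sketch of GOPROXY case-escaping: "Azure" → "!azure". A '!' not followed
// by a lowercase letter, or a raw uppercase letter, is invalid on decode.
fn encode(path: &str) -> String {
    let mut out = String::with_capacity(path.len());
    for c in path.chars() {
        if c.is_ascii_uppercase() {
            out.push('!');
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}

fn decode(path: &str) -> Result<String, String> {
    let mut out = String::with_capacity(path.len());
    let mut chars = path.chars();
    while let Some(c) = chars.next() {
        match c {
            '!' => match chars.next() {
                // Only a lowercase ASCII letter may follow '!'
                Some(l) if l.is_ascii_lowercase() => out.push(l.to_ascii_uppercase()),
                _ => return Err(format!("invalid '!' escape in {path}")),
            },
            // Raw uppercase must always be escaped
            c if c.is_ascii_uppercase() => {
                return Err(format!("unescaped uppercase in {path}"))
            }
            c => out.push(c),
        }
    }
    Ok(out)
}
```

Round-tripping `decode(&encode(p))` recovers the original path, matching the `test_encode_roundtrip` case above.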

View File

@@ -3,6 +3,7 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::proxy_fetch;
use crate::AppState;
use axum::{
body::Bytes,
@@ -13,7 +14,6 @@ use axum::{
Router,
};
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
@@ -49,10 +49,17 @@ async fn download(State(state): State<Arc<AppState>>, Path(path): Path<String>)
return with_content_type(&path, data).into_response();
}
for proxy_url in &state.config.maven.proxies {
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
for proxy in &state.config.maven.proxies {
let url = format!("{}/{}", proxy.url().trim_end_matches('/'), path);
match fetch_from_proxy(&state.http_client, &url, state.config.maven.proxy_timeout).await {
match proxy_fetch(
&state.http_client,
&url,
state.config.maven.proxy_timeout,
proxy.auth(),
)
.await
{
Ok(data) => {
state.metrics.record_download("maven");
state.metrics.record_cache_miss();
@@ -120,25 +127,6 @@ async fn upload(
}
}
async fn fetch_from_proxy(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
) -> Result<Vec<u8>, ()> {
let response = client
.get(url)
.timeout(Duration::from_secs(timeout_secs))
.send()
.await
.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.bytes().await.map(|b| b.to_vec()).map_err(|_| ())
}
fn with_content_type(
path: &str,
data: Bytes,

View File

@@ -4,6 +4,7 @@
mod cargo_registry;
pub mod docker;
pub mod docker_auth;
mod go;
mod maven;
mod npm;
mod pypi;
@@ -12,7 +13,132 @@ mod raw;
pub use cargo_registry::routes as cargo_routes;
pub use docker::routes as docker_routes;
pub use docker_auth::DockerAuth;
pub use go::routes as go_routes;
pub use maven::routes as maven_routes;
pub use npm::routes as npm_routes;
pub use pypi::routes as pypi_routes;
pub use raw::routes as raw_routes;
use crate::config::basic_auth_header;
use std::time::Duration;
/// Fetch from upstream proxy with timeout and 1 retry.
///
/// On transient errors (timeout, connection reset) and upstream 5xx responses,
/// retries once after a 1-second delay.
/// Non-retryable errors (4xx) fail immediately.
pub(crate) async fn proxy_fetch(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
) -> Result<Vec<u8>, ProxyError> {
for attempt in 0..2 {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
match request.send().await {
Ok(response) => {
if response.status().is_success() {
return response
.bytes()
.await
.map(|b| b.to_vec())
.map_err(|e| ProxyError::Network(e.to_string()));
}
let status = response.status().as_u16();
// Don't retry client errors (4xx)
if (400..500).contains(&status) {
return Err(ProxyError::NotFound);
}
// Server error (5xx) — retry
if attempt == 0 {
tracing::debug!(url, status, "upstream 5xx, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Upstream(status));
}
Err(e) => {
if attempt == 0 {
tracing::debug!(url, error = %e, "upstream error, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Network(e.to_string()));
}
}
}
Err(ProxyError::Network("max retries exceeded".into()))
}
#[derive(Debug)]
#[allow(dead_code)]
pub(crate) enum ProxyError {
NotFound,
Upstream(u16),
Network(String),
}
/// Fetch text content from upstream proxy with timeout and 1 retry.
/// Same as proxy_fetch but returns String and accepts an optional extra header
/// (for HTML pages like the PyPI simple index).
pub(crate) async fn proxy_fetch_text(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
extra_headers: Option<(&str, &str)>,
) -> Result<String, ProxyError> {
for attempt in 0..2 {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
if let Some((key, val)) = extra_headers {
request = request.header(key, val);
}
match request.send().await {
Ok(response) => {
if response.status().is_success() {
return response
.text()
.await
.map_err(|e| ProxyError::Network(e.to_string()));
}
let status = response.status().as_u16();
if (400..500).contains(&status) {
return Err(ProxyError::NotFound);
}
if attempt == 0 {
tracing::debug!(url, status, "upstream 5xx, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Upstream(status));
}
Err(e) => {
if attempt == 0 {
tracing::debug!(url, error = %e, "upstream error, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Network(e.to_string()));
}
}
}
Err(ProxyError::Network("max retries exceeded".into()))
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_proxy_fetch_invalid_url() {
let client = reqwest::Client::new();
let result = proxy_fetch(&client, "http://127.0.0.1:1/nonexistent", 2, None).await;
assert!(matches!(result, Err(ProxyError::Network(_))));
}
}
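`proxy_fetch` and `proxy_fetch_text` above repeat the same two-attempt loop. The shared policy (fail fast on 4xx, retry once on anything transient) can be sketched as a synchronous generic helper; the names here are illustrative and not part of the codebase:

```rust
use std::time::Duration;

// Outcome of one attempt, as classified by the caller.
enum Outcome<T> {
    Ok(T),
    Fatal(String),     // e.g. a 4xx — never retried
    Transient(String), // e.g. timeout, reset, or 5xx — retried once
}

// Run `op` up to twice, pausing between attempts on transient failure.
fn with_one_retry<T>(
    mut op: impl FnMut() -> Outcome<T>,
    pause: Duration,
) -> Result<T, String> {
    for attempt in 0..2 {
        match op() {
            Outcome::Ok(v) => return Ok(v),
            Outcome::Fatal(e) => return Err(e),
            Outcome::Transient(e) => {
                if attempt == 1 {
                    return Err(e);
                }
                std::thread::sleep(pause);
            }
        }
    }
    unreachable!()
}
```

A call site would map a network error or 5xx status to `Transient` and a 4xx to `Fatal`, mirroring the `ProxyError::Network`/`ProxyError::Upstream` vs `ProxyError::NotFound` split above.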

View File

@@ -3,27 +3,70 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::proxy_fetch;
use crate::AppState;
use axum::{
body::Bytes,
extract::{Path, State},
http::{header, StatusCode},
response::{IntoResponse, Response},
routing::get,
routing::{get, put},
Router,
};
use base64::Engine;
use sha2::Digest;
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
Router::new().route("/npm/{*path}", get(handle_request))
Router::new()
.route("/npm/{*path}", get(handle_request))
.route("/npm/{*path}", put(handle_publish))
}
/// Build NORA base URL from config (for URL rewriting)
fn nora_base_url(state: &AppState) -> String {
state.config.server.public_url.clone().unwrap_or_else(|| {
format!(
"http://{}:{}",
state.config.server.host, state.config.server.port
)
})
}
/// Rewrite tarball URLs in npm metadata to point to NORA.
///
/// Replaces upstream registry URLs (e.g. `https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz`)
/// with NORA URLs (e.g. `http://nora:5000/npm/lodash/-/lodash-4.17.21.tgz`).
fn rewrite_tarball_urls(data: &[u8], nora_base: &str, upstream_url: &str) -> Result<Vec<u8>, ()> {
let mut json: serde_json::Value = serde_json::from_slice(data).map_err(|_| ())?;
let upstream_trimmed = upstream_url.trim_end_matches('/');
let nora_npm_base = format!("{}/npm", nora_base.trim_end_matches('/'));
if let Some(versions) = json.get_mut("versions").and_then(|v| v.as_object_mut()) {
for (_ver, version_data) in versions.iter_mut() {
if let Some(tarball_url) = version_data
.get("dist")
.and_then(|d| d.get("tarball"))
.and_then(|t| t.as_str())
.map(|s| s.to_string())
{
let rewritten = tarball_url.replace(upstream_trimmed, &nora_npm_base);
if let Some(dist) = version_data.get_mut("dist") {
dist["tarball"] = serde_json::Value::String(rewritten);
}
}
}
}
serde_json::to_vec(&json).map_err(|_| ())
}
async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<String>) -> Response {
let is_tarball = path.contains("/-/");
let key = if is_tarball {
let parts: Vec<&str> = path.split("/-/").collect();
let parts: Vec<&str> = path.splitn(2, "/-/").collect();
if parts.len() == 2 {
format!("npm/{}/tarballs/{}", parts[0], parts[1])
} else {
@@ -39,30 +82,83 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
path.clone()
};
// --- Cache hit path ---
if let Ok(data) = state.storage.get(&key).await {
if is_tarball {
state.metrics.record_download("npm");
state.metrics.record_cache_hit();
state.activity.push(ActivityEntry::new(
ActionType::CacheHit,
package_name,
"npm",
"CACHE",
));
state
.audit
.log(AuditEntry::new("cache_hit", "api", "", "npm", ""));
// Metadata TTL: if stale, try to refetch from upstream
if !is_tarball {
let ttl = state.config.npm.metadata_ttl;
if ttl > 0 {
if let Some(meta) = state.storage.stat(&key).await {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs())
.unwrap_or(0);
if now.saturating_sub(meta.modified) > ttl {
if let Some(fresh) = refetch_metadata(&state, &path, &key).await {
return with_content_type(false, fresh.into()).into_response();
}
// Upstream failed — serve stale cache
}
}
}
return with_content_type(false, data).into_response();
}
return with_content_type(is_tarball, data).into_response();
// Tarball: integrity check if hash exists
let hash_key = format!("{}.sha256", key);
if let Ok(stored_hash) = state.storage.get(&hash_key).await {
let computed = format!("{:x}", sha2::Sha256::digest(&data));
let expected = String::from_utf8_lossy(&stored_hash);
if computed != expected.as_ref() {
tracing::error!(
key = %key,
expected = %expected,
computed = %computed,
"SECURITY: npm tarball integrity check FAILED — possible tampering"
);
return (StatusCode::INTERNAL_SERVER_ERROR, "Integrity check failed")
.into_response();
}
}
state.metrics.record_download("npm");
state.metrics.record_cache_hit();
state.activity.push(ActivityEntry::new(
ActionType::CacheHit,
package_name,
"npm",
"CACHE",
));
state
.audit
.log(AuditEntry::new("cache_hit", "api", "", "npm", ""));
return with_content_type(true, data).into_response();
}
// --- Proxy fetch path ---
if let Some(proxy_url) = &state.config.npm.proxy {
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
if let Ok(data) =
fetch_from_proxy(&state.http_client, &url, state.config.npm.proxy_timeout).await
if let Ok(data) = proxy_fetch(
&state.http_client,
&url,
state.config.npm.proxy_timeout,
state.config.npm.proxy_auth.as_deref(),
)
.await
{
let data_to_cache;
let data_to_serve;
if is_tarball {
// Compute and store sha256
let hash = format!("{:x}", sha2::Sha256::digest(&data));
let hash_key = format!("{}.sha256", key);
let storage = state.storage.clone();
tokio::spawn(async move {
let _ = storage.put(&hash_key, hash.as_bytes()).await;
});
state.metrics.record_download("npm");
state.metrics.record_cache_miss();
state.activity.push(ActivityEntry::new(
@@ -74,45 +170,254 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "npm", ""));
data_to_cache = data.clone();
data_to_serve = data;
} else {
// Metadata: rewrite tarball URLs to point to NORA
let nora_base = nora_base_url(&state);
let rewritten = rewrite_tarball_urls(&data, &nora_base, proxy_url)
.unwrap_or_else(|_| data.clone());
data_to_cache = rewritten.clone();
data_to_serve = rewritten;
}
// Cache in background
let storage = state.storage.clone();
let key_clone = key.clone();
let data_clone = data.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &data_clone).await;
let _ = storage.put(&key_clone, &data_to_cache).await;
});
if is_tarball {
state.repo_index.invalidate("npm");
}
return with_content_type(is_tarball, data.into()).into_response();
return with_content_type(is_tarball, data_to_serve.into()).into_response();
}
}
StatusCode::NOT_FOUND.into_response()
}
async fn fetch_from_proxy(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
) -> Result<Vec<u8>, ()> {
let response = client
.get(url)
.timeout(Duration::from_secs(timeout_secs))
.send()
.await
.map_err(|_| ())?;
/// Refetch metadata from upstream, rewrite URLs, update cache.
/// Returns None if upstream is unavailable (caller serves stale cache).
async fn refetch_metadata(state: &Arc<AppState>, path: &str, key: &str) -> Option<Vec<u8>> {
let proxy_url = state.config.npm.proxy.as_ref()?;
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
if !response.status().is_success() {
return Err(());
let data = proxy_fetch(
&state.http_client,
&url,
state.config.npm.proxy_timeout,
state.config.npm.proxy_auth.as_deref(),
)
.await
.ok()?;
let nora_base = nora_base_url(state);
let rewritten =
rewrite_tarball_urls(&data, &nora_base, proxy_url).unwrap_or_else(|_| data.clone());
let storage = state.storage.clone();
let key_clone = key.to_string();
let cache_data = rewritten.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &cache_data).await;
});
Some(rewritten)
}
// ============================================================================
// npm publish
// ============================================================================
/// Validate attachment filename: only safe characters, no path traversal.
fn is_valid_attachment_name(name: &str) -> bool {
!name.is_empty()
&& !name.contains("..")
&& !name.contains('/')
&& !name.contains('\\')
&& !name.contains('\0')
&& name
.chars()
.all(|c| c.is_ascii_alphanumeric() || matches!(c, '.' | '-' | '_' | '@'))
}
async fn handle_publish(
State(state): State<Arc<AppState>>,
Path(path): Path<String>,
body: Bytes,
) -> Response {
let package_name = path;
let payload: serde_json::Value = match serde_json::from_slice(&body) {
Ok(v) => v,
Err(e) => return (StatusCode::BAD_REQUEST, format!("Invalid JSON: {}", e)).into_response(),
};
// Security: verify payload name matches URL path
if let Some(payload_name) = payload.get("name").and_then(|n| n.as_str()) {
if payload_name != package_name {
tracing::warn!(
url_name = %package_name,
payload_name = %payload_name,
"SECURITY: npm publish name mismatch — possible spoofing attempt"
);
return (
StatusCode::BAD_REQUEST,
"Package name in URL does not match payload",
)
.into_response();
}
}
response.bytes().await.map(|b| b.to_vec()).map_err(|_| ())
let attachments = match payload.get("_attachments").and_then(|a| a.as_object()) {
Some(a) => a,
None => return (StatusCode::BAD_REQUEST, "Missing _attachments").into_response(),
};
let new_versions = match payload.get("versions").and_then(|v| v.as_object()) {
Some(v) => v,
None => return (StatusCode::BAD_REQUEST, "Missing versions").into_response(),
};
// Load or create metadata
let metadata_key = format!("npm/{}/metadata.json", package_name);
let mut metadata = if let Ok(existing) = state.storage.get(&metadata_key).await {
serde_json::from_slice::<serde_json::Value>(&existing)
.unwrap_or_else(|_| serde_json::json!({}))
} else {
serde_json::json!({})
};
// Version immutability
if let Some(existing_versions) = metadata.get("versions").and_then(|v| v.as_object()) {
for ver in new_versions.keys() {
if existing_versions.contains_key(ver) {
return (
StatusCode::CONFLICT,
format!("Version {} already exists", ver),
)
.into_response();
}
}
}
// Store tarballs
for (filename, attachment_data) in attachments {
if !is_valid_attachment_name(filename) {
tracing::warn!(
filename = %filename,
package = %package_name,
"SECURITY: npm publish rejected — invalid attachment filename"
);
return (StatusCode::BAD_REQUEST, "Invalid attachment filename").into_response();
}
let base64_data = match attachment_data.get("data").and_then(|d| d.as_str()) {
Some(d) => d,
None => continue,
};
let tarball_bytes = match base64::engine::general_purpose::STANDARD.decode(base64_data) {
Ok(b) => b,
Err(_) => {
return (StatusCode::BAD_REQUEST, "Invalid base64 in attachment").into_response()
}
};
let tarball_key = format!("npm/{}/tarballs/{}", package_name, filename);
if state
.storage
.put(&tarball_key, &tarball_bytes)
.await
.is_err()
{
return StatusCode::INTERNAL_SERVER_ERROR.into_response();
}
// Store sha256
let hash = format!("{:x}", sha2::Sha256::digest(&tarball_bytes));
let hash_key = format!("{}.sha256", tarball_key);
let _ = state.storage.put(&hash_key, hash.as_bytes()).await;
}
// Merge versions
let meta_obj = metadata.as_object_mut().unwrap();
let stored_versions = meta_obj.entry("versions").or_insert(serde_json::json!({}));
if let Some(sv) = stored_versions.as_object_mut() {
for (ver, ver_data) in new_versions {
sv.insert(ver.clone(), ver_data.clone());
}
}
// Copy standard fields
for field in &["name", "_id", "description", "readme", "license"] {
if let Some(val) = payload.get(*field) {
meta_obj.insert(field.to_string(), val.clone());
}
}
// Merge dist-tags
if let Some(new_dist_tags) = payload.get("dist-tags").and_then(|d| d.as_object()) {
let stored_dist_tags = meta_obj.entry("dist-tags").or_insert(serde_json::json!({}));
if let Some(sdt) = stored_dist_tags.as_object_mut() {
for (tag, ver) in new_dist_tags {
sdt.insert(tag.clone(), ver.clone());
}
}
}
// Rewrite tarball URLs for published packages
let nora_base = nora_base_url(&state);
if let Some(versions) = metadata.get_mut("versions").and_then(|v| v.as_object_mut()) {
for (ver, ver_data) in versions.iter_mut() {
if let Some(dist) = ver_data.get_mut("dist") {
let short_name = package_name.split('/').next_back().unwrap_or(&package_name);
let tarball_url = format!(
"{}/npm/{}/-/{}-{}.tgz",
nora_base.trim_end_matches('/'),
package_name,
short_name,
ver
);
dist["tarball"] = serde_json::Value::String(tarball_url);
}
}
}
// Store metadata
match serde_json::to_vec(&metadata) {
Ok(bytes) => {
if state.storage.put(&metadata_key, &bytes).await.is_err() {
return StatusCode::INTERNAL_SERVER_ERROR.into_response();
}
}
Err(_) => return StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
state.metrics.record_upload("npm");
state.activity.push(ActivityEntry::new(
ActionType::Push,
package_name,
"npm",
"LOCAL",
));
state
.audit
.log(AuditEntry::new("push", "api", "", "npm", ""));
state.repo_index.invalidate("npm");
StatusCode::CREATED.into_response()
}
// ============================================================================
// Helpers
// ============================================================================
fn with_content_type(
is_tarball: bool,
data: Bytes,
@@ -125,3 +430,129 @@ fn with_content_type(
(StatusCode::OK, [(header::CONTENT_TYPE, content_type)], data)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_rewrite_tarball_urls_regular_package() {
let metadata = serde_json::json!({
"name": "lodash",
"versions": {
"4.17.21": {
"dist": {
"tarball": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"shasum": "abc123"
}
}
}
});
let data = serde_json::to_vec(&metadata).unwrap();
let result =
rewrite_tarball_urls(&data, "http://nora:5000", "https://registry.npmjs.org").unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(
json["versions"]["4.17.21"]["dist"]["tarball"],
"http://nora:5000/npm/lodash/-/lodash-4.17.21.tgz"
);
assert_eq!(json["versions"]["4.17.21"]["dist"]["shasum"], "abc123");
}
#[test]
fn test_rewrite_tarball_urls_scoped_package() {
let metadata = serde_json::json!({
"name": "@babel/core",
"versions": {
"7.26.0": {
"dist": {
"tarball": "https://registry.npmjs.org/@babel/core/-/core-7.26.0.tgz",
"integrity": "sha512-test"
}
}
}
});
let data = serde_json::to_vec(&metadata).unwrap();
let result =
rewrite_tarball_urls(&data, "http://nora:5000", "https://registry.npmjs.org").unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(
json["versions"]["7.26.0"]["dist"]["tarball"],
"http://nora:5000/npm/@babel/core/-/core-7.26.0.tgz"
);
}
#[test]
fn test_rewrite_tarball_urls_multiple_versions() {
let metadata = serde_json::json!({
"name": "express",
"versions": {
"4.18.2": { "dist": { "tarball": "https://registry.npmjs.org/express/-/express-4.18.2.tgz" } },
"4.19.0": { "dist": { "tarball": "https://registry.npmjs.org/express/-/express-4.19.0.tgz" } }
}
});
let data = serde_json::to_vec(&metadata).unwrap();
let result = rewrite_tarball_urls(
&data,
"https://demo.getnora.io",
"https://registry.npmjs.org",
)
.unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(
json["versions"]["4.18.2"]["dist"]["tarball"],
"https://demo.getnora.io/npm/express/-/express-4.18.2.tgz"
);
assert_eq!(
json["versions"]["4.19.0"]["dist"]["tarball"],
"https://demo.getnora.io/npm/express/-/express-4.19.0.tgz"
);
}
#[test]
fn test_rewrite_tarball_urls_no_versions() {
let metadata = serde_json::json!({ "name": "empty-pkg" });
let data = serde_json::to_vec(&metadata).unwrap();
let result =
rewrite_tarball_urls(&data, "http://nora:5000", "https://registry.npmjs.org").unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(json["name"], "empty-pkg");
}
#[test]
fn test_rewrite_invalid_json() {
assert!(rewrite_tarball_urls(
b"not json",
"http://nora:5000",
"https://registry.npmjs.org"
)
.is_err());
}
#[test]
fn test_valid_attachment_names() {
assert!(is_valid_attachment_name("lodash-4.17.21.tgz"));
assert!(is_valid_attachment_name("core-7.26.0.tgz"));
assert!(is_valid_attachment_name("my_package-1.0.0.tgz"));
assert!(is_valid_attachment_name("@scope-pkg-1.0.0.tgz"));
}
#[test]
fn test_path_traversal_attachment_names() {
assert!(!is_valid_attachment_name("../../etc/passwd"));
assert!(!is_valid_attachment_name(
"../docker/nginx/manifests/latest.json"
));
assert!(!is_valid_attachment_name("foo/bar.tgz"));
assert!(!is_valid_attachment_name("foo\\bar.tgz"));
}
#[test]
fn test_empty_and_null_attachment_names() {
assert!(!is_valid_attachment_name(""));
assert!(!is_valid_attachment_name("foo\0bar.tgz"));
}
}

View File

@@ -3,6 +3,7 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::{proxy_fetch, proxy_fetch_text};
use crate::AppState;
use axum::{
extract::{Path, State},
@@ -12,7 +13,6 @@ use axum::{
Router,
};
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
@@ -86,8 +86,14 @@ async fn package_versions(
if let Some(proxy_url) = &state.config.pypi.proxy {
let url = format!("{}/{}/", proxy_url.trim_end_matches('/'), normalized);
if let Ok(html) =
fetch_package_page(&state.http_client, &url, state.config.pypi.proxy_timeout).await
if let Ok(html) = proxy_fetch_text(
&state.http_client,
&url,
state.config.pypi.proxy_timeout,
state.config.pypi.proxy_auth.as_deref(),
Some(("Accept", "text/html")),
)
.await
{
// Rewrite URLs in the HTML to point to our registry
let rewritten = rewrite_pypi_links(&html, &normalized);
@@ -136,19 +142,22 @@ async fn download_file(
// First, fetch the package page to find the actual download URL
let page_url = format!("{}/{}/", proxy_url.trim_end_matches('/'), normalized);
if let Ok(html) = fetch_package_page(
if let Ok(html) = proxy_fetch_text(
&state.http_client,
&page_url,
state.config.pypi.proxy_timeout,
state.config.pypi.proxy_auth.as_deref(),
Some(("Accept", "text/html")),
)
.await
{
// Find the URL for this specific file
if let Some(file_url) = find_file_url(&html, &filename) {
if let Ok(data) = fetch_file(
if let Ok(data) = proxy_fetch(
&state.http_client,
&file_url,
state.config.pypi.proxy_timeout,
state.config.pypi.proxy_auth.as_deref(),
)
.await
{
@@ -197,43 +206,6 @@ fn normalize_name(name: &str) -> String {
name.to_lowercase().replace(['-', '_', '.'], "-")
}
/// Fetch package page from upstream
async fn fetch_package_page(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
) -> Result<String, ()> {
let response = client
.get(url)
.timeout(Duration::from_secs(timeout_secs))
.header("Accept", "text/html")
.send()
.await
.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.text().await.map_err(|_| ())
}
/// Fetch file from upstream
async fn fetch_file(client: &reqwest::Client, url: &str, timeout_secs: u64) -> Result<Vec<u8>, ()> {
let response = client
.get(url)
.timeout(Duration::from_secs(timeout_secs))
.send()
.await
.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.bytes().await.map(|b| b.to_vec()).map_err(|_| ())
}
/// Rewrite PyPI links to point to our registry
fn rewrite_pypi_links(html: &str, package_name: &str) -> String {
// Simple regex-free approach: find href="..." and rewrite

View File

@@ -3,7 +3,7 @@
//! In-memory repository index with lazy rebuild on invalidation.
//!
//! Design (designed for efficiency):
//! Design:
//! - Rebuild happens ONLY on write operations, not TTL
//! - Double-checked locking prevents duplicate rebuilds
//! - Arc<Vec> for zero-cost reads
@@ -80,6 +80,8 @@ pub struct RepoIndex {
pub npm: RegistryIndex,
pub cargo: RegistryIndex,
pub pypi: RegistryIndex,
pub go: RegistryIndex,
pub raw: RegistryIndex,
}
impl RepoIndex {
@@ -90,6 +92,8 @@ impl RepoIndex {
npm: RegistryIndex::new(),
cargo: RegistryIndex::new(),
pypi: RegistryIndex::new(),
go: RegistryIndex::new(),
raw: RegistryIndex::new(),
}
}
@@ -101,6 +105,8 @@ impl RepoIndex {
"npm" => self.npm.invalidate(),
"cargo" => self.cargo.invalidate(),
"pypi" => self.pypi.invalidate(),
"go" => self.go.invalidate(),
"raw" => self.raw.invalidate(),
_ => {}
}
}
@@ -113,6 +119,8 @@ impl RepoIndex {
"npm" => &self.npm,
"cargo" => &self.cargo,
"pypi" => &self.pypi,
"go" => &self.go,
"raw" => &self.raw,
_ => return Arc::new(Vec::new()),
};
@@ -132,6 +140,8 @@ impl RepoIndex {
"npm" => build_npm_index(storage).await,
"cargo" => build_cargo_index(storage).await,
"pypi" => build_pypi_index(storage).await,
"go" => build_go_index(storage).await,
"raw" => build_raw_index(storage).await,
_ => Vec::new(),
};
info!(registry = registry, count = data.len(), "Index rebuilt");
@@ -142,13 +152,15 @@ impl RepoIndex {
}
/// Get counts for stats (no rebuild, just current state)
pub fn counts(&self) -> (usize, usize, usize, usize, usize) {
pub fn counts(&self) -> (usize, usize, usize, usize, usize, usize, usize) {
(
self.docker.count(),
self.maven.count(),
self.npm.count(),
self.cargo.count(),
self.pypi.count(),
self.go.count(),
self.raw.count(),
)
}
}
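The "double-checked locking prevents duplicate rebuilds" note in the module docs above can be illustrated with a std-only sketch (illustrative names; the real `RegistryIndex` internals may differ):

```rust
use std::sync::{Arc, Mutex, RwLock};

// Lazily rebuilt index: readers take a cheap fast path via Arc clone;
// rebuilds are serialized by `rebuild`, and the dirty flag is re-checked
// under that lock so at most one rebuild runs per invalidation.
struct Index {
    dirty: RwLock<bool>,
    data: RwLock<Arc<Vec<String>>>,
    rebuild: Mutex<()>,
}

impl Index {
    fn get(&self, build: impl Fn() -> Vec<String>) -> Arc<Vec<String>> {
        // First check: index is clean, no rebuild needed
        if !*self.dirty.read().unwrap() {
            return self.data.read().unwrap().clone();
        }
        // Serialize rebuilders; late arrivals block here
        let _guard = self.rebuild.lock().unwrap();
        // Second check: another thread may have rebuilt while we waited
        if *self.dirty.read().unwrap() {
            *self.data.write().unwrap() = Arc::new(build());
            *self.dirty.write().unwrap() = false;
        }
        self.data.read().unwrap().clone()
    }

    fn invalidate(&self) {
        *self.dirty.write().unwrap() = true;
    }
}
```

Since rebuilds happen only on the read path after an explicit `invalidate`, read-heavy workloads never pay for a rebuild until a write actually changes the underlying storage.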
@@ -173,35 +185,41 @@ async fn build_docker_index(storage: &Storage) -> Vec<RepoInfo> {
}
if let Some(rest) = key.strip_prefix("docker/") {
// Support both single-segment and namespaced images:
// docker/alpine/manifests/latest.json → name="alpine"
// docker/library/alpine/manifests/latest.json → name="library/alpine"
let parts: Vec<_> = rest.split('/').collect();
if parts.len() >= 3 && parts[1] == "manifests" && key.ends_with(".json") {
let name = parts[0].to_string();
let entry = repos.entry(name).or_insert((0, 0, 0));
entry.0 += 1;
let manifest_pos = parts.iter().position(|&p| p == "manifests");
if let Some(pos) = manifest_pos {
if pos >= 1 && key.ends_with(".json") {
let name = parts[..pos].join("/");
let entry = repos.entry(name).or_insert((0, 0, 0));
entry.0 += 1;
if let Ok(data) = storage.get(key).await {
if let Ok(m) = serde_json::from_slice::<serde_json::Value>(&data) {
let cfg = m
.get("config")
.and_then(|c| c.get("size"))
.and_then(|s| s.as_u64())
.unwrap_or(0);
let layers: u64 = m
.get("layers")
.and_then(|l| l.as_array())
.map(|arr| {
arr.iter()
.filter_map(|l| l.get("size").and_then(|s| s.as_u64()))
.sum()
})
.unwrap_or(0);
entry.1 += cfg + layers;
if let Ok(data) = storage.get(key).await {
if let Ok(m) = serde_json::from_slice::<serde_json::Value>(&data) {
let cfg = m
.get("config")
.and_then(|c| c.get("size"))
.and_then(|s| s.as_u64())
.unwrap_or(0);
let layers: u64 = m
.get("layers")
.and_then(|l| l.as_array())
.map(|arr| {
arr.iter()
.filter_map(|l| l.get("size").and_then(|s| s.as_u64()))
.sum()
})
.unwrap_or(0);
entry.1 += cfg + layers;
}
}
}
if let Some(meta) = storage.stat(key).await {
if meta.modified > entry.2 {
entry.2 = meta.modified;
if let Some(meta) = storage.stat(key).await {
if meta.modified > entry.2 {
entry.2 = meta.modified;
}
}
}
}
@@ -244,10 +262,16 @@ async fn build_npm_index(storage: &Storage) -> Vec<RepoInfo> {
for key in &keys {
if let Some(rest) = key.strip_prefix("npm/") {
// Pattern: npm/{package}/tarballs/{file}.tgz
// Scoped: npm/@scope/package/tarballs/{file}.tgz
if rest.contains("/tarballs/") && key.ends_with(".tgz") {
let parts: Vec<_> = rest.split('/').collect();
if !parts.is_empty() {
let name = parts[0].to_string();
// Scoped packages: @scope/package → parts[0]="@scope", parts[1]="package"
let name = if parts[0].starts_with('@') && parts.len() >= 4 {
format!("{}/{}", parts[0], parts[1])
} else {
parts[0].to_string()
};
let entry = packages.entry(name).or_insert((0, 0, 0));
entry.0 += 1;
@@ -317,6 +341,57 @@ async fn build_pypi_index(storage: &Storage) -> Vec<RepoInfo> {
to_sorted_vec(packages)
}
async fn build_go_index(storage: &Storage) -> Vec<RepoInfo> {
let keys = storage.list("go/").await;
let mut modules: HashMap<String, (usize, u64, u64)> = HashMap::new();
for key in &keys {
if let Some(rest) = key.strip_prefix("go/") {
// Pattern: go/{module}/@v/{version}.zip
// Count .zip files as versions (authoritative artifacts)
if rest.contains("/@v/") && key.ends_with(".zip") {
// Extract module path: everything before /@v/
if let Some(pos) = rest.rfind("/@v/") {
let module = &rest[..pos];
let entry = modules.entry(module.to_string()).or_insert((0, 0, 0));
entry.0 += 1;
if let Some(meta) = storage.stat(key).await {
entry.1 += meta.size;
if meta.modified > entry.2 {
entry.2 = meta.modified;
}
}
}
}
}
}
to_sorted_vec(modules)
}
async fn build_raw_index(storage: &Storage) -> Vec<RepoInfo> {
let keys = storage.list("raw/").await;
let mut files: HashMap<String, (usize, u64, u64)> = HashMap::new();
for key in &keys {
if let Some(rest) = key.strip_prefix("raw/") {
// Group by top-level directory
let group = rest.split('/').next().unwrap_or(rest).to_string();
let entry = files.entry(group).or_insert((0, 0, 0));
entry.0 += 1;
if let Some(meta) = storage.stat(key).await {
entry.1 += meta.size;
if meta.modified > entry.2 {
entry.2 = meta.modified;
}
}
}
}
to_sorted_vec(files)
}
/// Convert HashMap to sorted Vec<RepoInfo>
fn to_sorted_vec(map: HashMap<String, (usize, u64, u64)>) -> Vec<RepoInfo> {
let mut result: Vec<_> = map
@@ -349,3 +424,165 @@ pub fn paginate<T: Clone>(data: &[T], page: usize, limit: usize) -> (Vec<T>, usi
let end = (start + limit).min(total);
(data[start..end].to_vec(), total)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_paginate_first_page() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let (page, total) = paginate(&data, 1, 3);
assert_eq!(page, vec![1, 2, 3]);
assert_eq!(total, 10);
}
#[test]
fn test_paginate_second_page() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let (page, total) = paginate(&data, 2, 3);
assert_eq!(page, vec![4, 5, 6]);
assert_eq!(total, 10);
}
#[test]
fn test_paginate_last_page_partial() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let (page, total) = paginate(&data, 4, 3);
assert_eq!(page, vec![10]);
assert_eq!(total, 10);
}
#[test]
fn test_paginate_beyond_range() {
let data = vec![1, 2, 3];
let (page, total) = paginate(&data, 5, 3);
assert!(page.is_empty());
assert_eq!(total, 3);
}
#[test]
fn test_paginate_empty_data() {
let data: Vec<i32> = vec![];
let (page, total) = paginate(&data, 1, 10);
assert!(page.is_empty());
assert_eq!(total, 0);
}
#[test]
fn test_paginate_page_zero() {
// page 0 with saturating_sub becomes 0, so start = 0
let data = vec![1, 2, 3];
let (page, _) = paginate(&data, 0, 2);
assert_eq!(page, vec![1, 2]);
}
#[test]
fn test_paginate_large_limit() {
let data = vec![1, 2, 3];
let (page, total) = paginate(&data, 1, 100);
assert_eq!(page, vec![1, 2, 3]);
assert_eq!(total, 3);
}
#[test]
fn test_registry_index_new() {
let idx = RegistryIndex::new();
assert_eq!(idx.count(), 0);
assert!(idx.is_dirty());
}
#[test]
fn test_registry_index_invalidate() {
let idx = RegistryIndex::new();
// Initially dirty
assert!(idx.is_dirty());
// Set data clears dirty
idx.set(vec![RepoInfo {
name: "test".to_string(),
versions: 1,
size: 100,
updated: "2026-01-01".to_string(),
}]);
assert!(!idx.is_dirty());
assert_eq!(idx.count(), 1);
// Invalidate makes it dirty again
idx.invalidate();
assert!(idx.is_dirty());
}
#[test]
fn test_registry_index_get_cached() {
let idx = RegistryIndex::new();
idx.set(vec![
RepoInfo {
name: "a".to_string(),
versions: 2,
size: 200,
updated: "today".to_string(),
},
RepoInfo {
name: "b".to_string(),
versions: 1,
size: 100,
updated: "yesterday".to_string(),
},
]);
let cached = idx.get_cached();
assert_eq!(cached.len(), 2);
assert_eq!(cached[0].name, "a");
}
#[test]
fn test_registry_index_default() {
let idx = RegistryIndex::default();
assert_eq!(idx.count(), 0);
}
#[test]
fn test_repo_index_new() {
let idx = RepoIndex::new();
let (d, m, n, c, p, g, r) = idx.counts();
assert_eq!((d, m, n, c, p, g, r), (0, 0, 0, 0, 0, 0, 0));
}
#[test]
fn test_repo_index_invalidate() {
let idx = RepoIndex::new();
// Should not panic for any registry
idx.invalidate("docker");
idx.invalidate("maven");
idx.invalidate("npm");
idx.invalidate("cargo");
idx.invalidate("pypi");
idx.invalidate("raw");
idx.invalidate("unknown"); // should be a no-op
}
#[test]
fn test_repo_index_default() {
let idx = RepoIndex::default();
let (d, m, n, c, p, g, r) = idx.counts();
assert_eq!((d, m, n, c, p, g, r), (0, 0, 0, 0, 0, 0, 0));
}
#[test]
fn test_to_sorted_vec() {
let mut map = std::collections::HashMap::new();
map.insert("zebra".to_string(), (3usize, 100u64, 0u64));
map.insert("alpha".to_string(), (1, 50, 1700000000));
let result = to_sorted_vec(map);
assert_eq!(result.len(), 2);
assert_eq!(result[0].name, "alpha");
assert_eq!(result[0].versions, 1);
assert_eq!(result[0].size, 50);
assert_ne!(result[0].updated, "N/A");
assert_eq!(result[1].name, "zebra");
assert_eq!(result[1].versions, 3);
assert_eq!(result[1].updated, "N/A"); // modified = 0
}
}


@@ -1,7 +1,6 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! Secrets management for NORA
//!
//! Provides a trait-based architecture for secrets providers:


@@ -1,9 +1,14 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use argon2::{
password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
Argon2,
};
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};
use std::time::{SystemTime, UNIX_EPOCH};
use thiserror::Error;
@@ -66,8 +71,12 @@ pub struct TokenStore {
impl TokenStore {
/// Create a new token store
pub fn new(storage_path: &Path) -> Self {
// Ensure directory exists
// Ensure directory exists with restricted permissions
let _ = fs::create_dir_all(storage_path);
#[cfg(unix)]
{
let _ = fs::set_permissions(storage_path, fs::Permissions::from_mode(0o700));
}
Self {
storage_path: storage_path.to_path_buf(),
}
@@ -87,7 +96,9 @@ impl TokenStore {
TOKEN_PREFIX,
Uuid::new_v4().to_string().replace("-", "")
);
let token_hash = hash_token_argon2(&raw_token)?;
// Use SHA256 of token as filename (deterministic, for lookup)
let file_id = sha256_hex(&raw_token);
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
@@ -97,7 +108,7 @@ impl TokenStore {
let expires_at = now + (ttl_days * 24 * 60 * 60);
let info = TokenInfo {
token_hash,
user: user.to_string(),
created_at: now,
expires_at,
@@ -106,13 +117,12 @@ impl TokenStore {
role,
};
// Save to file with restricted permissions
let file_path = self.storage_path.join(format!("{}.json", &file_id[..16]));
let json =
serde_json::to_string_pretty(&info).map_err(|e| TokenError::Storage(e.to_string()))?;
fs::write(&file_path, &json).map_err(|e| TokenError::Storage(e.to_string()))?;
set_file_permissions_600(&file_path);
Ok(raw_token)
}
@@ -123,22 +133,43 @@ impl TokenStore {
return Err(TokenError::InvalidFormat);
}
let file_id = sha256_hex(token);
let file_path = self.storage_path.join(format!("{}.json", &file_id[..16]));
// TOCTOU fix: read directly, handle NotFound from IO error
let content = match fs::read_to_string(&file_path) {
Ok(c) => c,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
return Err(TokenError::NotFound);
}
Err(e) => return Err(TokenError::Storage(e.to_string())),
};
let mut info: TokenInfo =
serde_json::from_str(&content).map_err(|e| TokenError::Storage(e.to_string()))?;
// Verify hash: try Argon2id first, fall back to legacy SHA256
let hash_valid = if info.token_hash.starts_with("$argon2") {
verify_token_argon2(token, &info.token_hash)
} else {
// Legacy SHA256 hash (no salt) — verify and migrate
let legacy_hash = sha256_hex(token);
if info.token_hash == legacy_hash {
// Migrate to Argon2id
if let Ok(new_hash) = hash_token_argon2(token) {
info.token_hash = new_hash;
if let Ok(json) = serde_json::to_string_pretty(&info) {
let _ = fs::write(&file_path, &json);
set_file_permissions_600(&file_path);
}
}
true
} else {
false
}
};
if !hash_valid {
return Err(TokenError::NotFound);
}
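The "TOCTOU fix" comments above replace a racy `exists()` pre-check with a single read that classifies `NotFound` from the IO error itself. A self-contained sketch of that pattern, with illustrative names (not the repository's actual signatures):

```rust
use std::io::ErrorKind;
use std::path::Path;

/// Read a file without an exists()/read race: attempt the read once
/// and distinguish "absent" from other IO failures via the error kind.
/// Sketch of the TOCTOU-safe pattern; names are illustrative only.
fn read_if_present(path: &Path) -> Result<Option<String>, String> {
    match std::fs::read_to_string(path) {
        Ok(contents) => Ok(Some(contents)),
        Err(e) if e.kind() == ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e.to_string()),
    }
}
```

The same shape applies to deletion: call `remove_file` directly and map `ErrorKind::NotFound` to the domain error, rather than checking `exists()` first.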
@@ -155,7 +186,8 @@ impl TokenStore {
// Update last_used
info.last_used = Some(now);
if let Ok(json) = serde_json::to_string_pretty(&info) {
let _ = fs::write(&file_path, &json);
set_file_permissions_600(&file_path);
}
Ok((info.user, info.role))
@@ -185,13 +217,12 @@ impl TokenStore {
pub fn revoke_token(&self, hash_prefix: &str) -> Result<(), TokenError> {
let file_path = self.storage_path.join(format!("{}.json", hash_prefix));
// TOCTOU fix: try remove directly
match fs::remove_file(&file_path) {
Ok(()) => Ok(()),
Err(e) if e.kind() == std::io::ErrorKind::NotFound => Err(TokenError::NotFound),
Err(e) => Err(TokenError::Storage(e.to_string())),
}
}
/// Revoke all tokens for a user
@@ -214,13 +245,41 @@ impl TokenStore {
}
}
/// Hash a token using Argon2id with random salt
fn hash_token_argon2(token: &str) -> Result<String, TokenError> {
let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();
argon2
.hash_password(token.as_bytes(), &salt)
.map(|h| h.to_string())
.map_err(|e| TokenError::Storage(format!("hash error: {e}")))
}
/// Verify a token against an Argon2id hash
fn verify_token_argon2(token: &str, hash: &str) -> bool {
match PasswordHash::new(hash) {
Ok(parsed) => Argon2::default()
.verify_password(token.as_bytes(), &parsed)
.is_ok(),
Err(_) => false,
}
}
/// SHA256 hex digest (used for file naming and legacy hash verification)
fn sha256_hex(input: &str) -> String {
let mut hasher = Sha256::new();
hasher.update(input.as_bytes());
format!("{:x}", hasher.finalize())
}
/// Set file permissions to 600 (owner read/write only)
fn set_file_permissions_600(path: &Path) {
#[cfg(unix)]
{
let _ = fs::set_permissions(path, fs::Permissions::from_mode(0o600));
}
}
#[derive(Debug, Error)]
pub enum TokenError {
#[error("Invalid token format")]
@@ -254,6 +313,19 @@ mod tests {
assert_eq!(token.len(), 4 + 32); // prefix + uuid without dashes
}
#[test]
fn test_token_hash_is_argon2() {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let tokens = store.list_tokens("testuser");
assert!(tokens[0].token_hash.starts_with("$argon2"));
}
#[test]
fn test_verify_valid_token() {
let temp_dir = TempDir::new().unwrap();
@@ -291,24 +363,80 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
// Create token and manually set it as expired
let token = store
.create_token("testuser", 1, None, Role::Write)
.unwrap();
let file_id = sha256_hex(&token);
let file_path = temp_dir.path().join(format!("{}.json", &file_id[..16]));
// Read and modify the token to be expired
let content = std::fs::read_to_string(&file_path).unwrap();
let mut info: TokenInfo = serde_json::from_str(&content).unwrap();
info.expires_at = 0;
std::fs::write(&file_path, serde_json::to_string(&info).unwrap()).unwrap();
// Token should now be expired
let result = store.verify_token(&token);
assert!(matches!(result, Err(TokenError::Expired)));
}
#[test]
fn test_legacy_sha256_migration() {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
// Simulate a legacy token with SHA256 hash
let raw_token = "nra_00112233445566778899aabbccddeeff";
let legacy_hash = sha256_hex(raw_token);
let file_id = sha256_hex(raw_token);
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let info = TokenInfo {
token_hash: legacy_hash.clone(),
user: "legacyuser".to_string(),
created_at: now,
expires_at: now + 86400,
last_used: None,
description: None,
role: Role::Read,
};
let file_path = temp_dir.path().join(format!("{}.json", &file_id[..16]));
fs::write(&file_path, serde_json::to_string_pretty(&info).unwrap()).unwrap();
// Verify should work with legacy hash
let (user, role) = store.verify_token(raw_token).unwrap();
assert_eq!(user, "legacyuser");
assert_eq!(role, Role::Read);
// After verification, hash should be migrated to Argon2id
let content = fs::read_to_string(&file_path).unwrap();
let updated: TokenInfo = serde_json::from_str(&content).unwrap();
assert!(updated.token_hash.starts_with("$argon2"));
}
#[test]
fn test_file_permissions() {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let file_id = sha256_hex(&token);
let file_path = temp_dir.path().join(format!("{}.json", &file_id[..16]));
#[cfg(unix)]
{
let metadata = fs::metadata(&file_path).unwrap();
let mode = metadata.permissions().mode() & 0o777;
assert_eq!(mode, 0o600);
}
}
#[test]
fn test_list_tokens() {
let temp_dir = TempDir::new().unwrap();
@@ -336,16 +464,13 @@ mod tests {
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let file_id = sha256_hex(&token);
let hash_prefix = &file_id[..16];
// Verify token works
assert!(store.verify_token(&token).is_ok());
// Revoke
store.revoke_token(hash_prefix).unwrap();
// Verify token no longer works
let result = store.verify_token(&token);
assert!(matches!(result, Err(TokenError::NotFound)));
}
@@ -384,10 +509,8 @@ mod tests {
.create_token("testuser", 30, None, Role::Write)
.unwrap();
// First verification
store.verify_token(&token).unwrap();
// Check last_used is set
let tokens = store.list_tokens("testuser");
assert!(tokens[0].last_used.is_some());
}


@@ -23,6 +23,8 @@ pub struct RegistryStats {
pub npm: usize,
pub cargo: usize,
pub pypi: usize,
pub go: usize,
pub raw: usize,
}
#[derive(Serialize)]
@@ -114,14 +116,18 @@ pub async fn api_stats(State(state): State<Arc<AppState>>) -> Json<RegistryStats
let _ = state.repo_index.get("npm", &state.storage).await;
let _ = state.repo_index.get("cargo", &state.storage).await;
let _ = state.repo_index.get("pypi", &state.storage).await;
let _ = state.repo_index.get("go", &state.storage).await;
let _ = state.repo_index.get("raw", &state.storage).await;
let (docker, maven, npm, cargo, pypi, go, raw) = state.repo_index.counts();
Json(RegistryStats {
docker,
maven,
npm,
cargo,
pypi,
go,
raw,
})
}
@@ -132,6 +138,8 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
let npm_repos = state.repo_index.get("npm", &state.storage).await;
let cargo_repos = state.repo_index.get("cargo", &state.storage).await;
let pypi_repos = state.repo_index.get("pypi", &state.storage).await;
let go_repos = state.repo_index.get("go", &state.storage).await;
let raw_repos = state.repo_index.get("raw", &state.storage).await;
// Calculate sizes from cached index
let docker_size: u64 = docker_repos.iter().map(|r| r.size).sum();
@@ -139,7 +147,10 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
let npm_size: u64 = npm_repos.iter().map(|r| r.size).sum();
let cargo_size: u64 = cargo_repos.iter().map(|r| r.size).sum();
let pypi_size: u64 = pypi_repos.iter().map(|r| r.size).sum();
let go_size: u64 = go_repos.iter().map(|r| r.size).sum();
let raw_size: u64 = raw_repos.iter().map(|r| r.size).sum();
let total_storage =
docker_size + maven_size + npm_size + cargo_size + pypi_size + go_size + raw_size;
// Count total versions/tags, not just repositories
let docker_versions: usize = docker_repos.iter().map(|r| r.versions).sum();
@@ -147,8 +158,15 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
let npm_versions: usize = npm_repos.iter().map(|r| r.versions).sum();
let cargo_versions: usize = cargo_repos.iter().map(|r| r.versions).sum();
let pypi_versions: usize = pypi_repos.iter().map(|r| r.versions).sum();
let go_versions: usize = go_repos.iter().map(|r| r.versions).sum();
let raw_versions: usize = raw_repos.iter().map(|r| r.versions).sum();
let total_artifacts = docker_versions
+ maven_versions
+ npm_versions
+ cargo_versions
+ pypi_versions
+ go_versions
+ raw_versions;
let global_stats = GlobalStats {
downloads: state.metrics.downloads.load(Ordering::Relaxed),
@@ -194,18 +212,37 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
uploads: 0,
size_bytes: pypi_size,
},
RegistryCardStats {
name: "go".to_string(),
artifact_count: go_versions,
downloads: state.metrics.get_registry_downloads("go"),
uploads: 0,
size_bytes: go_size,
},
RegistryCardStats {
name: "raw".to_string(),
artifact_count: raw_versions,
downloads: state.metrics.get_registry_downloads("raw"),
uploads: state.metrics.get_registry_uploads("raw"),
size_bytes: raw_size,
},
];
let mount_points = vec![
MountPoint {
registry: "Docker".to_string(),
mount_path: "/v2/".to_string(),
proxy_upstream: state.config.docker.upstreams.first().map(|u| u.url.clone()),
},
MountPoint {
registry: "Maven".to_string(),
mount_path: "/maven2/".to_string(),
proxy_upstream: state
.config
.maven
.proxies
.first()
.map(|p| p.url().to_string()),
},
MountPoint {
registry: "npm".to_string(),
@@ -222,6 +259,16 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
mount_path: "/simple/".to_string(),
proxy_upstream: state.config.pypi.proxy.clone(),
},
MountPoint {
registry: "Go".to_string(),
mount_path: "/go/".to_string(),
proxy_upstream: state.config.go.proxy.clone(),
},
MountPoint {
registry: "Raw".to_string(),
mount_path: "/raw/".to_string(),
proxy_upstream: None,
},
];
let activity = state.activity.recent(20);
@@ -368,12 +415,32 @@ pub async fn get_registry_stats(storage: &Storage) -> RegistryStats {
.collect::<HashSet<_>>()
.len();
let go = all_keys
.iter()
.filter(|k| k.starts_with("go/") && k.ends_with(".zip"))
.filter_map(|k| {
let rest = k.strip_prefix("go/")?;
let pos = rest.rfind("/@v/")?;
Some(rest[..pos].to_string())
})
.collect::<HashSet<_>>()
.len();
let raw = all_keys
.iter()
.filter(|k| k.starts_with("raw/"))
.filter_map(|k| k.strip_prefix("raw/")?.split('/').next())
.collect::<HashSet<_>>()
.len();
RegistryStats {
docker,
maven,
npm,
cargo,
pypi,
go,
raw,
}
}
@@ -869,6 +936,32 @@ pub async fn get_pypi_detail(storage: &Storage, name: &str) -> PackageDetail {
PackageDetail { versions }
}
pub async fn get_go_detail(storage: &Storage, module: &str) -> PackageDetail {
let prefix = format!("go/{}/@v/", module);
let keys = storage.list(&prefix).await;
let mut versions = Vec::new();
for key in keys.iter().filter(|k| k.ends_with(".zip")) {
if let Some(rest) = key.strip_prefix(&prefix) {
if let Some(version) = rest.strip_suffix(".zip") {
let (size, published) = if let Some(meta) = storage.stat(key).await {
(meta.size, format_timestamp(meta.modified))
} else {
(0, "N/A".to_string())
};
versions.push(VersionInfo {
version: version.to_string(),
size,
published,
});
}
}
}
versions.sort_by(|a, b| b.version.cmp(&a.version));
PackageDetail { versions }
}
fn extract_pypi_version(name: &str, filename: &str) -> Option<String> {
// Handle both .tar.gz and .whl files
let clean_name = name.replace('-', "_");
@@ -892,3 +985,26 @@ fn extract_pypi_version(name: &str, filename: &str) -> Option<String> {
None
}
}
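`extract_pypi_version` has to handle both sdists (`.tar.gz`) and wheels (`.whl`). Wheel filenames follow a fixed dash-separated layout (PEP 427), where the distribution name has hyphens replaced by underscores, so the version is always the second field. A minimal sketch of that wheel case alone — `wheel_version` is a hypothetical standalone helper, not this repository's function:

```rust
/// Extract the version from a wheel filename per PEP 427:
/// {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl
/// Hypothetical helper for illustration; assumes a normalized
/// distribution name (hyphens already replaced by underscores).
fn wheel_version(filename: &str) -> Option<String> {
    let stem = filename.strip_suffix(".whl")?;
    let mut parts = stem.split('-');
    let _distribution = parts.next()?; // e.g. "requests"
    parts.next().map(|version| version.to_string())
}
```

For example, `requests-2.31.0-py3-none-any.whl` yields `2.31.0`; a non-wheel filename yields `None`.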
pub async fn get_raw_detail(storage: &Storage, group: &str) -> PackageDetail {
let prefix = format!("raw/{}/", group);
let keys = storage.list(&prefix).await;
let mut versions = Vec::new();
for key in &keys {
if let Some(filename) = key.strip_prefix(&prefix) {
let (size, published) = if let Some(meta) = storage.stat(key).await {
(meta.size, format_timestamp(meta.modified))
} else {
(0, "N/A".to_string())
};
versions.push(VersionInfo {
version: filename.to_string(),
size,
published,
});
}
}
PackageDetail { versions }
}


@@ -90,7 +90,7 @@ fn sidebar_dark(active_page: Option<&str>, t: &Translations) -> String {
let docker_icon = r#"<path fill="currentColor" d="M13.983 11.078h2.119a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.119a.185.185 0 00-.185.185v1.888c0 .102.083.185.185.185m-2.954-5.43h2.118a.186.186 0 00.186-.186V3.574a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.186m0 2.716h2.118a.187.187 0 00.186-.186V6.29a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.887c0 .102.082.185.185.186m-2.93 0h2.12a.186.186 0 00.184-.186V6.29a.185.185 0 00-.185-.185H8.1a.185.185 0 00-.185.185v1.887c0 .102.083.185.185.186m-2.964 0h2.119a.186.186 0 00.185-.186V6.29a.185.185 0 00-.185-.185H5.136a.186.186 0 00-.186.185v1.887c0 .102.084.185.186.186m5.893 2.715h2.118a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.185m-2.93 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.083.185.185.185m-2.964 0h2.119a.185.185 0 00.185-.185V9.006a.185.185 0 00-.185-.186h-2.12a.186.186 0 00-.185.186v1.887c0 .102.084.185.186.185m-2.92 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.082.185.185.185M23.763 9.89c-.065-.051-.672-.51-1.954-.51-.338.001-.676.03-1.01.087-.248-1.7-1.653-2.53-1.716-2.566l-.344-.199-.226.327c-.284.438-.49.922-.612 1.43-.23.97-.09 1.882.403 2.661-.595.332-1.55.413-1.744.42H.751a.751.751 0 00-.75.748 11.376 11.376 0 00.692 4.062c.545 1.428 1.355 2.48 2.41 3.124 1.18.723 3.1 1.137 5.275 1.137.983.003 1.963-.086 2.93-.266a12.248 12.248 0 003.823-1.389c.98-.567 1.86-1.288 2.61-2.136 1.252-1.418 1.998-2.997 2.553-4.4h.221c1.372 0 2.215-.549 2.68-1.009.309-.293.55-.65.707-1.046l.098-.288Z"/>"#;
let maven_icon = r#"<path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm-1 17.93c-3.95-.49-7-3.85-7-7.93 0-.62.08-1.21.21-1.79L9 15v1c0 1.1.9 2 2 2v1.93zm6.9-2.54c-.26-.81-1-1.39-1.9-1.39h-1v-3c0-.55-.45-1-1-1H8v-2h2c.55 0 1-.45 1-1V7h2c1.1 0 2-.9 2-2v-.41c2.93 1.19 5 4.06 5 7.41 0 2.08-.8 3.97-2.1 5.39z"/>"#;
let npm_icon = r#"<path fill="currentColor" d="M0 7.334v8h6.666v1.332H12v-1.332h12v-8H0zm6.666 6.664H5.334v-4H3.999v4H1.335V8.667h5.331v5.331zm4 0v1.336H8.001V8.667h5.334v5.332h-2.669v-.001zm12.001 0h-1.33v-4h-1.336v4h-1.335v-4h-1.33v4h-2.671V8.667h8.002v5.331zM10.665 10H12v2.667h-1.335V10z"/>"#;
let cargo_icon = r#"<path fill="currentColor" d="M6 2h12a1 1 0 011 1v8a1 1 0 01-1 1H6a1 1 0 01-1-1V3a1 1 0 011-1zm0 2v2h12V4H6zm0 3v2h12V7H6zM2 14h8a1 1 0 011 1v6a1 1 0 01-1 1H2a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16H2zM14 14h8a1 1 0 011 1v6a1 1 0 01-1 1h-8a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16h-8z"/>"#;
let pypi_icon = r#"<path fill="currentColor" d="M14.25.18l.9.2.73.26.59.3.45.32.34.34.25.34.16.33.1.3.04.26.02.2-.01.13V8.5l-.05.63-.13.55-.21.46-.26.38-.3.31-.33.25-.35.19-.35.14-.33.1-.3.07-.26.04-.21.02H8.83l-.69.05-.59.14-.5.22-.41.27-.33.32-.27.35-.2.36-.15.37-.1.35-.07.32-.04.27-.02.21v3.06H3.23l-.21-.03-.28-.07-.32-.12-.35-.18-.36-.26-.36-.36-.35-.46-.32-.59-.28-.73-.21-.88-.14-1.05L0 11.97l.06-1.22.16-1.04.24-.87.32-.71.36-.57.4-.44.42-.33.42-.24.4-.16.36-.1.32-.05.24-.01h.16l.06.01h8.16v-.83H6.24l-.01-2.75-.02-.37.05-.34.11-.31.17-.28.25-.26.31-.23.38-.2.44-.18.51-.15.58-.12.64-.1.71-.06.77-.04.84-.02 1.27.05 1.07.13zm-6.3 1.98l-.23.33-.08.41.08.41.23.34.33.22.41.09.41-.09.33-.22.23-.34.08-.41-.08-.41-.23-.33-.33-.22-.41-.09-.41.09-.33.22zM21.1 6.11l.28.06.32.12.35.18.36.27.36.35.35.47.32.59.28.73.21.88.14 1.04.05 1.23-.06 1.23-.16 1.04-.24.86-.32.71-.36.57-.4.45-.42.33-.42.24-.4.16-.36.09-.32.05-.24.02-.16-.01h-8.22v.82h5.84l.01 2.76.02.36-.05.34-.11.31-.17.29-.25.25-.31.24-.38.2-.44.17-.51.15-.58.13-.64.09-.71.07-.77.04-.84.01-1.27-.04-1.07-.14-.9-.2-.73-.25-.59-.3-.45-.33-.34-.34-.25-.34-.16-.33-.1-.3-.04-.25-.02-.2.01-.13v-5.34l.05-.64.13-.54.21-.46.26-.38.3-.32.33-.24.35-.2.35-.14.33-.1.3-.06.26-.04.21-.02.13-.01h5.84l.69-.05.59-.14.5-.21.41-.28.33-.32.27-.35.2-.36.15-.36.1-.35.07-.32.04-.28.02-.21V6.07h2.09l.14.01.21.03zm-6.47 14.25l-.23.33-.08.41.08.41.23.33.33.23.41.08.41-.08.33-.23.23-.33.08-.41-.08-.41-.23-.33-.33-.23-.41-.08-.41.08-.33.23z"/>"#;
// Dashboard label is translated, registry names stay as-is
@@ -109,6 +109,13 @@ fn sidebar_dark(active_page: Option<&str>, t: &Translations) -> String {
("npm", "/ui/npm", "npm", npm_icon, false),
("cargo", "/ui/cargo", "Cargo", cargo_icon, false),
("pypi", "/ui/pypi", "PyPI", pypi_icon, false),
(
"go",
"/ui/go",
"Go",
r#"<path fill="currentColor" d="M2.64 9.56s.24-.14.65-.38c.41-.24.97-.5 1.63-.7A7.85 7.85 0 017.53 8c.86 0 1.67.17 2.37.52.7.35 1.26.87 1.63 1.51.37.64.54 1.41.54 2.27v.2h-2.7v-.16c0-.47-.09-.86-.28-1.15a1.7 1.7 0 00-.77-.67 2.7 2.7 0 00-1.14-.22c-.56 0-1.06.13-1.46.4-.41.27-.72.66-.93 1.16-.21.5-.31 1.1-.31 1.8 0 .69.1 1.28.32 1.78.21.5.53.88.94 1.15.41.27.9.4 1.47.4.38 0 .73-.06 1.04-.17.31-.12.56-.29.74-.52.19-.23.29-.51.29-.84v-.14H7.15v-1.76h5.07v1.3c0 .8-.17 1.48-.52 2.04a3.46 3.46 0 01-1.5 1.3c-.66.3-1.44.45-2.35.45-.99 0-1.87-.18-2.63-.55a4.2 4.2 0 01-1.77-1.59C3.15 14.82 3 13.94 3 12.89v-.28c0-1.04.16-1.93.48-2.65a3.08 3.08 0 01-.84-.4zm12.1-1.34c.92 0 1.74.18 2.44.55a3.96 3.96 0 011.66 1.59c.4.7.6 1.54.6 2.53v.28c0 .99-.2 1.83-.6 2.53a3.96 3.96 0 01-1.66 1.59c-.7.37-1.52.55-2.44.55s-1.74-.18-2.44-.55a3.96 3.96 0 01-1.66-1.59c-.4-.7-.6-1.54-.6-2.53v-.28c0-.99.2-1.83.6-2.53a3.96 3.96 0 011.66-1.59c.7-.37 1.52-.55 2.44-.55zm0 2.12c-.44 0-.82.12-1.14.37-.32.24-.56.6-.73 1.06-.17.46-.26 1.01-.26 1.65v.28c0 .64.09 1.19.26 1.65.17.46.41.82.73 1.06.32.25.7.37 1.14.37.44 0 .82-.12 1.14-.37.32-.24.56-.6.73-1.06.17-.46.26-1.01.26-1.65v-.28c0-.64-.09-1.19-.26-1.65a2.17 2.17 0 00-.73-1.06 1.78 1.78 0 00-1.14-.37z"/>"#,
false,
),
];
let nav_html: String = nav_items.iter().map(|(id, href, label, icon_path, is_stroke)| {
@@ -277,15 +284,15 @@ pub fn render_registry_card(
) -> String {
format!(
r##"
<a href="{}" id="registry-{}" class="block bg-[#1e293b] rounded-lg border border-slate-700 p-3 hover:border-blue-400 transition-all">
<div class="flex items-center justify-between mb-2">
<svg class="w-6 h-6 text-slate-400" fill="currentColor" viewBox="0 0 24 24">
{}
</svg>
<span class="text-[10px] font-medium text-green-400 bg-green-400/10 px-1.5 py-0.5 rounded-full">{}</span>
</div>
<div class="text-sm font-semibold text-slate-200 mb-2">{}</div>
<div class="grid grid-cols-2 gap-1 text-xs">
<div>
<span class="text-slate-500">{}</span>
<div class="text-slate-300 font-medium">{}</div>
@@ -333,9 +340,9 @@ pub fn render_mount_points_table(
format!(
r##"
<tr class="border-b border-slate-700">
<td class="px-4 py-3 text-slate-300">{}</td>
<td class="px-4 py-3 font-mono text-blue-400">{}</td>
<td class="px-4 py-3 text-slate-400">{}</td>
</tr>
"##,
registry, mount_path, proxy_display
@@ -358,7 +365,7 @@ pub fn render_mount_points_table(
<th class="px-4 py-2">{}</th>
</tr>
</thead>
<tbody class="px-4">
<tbody>
{}
</tbody>
</table>
@@ -388,11 +395,11 @@ pub fn render_activity_row(
format!(
r##"
<tr class="border-b border-slate-700/50 text-sm">
<td class="px-4 py-2 text-slate-500">{}</td>
<td class="px-4 py-2 font-medium {}"><span class="px-2 py-0.5 bg-slate-700 rounded">{}</span></td>
<td class="px-4 py-2 text-slate-300 font-mono text-xs">{}</td>
<td class="px-4 py-2 text-slate-400">{}</td>
<td class="px-4 py-2 text-slate-500">{}</td>
</tr>
"##,
timestamp,
@@ -424,7 +431,7 @@ pub fn render_activity_log(rows: &str, t: &Translations) -> String {
<th class="px-4 py-2">{}</th>
</tr>
</thead>
<tbody class="px-4">
<tbody>
{}
</tbody>
</table>
@@ -490,7 +497,7 @@ fn sidebar(active_page: Option<&str>) -> String {
let docker_icon = r#"<path fill="currentColor" d="M13.983 11.078h2.119a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.119a.185.185 0 00-.185.185v1.888c0 .102.083.185.185.185m-2.954-5.43h2.118a.186.186 0 00.186-.186V3.574a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.186m0 2.716h2.118a.187.187 0 00.186-.186V6.29a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.887c0 .102.082.185.185.186m-2.93 0h2.12a.186.186 0 00.184-.186V6.29a.185.185 0 00-.185-.185H8.1a.185.185 0 00-.185.185v1.887c0 .102.083.185.185.186m-2.964 0h2.119a.186.186 0 00.185-.186V6.29a.185.185 0 00-.185-.185H5.136a.186.186 0 00-.186.185v1.887c0 .102.084.185.186.186m5.893 2.715h2.118a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.185m-2.93 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.083.185.185.185m-2.964 0h2.119a.185.185 0 00.185-.185V9.006a.185.185 0 00-.185-.186h-2.12a.186.186 0 00-.185.186v1.887c0 .102.084.185.186.185m-2.92 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.082.185.185.185M23.763 9.89c-.065-.051-.672-.51-1.954-.51-.338.001-.676.03-1.01.087-.248-1.7-1.653-2.53-1.716-2.566l-.344-.199-.226.327c-.284.438-.49.922-.612 1.43-.23.97-.09 1.882.403 2.661-.595.332-1.55.413-1.744.42H.751a.751.751 0 00-.75.748 11.376 11.376 0 00.692 4.062c.545 1.428 1.355 2.48 2.41 3.124 1.18.723 3.1 1.137 5.275 1.137.983.003 1.963-.086 2.93-.266a12.248 12.248 0 003.823-1.389c.98-.567 1.86-1.288 2.61-2.136 1.252-1.418 1.998-2.997 2.553-4.4h.221c1.372 0 2.215-.549 2.68-1.009.309-.293.55-.65.707-1.046l.098-.288Z"/>"#;
let maven_icon = r#"<path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm-1 17.93c-3.95-.49-7-3.85-7-7.93 0-.62.08-1.21.21-1.79L9 15v1c0 1.1.9 2 2 2v1.93zm6.9-2.54c-.26-.81-1-1.39-1.9-1.39h-1v-3c0-.55-.45-1-1-1H8v-2h2c.55 0 1-.45 1-1V7h2c1.1 0 2-.9 2-2v-.41c2.93 1.19 5 4.06 5 7.41 0 2.08-.8 3.97-2.1 5.39z"/>"#;
let npm_icon = r#"<path fill="currentColor" d="M0 7.334v8h6.666v1.332H12v-1.332h12v-8H0zm6.666 6.664H5.334v-4H3.999v4H1.335V8.667h5.331v5.331zm4 0v1.336H8.001V8.667h5.334v5.332h-2.669v-.001zm12.001 0h-1.33v-4h-1.336v4h-1.335v-4h-1.33v4h-2.671V8.667h8.002v5.331zM10.665 10H12v2.667h-1.335V10z"/>"#;
let cargo_icon = r#"<path fill="currentColor" d="M6 2h12a1 1 0 011 1v8a1 1 0 01-1 1H6a1 1 0 01-1-1V3a1 1 0 011-1zm0 2v2h12V4H6zm0 3v2h12V7H6zM2 14h8a1 1 0 011 1v6a1 1 0 01-1 1H2a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16H2zM14 14h8a1 1 0 011 1v6a1 1 0 01-1 1h-8a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16h-8z"/>"#;
let pypi_icon = r#"<path fill="currentColor" d="M14.25.18l.9.2.73.26.59.3.45.32.34.34.25.34.16.33.1.3.04.26.02.2-.01.13V8.5l-.05.63-.13.55-.21.46-.26.38-.3.31-.33.25-.35.19-.35.14-.33.1-.3.07-.26.04-.21.02H8.83l-.69.05-.59.14-.5.22-.41.27-.33.32-.27.35-.2.36-.15.37-.1.35-.07.32-.04.27-.02.21v3.06H3.23l-.21-.03-.28-.07-.32-.12-.35-.18-.36-.26-.36-.36-.35-.46-.32-.59-.28-.73-.21-.88-.14-1.05L0 11.97l.06-1.22.16-1.04.24-.87.32-.71.36-.57.4-.44.42-.33.42-.24.4-.16.36-.1.32-.05.24-.01h.16l.06.01h8.16v-.83H6.24l-.01-2.75-.02-.37.05-.34.11-.31.17-.28.25-.26.31-.23.38-.2.44-.18.51-.15.58-.12.64-.1.71-.06.77-.04.84-.02 1.27.05 1.07.13zm-6.3 1.98l-.23.33-.08.41.08.41.23.34.33.22.41.09.41-.09.33-.22.23-.34.08-.41-.08-.41-.23-.33-.33-.22-.41-.09-.41.09-.33.22zM21.1 6.11l.28.06.32.12.35.18.36.27.36.35.35.47.32.59.28.73.21.88.14 1.04.05 1.23-.06 1.23-.16 1.04-.24.86-.32.71-.36.57-.4.45-.42.33-.42.24-.4.16-.36.09-.32.05-.24.02-.16-.01h-8.22v.82h5.84l.01 2.76.02.36-.05.34-.11.31-.17.29-.25.25-.31.24-.38.2-.44.17-.51.15-.58.13-.64.09-.71.07-.77.04-.84.01-1.27-.04-1.07-.14-.9-.2-.73-.25-.59-.3-.45-.33-.34-.34-.25-.34-.16-.33-.1-.3-.04-.25-.02-.2.01-.13v-5.34l.05-.64.13-.54.21-.46.26-.38.3-.32.33-.24.35-.2.35-.14.33-.1.3-.06.26-.04.21-.02.13-.01h5.84l.69-.05.59-.14.5-.21.41-.28.33-.32.27-.35.2-.36.15-.36.1-.35.07-.32.04-.28.02-.21V6.07h2.09l.14.01.21.03zm-6.47 14.25l-.23.33-.08.41.08.41.23.33.33.23.41.08.41-.08.33-.23.23-.33.08-.41-.08-.41-.23-.33-.33-.23-.41-.08-.41.08-.33.23z"/>"#;
let nav_items = [
@@ -506,6 +513,13 @@ fn sidebar(active_page: Option<&str>) -> String {
("npm", "/ui/npm", "npm", npm_icon, false),
("cargo", "/ui/cargo", "Cargo", cargo_icon, false),
("pypi", "/ui/pypi", "PyPI", pypi_icon, false),
(
"go",
"/ui/go",
"Go",
r#"<path fill="currentColor" d="M2.64 9.56s.24-.14.65-.38c.41-.24.97-.5 1.63-.7A7.85 7.85 0 017.53 8c.86 0 1.67.17 2.37.52.7.35 1.26.87 1.63 1.51.37.64.54 1.41.54 2.27v.2h-2.7v-.16c0-.47-.09-.86-.28-1.15a1.7 1.7 0 00-.77-.67 2.7 2.7 0 00-1.14-.22c-.56 0-1.06.13-1.46.4-.41.27-.72.66-.93 1.16-.21.5-.31 1.1-.31 1.8 0 .69.1 1.28.32 1.78.21.5.53.88.94 1.15.41.27.9.4 1.47.4.38 0 .73-.06 1.04-.17.31-.12.56-.29.74-.52.19-.23.29-.51.29-.84v-.14H7.15v-1.76h5.07v1.3c0 .8-.17 1.48-.52 2.04a3.46 3.46 0 01-1.5 1.3c-.66.3-1.44.45-2.35.45-.99 0-1.87-.18-2.63-.55a4.2 4.2 0 01-1.77-1.59C3.15 14.82 3 13.94 3 12.89v-.28c0-1.04.16-1.93.48-2.65a3.08 3.08 0 01-.84-.4zm12.1-1.34c.92 0 1.74.18 2.44.55a3.96 3.96 0 011.66 1.59c.4.7.6 1.54.6 2.53v.28c0 .99-.2 1.83-.6 2.53a3.96 3.96 0 01-1.66 1.59c-.7.37-1.52.55-2.44.55s-1.74-.18-2.44-.55a3.96 3.96 0 01-1.66-1.59c-.4-.7-.6-1.54-.6-2.53v-.28c0-.99.2-1.83.6-2.53a3.96 3.96 0 011.66-1.59c.7-.37 1.52-.55 2.44-.55zm0 2.12c-.44 0-.82.12-1.14.37-.32.24-.56.6-.73 1.06-.17.46-.26 1.01-.26 1.65v.28c0 .64.09 1.19.26 1.65.17.46.41.82.73 1.06.32.25.7.37 1.14.37.44 0 .82-.12 1.14-.37.32-.24.56-.6.73-1.06.17-.46.26-1.01.26-1.65v-.28c0-.64-.09-1.19-.26-1.65a2.17 2.17 0 00-.73-1.06 1.78 1.78 0 00-1.14-.37z"/>"#,
false,
),
];
let nav_html: String = nav_items.iter().map(|(id, href, label, icon_path, is_stroke)| {
@@ -613,7 +627,9 @@ pub mod icons {
pub const DOCKER: &str = r#"<path fill="currentColor" d="M13.983 11.078h2.119a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.119a.185.185 0 00-.185.185v1.888c0 .102.083.185.185.185m-2.954-5.43h2.118a.186.186 0 00.186-.186V3.574a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.186m0 2.716h2.118a.187.187 0 00.186-.186V6.29a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.887c0 .102.082.185.185.186m-2.93 0h2.12a.186.186 0 00.184-.186V6.29a.185.185 0 00-.185-.185H8.1a.185.185 0 00-.185.185v1.887c0 .102.083.185.185.186m-2.964 0h2.119a.186.186 0 00.185-.186V6.29a.185.185 0 00-.185-.185H5.136a.186.186 0 00-.186.185v1.887c0 .102.084.185.186.186m5.893 2.715h2.118a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.185m-2.93 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.083.185.185.185m-2.964 0h2.119a.185.185 0 00.185-.185V9.006a.185.185 0 00-.185-.186h-2.12a.186.186 0 00-.185.186v1.887c0 .102.084.185.186.185m-2.92 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.082.185.185.185M23.763 9.89c-.065-.051-.672-.51-1.954-.51-.338.001-.676.03-1.01.087-.248-1.7-1.653-2.53-1.716-2.566l-.344-.199-.226.327c-.284.438-.49.922-.612 1.43-.23.97-.09 1.882.403 2.661-.595.332-1.55.413-1.744.42H.751a.751.751 0 00-.75.748 11.376 11.376 0 00.692 4.062c.545 1.428 1.355 2.48 2.41 3.124 1.18.723 3.1 1.137 5.275 1.137.983.003 1.963-.086 2.93-.266a12.248 12.248 0 003.823-1.389c.98-.567 1.86-1.288 2.61-2.136 1.252-1.418 1.998-2.997 2.553-4.4h.221c1.372 0 2.215-.549 2.68-1.009.309-.293.55-.65.707-1.046l.098-.288Z"/>"#;
pub const MAVEN: &str = r#"<path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm-1 17.93c-3.95-.49-7-3.85-7-7.93 0-.62.08-1.21.21-1.79L9 15v1c0 1.1.9 2 2 2v1.93zm6.9-2.54c-.26-.81-1-1.39-1.9-1.39h-1v-3c0-.55-.45-1-1-1H8v-2h2c.55 0 1-.45 1-1V7h2c1.1 0 2-.9 2-2v-.41c2.93 1.19 5 4.06 5 7.41 0 2.08-.8 3.97-2.1 5.39z"/>"#;
pub const NPM: &str = r#"<path fill="currentColor" d="M0 7.334v8h6.666v1.332H12v-1.332h12v-8H0zm6.666 6.664H5.334v-4H3.999v4H1.335V8.667h5.331v5.331zm4 0v1.336H8.001V8.667h5.334v5.332h-2.669v-.001zm12.001 0h-1.33v-4h-1.336v4h-1.335v-4h-1.33v4h-2.671V8.667h8.002v5.331zM10.665 10H12v2.667h-1.335V10z"/>"#;
pub const CARGO: &str = r#"<path fill="currentColor" d="M20 8h-3V4H3c-1.1 0-2 .9-2 2v11h2c0 1.66 1.34 3 3 3s3-1.34 3-3h6c0 1.66 1.34 3 3 3s3-1.34 3-3h2v-5l-3-4zM6 18.5c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5zm13.5-9l1.96 2.5H17V9.5h2.5zm-1.5 9c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5z"/>"#;
pub const CARGO: &str = r#"<path fill="currentColor" d="M6 2h12a1 1 0 011 1v8a1 1 0 01-1 1H6a1 1 0 01-1-1V3a1 1 0 011-1zm0 2v2h12V4H6zm0 3v2h12V7H6zM2 14h8a1 1 0 011 1v6a1 1 0 01-1 1H2a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16H2zM14 14h8a1 1 0 011 1v6a1 1 0 01-1 1h-8a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16h-8z"/>"#;
pub const GO: &str = r#"<path fill="currentColor" d="M2.64 9.56s.24-.14.65-.38c.41-.24.97-.5 1.63-.7A7.85 7.85 0 017.53 8c.86 0 1.67.17 2.37.52.7.35 1.26.87 1.63 1.51.37.64.54 1.41.54 2.27v.2h-2.7v-.16c0-.47-.09-.86-.28-1.15a1.7 1.7 0 00-.77-.67 2.7 2.7 0 00-1.14-.22c-.56 0-1.06.13-1.46.4-.41.27-.72.66-.93 1.16-.21.5-.31 1.1-.31 1.8 0 .69.1 1.28.32 1.78.21.5.53.88.94 1.15.41.27.9.4 1.47.4.38 0 .73-.06 1.04-.17.31-.12.56-.29.74-.52.19-.23.29-.51.29-.84v-.14H7.15v-1.76h5.07v1.3c0 .8-.17 1.48-.52 2.04a3.46 3.46 0 01-1.5 1.3c-.66.3-1.44.45-2.35.45-.99 0-1.87-.18-2.63-.55a4.2 4.2 0 01-1.77-1.59C3.15 14.82 3 13.94 3 12.89v-.28c0-1.04.16-1.93.48-2.65a3.08 3.08 0 01-.84-.4zm12.1-1.34c.92 0 1.74.18 2.44.55a3.96 3.96 0 011.66 1.59c.4.7.6 1.54.6 2.53v.28c0 .99-.2 1.83-.6 2.53a3.96 3.96 0 01-1.66 1.59c-.7.37-1.52.55-2.44.55s-1.74-.18-2.44-.55a3.96 3.96 0 01-1.66-1.59c-.4-.7-.6-1.54-.6-2.53v-.28c0-.99.2-1.83.6-2.53a3.96 3.96 0 011.66-1.59c.7-.37 1.52-.55 2.44-.55zm0 2.12c-.44 0-.82.12-1.14.37-.32.24-.56.6-.73 1.06-.17.46-.26 1.01-.26 1.65v.28c0 .64.09 1.19.26 1.65.17.46.41.82.73 1.06.32.25.7.37 1.14.37.44 0 .82-.12 1.14-.37.32-.24.56-.6.73-1.06.17-.46.26-1.01.26-1.65v-.28c0-.64-.09-1.19-.26-1.65a2.17 2.17 0 00-.73-1.06 1.78 1.78 0 00-1.14-.37z"/>"#;
pub const RAW: &str = r#"<path fill="currentColor" d="M14 2H6a2 2 0 00-2 2v16a2 2 0 002 2h12a2 2 0 002-2V8l-6-6zm4 18H6V4h7v5h5v11z"/>"#;
pub const PYPI: &str = r#"<path fill="currentColor" d="M14.25.18l.9.2.73.26.59.3.45.32.34.34.25.34.16.33.1.3.04.26.02.2-.01.13V8.5l-.05.63-.13.55-.21.46-.26.38-.3.31-.33.25-.35.19-.35.14-.33.1-.3.07-.26.04-.21.02H8.83l-.69.05-.59.14-.5.22-.41.27-.33.32-.27.35-.2.36-.15.37-.1.35-.07.32-.04.27-.02.21v3.06H3.23l-.21-.03-.28-.07-.32-.12-.35-.18-.36-.26-.36-.36-.35-.46-.32-.59-.28-.73-.21-.88-.14-1.05L0 11.97l.06-1.22.16-1.04.24-.87.32-.71.36-.57.4-.44.42-.33.42-.24.4-.16.36-.1.32-.05.24-.01h.16l.06.01h8.16v-.83H6.24l-.01-2.75-.02-.37.05-.34.11-.31.17-.28.25-.26.31-.23.38-.2.44-.18.51-.15.58-.12.64-.1.71-.06.77-.04.84-.02 1.27.05 1.07.13zm-6.3 1.98l-.23.33-.08.41.08.41.23.34.33.22.41.09.41-.09.33-.22.23-.34.08-.41-.08-.41-.23-.33-.33-.22-.41-.09-.41.09-.33.22zM21.1 6.11l.28.06.32.12.35.18.36.27.36.35.35.47.32.59.28.73.21.88.14 1.04.05 1.23-.06 1.23-.16 1.04-.24.86-.32.71-.36.57-.4.45-.42.33-.42.24-.4.16-.36.09-.32.05-.24.02-.16-.01h-8.22v.82h5.84l.01 2.76.02.36-.05.34-.11.31-.17.29-.25.25-.31.24-.38.2-.44.17-.51.15-.58.13-.64.09-.71.07-.77.04-.84.01-1.27-.04-1.07-.14-.9-.2-.73-.25-.59-.3-.45-.33-.34-.34-.25-.34-.16-.33-.1-.3-.04-.25-.02-.2.01-.13v-5.34l.05-.64.13-.54.21-.46.26-.38.3-.32.33-.24.35-.2.35-.14.33-.1.3-.06.26-.04.21-.02.13-.01h5.84l.69-.05.59-.14.5-.21.41-.28.33-.32.27-.35.2-.36.15-.36.1-.35.07-.32.04-.28.02-.21V6.07h2.09l.14.01.21.03zm-6.47 14.25l-.23.33-.08.41.08.41.23.33.33.23.41.08.41-.08.33-.23.23-.33.08-.41-.08-.41-.23-.33-.33-.23-.41-.08-.41.08-.33.23z"/>"#;
}


@@ -87,6 +87,10 @@ pub fn routes() -> Router<Arc<AppState>> {
.route("/ui/cargo/{name}", get(cargo_detail))
.route("/ui/pypi", get(pypi_list))
.route("/ui/pypi/{name}", get(pypi_detail))
.route("/ui/go", get(go_list))
.route("/ui/go/{*name}", get(go_detail))
.route("/ui/raw", get(raw_list))
.route("/ui/raw/{*name}", get(raw_detail))
// API endpoints for HTMX
.route("/api/ui/stats", get(api_stats))
.route("/api/ui/dashboard", get(api_dashboard))
@@ -298,3 +302,79 @@ async fn pypi_detail(
let detail = get_pypi_detail(&state.storage, &name).await;
Html(render_package_detail("pypi", &name, &detail, lang))
}
// Go pages
async fn go_list(
State(state): State<Arc<AppState>>,
Query(query): Query<ListQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang_from_list(&query, headers.get("cookie").and_then(|v| v.to_str().ok()));
let page = query.page.unwrap_or(1).max(1);
let limit = query.limit.unwrap_or(DEFAULT_PAGE_SIZE).min(100);
let all_modules = state.repo_index.get("go", &state.storage).await;
let (modules, total) = paginate(&all_modules, page, limit);
Html(render_registry_list_paginated(
"go",
"Go Modules",
&modules,
page,
limit,
total,
lang,
))
}
async fn go_detail(
State(state): State<Arc<AppState>>,
Path(name): Path<String>,
Query(query): Query<LangQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang(
&Query(query),
headers.get("cookie").and_then(|v| v.to_str().ok()),
);
let detail = get_go_detail(&state.storage, &name).await;
Html(render_package_detail("go", &name, &detail, lang))
}
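The `{*name}` wildcard above lets `go_detail` receive slash-containing module paths. Per the commit message, the proxy uses Go's module path escaping (`!x` → `X`). A minimal decoder sketch — a hypothetical helper for illustration, not part of this diff:

```rust
// Hypothetical helper (not in this diff): decodes Go's module path escaping,
// where each uppercase letter is stored as '!' followed by its lowercase form,
// e.g. "github.com/!burnt!sushi/toml" -> "github.com/BurntSushi/toml".
fn decode_module_path(escaped: &str) -> Option<String> {
    let mut out = String::with_capacity(escaped.len());
    let mut chars = escaped.chars();
    while let Some(c) = chars.next() {
        if c == '!' {
            // '!' must be followed by a lowercase ASCII letter.
            match chars.next() {
                Some(l) if l.is_ascii_lowercase() => out.push(l.to_ascii_uppercase()),
                _ => return None,
            }
        } else if c.is_ascii_uppercase() {
            // Bare uppercase letters never appear in a valid escaped path.
            return None;
        } else {
            out.push(c);
        }
    }
    Some(out)
}
```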
// Raw pages
async fn raw_list(
State(state): State<Arc<AppState>>,
Query(query): Query<ListQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang_from_list(&query, headers.get("cookie").and_then(|v| v.to_str().ok()));
let page = query.page.unwrap_or(1).max(1);
let limit = query.limit.unwrap_or(DEFAULT_PAGE_SIZE).min(100);
let all_files = state.repo_index.get("raw", &state.storage).await;
let (files, total) = paginate(&all_files, page, limit);
Html(render_registry_list_paginated(
"raw",
"Raw Storage",
&files,
page,
limit,
total,
lang,
))
}
async fn raw_detail(
State(state): State<Arc<AppState>>,
Path(name): Path<String>,
Query(query): Query<LangQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang(
&Query(query),
headers.get("cookie").and_then(|v| v.to_str().ok()),
);
let detail = get_raw_detail(&state.storage, &name).await;
Html(render_package_detail("raw", &name, &detail, lang))
}
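Both `go_list` and `raw_list` above lean on a `paginate` helper whose definition sits outside this hunk. A plausible sketch of its contract — purely illustrative, the real implementation may differ:

```rust
// Illustrative sketch only: the real `paginate` is defined elsewhere in the
// crate. Given a full item list, a 1-based page number, and a page size,
// return that page's slice plus the total item count.
fn paginate(items: &[String], page: usize, limit: usize) -> (Vec<String>, usize) {
    let total = items.len();
    let start = (page - 1).saturating_mul(limit).min(total);
    let end = start.saturating_add(limit).min(total);
    (items[start..end].to_vec(), total)
}
```

The callers clamp `page` with `.max(1)` and `limit` with `.min(100)` before calling, so out-of-range pages simply yield an empty slice here.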


@@ -24,22 +24,8 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
.registry_stats
.iter()
.map(|r| {
let icon = match r.name.as_str() {
"docker" => icons::DOCKER,
"maven" => icons::MAVEN,
"npm" => icons::NPM,
"cargo" => icons::CARGO,
"pypi" => icons::PYPI,
_ => icons::DOCKER,
};
let display_name = match r.name.as_str() {
"docker" => "Docker",
"maven" => "Maven",
"npm" => "npm",
"cargo" => "Cargo",
"pypi" => "PyPI",
_ => &r.name,
};
let icon = get_registry_icon(&r.name);
let display_name = get_registry_title(&r.name);
render_registry_card(
display_name,
icon,
@@ -74,16 +60,56 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
t.no_activity
)
} else {
data.activity
// Group consecutive identical entries (same action+artifact+registry+source)
struct GroupedActivity {
time: String,
action: String,
artifact: String,
registry: String,
source: String,
count: usize,
}
let mut grouped: Vec<GroupedActivity> = Vec::new();
for entry in &data.activity {
let action = entry.action.to_string();
let is_repeat = grouped.last().is_some_and(|last| {
last.action == action
&& last.artifact == entry.artifact
&& last.registry == entry.registry
&& last.source == entry.source
});
if is_repeat {
if let Some(last) = grouped.last_mut() {
last.count += 1;
}
} else {
grouped.push(GroupedActivity {
time: format_relative_time(&entry.timestamp),
action,
artifact: entry.artifact.clone(),
registry: entry.registry.clone(),
source: entry.source.clone(),
count: 1,
});
}
}
grouped
.iter()
.map(|entry| {
let time_ago = format_relative_time(&entry.timestamp);
.map(|g| {
let display_artifact = if g.count > 1 {
format!("{} (x{})", g.artifact, g.count)
} else {
g.artifact.clone()
};
render_activity_row(
&time_ago,
&entry.action.to_string(),
&entry.artifact,
&entry.registry,
&entry.source,
&g.time,
&g.action,
&display_artifact,
&g.registry,
&g.source,
)
})
.collect()
@@ -115,7 +141,7 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
{}
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-5 gap-4 mb-6">
<div class="grid grid-cols-2 md:grid-cols-4 lg:grid-cols-7 gap-3 mb-6">
{}
</div>
@@ -629,6 +655,8 @@ pub fn render_package_detail(
"pip install {} --index-url http://127.0.0.1:4000/simple",
name
),
"go" => format!("GOPROXY=http://127.0.0.1:4000/go go get {}", name),
"raw" => format!("curl -O http://127.0.0.1:4000/raw/{}/<file>", name),
_ => String::new(),
};
@@ -795,6 +823,8 @@ fn get_registry_icon(registry_type: &str) -> &'static str {
"npm" => icons::NPM,
"cargo" => icons::CARGO,
"pypi" => icons::PYPI,
"go" => icons::GO,
"raw" => icons::RAW,
_ => {
r#"<path fill="currentColor" d="M10 4H4c-1.1 0-1.99.9-1.99 2L2 18c0 1.1.9 2 2 2h16c1.1 0 2-.9 2-2V8c0-1.1-.9-2-2-2h-8l-2-2z"/>"#
}
@@ -808,6 +838,8 @@ fn get_registry_title(registry_type: &str) -> &'static str {
"npm" => "npm Registry",
"cargo" => "Cargo Registry",
"pypi" => "PyPI Repository",
"go" => "Go Modules",
"raw" => "Raw Storage",
_ => "Registry",
}
}


@@ -1,28 +0,0 @@
[package]
name = "nora-storage"
version.workspace = true
edition.workspace = true
license.workspace = true
authors.workspace = true
repository.workspace = true
homepage.workspace = true
description = "S3-compatible storage server for NORA"
[[bin]]
name = "nora-storage"
path = "src/main.rs"
[dependencies]
tokio.workspace = true
axum.workspace = true
serde.workspace = true
serde_json.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true
toml = "1.0"
uuid = { version = "1", features = ["v4"] }
sha2 = "0.10"
base64 = "0.22"
httpdate = "1"
chrono = { version = "0.4", features = ["serde"] }
quick-xml = { version = "0.39", features = ["serialize"] }


@@ -1,47 +0,0 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use serde::{Deserialize, Serialize};
use std::fs;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub server: ServerConfig,
pub storage: StorageConfig,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServerConfig {
pub host: String,
pub port: u16,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StorageConfig {
pub data_dir: String,
pub max_body_size: usize,
}
impl Config {
pub fn load() -> Self {
fs::read_to_string("config.toml")
.ok()
.and_then(|content| toml::from_str(&content).ok())
.unwrap_or_default()
}
}
impl Default for Config {
fn default() -> Self {
Self {
server: ServerConfig {
host: String::from("127.0.0.1"),
port: 3000,
},
storage: StorageConfig {
data_dir: String::from("data"),
max_body_size: 1024 * 1024 * 1024, // 1GB
},
}
}
}
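For reference, the defaults in the `Default` impl above correspond to a `config.toml` of this shape:

```toml
[server]
host = "127.0.0.1"
port = 3000

[storage]
data_dir = "data"
max_body_size = 1073741824  # 1 GB, in bytes
```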


@@ -1,311 +0,0 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
mod config;
use axum::extract::DefaultBodyLimit;
use axum::{
body::Bytes,
extract::{Path, State},
http::StatusCode,
response::{IntoResponse, Response},
routing::{delete, get, put},
Router,
};
use chrono::Utc;
use config::Config;
use quick_xml::se::to_string as to_xml;
use serde::Serialize;
use std::fs;
use std::sync::Arc;
use tracing::info;
pub struct AppState {
pub config: Config,
}
#[derive(Serialize)]
#[serde(rename = "ListAllMyBucketsResult")]
struct ListBucketsResult {
#[serde(rename = "Buckets")]
buckets: Buckets,
}
#[derive(Serialize)]
struct Buckets {
#[serde(rename = "Bucket")]
bucket: Vec<BucketInfo>,
}
#[derive(Serialize)]
struct BucketInfo {
#[serde(rename = "Name")]
name: String,
#[serde(rename = "CreationDate")]
creation_date: String,
}
#[derive(Serialize)]
#[serde(rename = "ListBucketResult")]
struct ListObjectsResult {
#[serde(rename = "Name")]
name: String,
#[serde(rename = "Contents")]
contents: Vec<ObjectInfo>,
}
#[derive(Serialize)]
struct ObjectInfo {
#[serde(rename = "Key")]
key: String,
#[serde(rename = "Size")]
size: u64,
#[serde(rename = "LastModified")]
last_modified: String,
}
#[derive(Serialize)]
#[serde(rename = "Error")]
struct S3Error {
#[serde(rename = "Code")]
code: String,
#[serde(rename = "Message")]
message: String,
}
fn xml_response<T: Serialize>(data: T) -> Response {
let xml = format!(
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n{}",
to_xml(&data).unwrap_or_default()
);
(
StatusCode::OK,
[(axum::http::header::CONTENT_TYPE, "application/xml")],
xml,
)
.into_response()
}
fn error_response(status: StatusCode, code: &str, message: &str) -> Response {
let error = S3Error {
code: code.to_string(),
message: message.to_string(),
};
let xml = format!(
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n{}",
to_xml(&error).unwrap_or_default()
);
(
status,
[(axum::http::header::CONTENT_TYPE, "application/xml")],
xml,
)
.into_response()
}
#[tokio::main]
async fn main() {
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("nora_storage=info".parse().expect("valid directive")),
)
.init();
let config = Config::load();
fs::create_dir_all(&config.storage.data_dir).expect("Failed to create data directory");
let state = Arc::new(AppState {
config: config.clone(),
});
let app = Router::new()
.route("/", get(list_buckets))
.route("/{bucket}", get(list_objects))
.route("/{bucket}", put(create_bucket))
.route("/{bucket}", delete(delete_bucket))
.route("/{bucket}/{*key}", put(put_object))
.route("/{bucket}/{*key}", get(get_object))
.route("/{bucket}/{*key}", delete(delete_object))
.layer(DefaultBodyLimit::max(config.storage.max_body_size))
.with_state(state);
let addr = format!("{}:{}", config.server.host, config.server.port);
let listener = tokio::net::TcpListener::bind(&addr)
.await
.expect("Failed to bind to address");
info!("nora-storage (S3 compatible) running on http://{}", addr);
axum::serve(listener, app).await.expect("Server error");
}
async fn list_buckets(State(state): State<Arc<AppState>>) -> Response {
let data_dir = &state.config.storage.data_dir;
let entries = match fs::read_dir(data_dir) {
Ok(e) => e,
Err(_) => {
return error_response(
StatusCode::INTERNAL_SERVER_ERROR,
"InternalError",
"Failed to read data",
)
}
};
let bucket_list: Vec<BucketInfo> = entries
.filter_map(|e| e.ok())
.filter(|e| e.path().is_dir())
.filter_map(|e| {
let name = e.file_name().into_string().ok()?;
let modified = e.metadata().ok()?.modified().ok()?;
let datetime: chrono::DateTime<Utc> = modified.into();
Some(BucketInfo {
name,
creation_date: datetime.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
})
})
.collect();
xml_response(ListBucketsResult {
buckets: Buckets {
bucket: bucket_list,
},
})
}
async fn list_objects(State(state): State<Arc<AppState>>, Path(bucket): Path<String>) -> Response {
let bucket_path = format!("{}/{}", state.config.storage.data_dir, bucket);
if !std::path::Path::new(&bucket_path).is_dir() {
return error_response(
StatusCode::NOT_FOUND,
"NoSuchBucket",
"The specified bucket does not exist",
);
}
let objects = collect_files(std::path::Path::new(&bucket_path), "");
xml_response(ListObjectsResult {
name: bucket,
contents: objects,
})
}
fn collect_files(dir: &std::path::Path, prefix: &str) -> Vec<ObjectInfo> {
let mut objects = Vec::new();
if let Ok(entries) = fs::read_dir(dir) {
for entry in entries.filter_map(|e| e.ok()) {
let path = entry.path();
let name = entry.file_name().into_string().unwrap_or_default();
let key = if prefix.is_empty() {
name.clone()
} else {
format!("{}/{}", prefix, name)
};
if path.is_dir() {
objects.extend(collect_files(&path, &key));
} else if let Ok(metadata) = entry.metadata() {
if let Ok(modified) = metadata.modified() {
let datetime: chrono::DateTime<Utc> = modified.into();
objects.push(ObjectInfo {
key,
size: metadata.len(),
last_modified: datetime.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
});
}
}
}
}
objects
}
async fn create_bucket(State(state): State<Arc<AppState>>, Path(bucket): Path<String>) -> Response {
let bucket_path = format!("{}/{}", state.config.storage.data_dir, bucket);
match fs::create_dir(&bucket_path) {
Ok(_) => (StatusCode::OK, "").into_response(),
Err(_) => error_response(
StatusCode::CONFLICT,
"BucketAlreadyExists",
"Bucket already exists",
),
}
}
async fn put_object(
State(state): State<Arc<AppState>>,
Path((bucket, key)): Path<(String, String)>,
body: Bytes,
) -> Response {
let file_path = format!("{}/{}/{}", state.config.storage.data_dir, bucket, key);
if let Some(parent) = std::path::Path::new(&file_path).parent() {
let _ = fs::create_dir_all(parent);
}
match fs::write(&file_path, &body) {
Ok(_) => {
println!("PUT {}/{} ({} bytes)", bucket, key, body.len());
(StatusCode::OK, "").into_response()
}
Err(e) => {
println!("ERROR writing {}/{}: {}", bucket, key, e);
error_response(
StatusCode::INTERNAL_SERVER_ERROR,
"InternalError",
"Failed to write object",
)
}
}
}
async fn get_object(
State(state): State<Arc<AppState>>,
Path((bucket, key)): Path<(String, String)>,
) -> Response {
let file_path = format!("{}/{}/{}", state.config.storage.data_dir, bucket, key);
match fs::read(&file_path) {
Ok(data) => (StatusCode::OK, data).into_response(),
Err(_) => error_response(
StatusCode::NOT_FOUND,
"NoSuchKey",
"The specified key does not exist",
),
}
}
async fn delete_object(
State(state): State<Arc<AppState>>,
Path((bucket, key)): Path<(String, String)>,
) -> Response {
let file_path = format!("{}/{}/{}", state.config.storage.data_dir, bucket, key);
match fs::remove_file(&file_path) {
Ok(_) => {
println!("DELETE {}/{}", bucket, key);
(StatusCode::NO_CONTENT, "").into_response()
}
Err(_) => error_response(
StatusCode::NOT_FOUND,
"NoSuchKey",
"The specified key does not exist",
),
}
}
async fn delete_bucket(State(state): State<Arc<AppState>>, Path(bucket): Path<String>) -> Response {
let bucket_path = format!("{}/{}", state.config.storage.data_dir, bucket);
match fs::remove_dir(&bucket_path) {
Ok(_) => {
println!("DELETE bucket {}", bucket);
(StatusCode::NO_CONTENT, "").into_response()
}
Err(_) => error_response(
StatusCode::CONFLICT,
"BucketNotEmpty",
"The bucket is not empty",
),
}
}

tests/e2e/.gitignore

@@ -0,0 +1,3 @@
node_modules/
test-results/
playwright-report/

tests/e2e/package-lock.json

@@ -0,0 +1,76 @@
{
"name": "nora-e2e",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "nora-e2e",
"devDependencies": {
"@playwright/test": "^1.50.0"
}
},
"node_modules/@playwright/test": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.58.2.tgz",
"integrity": "sha512-akea+6bHYBBfA9uQqSYmlJXn61cTa+jbO87xVLCWbTqbWadRVmhxlXATaOjOgcBaWU4ePo0wB41KMFv3o35IXA==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright": "1.58.2"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=18"
}
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/playwright": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/playwright/-/playwright-1.58.2.tgz",
"integrity": "sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright-core": "1.58.2"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=18"
},
"optionalDependencies": {
"fsevents": "2.3.2"
}
},
"node_modules/playwright-core": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.58.2.tgz",
"integrity": "sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"playwright-core": "cli.js"
},
"engines": {
"node": ">=18"
}
}
}
}

tests/e2e/package.json

@@ -0,0 +1,11 @@
{
"name": "nora-e2e",
"private": true,
"scripts": {
"test": "npx playwright test",
"test:ui": "npx playwright test --ui"
},
"devDependencies": {
"@playwright/test": "^1.50.0"
}
}


@@ -0,0 +1,18 @@
import { defineConfig } from '@playwright/test';
export default defineConfig({
testDir: './tests',
timeout: 30000,
retries: 1,
use: {
baseURL: process.env.NORA_URL || 'http://localhost:4000',
screenshot: 'only-on-failure',
trace: 'retain-on-failure',
},
projects: [
{
name: 'chromium',
use: { browserName: 'chromium' },
},
],
});


@@ -0,0 +1,82 @@
import { test, expect } from '@playwright/test';
test.describe('NORA Dashboard', () => {
test('dashboard page loads and shows title', async ({ page }) => {
await page.goto('/ui/');
await expect(page).toHaveTitle(/nora/i);
});
test('dashboard shows registry sections', async ({ page }) => {
await page.goto('/ui/');
// All registry types should be visible
await expect(page.getByText(/Docker/i).first()).toBeVisible();
await expect(page.getByText(/npm/i).first()).toBeVisible();
await expect(page.getByText(/Maven/i).first()).toBeVisible();
await expect(page.getByText(/PyPI/i).first()).toBeVisible();
await expect(page.getByText(/Cargo/i).first()).toBeVisible();
});
test('dashboard shows non-zero npm count after proxy fetch', async ({ page, request }) => {
// Trigger npm proxy cache by fetching a package
await request.get('/npm/chalk');
await request.get('/npm/chalk/-/chalk-5.4.1.tgz');
// Wait a moment for index rebuild
await page.waitForTimeout(1000);
await page.goto('/ui/');
// The stats API should now report at least one cached npm package
const statsResponse = await request.get('/api/ui/stats');
const stats = await statsResponse.json();
expect(stats.npm).toBeGreaterThan(0);
// Verify it's actually rendered on the page
await page.goto('/ui/');
await page.waitForTimeout(500);
// The page should contain the package count somewhere
const content = await page.textContent('body');
expect(content).not.toBeNull();
// At minimum, the npm section label should be present
expect(content).toContain('npm');
});
test('dashboard shows Docker images after proxy fetch', async ({ page, request }) => {
// Check stats API
const statsResponse = await request.get('/api/ui/stats');
const stats = await statsResponse.json();
// Docker count should be accessible (may be 0 if no images pulled yet)
expect(stats).toHaveProperty('docker');
});
test('health endpoint returns healthy', async ({ request }) => {
const response = await request.get('/health');
expect(response.ok()).toBeTruthy();
const health = await response.json();
expect(health.status).toBe('healthy');
expect(health.registries.npm).toBe('ok');
expect(health.registries.docker).toBe('ok');
expect(health.registries.maven).toBe('ok');
expect(health.registries.pypi).toBe('ok');
expect(health.registries.cargo).toBe('ok');
});
test('OpenAPI docs endpoint accessible', async ({ request }) => {
const response = await request.get('/api-docs', { maxRedirects: 0 });
// /api-docs redirects to the Swagger UI
expect([200, 303]).toContain(response.status());
});
test('metrics endpoint returns prometheus format', async ({ request }) => {
const response = await request.get('/metrics');
expect(response.ok()).toBeTruthy();
const text = await response.text();
expect(text).toContain('nora_http_request_duration_seconds');
});
});


@@ -0,0 +1,74 @@
import { test, expect } from '@playwright/test';
test.describe('Docker Registry', () => {
test('v2 check returns empty JSON', async ({ request }) => {
const response = await request.get('/v2/');
expect(response.ok()).toBeTruthy();
const body = await response.json();
expect(body).toEqual({});
});
test('catalog endpoint returns 200', async ({ request }) => {
const response = await request.get('/v2/_catalog');
expect(response.ok()).toBeTruthy();
});
test('put and get manifest works', async ({ request }) => {
// Push a simple blob
const blobData = 'test-blob-content';
const crypto = require('crypto');
const blobDigest = 'sha256:' + crypto.createHash('sha256').update(blobData).digest('hex');
await request.post(`/v2/e2e-test/blobs/uploads/?digest=${blobDigest}`, {
data: blobData,
headers: { 'Content-Type': 'application/octet-stream' },
});
// Push config blob
const configData = '{}';
const configDigest = 'sha256:' + crypto.createHash('sha256').update(configData).digest('hex');
await request.post(`/v2/e2e-test/blobs/uploads/?digest=${configDigest}`, {
data: configData,
headers: { 'Content-Type': 'application/octet-stream' },
});
// Push manifest
const manifest = {
schemaVersion: 2,
mediaType: 'application/vnd.oci.image.manifest.v1+json',
config: {
mediaType: 'application/vnd.oci.image.config.v1+json',
digest: configDigest,
size: configData.length,
},
layers: [
{
mediaType: 'application/vnd.oci.image.layer.v1.tar+gzip',
digest: blobDigest,
size: blobData.length,
},
],
};
const putResponse = await request.put('/v2/e2e-test/manifests/1.0.0', {
data: manifest,
headers: { 'Content-Type': 'application/vnd.oci.image.manifest.v1+json' },
});
expect(putResponse.status()).toBe(201);
// Pull manifest back
const getResponse = await request.get('/v2/e2e-test/manifests/1.0.0');
expect(getResponse.ok()).toBeTruthy();
const pulled = await getResponse.json();
expect(pulled.schemaVersion).toBe(2);
expect(pulled.layers).toHaveLength(1);
});
test('tags list returns pushed tags', async ({ request }) => {
const response = await request.get('/v2/e2e-test/tags/list');
// May or may not have tags depending on test order
expect([200, 404]).toContain(response.status());
});
});


@@ -0,0 +1,132 @@
import { test, expect } from '@playwright/test';
test.describe('npm Proxy', () => {
test('metadata proxy returns rewritten tarball URLs', async ({ request }) => {
const response = await request.get('/npm/chalk');
expect(response.ok()).toBeTruthy();
const metadata = await response.json();
expect(metadata.name).toBe('chalk');
expect(metadata.versions).toBeDefined();
// Tarball URL must point to NORA, not npmjs.org
const version = metadata.versions['5.4.1'];
expect(version).toBeDefined();
expect(version.dist.tarball).not.toContain('registry.npmjs.org');
expect(version.dist.tarball).toContain('/npm/chalk/-/chalk-5.4.1.tgz');
});
  test('scoped package @babel/parser works', async ({ request }) => {
    const response = await request.get('/npm/@babel/parser');
    expect(response.ok()).toBeTruthy();
    const metadata = await response.json();
    expect(metadata.name).toBe('@babel/parser');

    // Check tarball URL rewriting for scoped package
    const versions = Object.keys(metadata.versions);
    expect(versions.length).toBeGreaterThan(0);
    const firstVersion = metadata.versions[versions[0]];
    if (firstVersion?.dist?.tarball) {
      expect(firstVersion.dist.tarball).toContain('/npm/@babel/parser/-/');
      expect(firstVersion.dist.tarball).not.toContain('registry.npmjs.org');
    }
  });

  test('tarball download returns gzip data', async ({ request }) => {
    // Ensure metadata is cached first
    await request.get('/npm/chalk');
    const response = await request.get('/npm/chalk/-/chalk-5.4.1.tgz');
    expect(response.ok()).toBeTruthy();
    expect(response.headers()['content-type']).toBe('application/octet-stream');
    const body = await response.body();
    expect(body.length).toBeGreaterThan(100);

    // gzip magic bytes
    expect(body[0]).toBe(0x1f);
    expect(body[1]).toBe(0x8b);
  });
  test('npm publish creates package', async ({ request }) => {
    const pkgName = `e2e-pub-${Date.now()}`;
    const publishBody = {
      name: pkgName,
      versions: {
        '1.0.0': {
          name: pkgName,
          version: '1.0.0',
          dist: {},
        },
      },
      'dist-tags': { latest: '1.0.0' },
      _attachments: {
        [`${pkgName}-1.0.0.tgz`]: {
          data: 'dGVzdA==',
          content_type: 'application/octet-stream',
        },
      },
    };
    const response = await request.put(`/npm/${pkgName}`, {
      data: publishBody,
      headers: { 'Content-Type': 'application/json' },
    });
    expect(response.status()).toBe(201);

    // Verify published package is accessible
    const getResponse = await request.get(`/npm/${pkgName}`);
    expect(getResponse.ok()).toBeTruthy();
    const metadata = await getResponse.json();
    expect(metadata.name).toBe(pkgName);
    expect(metadata.versions['1.0.0']).toBeDefined();
  });
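The `_attachments` entry carries the tarball inline as base64; the `dGVzdA==` used throughout these publish tests decodes to just the four bytes `test`:

```typescript
// Decode the base64 attachment body used by the publish tests above.
const attachment = Buffer.from('dGVzdA==', 'base64');
console.log(attachment.toString('utf8')); // test
```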
  test('npm publish rejects duplicate version (409)', async ({ request }) => {
    const pkgName = `e2e-dupe-${Date.now()}`;
    const body = {
      name: pkgName,
      versions: { '1.0.0': { name: pkgName, version: '1.0.0', dist: {} } },
      'dist-tags': { latest: '1.0.0' },
      _attachments: { [`${pkgName}-1.0.0.tgz`]: { data: 'dGVzdA==' } },
    };
    await request.put(`/npm/${pkgName}`, {
      data: body,
      headers: { 'Content-Type': 'application/json' },
    });

    // Publish same version again
    const response = await request.put(`/npm/${pkgName}`, {
      data: body,
      headers: { 'Content-Type': 'application/json' },
    });
    expect(response.status()).toBe(409);
  });

  test('npm publish rejects name mismatch (400)', async ({ request }) => {
    const response = await request.put('/npm/legitimate-pkg', {
      data: {
        name: 'evil-pkg',
        versions: { '1.0.0': {} },
        _attachments: { 'a.tgz': { data: 'dGVzdA==' } },
      },
      headers: { 'Content-Type': 'application/json' },
    });
    expect(response.status()).toBe(400);
  });

  test('npm publish rejects path traversal filename (400)', async ({ request }) => {
    const response = await request.put('/npm/safe-pkg', {
      data: {
        name: 'safe-pkg',
        versions: { '1.0.0': {} },
        _attachments: { '../../etc/passwd': { data: 'dGVzdA==' } },
      },
      headers: { 'Content-Type': 'application/json' },
    });
    expect(response.status()).toBe(400);
  });
});


@@ -0,0 +1,51 @@
import { test, expect } from '@playwright/test';

test.describe('Maven Proxy', () => {
  test('download Maven artifact', async ({ request }) => {
    const response = await request.get(
      '/maven2/org/apache/commons/commons-lang3/3.17.0/commons-lang3-3.17.0.pom'
    );
    expect(response.ok()).toBeTruthy();
    const text = await response.text();
    expect(text).toContain('commons-lang3');
  });

  test('Maven upload works', async ({ request }) => {
    const response = await request.put('/maven2/com/test/smoke/1.0/smoke-1.0.jar', {
      data: 'test-jar-content',
    });
    expect(response.status()).toBe(201);
  });
});
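The artifact URL in the download test follows the standard Maven repository layout: groupId with dots turned into slashes, then artifactId, version, and `artifactId-version.ext`. A sketch of that mapping (the helper name is ours):

```typescript
// Map Maven coordinates to a repository path (standard layout).
function mavenPath(groupId: string, artifactId: string, version: string, ext: string): string {
  return `${groupId.replace(/\./g, '/')}/${artifactId}/${version}/${artifactId}-${version}.${ext}`;
}

const pomPath = mavenPath('org.apache.commons', 'commons-lang3', '3.17.0', 'pom');
console.log(pomPath); // org/apache/commons/commons-lang3/3.17.0/commons-lang3-3.17.0.pom
```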
test.describe('PyPI Proxy', () => {
  test('simple index returns HTML', async ({ request }) => {
    const response = await request.get('/simple/');
    expect(response.ok()).toBeTruthy();
    const text = await response.text();
    expect(text).toContain('Simple Index');
  });

  test('package page returns links', async ({ request }) => {
    const response = await request.get('/simple/requests/');
    expect(response.ok()).toBeTruthy();
    const text = await response.text();
    expect(text).toContain('requests');
  });
});
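Per PEP 503, simple-index project pages live at `/simple/<normalized-name>/`, where names are lowercased and runs of `-`, `_`, and `.` collapse to a single `-`. A sketch of that normalization:

```typescript
// PEP 503 project-name normalization for simple-index URLs.
function normalizeProject(name: string): string {
  return name.replace(/[-_.]+/g, '-').toLowerCase();
}

console.log(normalizeProject('Requests'));       // requests
console.log(normalizeProject('zope.interface')); // zope-interface
```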
test.describe('Raw Storage', () => {
  test('upload and download file', async ({ request }) => {
    const data = 'raw-e2e-test-content-' + Date.now();
    const putResponse = await request.put('/raw/e2e/test.txt', {
      data: data,
    });
    expect(putResponse.status()).toBe(201);

    const getResponse = await request.get('/raw/e2e/test.txt');
    expect(getResponse.ok()).toBeTruthy();
    const body = await getResponse.text();
    expect(body).toBe(data);
  });
});

tests/smoke.sh Executable file

@@ -0,0 +1,352 @@
#!/usr/bin/env bash
set -euo pipefail
# NORA E2E Smoke Test
# Starts NORA, runs real-world scenarios, verifies results.
# Exit code 0 = all passed, non-zero = failures.
NORA_BIN="${NORA_BIN:-./target/release/nora}"
PORT="${NORA_TEST_PORT:-14000}"
BASE="http://localhost:${PORT}"
STORAGE_DIR=$(mktemp -d)
PASSED=0
FAILED=0
NORA_PID=""
cleanup() {
  [ -n "$NORA_PID" ] && kill "$NORA_PID" 2>/dev/null || true
  rm -rf "$STORAGE_DIR"
}
trap cleanup EXIT

fail() {
  echo " FAIL: $1"
  FAILED=$((FAILED + 1))
}

pass() {
  echo " PASS: $1"
  PASSED=$((PASSED + 1))
}

check() {
  local desc="$1"
  shift
  if "$@" >/dev/null 2>&1; then
    pass "$desc"
  else
    fail "$desc"
  fi
}
echo "=== NORA Smoke Test ==="
echo "Binary: $NORA_BIN"
echo "Port: $PORT"
echo "Storage: $STORAGE_DIR"
echo ""
# Start NORA
NORA_HOST=127.0.0.1 \
  NORA_PORT=$PORT \
  NORA_STORAGE_PATH="$STORAGE_DIR" \
  NORA_RATE_LIMIT_ENABLED=false \
  NORA_PUBLIC_URL="$BASE" \
  "$NORA_BIN" serve &
NORA_PID=$!

# Wait for startup
for i in $(seq 1 20); do
  curl -sf "$BASE/health" >/dev/null 2>&1 && break
  sleep 0.5
done
echo "--- Health & Monitoring ---"
check "GET /health returns healthy" \
curl -sf "$BASE/health"
check "GET /ready returns 200" \
curl -sf "$BASE/ready"
check "GET /metrics returns prometheus" \
curl -sf "$BASE/metrics"
echo ""
echo "--- npm Proxy ---"
# Fetch metadata — triggers proxy cache
METADATA=$(curl -sf "$BASE/npm/chalk" 2>/dev/null || echo "{}")
check "npm metadata returns 200" \
  curl -sf "$BASE/npm/chalk"

# URL rewriting check
TARBALL_URL=$(echo "$METADATA" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('versions',{}).get('5.4.1',{}).get('dist',{}).get('tarball',''))" 2>/dev/null || echo "")
if echo "$TARBALL_URL" | grep -q "localhost:${PORT}/npm"; then
  pass "npm tarball URL rewritten to NORA"
else
  fail "npm tarball URL not rewritten: $TARBALL_URL"
fi

# Fetch tarball
check "npm tarball download" \
  curl -sf "$BASE/npm/chalk/-/chalk-5.4.1.tgz" -o /dev/null

# Scoped package
check "npm scoped package @babel/parser" \
  curl -sf "$BASE/npm/@babel/parser"

# Publish
PUBLISH_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
  -H "Content-Type: application/json" \
  -d '{"name":"smoke-test-pkg","versions":{"1.0.0":{"name":"smoke-test-pkg","version":"1.0.0","dist":{}}},"dist-tags":{"latest":"1.0.0"},"_attachments":{"smoke-test-pkg-1.0.0.tgz":{"data":"dGVzdA==","content_type":"application/octet-stream"}}}' \
  "$BASE/npm/smoke-test-pkg")
if [ "$PUBLISH_RESULT" = "201" ]; then
  pass "npm publish returns 201"
else
  fail "npm publish returned $PUBLISH_RESULT"
fi

# Version immutability
DUPE_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
  -H "Content-Type: application/json" \
  -d '{"name":"smoke-test-pkg","versions":{"1.0.0":{"name":"smoke-test-pkg","version":"1.0.0","dist":{}}},"dist-tags":{"latest":"1.0.0"},"_attachments":{"smoke-test-pkg-1.0.0.tgz":{"data":"dGVzdA==","content_type":"application/octet-stream"}}}' \
  "$BASE/npm/smoke-test-pkg")
if [ "$DUPE_RESULT" = "409" ]; then
  pass "npm version immutability (409 on duplicate)"
else
  fail "npm duplicate publish returned $DUPE_RESULT, expected 409"
fi

# Security: name mismatch
MISMATCH_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
  -H "Content-Type: application/json" \
  -d '{"name":"evil-pkg","versions":{"1.0.0":{}},"_attachments":{"a.tgz":{"data":"dGVzdA=="}}}' \
  "$BASE/npm/lodash")
if [ "$MISMATCH_RESULT" = "400" ]; then
  pass "npm name mismatch rejected (400)"
else
  fail "npm name mismatch returned $MISMATCH_RESULT, expected 400"
fi

# Security: path traversal
TRAVERSAL_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
  -H "Content-Type: application/json" \
  -d '{"name":"test-pkg","versions":{"1.0.0":{}},"_attachments":{"../../etc/passwd":{"data":"dGVzdA=="}}}' \
  "$BASE/npm/test-pkg")
if [ "$TRAVERSAL_RESULT" = "400" ]; then
  pass "npm path traversal rejected (400)"
else
  fail "npm path traversal returned $TRAVERSAL_RESULT, expected 400"
fi
echo ""
echo "--- Maven ---"
check "Maven proxy download" \
curl -sf "$BASE/maven2/org/apache/commons/commons-lang3/3.17.0/commons-lang3-3.17.0.pom" -o /dev/null
echo ""
echo "--- PyPI ---"
check "PyPI simple index" \
curl -sf "$BASE/simple/"
check "PyPI package page" \
curl -sf "$BASE/simple/requests/"
echo ""
echo "--- Docker ---"
check "Docker v2 check" \
curl -sf "$BASE/v2/"
echo ""
echo "--- Raw ---"
echo "raw-test-data" | curl -sf -X PUT --data-binary @- "$BASE/raw/smoke/test.txt" >/dev/null 2>&1
check "Raw upload" \
curl -sf "$BASE/raw/smoke/test.txt" -o /dev/null
echo ""
echo "--- UI & API ---"
check "UI dashboard loads" \
curl -sf "$BASE/ui/"
check "OpenAPI docs" \
curl -sf "$BASE/api-docs" -o /dev/null
# Dashboard stats — check npm count > 0 after proxy fetches
sleep 1
STATS=$(curl -sf "$BASE/ui/api/stats" 2>/dev/null || echo "{}")
NPM_COUNT=$(echo "$STATS" | python3 -c "import sys,json; print(json.load(sys.stdin).get('npm',0))" 2>/dev/null || echo "0")
if [ "$NPM_COUNT" -gt 0 ] 2>/dev/null; then
  pass "Dashboard npm count > 0 (got $NPM_COUNT)"
else
  fail "Dashboard npm count is $NPM_COUNT, expected > 0"
fi
echo ""
echo "--- Docker Push/Pull + Digest Verify ---"
# Create a minimal Docker image, push, pull, verify digest
DOCKER_AVAILABLE=true
if ! docker info >/dev/null 2>&1; then
  echo " SKIP: Docker daemon not available"
  DOCKER_AVAILABLE=false
fi
if [ "$DOCKER_AVAILABLE" = true ]; then
  DOCKER_IMG="localhost:${PORT}/smoke-test/hello:v1"

  # Create tiny image from scratch
  DOCKER_BUILD_DIR=$(mktemp -d)
  echo "FROM scratch" > "$DOCKER_BUILD_DIR/Dockerfile"
  echo "smoke-test" > "$DOCKER_BUILD_DIR/data.txt"
  echo "COPY data.txt /data.txt" >> "$DOCKER_BUILD_DIR/Dockerfile"
  if docker build -t "$DOCKER_IMG" "$DOCKER_BUILD_DIR" >/dev/null 2>&1; then
    pass "docker build smoke image"
  else
    fail "docker build smoke image"
  fi
  rm -rf "$DOCKER_BUILD_DIR"

  # Push
  if docker push "$DOCKER_IMG" >/dev/null 2>&1; then
    pass "docker push to NORA"
  else
    fail "docker push to NORA"
  fi

  # Get digest from registry
  MANIFEST=$(curl -sf -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    "$BASE/v2/smoke-test/hello/manifests/v1" 2>/dev/null || echo "")
  if [ -n "$MANIFEST" ] && echo "$MANIFEST" | python3 -c "import sys,json; json.load(sys.stdin)" >/dev/null 2>&1; then
    pass "docker manifest retrievable from NORA"
  else
    fail "docker manifest not retrievable"
  fi

  # Remove local image and pull back
  docker rmi "$DOCKER_IMG" >/dev/null 2>&1 || true
  if docker pull "$DOCKER_IMG" >/dev/null 2>&1; then
    pass "docker pull from NORA"
  else
    fail "docker pull from NORA"
  fi

  # Verify the pulled image carries a sha256 registry digest
  REPO_DIGEST=$(docker inspect "$DOCKER_IMG" --format='{{index .RepoDigests 0}}' 2>/dev/null | cut -d@ -f2)
  if [ -n "$REPO_DIGEST" ] && echo "$REPO_DIGEST" | grep -q "^sha256:"; then
    pass "docker digest verified (${REPO_DIGEST:0:20}...)"
  else
    fail "docker digest verification failed"
  fi

  # Cleanup
  docker rmi "$DOCKER_IMG" >/dev/null 2>&1 || true
fi
echo ""
echo "--- npm Install + Integrity Verify ---"
# Test real npm client against NORA (not just curl)
NPM_TEST_DIR=$(mktemp -d)
cd "$NPM_TEST_DIR"

# Create minimal package.json
cat > package.json << 'PKGJSON'
{
  "name": "nora-smoke-test",
  "version": "1.0.0",
  "dependencies": {
    "chalk": "5.4.1"
  }
}
PKGJSON

# npm install using NORA as registry
if npm install --registry "$BASE/npm/" --prefer-online --no-audit --no-fund >/dev/null 2>&1; then
  pass "npm install chalk via NORA registry"
else
  fail "npm install chalk via NORA registry"
fi

# Verify package was installed
if [ -f "node_modules/chalk/package.json" ]; then
  INSTALLED_VER=$(python3 -c "import json; print(json.load(open('node_modules/chalk/package.json'))['version'])" 2>/dev/null || echo "")
  if [ "$INSTALLED_VER" = "5.4.1" ]; then
    pass "npm installed correct version (5.4.1)"
  else
    fail "npm installed wrong version: $INSTALLED_VER"
  fi
else
  fail "npm node_modules/chalk not found"
fi

# Verify integrity: check that package-lock.json has sha512 integrity
if [ -f "package-lock.json" ]; then
  INTEGRITY=$(python3 -c "
import json
lock = json.load(open('package-lock.json'))
pkgs = lock.get('packages', {})
chalk = pkgs.get('node_modules/chalk', pkgs.get('chalk', {}))
print(chalk.get('integrity', ''))
" 2>/dev/null || echo "")
  if echo "$INTEGRITY" | grep -q "^sha512-"; then
    pass "npm integrity hash present (sha512)"
  else
    fail "npm integrity hash missing: $INTEGRITY"
  fi
else
  fail "npm package-lock.json not created"
fi
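The value the grep above matches is a Subresource Integrity string: `sha512-` followed by the base64 (not hex) encoding of the sha512 digest of the tarball bytes. A minimal sketch in Node terms (the helper name is ours):

```typescript
import { createHash } from 'node:crypto';

// SRI integrity as npm records it in package-lock.json:
// "sha512-" + base64(sha512(tarball bytes)).
function sriIntegrity(tarball: Buffer): string {
  return 'sha512-' + createHash('sha512').update(tarball).digest('base64');
}

const integrity = sriIntegrity(Buffer.from('test'));
console.log(integrity);
```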
cd /tmp
rm -rf "$NPM_TEST_DIR"
echo ""
echo "--- Upstream Timeout Handling ---"
# Verify that requesting a non-existent package from upstream returns 404 quickly (not hang)
TIMEOUT_START=$(date +%s)
TIMEOUT_RESULT=$(curl -s -o /dev/null -w "%{http_code}" --max-time 15 \
"$BASE/npm/@nora-smoke-test/nonexistent-package-xyz-12345")
TIMEOUT_END=$(date +%s)
TIMEOUT_DURATION=$((TIMEOUT_END - TIMEOUT_START))
if [ "$TIMEOUT_RESULT" = "404" ]; then
  pass "upstream 404 returned correctly"
else
  fail "upstream returned $TIMEOUT_RESULT, expected 404"
fi
if [ "$TIMEOUT_DURATION" -lt 10 ]; then
  pass "upstream 404 returned in ${TIMEOUT_DURATION}s (< 10s)"
else
  fail "upstream 404 took ${TIMEOUT_DURATION}s (too slow, retry may hang)"
fi
echo ""
echo "--- Mirror CLI ---"
# Create a minimal lockfile
LOCKFILE=$(mktemp)
cat > "$LOCKFILE" << 'EOF'
{
  "lockfileVersion": 3,
  "packages": {
    "": { "name": "test" },
    "node_modules/chalk": { "version": "5.4.1" }
  }
}
EOF
MIRROR_RESULT=$("$NORA_BIN" mirror --registry "$BASE" npm --lockfile "$LOCKFILE" 2>&1)
if echo "$MIRROR_RESULT" | grep -q "Failed: 0"; then
  pass "nora mirror npm --lockfile (0 failures)"
else
  fail "nora mirror: $MIRROR_RESULT"
fi
rm -f "$LOCKFILE"
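The lockfile above uses the v3 format, where the `packages` map keys entries by `node_modules/<name>`. Extracting the name/version pairs a mirror pass would walk can be sketched as follows (the type and function name are ours, not the CLI's internals):

```typescript
// Pull name/version pairs out of an npm v3 lockfile's "packages" map.
type LockfileV3 = {
  lockfileVersion: number;
  packages: Record<string, { version?: string }>;
};

function lockedVersions(lock: LockfileV3): Array<[string, string]> {
  return Object.entries(lock.packages)
    .filter(([path, meta]) => path.startsWith('node_modules/') && meta.version)
    .map(([path, meta]): [string, string] => [
      path.slice('node_modules/'.length),
      meta.version as string,
    ]);
}

const pairs = lockedVersions({
  lockfileVersion: 3,
  packages: { '': {}, 'node_modules/chalk': { version: '5.4.1' } },
});
console.log(pairs); // [ [ 'chalk', '5.4.1' ] ]
```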
echo ""
echo "================================"
echo "Results: $PASSED passed, $FAILED failed"
echo "================================"
[ "$FAILED" -eq 0 ] && exit 0 || exit 1