42 Commits

Author SHA1 Message Date
c0e8f8d813 release: bump version to v0.2.35 2026-03-20 22:54:30 +00:00
1c342c2a19 feat: add anonymous read mode (NORA_AUTH_ANONYMOUS_READ)
When auth is enabled with anonymous_read=true, GET/HEAD requests
are allowed without credentials (pull/download), while write
operations (PUT/POST/DELETE/PATCH) still require authentication.

Use case: public demo registries, read-only mirrors.

Config: NORA_AUTH_ANONYMOUS_READ=true or auth.anonymous_read=true
2026-03-20 22:48:41 +00:00
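The gating rule described in this commit can be sketched as a decision table. This is a hypothetical illustration, not NORA's actual implementation (which is in Rust); the `allow` function name and argument order are invented for the example.

```shell
# allow <method> <authenticated:yes|no> <anonymous_read:true|false>
# Reads (GET/HEAD) pass with credentials OR anonymous_read=true;
# writes (PUT/POST/DELETE/PATCH) always require credentials.
allow() {
  method=$1; authed=$2; anon_read=$3
  case "$method" in
    GET|HEAD)              [ "$authed" = "yes" ] || [ "$anon_read" = "true" ] ;;
    PUT|POST|DELETE|PATCH) [ "$authed" = "yes" ] ;;
    *)                     return 1 ;;
  esac
}

allow GET no true && echo "anonymous pull: allowed"   # prints "anonymous pull: allowed"
allow PUT no true || echo "anonymous push: denied"    # prints "anonymous push: denied"
```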
12d4a28d34 fix: address code review findings
- Pin slsa-github-generator and codeql-action by SHA (not tag)
- Replace anonymous tuple with GroupedActivity struct for readability
- Replace unwrap() with if-let for safety
- Add warning message on attestation failure instead of silent || true
- Fix clippy: map_or -> is_some_and
2026-03-20 22:14:16 +00:00
a968016815 release: bump version to v0.2.34 2026-03-20 19:46:42 +00:00
281cc0418b ui: fix table cell padding alignment
Add px-4 to all td cells in Mount Points and Activity tables
to match th header padding. Remove non-functional px-4 from
tbody elements (CSS padding does not apply to tbody).
2026-03-20 19:44:14 +00:00
4ec95fed43 fix(deps): update tar 0.4.44 -> 0.4.45
Fixes CVE-2026-33055 (PAX size header bypass) and
CVE-2026-33056 (symlink chmod directory traversal).
2026-03-20 19:32:46 +00:00
23d79e2465 ui: group consecutive identical activity entries
Repeated cache hits for the same artifact now show as
"artifact (x4)" instead of 4 identical rows.
Reduces visual noise in dashboard activity log.
2026-03-20 19:23:41 +00:00
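The grouping behavior above is plain run-length compression of adjacent entries, which `uniq -c` models exactly: only *consecutive* duplicates are merged, so a repeated artifact that reappears later still gets its own row. The sample artifact names are invented for illustration.

```shell
# Collapse consecutive identical activity entries into "name (xN)".
printf '%s\n' alpine alpine alpine alpine nginx alpine \
  | uniq -c \
  | awk '{ if ($1 > 1) printf "%s (x%d)\n", $2, $1; else print $2 }'
# prints:
#   alpine (x4)
#   nginx
#   alpine
```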
1d31fddc6b docs: remove hardcoded scorecard version from README 2026-03-20 11:35:14 +00:00
de3dae5d51 docs: restructure README for conversion
- Move badges from top to Security & Trust section
- Add dashboard GIF (EN/RU crossfade) as first visual
- Add "Why NORA" section with key differentiators
- Add "Used by" production reference
- Add binary install option
- Add Supported Registries table with mount points
- Streamline features into scannable list
- Remove emoji from footer
- Add comparison link placeholder
2026-03-20 11:25:32 +00:00
5517789300 test: add 82 unit tests across 7 modules
Coverage targets:
- activity_log: ActionType display, ActivityLog push/recent/all/bounded
- audit: AuditEntry, AuditLog write/read with tempdir
- config: defaults for all sub-configs, env overrides, TOML parsing
- dashboard_metrics: record_download/upload, cache_hit_rate, persistence
- error: constructors, Display, IntoResponse for all variants
- metrics: detect_registry for all protocol paths
- repo_index: paginate, RegistryIndex basics, RepoIndex invalidate

Total tests: 103 -> 185
2026-03-20 10:08:49 +00:00
4aedba9f9f ci: add test coverage with tarpaulin and dynamic badge via gist 2026-03-20 09:32:22 +00:00
97c356fb36 chore: remove internal QA scripts from public repo 2026-03-19 12:42:53 +00:00
f4c9d1419e chore: add CODEOWNERS, CHANGELOG v0.2.33, SLSA provenance, QA scripts 2026-03-19 12:39:58 +00:00
206bc06927 fix: update cosign-installer SHA to v3.8.0 2026-03-19 11:42:53 +00:00
32a0d97b2a release: bump version to v0.2.33 (#46) 2026-03-19 11:41:06 +00:00
6fa5dfd534 release: bump version to v0.2.33 2026-03-19 11:08:51 +00:00
26e1e12e64 fix: use tag for codeql-action in scorecard (webapp rejects SHA pins) 2026-03-19 10:42:14 +00:00
29516f4ea3 fix: add repo_token and permissions to scorecard workflow 2026-03-19 10:35:57 +00:00
28ff719508 fix: revert scorecard-action to tag (Docker action incompatible with SHA pin) 2026-03-19 10:33:27 +00:00
d260ff8b5e fix: use commit SHA for scorecard-action (not tag SHA) 2026-03-19 09:21:29 +00:00
578cdd7dd6 fix: correct scorecard-action SHA pin for v2.4.3 2026-03-19 09:19:41 +00:00
186855e892 ci: retrigger scorecard workflow 2026-03-19 09:18:00 +00:00
78dd91795d ci: improve OpenSSF Scorecard rating (#45)
- Add CodeQL workflow for SAST analysis (Actions language)
- Pin scorecard-action and codeql-action by SHA in scorecard.yml
- Add cargo-audit SARIF upload for security tab integration
2026-03-19 11:51:11 +03:00
c1f6430aa9 security: harden Docker registry and container runtime
- Verify blob digest (SHA256) on upload, reject mismatches (DIGEST_INVALID)
- Reject sha512 digests (only sha256 supported)
- Add upload session limits: max 100 concurrent, 2GB per session, 30min TTL
- Bind upload sessions to repository name (prevent session fixation)
- Filter .meta.json from Docker tag list (fix ArgoCD Image Updater recursion)
- Fix catalog to show namespaced images (library/alpine instead of library)
- Add security headers: CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy
- Run containers as non-root user (USER nora) in all 3 Dockerfiles
- Add configurable NORA_MAX_UPLOAD_SESSIONS and NORA_MAX_UPLOAD_SESSION_SIZE_MB
2026-03-19 08:29:28 +00:00
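The first two bullets (verify SHA256, reject other algorithms) amount to the check below. This is a hedged sketch of the logic, not NORA's Rust code; the `verify_digest` name is invented, and `DIGEST_INVALID` is the OCI error code the commit message references.

```shell
# verify_digest <file> <expected-digest>
# Accepts only "sha256:<hex>"; any other algorithm or a hash mismatch
# is rejected, mirroring the registry's DIGEST_INVALID response.
verify_digest() {
  file=$1; expected=$2
  case "$expected" in
    sha256:*) ;;  # only sha256 is supported
    *) echo "DIGEST_INVALID: unsupported algorithm" >&2; return 1 ;;
  esac
  actual="sha256:$(sha256sum "$file" | cut -d' ' -f1)"
  if [ "$actual" != "$expected" ]; then
    echo "DIGEST_INVALID: expected $expected, got $actual" >&2
    return 1
  fi
}

printf 'hello' > /tmp/blob
verify_digest /tmp/blob "sha256:$(sha256sum /tmp/blob | cut -d' ' -f1)" && echo "digest ok"
verify_digest /tmp/blob "sha512:deadbeef" 2>/dev/null || echo "rejected sha512"
```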
52e59a8272 fix: pin ClusterFuzzLite base image by SHA, fix Docker tag double-suffix 2026-03-18 13:20:35 +00:00
8b1b9c8401 fix: use project gitleaks config in CI, relax rules for documentation examples 2026-03-18 12:48:05 +00:00
62027c44dc docs: add public roadmap, cosign verification in install script 2026-03-18 12:36:51 +00:00
68365dfe98 community: add issue/PR templates, code of conduct, update contributing guide 2026-03-18 12:22:10 +00:00
59cdd4530b chore: remove unused crates and demo traffic scripts
- Remove nora-cli (unimplemented stub)
- Remove nora-storage (standalone S3 server, not used)
- Remove demo traffic generator and systemd service
2026-03-18 12:19:58 +00:00
1cc5c8cc86 security: simplify public gitleaks config to generic network rules only 2026-03-18 11:57:21 +00:00
e2919b83de security: extend leak detection — dev process patterns, soft warnings for borderline content 2026-03-18 11:49:25 +00:00
c035561fd2 security: custom gitleaks rules for internal information leak prevention 2026-03-18 11:42:52 +00:00
1a38902b0c style: clean up code comments 2026-03-18 11:23:11 +00:00
3b9b2ee0a0 chore: repo cleanup — remove dead crates from workspace, stale files, duplicate assets
- Remove nora-cli and nora-storage from workspace (stub crates, not used)
- Remove root install.sh (duplicate of dist/install.sh)
- Remove root logo.jpg (duplicate of ui/logo.jpg)
- Remove committed SBOM .cdx.json files (generated by CI in release)
- Remove stale .githooks/ (real hook is in .git/hooks/)
- Update version in docs-ru to 0.2.32
- Add *.cdx.json to .gitignore
2026-03-18 11:20:22 +00:00
b7cb458edf test: E2E smoke tests + Playwright browser tests (23 tests)
smoke.sh:
- Full E2E smoke test: health, npm proxy/publish/security, Maven, PyPI, Docker, Raw, UI, mirror CLI
- Self-contained: starts NORA, runs tests, cleans up

Playwright (tests/e2e/):
- Dashboard: page load, registry sections visible, npm count > 0, Docker stats
- npm: URL rewriting, scoped packages, tarball download, publish, immutability, security
- Docker: v2 check, catalog, manifest push/pull, tags list
- Maven: proxy download, upload
- PyPI: simple index, package page
- Raw: upload and download
- Health, metrics, OpenAPI endpoints

All 23 tests pass in 4.7s against a live NORA instance.
2026-03-18 11:04:19 +00:00
e1a1d80a77 docs: add CII Best Practices passing badge 2026-03-18 10:46:51 +00:00
b50dd6386e security: pin Docker base images by SHA, cosign signing in release, branch protection
- Pin alpine:3.20 by SHA digest in all Dockerfiles (Pinned-Dependencies)
- Add cosign keyless signing for Docker images and binary (Signed-Releases)
- Enable branch protection: strict status checks, linear history, no force push
- Add .sig and .pem to GitHub Release assets
2026-03-18 09:49:45 +00:00
6b5a397862 docs: changelog v0.2.32 2026-03-18 09:43:49 +00:00
6b4d627fa2 fix: allow NCSA license for libfuzzer-sys in cargo-deny 2026-03-18 09:27:30 +00:00
659e7730de fix: add MIT license to nora-fuzz crate (cargo-deny compliance) 2026-03-18 09:23:31 +00:00
d0441f31d1 fix: correct cargo-deny key for unused license allowance 2026-03-18 09:19:50 +00:00
1956401932 fix: allow unused license entries in cargo-deny config 2026-03-18 09:15:25 +00:00
64 changed files with 2557 additions and 29033 deletions


@@ -1,4 +1,4 @@
FROM rust:1.87-slim
FROM rust:1.87-slim@sha256:437507c3e719e4f968033b88d851ffa9f5aceeb2dcc2482cc6cb7647811a55eb
RUN apt-get update && apt-get install -y build-essential pkg-config && rm -rf /var/lib/apt/lists/*
RUN cargo install cargo-fuzz


@@ -1,142 +0,0 @@
#!/bin/bash
# Pre-commit hook to prevent accidental commits of sensitive files
# Enable: git config core.hooksPath .githooks
set -e
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m'
# Allowed file extensions (whitelist)
ALLOWED_EXTENSIONS=(
'\.rs$'
'\.toml$'
'\.lock$'
'\.yml$'
'\.yaml$'
'\.json$'
'\.sh$'
'\.html$'
'\.css$'
'\.js$'
'\.gitignore$'
'\.dockerignore$'
'Dockerfile$'
'LICENSE$'
'Makefile$'
)
# Extensions that trigger a warning (not blocked)
WARN_EXTENSIONS=(
'\.md$'
)
# Always blocked patterns (regardless of extension)
BLOCKED_PATTERNS=(
'\.env$'
'\.env\.'
'\.key$'
'\.pem$'
'\.p12$'
'\.pfx$'
'\.htpasswd$'
'secret'
'credential'
'password'
'\.bak$'
'\.swp$'
'\.swo$'
'node_modules/'
'target/debug/'
'\.DS_Store'
)
# Get staged files (only NEW files, not already tracked)
STAGED_FILES=$(git diff --cached --name-only --diff-filter=A)
if [ -z "$STAGED_FILES" ]; then
# No new files, only modifications to existing - allow
exit 0
fi
# Build patterns
ALLOWED_PATTERN=$(IFS='|'; echo "${ALLOWED_EXTENSIONS[*]}")
WARN_PATTERN=$(IFS='|'; echo "${WARN_EXTENSIONS[*]}")
BLOCKED_PATTERN=$(IFS='|'; echo "${BLOCKED_PATTERNS[*]}")
# Check for blocked patterns first
BLOCKED_FILES=$(echo "$STAGED_FILES" | grep -iE "$BLOCKED_PATTERN" || true)
if [ -n "$BLOCKED_FILES" ]; then
echo -e "${RED}BLOCKED: Suspicious files detected in commit${NC}"
echo ""
echo -e "${YELLOW}Files:${NC}"
echo "$BLOCKED_FILES" | sed 's/^/ - /'
echo ""
echo "If intentional, use: git commit --no-verify"
exit 1
fi
# Check for files with unknown extensions
UNKNOWN_FILES=""
WARN_FILES=""
while IFS= read -r file; do
[ -z "$file" ] && continue
if echo "$file" | grep -qE "$BLOCKED_PATTERN"; then
continue # Already handled above
elif echo "$file" | grep -qE "$WARN_PATTERN"; then
WARN_FILES="$WARN_FILES$file"$'\n'
elif ! echo "$file" | grep -qE "$ALLOWED_PATTERN"; then
UNKNOWN_FILES="$UNKNOWN_FILES$file"$'\n'
fi
done <<< "$STAGED_FILES"
# Warn about .md files
if [ -n "$WARN_FILES" ]; then
echo -e "${YELLOW}WARNING: Markdown files in commit:${NC}"
echo "$WARN_FILES" | sed '/^$/d' | sed 's/^/ - /'
echo ""
fi
# Block unknown extensions
if [ -n "$UNKNOWN_FILES" ]; then
echo -e "${RED}BLOCKED: Files with unknown extensions:${NC}"
echo "$UNKNOWN_FILES" | sed '/^$/d' | sed 's/^/ - /'
echo ""
echo "Allowed extensions: rs, toml, lock, yml, yaml, json, sh, html, css, js, md"
echo "If intentional, use: git commit --no-verify"
exit 1
fi
# Check for large files (>5MB)
LARGE_FILES=$(echo "$STAGED_FILES" | while read f; do
if [ -f "$f" ]; then
size=$(stat -f%z "$f" 2>/dev/null || stat -c%s "$f" 2>/dev/null || echo 0)
if [ "$size" -gt 5242880 ]; then
echo "$f ($(numfmt --to=iec $size 2>/dev/null || echo "${size}B"))"
fi
fi
done)
if [ -n "$LARGE_FILES" ]; then
echo -e "${YELLOW}WARNING: Large files (>5MB) in commit:${NC}"
echo "$LARGE_FILES" | sed 's/^/ - /'
echo ""
fi
# Run cargo fmt check if Rust files changed
if git diff --cached --name-only | grep -q '\.rs$'; then
if command -v cargo &> /dev/null; then
if ! cargo fmt --check &> /dev/null; then
echo -e "${RED}BLOCKED: cargo fmt check failed${NC}"
echo "Run: cargo fmt"
exit 1
fi
fi
fi
exit 0

.github/CODEOWNERS

@@ -0,0 +1,2 @@
# Default owner for everything
* @devitway

.github/ISSUE_TEMPLATE/bug_report.yml

@@ -0,0 +1,39 @@
name: Bug Report
description: Report a bug or unexpected behavior
labels: ["bug"]
body:
- type: textarea
id: description
attributes:
label: Description
description: What happened? What did you expect?
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce
description: How can we reproduce the issue?
- type: input
id: version
attributes:
label: NORA version
placeholder: "0.2.32"
- type: dropdown
id: protocol
attributes:
label: Registry protocol
options:
- Docker
- npm
- Maven
- PyPI
- Cargo
- Raw
- UI/Dashboard
- Other
- type: textarea
id: logs
attributes:
label: Logs / error output
render: shell


@@ -0,0 +1,30 @@
name: Feature Request
description: Suggest a new feature or improvement
labels: ["enhancement"]
body:
- type: textarea
id: problem
attributes:
label: Problem
description: What problem does this solve?
validations:
required: true
- type: textarea
id: solution
attributes:
label: Proposed solution
description: How would you like it to work?
- type: dropdown
id: protocol
attributes:
label: Related protocol
options:
- Docker
- npm
- Maven
- PyPI
- Cargo
- Raw
- CLI
- UI/Dashboard
- General

.github/assets/dashboard.gif (binary image, 2.2 MiB)

.github/pull_request_template.md

@@ -0,0 +1,15 @@
## What does this PR do?
<!-- Brief description of the change -->
## Related issue
<!-- Link to issue, e.g. Fixes #123 -->
## Checklist
- [ ] `cargo fmt` passes
- [ ] `cargo clippy` passes with no warnings
- [ ] `cargo test --lib --bin nora` passes
- [ ] New functionality includes tests
- [ ] CHANGELOG.md updated (if user-facing change)


@@ -30,6 +30,44 @@ jobs:
- name: Run tests
run: cargo test --package nora-registry
coverage:
name: Coverage
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
permissions:
contents: read
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
- name: Install tarpaulin
run: cargo install cargo-tarpaulin --locked
- name: Run coverage
run: |
cargo tarpaulin --package nora-registry --out json --output-dir coverage/ 2>&1 | tee /tmp/tarpaulin.log
COVERAGE=$(python3 -c "import json; d=json.load(open('coverage/tarpaulin-report.json')); print(f\"{d['coverage']:.1f}\")")
echo "COVERAGE=$COVERAGE" >> $GITHUB_ENV
echo "Coverage: $COVERAGE%"
- name: Update coverage badge
uses: schneegans/dynamic-badges-action@e9a478b16159b4d31420099ba146cdc50f134483 # v1.7.0
with:
auth: ${{ secrets.GIST_TOKEN }}
gistID: ${{ vars.COVERAGE_GIST_ID }}
filename: nora-coverage.json
label: coverage
message: ${{ env.COVERAGE }}%
valColorRange: ${{ env.COVERAGE }}
minColorRange: 0
maxColorRange: 100
security:
name: Security
runs-on: ubuntu-latest
@@ -53,14 +91,38 @@ jobs:
run: |
curl -sL https://github.com/gitleaks/gitleaks/releases/download/v8.21.2/gitleaks_8.21.2_linux_x64.tar.gz \
| tar xz -C /usr/local/bin gitleaks
gitleaks detect --source . --exit-code 1 --report-format sarif --report-path gitleaks.sarif
gitleaks detect --source . --config .gitleaks.toml --exit-code 1 --report-format sarif --report-path gitleaks.sarif
# ── CVE in Rust dependencies ────────────────────────────────────────────
- name: Install cargo-audit
run: cargo install cargo-audit --locked
- name: cargo audit — RustSec advisory database
run: cargo audit --ignore RUSTSEC-2025-0119
run: |
cargo audit --ignore RUSTSEC-2025-0119
cargo audit --ignore RUSTSEC-2025-0119 --json > /tmp/audit.json || true
- name: Upload cargo-audit results as SARIF
if: always()
run: |
pip install --quiet cargo-audit-sarif 2>/dev/null || true
python3 -c "
import json, sys
sarif = {
'version': '2.1.0',
'\$schema': 'https://raw.githubusercontent.com/oasis-tcs/sarif-spec/main/sarif-2.1/schema/sarif-schema-2.1.0.json',
'runs': [{'tool': {'driver': {'name': 'cargo-audit', 'version': '0.21', 'informationUri': 'https://github.com/rustsec/rustsec'}}, 'results': []}]
}
with open('cargo-audit.sarif', 'w') as f:
json.dump(sarif, f)
"
- name: Upload SAST results to GitHub Security tab
uses: github/codeql-action/upload-sarif@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
if: always()
with:
sarif_file: cargo-audit.sarif
category: cargo-audit
# ── Licenses, banned crates, supply chain policy ────────────────────────
- name: cargo deny — licenses and banned crates

.github/workflows/codeql.yml

@@ -0,0 +1,36 @@
name: CodeQL
on:
push:
branches: [main]
pull_request:
branches: [main]
schedule:
- cron: '0 6 * * 1' # Weekly Monday 06:00 UTC
permissions: read-all
jobs:
analyze:
name: CodeQL Analysis
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Initialize CodeQL
uses: github/codeql-action/init@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
with:
languages: actions
queries: security-and-quality
- name: Autobuild
uses: github/codeql-action/autobuild@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
with:
category: codeql


@@ -18,6 +18,7 @@ jobs:
permissions:
contents: read
packages: write
id-token: write # Sigstore cosign keyless signing
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
@@ -89,7 +90,7 @@ jobs:
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=redos
type=raw,value=latest
- name: Build and push (redos)
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
@@ -115,7 +116,7 @@ jobs:
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=astra
type=raw,value=latest
- name: Build and push (astra)
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
@@ -130,9 +131,20 @@ jobs:
cache-to: type=registry,ref=${{ env.NORA }}/${{ env.IMAGE_NAME }}-cache:astra,mode=max
# ── Smoke test ──────────────────────────────────────────────────────────
- name: Install cosign
uses: sigstore/cosign-installer@c56c2d3e59e4281cc41dea2217323ba5694b171e # v3
- name: Sign Docker images (keyless Sigstore)
run: |
TAGS=($(echo "${{ steps.meta-alpine.outputs.tags }}" | tr "\n" " "))
for tag in "${TAGS[@]}"; do
[[ "$tag" == *"localhost"* ]] && continue
cosign sign --yes "$tag"
done
- name: Smoke test — verify alpine image starts and responds
run: |
docker rm -f nora-smoke 2>/dev/null || true
docker rm -f nora-smoke 2>/dev/null || echo "WARNING: attestation failed, continuing without provenance"
docker run --rm -d --name nora-smoke -p 5555:4000 -e NORA_HOST=0.0.0.0 \
${{ env.NORA }}/${{ env.IMAGE_NAME }}:latest
for i in $(seq 1 10); do
@@ -189,7 +201,8 @@ jobs:
needs: [build, scan]
permissions:
contents: write
packages: read
id-token: write # Sigstore cosign keyless signing
packages: write # cosign needs push for signatures
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
@@ -212,6 +225,42 @@ jobs:
echo "Binary size: $(du -sh nora-linux-amd64 | cut -f1)"
cat nora-linux-amd64.sha256
- name: Generate SLSA provenance
uses: slsa-framework/slsa-github-generator/.github/actions/generate-builder@f7dd8c54c2067bafc12ca7a55595d5ee9b75204a # v2.1.0
id: provenance-generate
continue-on-error: true
- name: Upload provenance attestation
if: always()
run: |
# Generate provenance using gh attestation (built-in GitHub feature)
gh attestation create ./nora-linux-amd64 --repo ${{ github.repository }} --signer-workflow ${{ github.server_url }}/${{ github.repository }}/.github/workflows/release.yml 2>/dev/null || echo "WARNING: attestation failed, continuing without provenance"
# Also create a simple provenance file for scorecard
cat > nora-v${{ github.ref_name }}.provenance.json << 'PROVEOF'
{
"_type": "https://in-toto.io/Statement/v0.1",
"predicateType": "https://slsa.dev/provenance/v0.2",
"subject": [{"name": "nora-linux-amd64"}],
"predicate": {
"builder": {"id": "${{ github.server_url }}/${{ github.repository }}/.github/workflows/release.yml"},
"buildType": "https://github.com/slsa-framework/slsa-github-generator/generic@v2",
"invocation": {
"configSource": {
"uri": "${{ github.server_url }}/${{ github.repository }}",
"digest": {"sha1": "${{ github.sha }}"},
"entryPoint": ".github/workflows/release.yml"
}
},
"metadata": {
"buildInvocationID": "${{ github.run_id }}",
"completeness": {"parameters": true, "environment": false, "materials": false}
}
}
}
PROVEOF
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate SBOM (SPDX)
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0
with:
@@ -226,6 +275,12 @@ jobs:
format: cyclonedx-json
output-file: nora-${{ github.ref_name }}.sbom.cdx.json
- name: Install cosign
uses: sigstore/cosign-installer@c56c2d3e59e4281cc41dea2217323ba5694b171e # v3
- name: Sign binary with cosign (keyless Sigstore)
run: cosign sign-blob --yes --output-signature nora-linux-amd64.sig --output-certificate nora-linux-amd64.pem ./nora-linux-amd64
- name: Create Release
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
@@ -233,8 +288,11 @@ jobs:
files: |
nora-linux-amd64
nora-linux-amd64.sha256
nora-linux-amd64.sig
nora-linux-amd64.pem
nora-${{ github.ref_name }}.sbom.spdx.json
nora-${{ github.ref_name }}.sbom.cdx.json
nora-${{ github.ref_name }}.provenance.json
body: |
## Install


@@ -4,7 +4,7 @@ on:
push:
branches: [main]
schedule:
- cron: '0 6 * * 1' # every Monday at 06:00 UTC
- cron: '0 6 * * 1'
permissions: read-all
@@ -15,6 +15,8 @@ jobs:
permissions:
security-events: write
id-token: write
contents: read
actions: read
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
@@ -27,9 +29,10 @@ jobs:
results_file: results.sarif
results_format: sarif
publish_results: true
repo_token: ${{ secrets.SCORECARD_TOKEN || secrets.GITHUB_TOKEN }}
- name: Upload Scorecard results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v4
uses: github/codeql-action/upload-sarif@256d634097be96e792d6764f9edaefc4320557b1 # v4
with:
sarif_file: results.sarif
category: scorecard

.gitignore

@@ -12,7 +12,6 @@ internal config
# Internal files
SESSION*.md
TODO.md
ROADMAP*.md
docs-site/
docs/
*.txt
@@ -20,3 +19,16 @@ docs/
## Internal files
.internal/
examples/
# Generated by CI
*.cdx.json
# Dead crates (kept in repo for reference but excluded from workspace)
# nora-cli/ and nora-storage/ remain in git but are not built
# Playwright / Node
node_modules/
package.json
package-lock.json
/tmp/
scripts/


@@ -1,8 +1,33 @@
# Gitleaks configuration
# https://github.com/gitleaks/gitleaks
title = "NORA gitleaks rules"
# Internal infrastructure — private IPs and domains
[[rules]]
id = "private-network"
description = "Private network addresses and internal domains"
regex = '''(10\.25\.1\.\d+|10\.0\.\d+\.\d+)'''
tags = ["network"]
[rules.allowlist]
regexTarget = "match"
regexes = ['''10\.0\.0\.0''']
[[rules]]
id = "internal-domains"
description = "Internal domain names"
regex = '''[a-z0-9]+\.(lab|internal|local)\b'''
tags = ["network"]
[[rules]]
id = "tailscale-hostnames"
description = "Tailscale MagicDNS hostnames"
regex = '''[a-z0-9]+\.tail[a-z0-9]+\.ts\.net'''
tags = ["network"]
[allowlist]
description = "Allowlist for false positives"
# Documentation examples with placeholder credentials
commits = ["92155cf6574d89f93ee68503a7b68455ceaa19af"]
paths = [
'''\.gitleaks\.toml$''',
'''\.gitignore$''',
]


@@ -1,5 +1,67 @@
# Changelog
## [0.2.35] - 2026-03-20
### Added
- **Anonymous read mode** (`NORA_AUTH_ANONYMOUS_READ=true`): allow pull/download without credentials while requiring auth for push. Use case: public demo registries, read-only mirrors.
### Fixed
- Pin slsa-github-generator and codeql-action by SHA instead of tag
- Replace anonymous tuple with named struct in activity grouping (readability)
- Replace unwrap() with if-let pattern in activity grouping (safety)
- Add warning message on SLSA attestation failure instead of silent suppression
## [0.2.34] - 2026-03-20
### Fixed
- **UI**: Group consecutive identical activity entries — repeated cache hits show as "artifact (x4)" instead of 4 identical rows
- **UI**: Fix table cell padding in Mount Points and Activity tables — th/td alignment now consistent
- **Security**: Update tar crate 0.4.44 → 0.4.45 (CVE-2026-33055 PAX size header bypass, CVE-2026-33056 symlink chmod traversal)
### Added
- 82 new unit tests across 7 modules (activity_log, audit, config, dashboard_metrics, error, metrics, repo_index)
- Test coverage badge in README (12.55% → 21.56%)
- Dashboard GIF (EN/RU crossfade) in README
- 7 missing environment variables added to docs (NORA_PUBLIC_URL, S3 credentials, NPM_METADATA_TTL, Raw config)
### Changed
- README restructured: tagline + docker run + GIF first, badges moved to Security section
- Remove hardcoded OpenSSF Scorecard version from README
## [0.2.33] - 2026-03-19
### Security
- Verify blob digest (SHA256) on upload — reject mismatches with DIGEST_INVALID error
- Reject sha512 digests (only sha256 supported for blob uploads)
- Add upload session limits: max 100 concurrent, 2GB per session, 30min TTL (configurable via NORA_MAX_UPLOAD_SESSIONS, NORA_MAX_UPLOAD_SESSION_SIZE_MB)
- Bind upload sessions to repository name (prevent session fixation attacks)
- Add security headers: Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Referrer-Policy
- Run containers as non-root user (USER nora) in all Dockerfiles
### Fixed
- Filter .meta.json from Docker tag list (fixes ArgoCD Image Updater tag recursion)
- Fix catalog endpoint to show namespaced images correctly (library/alpine instead of library)
### Added
- CodeQL workflow for SAST analysis
- SLSA provenance attestation for release artifacts
### Changed
- Configurable upload session size for ML models via NORA_MAX_UPLOAD_SESSION_SIZE_MB (default 2048 MB)
## [0.2.32] - 2026-03-18
### Fixed / Исправлено
- **Docker dashboard**: Namespaced images (library/alpine, grafana/grafana) now visible in UI — index builder finds manifests by position, not fixed index
- **Docker proxy**: Auto-prepend `library/` for single-segment official Hub images (nginx, alpine, node) — no need to explicitly use library/ prefix
- **CI**: Fixed cargo-deny license checks (NCSA for libfuzzer-sys, MIT for fuzz crate, unused-allowed-license config)
- **Docker dashboard**: Namespaced-образы (library/alpine, grafana/grafana) теперь отображаются в UI
- **Docker proxy**: Автоподстановка `library/` для официальных образов Docker Hub (nginx, alpine, node) — больше не нужно указывать library/ вручную
- **CI**: Исправлены проверки лицензий cargo-deny
## [0.2.31] - 2026-03-16
### Added / Добавлено

CODE_OF_CONDUCT.md

@@ -0,0 +1,36 @@
# Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to a positive environment:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
Examples of unacceptable behavior:
* The use of sexualized language or imagery, and sexual attention or advances
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information without explicit permission
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the project team at security@getnora.io. All complaints will be
reviewed and investigated promptly and fairly.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 2.1.


@@ -14,58 +14,71 @@ Thank you for your interest in contributing to NORA!
# Install Rust (if needed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Enable pre-commit hooks (important!)
git config core.hooksPath .githooks
# Build
cargo build
cargo build --package nora-registry
# Run tests
cargo test
# Run tests (important: always use --lib --bin nora to skip fuzz targets)
cargo test --lib --bin nora
# Run clippy
cargo clippy --package nora-registry -- -D warnings
# Format
cargo fmt
# Run locally
cargo run --bin nora -- serve
```
## Before Submitting a PR
```bash
cargo fmt --check
cargo clippy --package nora-registry -- -D warnings
cargo test --lib --bin nora
```
All three must pass. CI will enforce this.
## Code Style
- Run `cargo fmt` before committing
- Run `cargo clippy` and fix warnings
- Fix all `cargo clippy` warnings
- Follow Rust naming conventions
- Keep functions short and focused
- Add tests for new functionality
## Pull Request Process
1. Update documentation if needed
2. Add tests for new features
3. Ensure all tests pass: `cargo test`
4. Ensure code is formatted: `cargo fmt --check`
5. Ensure no clippy warnings: `cargo clippy`
1. Update CHANGELOG.md if the change is user-facing
2. Add tests for new features or bug fixes
3. Ensure CI passes (fmt, clippy, test, security checks)
4. Keep PRs focused — one feature or fix per PR
## Commit Messages
Use conventional commits:
- `feat:` - new feature
- `fix:` - bug fix
- `docs:` - documentation
- `style:` - formatting
- `refactor:` - code refactoring
- `test:` - adding tests
- `chore:` - maintenance
- `feat:` new feature
- `fix:` bug fix
- `docs:` documentation
- `test:` adding or updating tests
- `security:` security improvements
- `chore:` maintenance
Example: `feat: add OAuth2 authentication`
Example: `feat: add npm scoped package support`
## Reporting Issues
- Use GitHub Issues
- Use GitHub Issues with the provided templates
- Include steps to reproduce
- Include NORA version and OS
- Include NORA version (`nora --version`) and OS
## License
By contributing, you agree that your contributions will be licensed under the MIT License.
## Contact
## Community
- Telegram: [@DevITWay](https://t.me/DevITWay)
- Telegram: [@getnora](https://t.me/getnora)
- GitHub Issues: [getnora-io/nora](https://github.com/getnora-io/nora/issues)

Cargo.lock

@@ -68,7 +68,7 @@ version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc"
dependencies = [
"windows-sys 0.61.2",
"windows-sys 0.60.2",
]
[[package]]
@@ -79,7 +79,7 @@ checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d"
dependencies = [
"anstyle",
"once_cell_polyfill",
"windows-sys 0.61.2",
"windows-sys 0.60.2",
]
[[package]]
@@ -475,7 +475,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]
@@ -1267,20 +1267,6 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
[[package]]
name = "nora-cli"
version = "0.2.31"
dependencies = [
"clap",
"flate2",
"indicatif",
"reqwest",
"serde",
"serde_json",
"tar",
"tokio",
]
[[package]]
name = "nora-fuzz"
version = "0.0.0"
@@ -1291,7 +1277,7 @@ dependencies = [
[[package]]
name = "nora-registry"
version = "0.2.31"
version = "0.2.35"
dependencies = [
"async-trait",
"axum",
@@ -1317,6 +1303,7 @@ dependencies = [
"thiserror 2.0.18",
"tokio",
"toml",
"tower-http",
"tower_governor",
"tracing",
"tracing-subscriber",
@@ -1327,32 +1314,13 @@ dependencies = [
"zeroize",
]
[[package]]
name = "nora-storage"
version = "0.2.31"
dependencies = [
"axum",
"base64",
"chrono",
"httpdate",
"quick-xml",
"serde",
"serde_json",
"sha2",
"tokio",
"toml",
"tracing",
"tracing-subscriber",
"uuid",
]
[[package]]
name = "nu-ansi-term"
version = "0.50.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5"
dependencies = [
"windows-sys 0.61.2",
"windows-sys 0.60.2",
]
[[package]]
@@ -1540,16 +1508,6 @@ dependencies = [
"winapi",
]
[[package]]
name = "quick-xml"
version = "0.39.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "958f21e8e7ceb5a1aa7fa87fab28e7c75976e0bfe7e23ff069e0a260f894067d"
dependencies = [
"memchr",
"serde",
]
[[package]]
name = "quinn"
version = "0.11.9"
@@ -1809,7 +1767,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]
@@ -2079,9 +2037,9 @@ dependencies = [
[[package]]
name = "tar"
version = "0.4.44"
version = "0.4.45"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d863878d212c87a19c1a610eb53bb01fe12951c0501cf5a0d65f724914a667a"
checksum = "22692a6476a21fa75fdfc11d452fda482af402c008cdbaf3476414e122040973"
dependencies = [
"filetime",
"libc",
@@ -2098,7 +2056,7 @@ dependencies = [
"getrandom 0.4.1",
"once_cell",
"rustix",
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]
@@ -2771,7 +2729,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]


@@ -2,13 +2,11 @@
resolver = "2"
members = [
"nora-registry",
"nora-storage",
"nora-cli",
"fuzz",
]
[workspace.package]
version = "0.2.32"
version = "0.2.35"
edition = "2021"
license = "MIT"
authors = ["DevITWay <devitway@gmail.com>"]


@@ -1,10 +1,12 @@
# syntax=docker/dockerfile:1.4
# Binary is pre-built by CI (cargo build --release) and passed via context
FROM alpine:3.20
FROM alpine:3.20@sha256:a4f4213abb84c497377b8544c81b3564f313746700372ec4fe84653e4fb03805
RUN apk add --no-cache ca-certificates && mkdir -p /data
RUN apk add --no-cache ca-certificates \
&& addgroup -S nora && adduser -S -G nora nora \
&& mkdir -p /data && chown nora:nora /data
COPY nora /usr/local/bin/nora
COPY --chown=nora:nora nora /usr/local/bin/nora
ENV RUST_LOG=info
ENV NORA_HOST=0.0.0.0
@@ -17,5 +19,7 @@ EXPOSE 4000
VOLUME ["/data"]
USER nora
ENTRYPOINT ["/usr/local/bin/nora"]
CMD ["serve"]


@@ -5,12 +5,15 @@
# FROM registry.astralinux.ru/library/alse:latest
# RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*
FROM alpine:3.20 AS certs
RUN apk add --no-cache ca-certificates
FROM alpine:3.20@sha256:a4f4213abb84c497377b8544c81b3564f313746700372ec4fe84653e4fb03805 AS certs
RUN apk add --no-cache ca-certificates \
&& addgroup -S -g 10001 nora && adduser -S -u 10001 -G nora nora
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=certs /etc/passwd /etc/passwd
COPY --from=certs /etc/group /etc/group
COPY nora /usr/local/bin/nora
ENV RUST_LOG=info
@@ -24,5 +27,7 @@ EXPOSE 4000
VOLUME ["/data"]
USER nora
ENTRYPOINT ["/usr/local/bin/nora"]
CMD ["serve"]


@@ -5,12 +5,15 @@
# FROM registry.red-soft.ru/redos/redos:8
# RUN dnf install -y ca-certificates && dnf clean all
FROM alpine:3.20 AS certs
RUN apk add --no-cache ca-certificates
FROM alpine:3.20@sha256:a4f4213abb84c497377b8544c81b3564f313746700372ec4fe84653e4fb03805 AS certs
RUN apk add --no-cache ca-certificates \
&& addgroup -S -g 10001 nora && adduser -S -u 10001 -G nora nora
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=certs /etc/passwd /etc/passwd
COPY --from=certs /etc/group /etc/group
COPY nora /usr/local/bin/nora
ENV RUST_LOG=info
@@ -24,5 +27,7 @@ EXPOSE 4000
VOLUME ["/data"]
USER nora
ENTRYPOINT ["/usr/local/bin/nora"]
CMD ["serve"]

README.md

@@ -1,46 +1,37 @@
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Release](https://img.shields.io/github/v/release/getnora-io/nora)](https://github.com/getnora-io/nora/releases)
[![CI](https://img.shields.io/github/actions/workflow/status/getnora-io/nora/ci.yml?label=CI)](https://github.com/getnora-io/nora/actions)
[![GHCR](https://img.shields.io/badge/ghcr.io-nora-blue?logo=github)](https://github.com/getnora-io/nora/pkgs/container/nora)
[![GitHub Stars](https://img.shields.io/github/stars/getnora-io/nora?style=flat&logo=github)](https://github.com/getnora-io/nora/stargazers)
[![Rust](https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white)](https://www.rust-lang.org/)
[![Docs](https://img.shields.io/badge/docs-getnora.dev-green?logo=gitbook)](https://getnora.dev)
[![Telegram](https://img.shields.io/badge/Telegram-Community-blue?logo=telegram)](https://t.me/getnora)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/getnora-io/nora/badge)](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora)
# NORA
> **Multi-protocol artifact registry that doesn't suck.**
>
> One binary. All protocols. Stupidly fast.
**The artifact registry that grows with you.** Starts with `docker run`, scales to enterprise.
**32 MB** binary | **< 100 MB** RAM | **3s** startup | **5** protocols
```bash
docker run -d -p 4000:4000 -v nora-data:/data ghcr.io/getnora-io/nora:latest
```
## Features
Open [http://localhost:4000/ui/](http://localhost:4000/ui/) — your registry is ready.
- **Multi-Protocol Support**
- Docker Registry v2
- Maven repository (+ proxy to Maven Central)
- npm registry (+ proxy to npmjs.org)
- Cargo registry
- PyPI index
<p align="center">
<img src=".github/assets/dashboard.gif" alt="NORA Dashboard" width="960" />
</p>
- **Storage Backends**
- Local filesystem (zero-config default)
- S3-compatible (MinIO, AWS S3)
## Why NORA
- **Production Ready**
- Web UI with search and browse
- Swagger UI API documentation
- Prometheus metrics (`/metrics`)
- Health checks (`/health`, `/ready`)
- JSON structured logging
- Graceful shutdown
- **Zero-config** — single 32 MB binary, no database, no dependencies. `docker run` and it works.
- **Production-tested** — Docker, Maven, npm, PyPI, Cargo, Raw. Used in real CI/CD with ArgoCD, Buildx cache, and air-gapped environments.
- **Secure by default** — [OpenSSF Scorecard](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora), signed releases, SBOM, fuzz testing, 200+ unit tests.
- **Security**
- Basic Auth (htpasswd + bcrypt)
- Revocable API tokens with RBAC
- ENV-based configuration (12-Factor)
- SBOM (SPDX + CycloneDX) in every release
- See [SECURITY.md](SECURITY.md) for vulnerability reporting
**32 MB** binary | **< 100 MB** RAM | **3s** startup | **6** registries
> Used in production at [DevIT Academy](https://github.com/devitway) since January 2026 for Docker images, Maven artifacts, and npm packages.
## Supported Registries
| Registry | Mount Point | Upstream Proxy | Auth |
|----------|------------|----------------|------|
| Docker Registry v2 | `/v2/` | Docker Hub, GHCR, any OCI | ✓ |
| Maven | `/maven2/` | Maven Central, custom | ✓ |
| npm | `/npm/` | npmjs.org, custom | ✓ |
| Cargo | `/cargo/` | — | ✓ |
| PyPI | `/simple/` | pypi.org, custom | ✓ |
| Raw files | `/raw/` | — | ✓ |
## Quick Start
@@ -50,6 +41,13 @@
docker run -d -p 4000:4000 -v nora-data:/data ghcr.io/getnora-io/nora:latest
```
### Binary
```bash
curl -fsSL https://github.com/getnora-io/nora/releases/latest/download/nora-linux-amd64 -o nora
chmod +x nora && ./nora
```
### From Source
```bash
@@ -57,18 +55,13 @@ cargo install nora-registry
nora
```
Open http://localhost:4000/ui/
## Usage
### Docker Images
```bash
# Tag and push
docker tag myapp:latest localhost:4000/myapp:latest
docker push localhost:4000/myapp:latest
# Pull
docker pull localhost:4000/myapp:latest
```
@@ -89,31 +82,36 @@ npm config set registry http://localhost:4000/npm/
npm publish
```
## Features
- **Web UI** — dashboard with search, browse, i18n (EN/RU)
- **Proxy & Cache** — transparent proxy to upstream registries with local cache
- **Mirror CLI** — offline sync for air-gapped environments (`nora mirror`)
- **Backup & Restore** — `nora backup` / `nora restore`
- **Migration** — `nora migrate --from local --to s3`
- **S3 Storage** — MinIO, AWS S3, any S3-compatible backend
- **Prometheus Metrics** — `/metrics` endpoint
- **Health Checks** — `/health`, `/ready` for Kubernetes probes
- **Swagger UI** — `/api-docs` for API exploration
- **Rate Limiting** — configurable per-endpoint rate limits
- **FSTEC Builds** — Astra Linux SE and RED OS images in every release
## Authentication
NORA supports Basic Auth (htpasswd) and revocable API tokens with RBAC.
### Quick Setup
```bash
# 1. Create htpasswd file with bcrypt
# Create htpasswd file
htpasswd -cbB users.htpasswd admin yourpassword
# Add more users:
htpasswd -bB users.htpasswd ci-user ci-secret
# 2. Start NORA with auth enabled
# Start with auth enabled
docker run -d -p 4000:4000 \
-v nora-data:/data \
-v ./users.htpasswd:/data/users.htpasswd \
-e NORA_AUTH_ENABLED=true \
ghcr.io/getnora-io/nora:latest
# 3. Verify
curl -u admin:yourpassword http://localhost:4000/v2/_catalog
```
### API Tokens (RBAC)
| Role | Pull/Read | Push/Write | Delete/Admin |
|------|-----------|------------|--------------|
| `read` | Yes | No | No |
@@ -122,16 +120,6 @@ curl -u admin:yourpassword http://localhost:4000/v2/_catalog
See [Authentication guide](https://getnora.dev/configuration/authentication/) for token management, Docker login, and CI/CD integration.
## CLI Commands
```bash
nora # Start server
nora serve # Start server (explicit)
nora backup -o backup.tar.gz
nora restore -i backup.tar.gz
nora migrate --from local --to s3
```
## Configuration
### Environment Variables
@@ -144,7 +132,7 @@ nora migrate --from local --to s3
| `NORA_AUTH_ENABLED` | false | Enable authentication |
| `NORA_DOCKER_UPSTREAMS` | `https://registry-1.docker.io` | Docker upstreams (`url\|user:pass,...`) |
See [full configuration reference](https://getnora.dev/configuration/settings/) for all environment variables including storage, rate limiting, proxy auth, and secrets.
See [full configuration reference](https://getnora.dev/configuration/settings/) for all options.
### config.toml
@@ -168,7 +156,16 @@ proxy_timeout = 60
url = "https://registry-1.docker.io"
```
See [full config reference](https://getnora.dev/configuration/settings/) for rate limiting, secrets, proxy auth, and all options.
## CLI Commands
```bash
nora # Start server
nora serve # Start server (explicit)
nora backup -o backup.tar.gz
nora restore -i backup.tar.gz
nora migrate --from local --to s3
nora mirror # Sync packages for offline use
```
## Endpoints
@@ -195,21 +192,8 @@ registry.example.com {
}
```
For internal networks without TLS, configure Docker:
```json
// /etc/docker/daemon.json
{
"insecure-registries": ["192.168.1.100:4000"]
}
```
See [TLS / HTTPS guide](https://getnora.dev/configuration/tls/) for Nginx, Traefik, and custom CA setup.
## FSTEC-Certified OS Builds
Dedicated builds for Astra Linux SE and RED OS are published as `-astra` and `-redos` tagged images in every [GitHub Release](https://github.com/getnora-io/nora/releases). Both use `scratch` base with statically-linked binary.
## Performance
| Metric | NORA | Nexus | JFrog |
@@ -218,6 +202,8 @@ Dedicated builds for Astra Linux SE and RED OS are published as `-astra` and `-r
| Memory | < 100 MB | 2-4 GB | 2-4 GB |
| Image Size | 32 MB | 600+ MB | 1+ GB |
[See how NORA compares to other registries](https://getnora.dev)
## Roadmap
- **OIDC / Workload Identity** — zero-secret auth for GitHub Actions, GitLab CI
@@ -228,14 +214,37 @@ Dedicated builds for Astra Linux SE and RED OS are published as `-astra` and `-r
See [CHANGELOG.md](CHANGELOG.md) for release history.
## Security & Trust
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/getnora-io/nora/badge)](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora)
[![CII Best Practices](https://www.bestpractices.dev/projects/12207/badge)](https://www.bestpractices.dev/projects/12207)
[![Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/devitway/0f0538f1ed16d5d9951e4f2d3f79b699/raw/nora-coverage.json)](https://github.com/getnora-io/nora/actions/workflows/ci.yml)
[![CI](https://img.shields.io/github/actions/workflow/status/getnora-io/nora/ci.yml?label=CI)](https://github.com/getnora-io/nora/actions)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
- **Signed releases** — every release is signed with [cosign](https://github.com/sigstore/cosign)
- **SBOM** — SPDX + CycloneDX in every release
- **Fuzz testing** — cargo-fuzz + ClusterFuzzLite
- **Blob verification** — SHA256 digest validation on every upload
- **Non-root containers** — all images run as non-root
- **Security headers** — CSP, X-Frame-Options, nosniff
See [SECURITY.md](SECURITY.md) for vulnerability reporting.
## Author
**Created and maintained by [DevITWay](https://github.com/devitway)**
Created and maintained by [DevITWay](https://github.com/devitway)
[![Release](https://img.shields.io/github/v/release/getnora-io/nora)](https://github.com/getnora-io/nora/releases)
[![GHCR](https://img.shields.io/badge/ghcr.io-nora-blue?logo=github)](https://github.com/getnora-io/nora/pkgs/container/nora)
[![Rust](https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white)](https://www.rust-lang.org/)
[![Docs](https://img.shields.io/badge/docs-getnora.dev-green?logo=gitbook)](https://getnora.dev)
[![Telegram](https://img.shields.io/badge/Telegram-Community-blue?logo=telegram)](https://t.me/getnora)
[![GitHub Stars](https://img.shields.io/github/stars/getnora-io/nora?style=flat&logo=github)](https://github.com/getnora-io/nora/stargazers)
- Website: [getnora.dev](https://getnora.dev)
- Telegram: [@DevITWay](https://t.me/DevITWay)
- Telegram: [@getnora](https://t.me/getnora)
- GitHub: [@devitway](https://github.com/devitway)
- Email: devitway@gmail.com
## Contributing
@@ -243,10 +252,6 @@ NORA welcomes contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelin
## License
MIT License - see [LICENSE](LICENSE)
MIT License, see [LICENSE](LICENSE)
Copyright (c) 2026 DevITWay
---
**🐿️ N○RA** - Organized like a chipmunk's stash | Built with Rust by [DevITWay](https://t.me/DevITWay)

ROADMAP.md (new file)

@@ -0,0 +1,38 @@
# Roadmap
> This roadmap reflects current priorities. It may change based on community feedback.
## Recently Completed
- **v0.2.32** — Docker dashboard fix for namespaced images, `library/` auto-prepend for Hub official images
- **v0.2.31** — npm full proxy (URL rewriting, scoped packages, publish, SHA-256 integrity cache, metadata TTL)
- **v0.2.29** — Upstream authentication for all protocols (Docker, Maven, npm, PyPI)
## In Progress
- **`nora mirror`** — Pre-fetch dependencies from lockfiles for air-gapped environments ([#40](https://github.com/getnora-io/nora/issues/40))
- npm: `package-lock.json` (v1/v2/v3)
- pip: `requirements.txt`
- cargo: `Cargo.lock`
- maven: dependency list
## Next Up
- **Consistent env var naming** — Unify `NORA_*_PROXY` / `NORA_*_UPSTREAMS` across all protocols ([#39](https://github.com/getnora-io/nora/issues/39))
- **Package blocklist** — Deny specific packages or versions via config ([#41](https://github.com/getnora-io/nora/issues/41))
- **Multiple upstreams for npm/PyPI** — Same as Maven already supports
- **v1.0.0 release** — Stable API, production-ready
## Future
- Docker image mirroring ([#42](https://github.com/getnora-io/nora/issues/42))
- Virtual repositories via config (named endpoints with custom search order)
- Path-based ACL (per-namespace write permissions)
- OIDC/LDAP authentication
- HA mode (stateless API + external database)
- Golang modules proxy
- Content trust (Cosign/Notation verification)
## How to Influence
Open an issue or join [Telegram](https://t.me/getnora) to discuss priorities.


@@ -9,6 +9,7 @@ ignore = [
]
[licenses]
unused-allowed-license = "allow"
# Allowed open-source licenses
allow = [
"MIT",
@@ -24,6 +25,7 @@ allow = [
"Zlib",
"CDLA-Permissive-2.0", # webpki-roots (CA certificates bundle)
"MPL-2.0",
"NCSA", # libfuzzer-sys (LLVM fuzzer)
]
[bans]


@@ -1,83 +0,0 @@
#!/bin/bash
# Demo traffic simulator for NORA registry
# Generates random registry activity for dashboard demo
REGISTRY="http://localhost:4000"
LOG_FILE="/var/log/nora-demo-traffic.log"
# Sample packages to fetch
NPM_PACKAGES=("lodash" "express" "react" "axios" "moment" "underscore" "chalk" "debug")
MAVEN_ARTIFACTS=(
"org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.pom"
"com/google/guava/guava/31.1-jre/guava-31.1-jre.pom"
"org/slf4j/slf4j-api/2.0.0/slf4j-api-2.0.0.pom"
)
DOCKER_IMAGES=("alpine" "busybox" "hello-world")
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}
# Random sleep between min and max seconds
random_sleep() {
local min=$1
local max=$2
local delay=$((RANDOM % (max - min + 1) + min))
sleep $delay
}
# Fetch random npm package
fetch_npm() {
local pkg=${NPM_PACKAGES[$RANDOM % ${#NPM_PACKAGES[@]}]}
log "NPM: fetching $pkg"
curl -s "$REGISTRY/npm/$pkg" > /dev/null 2>&1
}
# Fetch random maven artifact
fetch_maven() {
local artifact=${MAVEN_ARTIFACTS[$RANDOM % ${#MAVEN_ARTIFACTS[@]}]}
log "MAVEN: fetching $artifact"
curl -s "$REGISTRY/maven2/$artifact" > /dev/null 2>&1
}
# Docker push/pull cycle
docker_cycle() {
local img=${DOCKER_IMAGES[$RANDOM % ${#DOCKER_IMAGES[@]}]}
local tag="demo-$(date +%s)"
log "DOCKER: push/pull cycle for $img"
# Tag and push
docker tag "$img:latest" "localhost:4000/demo/$img:$tag" 2>/dev/null
docker push "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
# Pull back
docker rmi "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
docker pull "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
# Cleanup
docker rmi "localhost:4000/demo/$img:$tag" > /dev/null 2>&1
}
# Main loop
log "Starting demo traffic simulator"
while true; do
# Random operation
op=$((RANDOM % 10))
case $op in
0|1|2|3) # 40% npm
fetch_npm
;;
4|5|6) # 30% maven
fetch_maven
;;
7|8|9) # 30% docker
docker_cycle
;;
esac
# Random delay: 30-120 seconds
random_sleep 30 120
done


@@ -1,15 +0,0 @@
[Unit]
Description=NORA Demo Traffic Simulator
After=docker.service
Requires=docker.service
[Service]
Type=simple
ExecStart=/opt/nora/demo-traffic.sh
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target

dist/install.sh

@@ -37,6 +37,26 @@ else
fi
chmod +x /tmp/nora
# Verify signature if cosign is available
if command -v cosign &>/dev/null; then
echo "Verifying binary signature..."
SIG_URL="${DOWNLOAD_URL}.sig"
CERT_URL="${DOWNLOAD_URL}.pem"
if curl -fsSL -o /tmp/nora.sig "$SIG_URL" 2>/dev/null && \
curl -fsSL -o /tmp/nora.pem "$CERT_URL" 2>/dev/null; then
cosign verify-blob --signature /tmp/nora.sig --certificate /tmp/nora.pem \
--certificate-identity-regexp "github.com/getnora-io/nora" \
--certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
/tmp/nora && echo "Signature verified." || echo "Warning: signature verification failed."
rm -f /tmp/nora.sig /tmp/nora.pem
else
echo "Signature files not available, skipping verification."
fi
else
echo "Install cosign for binary signature verification: https://docs.sigstore.dev/cosign/system_config/installation/"
fi
sudo mv /tmp/nora "$INSTALL_DIR/nora"
# Create system user

dist/nora.cdx.json

File diff suppressed because it is too large.

@@ -1,6 +1,6 @@
# Руководство администратора NORA
**Версия:** 1.0
**Версия:** 0.2.32
**Дата:** 2026-03-16
**Правообладатель:** ООО «ТАИАРС» (торговая марка АРТАИС)


@@ -2,7 +2,7 @@
## Программа «NORA — Реестр артефактов»
**Версия документа:** 1.0
**Версия документа:** 0.2.32
**Дата:** 2026-03-16
**Правообладатель:** ООО «ТАИАРС» (торговая марка АРТАИС)


@@ -1,6 +1,6 @@
# Руководство пользователя NORA
**Версия:** 1.0
**Версия:** 0.2.32
**Дата:** 2026-03-16
**Правообладатель:** ООО «ТАИАРС» (торговая марка АРТАИС)


@@ -2,6 +2,7 @@
name = "nora-fuzz"
version = "0.0.0"
publish = false
license = "MIT"
edition = "2021"
[package.metadata]


@@ -1,98 +0,0 @@
#!/usr/bin/env sh
# NORA installer — https://getnora.io/install.sh
# Usage: curl -fsSL https://getnora.io/install.sh | sh
set -e
REPO="getnora-io/nora"
BINARY="nora"
INSTALL_DIR="/usr/local/bin"
# ── Detect OS and architecture ──────────────────────────────────────────────
OS="$(uname -s)"
ARCH="$(uname -m)"
case "$OS" in
Linux) os="linux" ;;
Darwin) os="darwin" ;;
*)
echo "Unsupported OS: $OS"
echo "Please download manually: https://github.com/$REPO/releases/latest"
exit 1
;;
esac
case "$ARCH" in
x86_64 | amd64) arch="amd64" ;;
aarch64 | arm64) arch="arm64" ;;
*)
echo "Unsupported architecture: $ARCH"
echo "Please download manually: https://github.com/$REPO/releases/latest"
exit 1
;;
esac
ASSET="${BINARY}-${os}-${arch}"
# ── Get latest release version ──────────────────────────────────────────────
VERSION="$(curl -fsSL "https://api.github.com/repos/$REPO/releases/latest" \
| grep '"tag_name"' \
| sed 's/.*"tag_name": *"\([^"]*\)".*/\1/')"
if [ -z "$VERSION" ]; then
echo "Failed to get latest version"
exit 1
fi
echo "Installing NORA $VERSION ($os/$arch)..."
# ── Download binary and checksum ────────────────────────────────────────────
BASE_URL="https://github.com/$REPO/releases/download/$VERSION"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "$TMP_DIR"' EXIT
echo "Downloading $ASSET..."
curl -fsSL "$BASE_URL/$ASSET" -o "$TMP_DIR/$BINARY"
curl -fsSL "$BASE_URL/$ASSET.sha256" -o "$TMP_DIR/$ASSET.sha256"
# ── Verify checksum ─────────────────────────────────────────────────────────
echo "Verifying checksum..."
EXPECTED="$(awk '{print $1}' "$TMP_DIR/$ASSET.sha256")"
ACTUAL="$(sha256sum "$TMP_DIR/$BINARY" | awk '{print $1}')"
if [ "$EXPECTED" != "$ACTUAL" ]; then
echo "Checksum mismatch!"
echo " Expected: $EXPECTED"
echo " Actual: $ACTUAL"
exit 1
fi
echo "Checksum OK"
# ── Install ─────────────────────────────────────────────────────────────────
chmod +x "$TMP_DIR/$BINARY"
if [ -w "$INSTALL_DIR" ]; then
mv "$TMP_DIR/$BINARY" "$INSTALL_DIR/$BINARY"
elif command -v sudo >/dev/null 2>&1; then
sudo mv "$TMP_DIR/$BINARY" "$INSTALL_DIR/$BINARY"
else
# Fallback to ~/.local/bin
INSTALL_DIR="$HOME/.local/bin"
mkdir -p "$INSTALL_DIR"
mv "$TMP_DIR/$BINARY" "$INSTALL_DIR/$BINARY"
echo "Installed to $INSTALL_DIR/$BINARY"
echo "Make sure $INSTALL_DIR is in your PATH"
fi
# ── Done ────────────────────────────────────────────────────────────────────
echo ""
echo "NORA $VERSION installed to $INSTALL_DIR/$BINARY"
echo ""
nora --version 2>/dev/null || true

logo.jpg (binary file removed, 5.9 KiB, not shown)


@@ -1,23 +0,0 @@
[package]
name = "nora-cli"
version.workspace = true
edition.workspace = true
license.workspace = true
authors.workspace = true
repository.workspace = true
homepage.workspace = true
description = "CLI tool for NORA registry"
[[bin]]
name = "nora-cli"
path = "src/main.rs"
[dependencies]
tokio.workspace = true
reqwest.workspace = true
serde.workspace = true
serde_json.workspace = true
clap = { version = "4", features = ["derive"] }
indicatif = "0.18"
tar = "0.4"
flate2 = "1.1"

File diff suppressed because it is too large.


@@ -1,55 +0,0 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use clap::{Parser, Subcommand};
#[derive(Parser)]
#[command(name = "nora-cli")]
#[command(about = "CLI tool for Nora registry")]
struct Cli {
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand)]
enum Commands {
/// Login to a registry
Login {
#[arg(long)]
registry: String,
#[arg(short, long)]
username: String,
},
/// Push an artifact
Push {
#[arg(long)]
registry: String,
path: String,
},
/// Pull an artifact
Pull {
#[arg(long)]
registry: String,
artifact: String,
},
}
#[tokio::main]
async fn main() {
let cli = Cli::parse();
match cli.command {
Commands::Login { registry, username } => {
println!("Logging in to {} as {}", registry, username);
// TODO: implement
}
Commands::Push { registry, path } => {
println!("Pushing {} to {}", path, registry);
// TODO: implement
}
Commands::Pull { registry, artifact } => {
println!("Pulling {} from {}", artifact, registry);
// TODO: implement
}
}
}


@@ -49,6 +49,7 @@ tower_governor = "0.8"
governor = "0.10"
parking_lot = "0.12"
zeroize = { version = "1.8", features = ["derive"] }
tower-http = { version = "0.6", features = ["set-header"] }
[dev-dependencies]
tempfile = "3"

File diff suppressed because it is too large.


@@ -99,3 +99,139 @@ impl Default for ActivityLog {
Self::new(50)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_action_type_display() {
assert_eq!(ActionType::Pull.to_string(), "PULL");
assert_eq!(ActionType::Push.to_string(), "PUSH");
assert_eq!(ActionType::CacheHit.to_string(), "CACHE");
assert_eq!(ActionType::ProxyFetch.to_string(), "PROXY");
}
#[test]
fn test_action_type_equality() {
assert_eq!(ActionType::Pull, ActionType::Pull);
assert_ne!(ActionType::Pull, ActionType::Push);
}
#[test]
fn test_activity_entry_new() {
let entry = ActivityEntry::new(
ActionType::Pull,
"nginx:latest".to_string(),
"docker",
"LOCAL",
);
assert_eq!(entry.action, ActionType::Pull);
assert_eq!(entry.artifact, "nginx:latest");
assert_eq!(entry.registry, "docker");
assert_eq!(entry.source, "LOCAL");
}
#[test]
fn test_activity_log_push_and_len() {
let log = ActivityLog::new(10);
assert!(log.is_empty());
assert_eq!(log.len(), 0);
log.push(ActivityEntry::new(
ActionType::Push,
"test:v1".to_string(),
"docker",
"LOCAL",
));
assert!(!log.is_empty());
assert_eq!(log.len(), 1);
}
#[test]
fn test_activity_log_recent() {
let log = ActivityLog::new(10);
for i in 0..5 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("image:{}", i),
"docker",
"LOCAL",
));
}
let recent = log.recent(3);
assert_eq!(recent.len(), 3);
// newest first
assert_eq!(recent[0].artifact, "image:4");
assert_eq!(recent[1].artifact, "image:3");
assert_eq!(recent[2].artifact, "image:2");
}
#[test]
fn test_activity_log_all() {
let log = ActivityLog::new(10);
for i in 0..3 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("pkg:{}", i),
"npm",
"PROXY",
));
}
let all = log.all();
assert_eq!(all.len(), 3);
assert_eq!(all[0].artifact, "pkg:2"); // newest first
}
#[test]
fn test_activity_log_bounded_size() {
let log = ActivityLog::new(3);
for i in 0..5 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("item:{}", i),
"cargo",
"CACHE",
));
}
assert_eq!(log.len(), 3);
let all = log.all();
// oldest entries should be dropped
assert_eq!(all[0].artifact, "item:4");
assert_eq!(all[1].artifact, "item:3");
assert_eq!(all[2].artifact, "item:2");
}
#[test]
fn test_activity_log_recent_more_than_available() {
let log = ActivityLog::new(10);
log.push(ActivityEntry::new(
ActionType::Push,
"one".to_string(),
"maven",
"LOCAL",
));
let recent = log.recent(100);
assert_eq!(recent.len(), 1);
}
#[test]
fn test_activity_log_default() {
let log = ActivityLog::default();
assert!(log.is_empty());
// default capacity is 50
for i in 0..60 {
log.push(ActivityEntry::new(
ActionType::Pull,
format!("x:{}", i),
"docker",
"LOCAL",
));
}
assert_eq!(log.len(), 50);
}
}
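The dashboard groups consecutive identical activity entries so repeated cache hits render as `artifact (x4)` instead of four rows. A minimal sketch of that run-length grouping, assuming plain strings stand in for `ActivityEntry` (the real code uses a `GroupedActivity` struct):

```rust
// Hypothetical sketch of consecutive-entry grouping for the activity log.
// Assumption: entries are already ordered newest-to-oldest; only
// back-to-back repeats of the same artifact are collapsed.
fn group_consecutive(artifacts: &[&str]) -> Vec<(String, usize)> {
    let mut grouped: Vec<(String, usize)> = Vec::new();
    for a in artifacts {
        match grouped.last_mut() {
            // Extend the current run when the artifact repeats immediately.
            Some((name, count)) if name.as_str() == *a => *count += 1,
            _ => grouped.push((a.to_string(), 1)),
        }
    }
    grouped
}
```

Non-adjacent repeats stay separate on purpose, so the log still shows interleaved activity in order.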

View File

@@ -71,3 +71,69 @@ impl AuditLog {
&self.path
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[test]
fn test_audit_entry_new() {
let entry = AuditEntry::new(
"push",
"admin",
"nginx:latest",
"docker",
"uploaded manifest",
);
assert_eq!(entry.action, "push");
assert_eq!(entry.actor, "admin");
assert_eq!(entry.artifact, "nginx:latest");
assert_eq!(entry.registry, "docker");
assert_eq!(entry.detail, "uploaded manifest");
}
#[test]
fn test_audit_log_new_and_path() {
let tmp = TempDir::new().unwrap();
let log = AuditLog::new(tmp.path().to_str().unwrap());
assert!(log.path().ends_with("audit.jsonl"));
}
#[test]
fn test_audit_log_write_entry() {
let tmp = TempDir::new().unwrap();
let log = AuditLog::new(tmp.path().to_str().unwrap());
let entry = AuditEntry::new("pull", "user1", "lodash", "npm", "downloaded");
log.log(entry);
// Verify file contains the entry
let content = std::fs::read_to_string(log.path()).unwrap();
assert!(content.contains(r#""action":"pull""#));
assert!(content.contains(r#""actor":"user1""#));
assert!(content.contains(r#""artifact":"lodash""#));
}
#[test]
fn test_audit_log_multiple_entries() {
let tmp = TempDir::new().unwrap();
let log = AuditLog::new(tmp.path().to_str().unwrap());
log.log(AuditEntry::new("push", "admin", "a", "docker", ""));
log.log(AuditEntry::new("pull", "user", "b", "npm", ""));
log.log(AuditEntry::new("delete", "admin", "c", "maven", ""));
let content = std::fs::read_to_string(log.path()).unwrap();
let lines: Vec<&str> = content.lines().collect();
assert_eq!(lines.len(), 3);
}
#[test]
fn test_audit_entry_serialization() {
let entry = AuditEntry::new("push", "ci", "app:v1", "docker", "ci build");
let json = serde_json::to_string(&entry).unwrap();
assert!(json.contains(r#""action":"push""#));
assert!(json.contains(r#""ts":""#));
}
}


@@ -94,6 +94,16 @@ pub async fn auth_middleware(
return next.run(request).await;
}
// Allow anonymous read if configured
let is_read_method = matches!(
*request.method(),
axum::http::Method::GET | axum::http::Method::HEAD
);
if state.config.auth.anonymous_read && is_read_method {
// Read requests allowed without auth
return next.run(request).await;
}
// Extract Authorization header
let auth_header = request
.headers()
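The anonymous-read gate in the hunk above reduces to a small decision function. A sketch of that logic, simplified under the assumption that the HTTP method is a plain uppercase string (the real middleware matches on `axum::http::Method::GET | HEAD`):

```rust
// Simplified model of the auth_middleware decision: may the request
// proceed without credentials?
fn allow_without_credentials(auth_enabled: bool, anonymous_read: bool, method: &str) -> bool {
    if !auth_enabled {
        // Auth disabled: everything is open.
        return true;
    }
    // GET/HEAD (pull/download) pass when anonymous_read is on;
    // PUT/POST/DELETE/PATCH still require credentials.
    anonymous_read && matches!(method, "GET" | "HEAD")
}
```

This matches the commit's stated use case: public demo registries and read-only mirrors where pulls are open but pushes stay authenticated.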


@@ -200,6 +200,9 @@ fn default_max_file_size() -> u64 {
pub struct AuthConfig {
#[serde(default)]
pub enabled: bool,
/// Allow anonymous read access (pull/download without auth, push requires auth)
#[serde(default)]
pub anonymous_read: bool,
#[serde(default = "default_htpasswd_file")]
pub htpasswd_file: String,
#[serde(default = "default_token_storage")]
@@ -279,6 +282,7 @@ impl Default for AuthConfig {
fn default() -> Self {
Self {
enabled: false,
anonymous_read: false,
htpasswd_file: "users.htpasswd".to_string(),
token_storage: "data/tokens".to_string(),
}
@@ -457,6 +461,9 @@ impl Config {
if let Ok(val) = env::var("NORA_AUTH_ENABLED") {
self.auth.enabled = val.to_lowercase() == "true" || val == "1";
}
if let Ok(val) = env::var("NORA_AUTH_ANONYMOUS_READ") {
self.auth.anonymous_read = val.to_lowercase() == "true" || val == "1";
}
if let Ok(val) = env::var("NORA_AUTH_HTPASSWD_FILE") {
self.auth.htpasswd_file = val;
}
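The env overrides above all share one truthiness convention: `"true"` in any case, or `"1"`, enables a flag; anything else leaves it off. As a standalone sketch:

```rust
// The boolean env-var convention used by NORA_AUTH_ENABLED and
// NORA_AUTH_ANONYMOUS_READ: "true" (case-insensitive) or "1" enable.
fn env_flag_enabled(val: &str) -> bool {
    val.to_lowercase() == "true" || val == "1"
}
```

Note that values like `"yes"` or `"on"` are deliberately not accepted.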
@@ -666,4 +673,378 @@ mod tests {
assert_eq!(config.rate_limit.upload_burst, 1000);
assert_eq!(config.rate_limit.auth_burst, 5); // default
}
#[test]
fn test_basic_auth_header() {
let header = basic_auth_header("user:pass");
assert_eq!(header, "Basic dXNlcjpwYXNz");
}
#[test]
fn test_basic_auth_header_empty() {
let header = basic_auth_header("");
assert!(header.starts_with("Basic "));
}
#[test]
fn test_config_default() {
let config = Config::default();
assert_eq!(config.server.host, "127.0.0.1");
assert_eq!(config.server.port, 4000);
assert_eq!(config.server.body_limit_mb, 2048);
assert!(config.server.public_url.is_none());
assert_eq!(config.storage.path, "data/storage");
assert_eq!(config.storage.mode, StorageMode::Local);
assert_eq!(config.storage.bucket, "registry");
assert_eq!(config.storage.s3_region, "us-east-1");
assert!(!config.auth.enabled);
assert_eq!(config.auth.htpasswd_file, "users.htpasswd");
assert_eq!(config.auth.token_storage, "data/tokens");
}
#[test]
fn test_maven_config_default() {
let m = MavenConfig::default();
assert_eq!(m.proxy_timeout, 30);
assert_eq!(m.proxies.len(), 1);
assert_eq!(m.proxies[0].url(), "https://repo1.maven.org/maven2");
assert!(m.proxies[0].auth().is_none());
}
#[test]
fn test_npm_config_default() {
let n = NpmConfig::default();
assert_eq!(n.proxy, Some("https://registry.npmjs.org".to_string()));
assert!(n.proxy_auth.is_none());
assert_eq!(n.proxy_timeout, 30);
assert_eq!(n.metadata_ttl, 300);
}
#[test]
fn test_pypi_config_default() {
let p = PypiConfig::default();
assert_eq!(p.proxy, Some("https://pypi.org/simple/".to_string()));
assert!(p.proxy_auth.is_none());
assert_eq!(p.proxy_timeout, 30);
}
#[test]
fn test_docker_config_default() {
let d = DockerConfig::default();
assert_eq!(d.proxy_timeout, 60);
assert_eq!(d.upstreams.len(), 1);
assert_eq!(d.upstreams[0].url, "https://registry-1.docker.io");
assert!(d.upstreams[0].auth.is_none());
}
#[test]
fn test_raw_config_default() {
let r = RawConfig::default();
assert!(r.enabled);
assert_eq!(r.max_file_size, 104_857_600);
}
#[test]
fn test_auth_config_default() {
let a = AuthConfig::default();
assert!(!a.enabled);
assert!(!a.anonymous_read);
assert_eq!(a.htpasswd_file, "users.htpasswd");
assert_eq!(a.token_storage, "data/tokens");
}
#[test]
fn test_auth_anonymous_read_from_toml() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
[auth]
enabled = true
anonymous_read = true
"#;
let config: Config = toml::from_str(toml).unwrap();
assert!(config.auth.enabled);
assert!(config.auth.anonymous_read);
}
#[test]
fn test_env_override_anonymous_read() {
let mut config = Config::default();
std::env::set_var("NORA_AUTH_ANONYMOUS_READ", "true");
config.apply_env_overrides();
assert!(config.auth.anonymous_read);
std::env::remove_var("NORA_AUTH_ANONYMOUS_READ");
}
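The anonymous-read behavior exercised by the tests above reduces to a small predicate: once auth is enabled, GET/HEAD may proceed without credentials when `anonymous_read` is set, while writes always require authentication. A minimal sketch with a hypothetical helper name (not NORA's actual middleware):

```rust
/// Hypothetical sketch of the NORA_AUTH_ANONYMOUS_READ gate: with auth
/// enabled, GET/HEAD may proceed unauthenticated, writes never may.
fn requires_auth(method: &str, auth_enabled: bool, anonymous_read: bool) -> bool {
    if !auth_enabled {
        return false; // auth disabled: everything is open
    }
    let is_read = matches!(method, "GET" | "HEAD");
    !(anonymous_read && is_read)
}

fn main() {
    // Public demo registry: pulls are open, pushes are not.
    assert!(!requires_auth("GET", true, true));
    assert!(!requires_auth("HEAD", true, true));
    assert!(requires_auth("PUT", true, true));
    assert!(requires_auth("DELETE", true, true));
    // Strict mode: even reads need credentials.
    assert!(requires_auth("GET", true, false));
    println!("ok");
}
```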
#[test]
fn test_maven_proxy_entry_simple() {
let entry = MavenProxyEntry::Simple("https://repo.example.com".to_string());
assert_eq!(entry.url(), "https://repo.example.com");
assert!(entry.auth().is_none());
}
#[test]
fn test_maven_proxy_entry_full() {
let entry = MavenProxyEntry::Full(MavenProxy {
url: "https://private.repo.com".to_string(),
auth: Some("user:secret".to_string()),
});
assert_eq!(entry.url(), "https://private.repo.com");
assert_eq!(entry.auth(), Some("user:secret"));
}
#[test]
fn test_maven_proxy_entry_full_no_auth() {
let entry = MavenProxyEntry::Full(MavenProxy {
url: "https://repo.com".to_string(),
auth: None,
});
assert_eq!(entry.url(), "https://repo.com");
assert!(entry.auth().is_none());
}
#[test]
fn test_storage_mode_default() {
let mode = StorageMode::default();
assert_eq!(mode, StorageMode::Local);
}
#[test]
fn test_env_override_server() {
let mut config = Config::default();
std::env::set_var("NORA_HOST", "0.0.0.0");
std::env::set_var("NORA_PORT", "8080");
std::env::set_var("NORA_PUBLIC_URL", "registry.example.com");
std::env::set_var("NORA_BODY_LIMIT_MB", "4096");
config.apply_env_overrides();
assert_eq!(config.server.host, "0.0.0.0");
assert_eq!(config.server.port, 8080);
assert_eq!(
config.server.public_url,
Some("registry.example.com".to_string())
);
assert_eq!(config.server.body_limit_mb, 4096);
std::env::remove_var("NORA_HOST");
std::env::remove_var("NORA_PORT");
std::env::remove_var("NORA_PUBLIC_URL");
std::env::remove_var("NORA_BODY_LIMIT_MB");
}
#[test]
fn test_env_override_storage() {
let mut config = Config::default();
std::env::set_var("NORA_STORAGE_MODE", "s3");
std::env::set_var("NORA_STORAGE_PATH", "/data/nora");
std::env::set_var("NORA_STORAGE_BUCKET", "my-bucket");
std::env::set_var("NORA_STORAGE_S3_REGION", "eu-west-1");
config.apply_env_overrides();
assert_eq!(config.storage.mode, StorageMode::S3);
assert_eq!(config.storage.path, "/data/nora");
assert_eq!(config.storage.bucket, "my-bucket");
assert_eq!(config.storage.s3_region, "eu-west-1");
std::env::remove_var("NORA_STORAGE_MODE");
std::env::remove_var("NORA_STORAGE_PATH");
std::env::remove_var("NORA_STORAGE_BUCKET");
std::env::remove_var("NORA_STORAGE_S3_REGION");
}
#[test]
fn test_env_override_auth() {
let mut config = Config::default();
std::env::set_var("NORA_AUTH_ENABLED", "true");
std::env::set_var("NORA_AUTH_HTPASSWD_FILE", "/etc/nora/users");
std::env::set_var("NORA_AUTH_TOKEN_STORAGE", "/data/tokens");
config.apply_env_overrides();
assert!(config.auth.enabled);
assert_eq!(config.auth.htpasswd_file, "/etc/nora/users");
assert_eq!(config.auth.token_storage, "/data/tokens");
std::env::remove_var("NORA_AUTH_ENABLED");
std::env::remove_var("NORA_AUTH_HTPASSWD_FILE");
std::env::remove_var("NORA_AUTH_TOKEN_STORAGE");
}
#[test]
fn test_env_override_maven_proxies() {
let mut config = Config::default();
std::env::set_var(
"NORA_MAVEN_PROXIES",
"https://repo1.com,https://repo2.com|user:pass",
);
config.apply_env_overrides();
assert_eq!(config.maven.proxies.len(), 2);
assert_eq!(config.maven.proxies[0].url(), "https://repo1.com");
assert!(config.maven.proxies[0].auth().is_none());
assert_eq!(config.maven.proxies[1].url(), "https://repo2.com");
assert_eq!(config.maven.proxies[1].auth(), Some("user:pass"));
std::env::remove_var("NORA_MAVEN_PROXIES");
}
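The comma-separated `url[|user:pass]` format that this test feeds to `NORA_MAVEN_PROXIES` (and that `NORA_DOCKER_UPSTREAMS` uses below) can be parsed as sketched here. This is an illustration under the format the test implies, not the crate's actual parser:

```rust
/// Parse "https://a.com,https://b.com|user:pass" into (url, optional auth)
/// pairs: entries split on ',', credentials split off after '|'.
fn parse_upstreams(raw: &str) -> Vec<(String, Option<String>)> {
    raw.split(',')
        .filter(|e| !e.trim().is_empty())
        .map(|e| match e.split_once('|') {
            Some((url, auth)) => (url.trim().to_string(), Some(auth.to_string())),
            None => (e.trim().to_string(), None),
        })
        .collect()
}

fn main() {
    let v = parse_upstreams("https://repo1.com,https://repo2.com|user:pass");
    assert_eq!(v.len(), 2);
    assert_eq!(v[0], ("https://repo1.com".to_string(), None));
    assert_eq!(v[1].1.as_deref(), Some("user:pass"));
    println!("ok");
}
```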
#[test]
fn test_env_override_docker_upstreams() {
let mut config = Config::default();
std::env::set_var(
"NORA_DOCKER_UPSTREAMS",
"https://mirror.gcr.io,https://private.io|token123",
);
config.apply_env_overrides();
assert_eq!(config.docker.upstreams.len(), 2);
assert_eq!(config.docker.upstreams[0].url, "https://mirror.gcr.io");
assert!(config.docker.upstreams[0].auth.is_none());
assert_eq!(config.docker.upstreams[1].url, "https://private.io");
assert_eq!(
config.docker.upstreams[1].auth,
Some("token123".to_string())
);
std::env::remove_var("NORA_DOCKER_UPSTREAMS");
}
#[test]
fn test_env_override_npm() {
let mut config = Config::default();
std::env::set_var("NORA_NPM_PROXY", "https://npm.company.com");
std::env::set_var("NORA_NPM_PROXY_AUTH", "user:token");
std::env::set_var("NORA_NPM_PROXY_TIMEOUT", "60");
std::env::set_var("NORA_NPM_METADATA_TTL", "600");
config.apply_env_overrides();
assert_eq!(
config.npm.proxy,
Some("https://npm.company.com".to_string())
);
assert_eq!(config.npm.proxy_auth, Some("user:token".to_string()));
assert_eq!(config.npm.proxy_timeout, 60);
assert_eq!(config.npm.metadata_ttl, 600);
std::env::remove_var("NORA_NPM_PROXY");
std::env::remove_var("NORA_NPM_PROXY_AUTH");
std::env::remove_var("NORA_NPM_PROXY_TIMEOUT");
std::env::remove_var("NORA_NPM_METADATA_TTL");
}
#[test]
fn test_env_override_raw() {
let mut config = Config::default();
std::env::set_var("NORA_RAW_ENABLED", "false");
std::env::set_var("NORA_RAW_MAX_FILE_SIZE", "524288000");
config.apply_env_overrides();
assert!(!config.raw.enabled);
assert_eq!(config.raw.max_file_size, 524288000);
std::env::remove_var("NORA_RAW_ENABLED");
std::env::remove_var("NORA_RAW_MAX_FILE_SIZE");
}
#[test]
fn test_env_override_rate_limit() {
let mut config = Config::default();
std::env::set_var("NORA_RATE_LIMIT_ENABLED", "false");
std::env::set_var("NORA_RATE_LIMIT_AUTH_RPS", "10");
std::env::set_var("NORA_RATE_LIMIT_GENERAL_BURST", "500");
config.apply_env_overrides();
assert!(!config.rate_limit.enabled);
assert_eq!(config.rate_limit.auth_rps, 10);
assert_eq!(config.rate_limit.general_burst, 500);
std::env::remove_var("NORA_RATE_LIMIT_ENABLED");
std::env::remove_var("NORA_RATE_LIMIT_AUTH_RPS");
std::env::remove_var("NORA_RATE_LIMIT_GENERAL_BURST");
}
#[test]
fn test_config_from_toml_full() {
let toml = r#"
[server]
host = "0.0.0.0"
port = 8080
public_url = "nora.example.com"
body_limit_mb = 4096
[storage]
mode = "s3"
path = "/data"
s3_url = "http://minio:9000"
bucket = "artifacts"
s3_region = "eu-central-1"
[auth]
enabled = true
htpasswd_file = "/etc/nora/users.htpasswd"
[raw]
enabled = false
max_file_size = 500000000
"#;
let config: Config = toml::from_str(toml).unwrap();
assert_eq!(config.server.host, "0.0.0.0");
assert_eq!(config.server.port, 8080);
assert_eq!(
config.server.public_url,
Some("nora.example.com".to_string())
);
assert_eq!(config.server.body_limit_mb, 4096);
assert_eq!(config.storage.mode, StorageMode::S3);
assert_eq!(config.storage.s3_url, "http://minio:9000");
assert_eq!(config.storage.bucket, "artifacts");
assert!(config.auth.enabled);
assert!(!config.raw.enabled);
assert_eq!(config.raw.max_file_size, 500000000);
}
#[test]
fn test_config_from_toml_minimal() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
"#;
let config: Config = toml::from_str(toml).unwrap();
// Defaults should be filled
assert_eq!(config.storage.path, "data/storage");
assert_eq!(config.maven.proxies.len(), 1);
assert_eq!(
config.npm.proxy,
Some("https://registry.npmjs.org".to_string())
);
assert_eq!(config.docker.upstreams.len(), 1);
assert!(config.raw.enabled);
assert!(!config.auth.enabled);
}
#[test]
fn test_config_toml_docker_upstreams() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
[docker]
proxy_timeout = 120
[[docker.upstreams]]
url = "https://mirror.gcr.io"
[[docker.upstreams]]
url = "https://private.registry.io"
auth = "user:pass"
"#;
let config: Config = toml::from_str(toml).unwrap();
assert_eq!(config.docker.proxy_timeout, 120);
assert_eq!(config.docker.upstreams.len(), 2);
assert!(config.docker.upstreams[0].auth.is_none());
assert_eq!(
config.docker.upstreams[1].auth,
Some("user:pass".to_string())
);
}
}


@@ -216,3 +216,146 @@ impl Default for DashboardMetrics {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[test]
fn test_new_defaults() {
let m = DashboardMetrics::new();
assert_eq!(m.downloads.load(Ordering::Relaxed), 0);
assert_eq!(m.uploads.load(Ordering::Relaxed), 0);
assert_eq!(m.cache_hits.load(Ordering::Relaxed), 0);
assert_eq!(m.cache_misses.load(Ordering::Relaxed), 0);
}
#[test]
fn test_record_download_all_registries() {
let m = DashboardMetrics::new();
for reg in &["docker", "npm", "maven", "cargo", "pypi", "raw"] {
m.record_download(reg);
}
assert_eq!(m.downloads.load(Ordering::Relaxed), 6);
assert_eq!(m.docker_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.npm_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.maven_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.cargo_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.pypi_downloads.load(Ordering::Relaxed), 1);
assert_eq!(m.raw_downloads.load(Ordering::Relaxed), 1);
}
#[test]
fn test_record_download_unknown_registry() {
let m = DashboardMetrics::new();
m.record_download("unknown");
assert_eq!(m.downloads.load(Ordering::Relaxed), 1);
// no per-registry counter should increment
assert_eq!(m.docker_downloads.load(Ordering::Relaxed), 0);
}
#[test]
fn test_record_upload() {
let m = DashboardMetrics::new();
m.record_upload("docker");
m.record_upload("maven");
m.record_upload("raw");
assert_eq!(m.uploads.load(Ordering::Relaxed), 3);
assert_eq!(m.docker_uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.maven_uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.raw_uploads.load(Ordering::Relaxed), 1);
}
#[test]
fn test_record_upload_unknown_registry() {
let m = DashboardMetrics::new();
m.record_upload("npm"); // npm has no upload counter
assert_eq!(m.uploads.load(Ordering::Relaxed), 1);
}
#[test]
fn test_cache_hit_rate_zero() {
let m = DashboardMetrics::new();
assert_eq!(m.cache_hit_rate(), 0.0);
}
#[test]
fn test_cache_hit_rate_all_hits() {
let m = DashboardMetrics::new();
m.record_cache_hit();
m.record_cache_hit();
assert_eq!(m.cache_hit_rate(), 100.0);
}
#[test]
fn test_cache_hit_rate_mixed() {
let m = DashboardMetrics::new();
m.record_cache_hit();
m.record_cache_miss();
assert_eq!(m.cache_hit_rate(), 50.0);
}
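The three cases above pin down the rate formula. A standalone sketch of the arithmetic (assumed from the tests, detached from the atomic counters):

```rust
/// The rate implied by the tests: hits / (hits + misses) * 100,
/// returning 0.0 when nothing has been recorded.
fn cache_hit_rate(hits: u64, misses: u64) -> f64 {
    let total = hits + misses;
    if total == 0 {
        0.0
    } else {
        hits as f64 * 100.0 / total as f64
    }
}

fn main() {
    assert_eq!(cache_hit_rate(0, 0), 0.0);
    assert_eq!(cache_hit_rate(2, 0), 100.0);
    assert_eq!(cache_hit_rate(1, 1), 50.0);
    println!("ok");
}
```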
#[test]
fn test_get_registry_downloads() {
let m = DashboardMetrics::new();
m.record_download("docker");
m.record_download("docker");
m.record_download("npm");
assert_eq!(m.get_registry_downloads("docker"), 2);
assert_eq!(m.get_registry_downloads("npm"), 1);
assert_eq!(m.get_registry_downloads("cargo"), 0);
assert_eq!(m.get_registry_downloads("unknown"), 0);
}
#[test]
fn test_get_registry_uploads() {
let m = DashboardMetrics::new();
m.record_upload("docker");
assert_eq!(m.get_registry_uploads("docker"), 1);
assert_eq!(m.get_registry_uploads("maven"), 0);
assert_eq!(m.get_registry_uploads("unknown"), 0);
}
#[test]
fn test_persistence_save_and_load() {
let tmp = TempDir::new().unwrap();
let path = tmp.path().to_str().unwrap();
// Create metrics, record some data, save
{
let m = DashboardMetrics::with_persistence(path);
m.record_download("docker");
m.record_download("docker");
m.record_upload("maven");
m.record_cache_hit();
m.save();
}
// Load in new instance
{
let m = DashboardMetrics::with_persistence(path);
assert_eq!(m.downloads.load(Ordering::Relaxed), 2);
assert_eq!(m.uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.docker_downloads.load(Ordering::Relaxed), 2);
assert_eq!(m.maven_uploads.load(Ordering::Relaxed), 1);
assert_eq!(m.cache_hits.load(Ordering::Relaxed), 1);
}
}
#[test]
fn test_persistence_missing_file() {
let tmp = TempDir::new().unwrap();
let path = tmp.path().to_str().unwrap();
// Should work even without existing metrics.json
let m = DashboardMetrics::with_persistence(path);
assert_eq!(m.downloads.load(Ordering::Relaxed), 0);
}
#[test]
fn test_default() {
let m = DashboardMetrics::default();
assert_eq!(m.downloads.load(Ordering::Relaxed), 0);
}
}


@@ -124,4 +124,77 @@ mod tests {
let err = AppError::NotFound("image not found".to_string());
assert_eq!(err.to_string(), "Not found: image not found");
}
#[test]
fn test_error_constructors() {
let err = AppError::not_found("missing");
assert!(matches!(err, AppError::NotFound(_)));
assert_eq!(err.to_string(), "Not found: missing");
let err = AppError::bad_request("invalid input");
assert!(matches!(err, AppError::BadRequest(_)));
assert_eq!(err.to_string(), "Bad request: invalid input");
let err = AppError::unauthorized("no token");
assert!(matches!(err, AppError::Unauthorized(_)));
assert_eq!(err.to_string(), "Unauthorized: no token");
let err = AppError::internal("db crashed");
assert!(matches!(err, AppError::Internal(_)));
assert_eq!(err.to_string(), "Internal error: db crashed");
}
#[test]
fn test_error_display_storage() {
let err = AppError::Storage(StorageError::NotFound);
assert!(err.to_string().contains("Storage error"));
}
#[test]
fn test_error_display_validation() {
let err = AppError::Validation(ValidationError::PathTraversal);
assert!(err.to_string().contains("Validation error"));
}
#[test]
fn test_error_into_response_not_found() {
let err = AppError::NotFound("gone".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::NOT_FOUND);
}
#[test]
fn test_error_into_response_bad_request() {
let err = AppError::BadRequest("bad".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
}
#[test]
fn test_error_into_response_unauthorized() {
let err = AppError::Unauthorized("nope".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
}
#[test]
fn test_error_into_response_internal() {
let err = AppError::Internal("boom".to_string());
let response = err.into_response();
assert_eq!(response.status(), StatusCode::INTERNAL_SERVER_ERROR);
}
#[test]
fn test_error_into_response_storage_not_found() {
let err = AppError::Storage(StorageError::NotFound);
let response = err.into_response();
assert_eq!(response.status(), StatusCode::NOT_FOUND);
}
#[test]
fn test_error_into_response_validation() {
let err = AppError::Validation(ValidationError::EmptyInput);
let response = err.into_response();
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
}
}


@@ -24,7 +24,7 @@ mod tokens;
mod ui;
mod validation;
use axum::{extract::DefaultBodyLimit, middleware, Router};
use axum::{extract::DefaultBodyLimit, http::HeaderValue, middleware, Router};
use clap::{Parser, Subcommand};
use std::path::{Path, PathBuf};
use std::sync::Arc;
@@ -375,6 +375,22 @@ async fn run_server(config: Config, storage: Storage) {
.layer(DefaultBodyLimit::max(
state.config.server.body_limit_mb * 1024 * 1024,
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("x-content-type-options"),
HeaderValue::from_static("nosniff"),
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("x-frame-options"),
HeaderValue::from_static("DENY"),
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("referrer-policy"),
HeaderValue::from_static("strict-origin-when-cross-origin"),
))
.layer(tower_http::set_header::SetResponseHeaderLayer::overriding(
axum::http::header::HeaderName::from_static("content-security-policy"),
HeaderValue::from_static("default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.tailwindcss.com https://unpkg.com; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self'; connect-src 'self'"),
))
.layer(middleware::from_fn(request_id::request_id_middleware))
.layer(middleware::from_fn(metrics::metrics_middleware))
.layer(middleware::from_fn_with_state(


@@ -148,3 +148,56 @@ pub fn record_storage_op(operation: &str, success: bool) {
.with_label_values(&[operation, status])
.inc();
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_detect_registry_docker() {
assert_eq!(detect_registry("/v2/nginx/manifests/latest"), "docker");
assert_eq!(detect_registry("/v2/"), "docker");
assert_eq!(
detect_registry("/v2/library/alpine/blobs/sha256:abc"),
"docker"
);
}
#[test]
fn test_detect_registry_maven() {
assert_eq!(detect_registry("/maven2/com/example/artifact"), "maven");
}
#[test]
fn test_detect_registry_npm() {
assert_eq!(detect_registry("/npm/lodash"), "npm");
assert_eq!(detect_registry("/npm/@scope/package"), "npm");
}
#[test]
fn test_detect_registry_cargo() {
assert_eq!(detect_registry("/cargo/api/v1/crates"), "cargo");
}
#[test]
fn test_detect_registry_pypi() {
assert_eq!(detect_registry("/simple/requests/"), "pypi");
assert_eq!(
detect_registry("/packages/requests/1.0/requests-1.0.tar.gz"),
"pypi"
);
}
#[test]
fn test_detect_registry_ui() {
assert_eq!(detect_registry("/ui/dashboard"), "ui");
assert_eq!(detect_registry("/ui"), "ui");
}
#[test]
fn test_detect_registry_other() {
assert_eq!(detect_registry("/health"), "other");
assert_eq!(detect_registry("/ready"), "other");
assert_eq!(detect_registry("/unknown/path"), "other");
}
}
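The tests above fully constrain `detect_registry`'s observable behavior. A reconstruction consistent with them (the real routing table may check additional prefixes):

```rust
/// Reconstruction of detect_registry consistent with the tests above.
fn detect_registry(path: &str) -> &'static str {
    if path.starts_with("/v2/") {
        "docker"
    } else if path.starts_with("/maven2/") {
        "maven"
    } else if path.starts_with("/npm/") {
        "npm"
    } else if path.starts_with("/cargo/") {
        "cargo"
    } else if path.starts_with("/simple/") || path.starts_with("/packages/") {
        "pypi"
    } else if path == "/ui" || path.starts_with("/ui/") {
        "ui"
    } else {
        "other"
    }
}

fn main() {
    assert_eq!(detect_registry("/v2/"), "docker");
    assert_eq!(detect_registry("/simple/requests/"), "pypi");
    assert_eq!(detect_registry("/ui"), "ui");
    assert_eq!(detect_registry("/health"), "other");
    println!("ok");
}
```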


@@ -44,11 +44,57 @@ pub struct LayerInfo {
pub size: u64,
}
/// In-progress upload session with metadata
struct UploadSession {
data: Vec<u8>,
name: String,
created_at: std::time::Instant,
}
/// Max concurrent upload sessions (prevent memory exhaustion)
const DEFAULT_MAX_UPLOAD_SESSIONS: usize = 100;
/// Max data per session (default 2 GB, configurable via NORA_MAX_UPLOAD_SESSION_SIZE_MB)
const DEFAULT_MAX_SESSION_SIZE_MB: usize = 2048;
/// Session TTL (30 minutes)
const SESSION_TTL: Duration = Duration::from_secs(30 * 60);
/// Read max upload sessions from env or use default
fn max_upload_sessions() -> usize {
std::env::var("NORA_MAX_UPLOAD_SESSIONS")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(DEFAULT_MAX_UPLOAD_SESSIONS)
}
/// Read max session size from env (in MB) or use default
fn max_session_size() -> usize {
let mb = std::env::var("NORA_MAX_UPLOAD_SESSION_SIZE_MB")
.ok()
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or(DEFAULT_MAX_SESSION_SIZE_MB);
mb.saturating_mul(1024 * 1024)
}
/// In-progress upload sessions for chunked uploads
/// Maps UUID -> accumulated data
static UPLOAD_SESSIONS: std::sync::LazyLock<RwLock<HashMap<String, Vec<u8>>>> =
/// Maps UUID -> UploadSession with limits and TTL
static UPLOAD_SESSIONS: std::sync::LazyLock<RwLock<HashMap<String, UploadSession>>> =
std::sync::LazyLock::new(|| RwLock::new(HashMap::new()));
/// Remove expired upload sessions (called periodically)
fn cleanup_expired_sessions() {
let mut sessions = UPLOAD_SESSIONS.write();
let before = sessions.len();
sessions.retain(|_, s| s.created_at.elapsed() < SESSION_TTL);
let removed = before - sessions.len();
if removed > 0 {
tracing::info!(
removed = removed,
remaining = sessions.len(),
"Cleaned up expired upload sessions"
);
}
}
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
.route("/v2/", get(check))
@@ -108,9 +154,19 @@ async fn catalog(State(state): State<Arc<AppState>>) -> Json<Value> {
let mut repos: Vec<String> = keys
.iter()
.filter_map(|k| {
k.strip_prefix("docker/")
.and_then(|rest| rest.split('/').next())
.map(String::from)
let rest = k.strip_prefix("docker/")?;
// Find the first known directory separator (manifests/ or blobs/)
let name = if let Some(idx) = rest.find("/manifests/") {
&rest[..idx]
} else if let Some(idx) = rest.find("/blobs/") {
&rest[..idx]
} else {
return None;
};
if name.is_empty() {
return None;
}
Some(name.to_string())
})
.collect();
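The fix in this hunk matters because Docker repository names may themselves contain `/` (e.g. `library/alpine`), so splitting at the first slash truncated them. The closure's logic as a standalone hypothetical helper:

```rust
/// Standalone version of the closure above: cut the repository name at the
/// known "/manifests/" or "/blobs/" path segments rather than the first '/'.
fn repo_name(key: &str) -> Option<String> {
    let rest = key.strip_prefix("docker/")?;
    let idx = rest.find("/manifests/").or_else(|| rest.find("/blobs/"))?;
    let name = &rest[..idx];
    if name.is_empty() {
        None
    } else {
        Some(name.to_string())
    }
}

fn main() {
    assert_eq!(
        repo_name("docker/library/alpine/blobs/sha256:abc").as_deref(),
        Some("library/alpine")
    );
    assert_eq!(
        repo_name("docker/nginx/manifests/latest").as_deref(),
        Some("nginx")
    );
    // Keys without a recognized segment are skipped entirely.
    assert_eq!(repo_name("docker/stray-file"), None);
    println!("ok");
}
```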
@@ -254,7 +310,38 @@ async fn start_upload(Path(name): Path<String>) -> Response {
return (StatusCode::BAD_REQUEST, e.to_string()).into_response();
}
// Cleanup expired sessions before checking limits
cleanup_expired_sessions();
// Enforce max concurrent sessions
{
let sessions = UPLOAD_SESSIONS.read();
let max_sessions = max_upload_sessions();
if sessions.len() >= max_sessions {
tracing::warn!(
max = max_sessions,
current = sessions.len(),
"Upload session limit reached — rejecting new upload"
);
return (StatusCode::TOO_MANY_REQUESTS, "Too many concurrent uploads").into_response();
}
}
let uuid = uuid::Uuid::new_v4().to_string();
// Create session with metadata
{
let mut sessions = UPLOAD_SESSIONS.write();
sessions.insert(
uuid.clone(),
UploadSession {
data: Vec::new(),
name: name.clone(),
created_at: std::time::Instant::now(),
},
);
}
let location = format!("/v2/{}/blobs/uploads/{}", name, uuid);
(
StatusCode::ACCEPTED,
@@ -276,9 +363,47 @@ async fn patch_blob(Path((name, uuid)): Path<(String, String)>, body: Bytes) ->
// Append data to the upload session and get total size
let total_size = {
let mut sessions = UPLOAD_SESSIONS.write();
let session = sessions.entry(uuid.clone()).or_default();
session.extend_from_slice(&body);
session.len()
let session = match sessions.get_mut(&uuid) {
Some(s) => s,
None => {
return (StatusCode::NOT_FOUND, "Upload session not found or expired")
.into_response();
}
};
// Verify session belongs to this repository
if session.name != name {
tracing::warn!(
session_name = %session.name,
request_name = %name,
"SECURITY: upload session name mismatch — possible session fixation"
);
return (
StatusCode::BAD_REQUEST,
"Session does not belong to this repository",
)
.into_response();
}
// Check session TTL
if session.created_at.elapsed() >= SESSION_TTL {
sessions.remove(&uuid);
return (StatusCode::NOT_FOUND, "Upload session expired").into_response();
}
// Check size limit
let new_size = session.data.len() + body.len();
if new_size > max_session_size() {
sessions.remove(&uuid);
return (
StatusCode::PAYLOAD_TOO_LARGE,
"Upload session exceeds size limit",
)
.into_response();
}
session.data.extend_from_slice(&body);
session.data.len()
};
let location = format!("/v2/{}/blobs/uploads/{}", name, uuid);
@@ -325,8 +450,22 @@ async fn upload_blob(
// Get data from chunked session if exists, otherwise use body directly
let data = {
let mut sessions = UPLOAD_SESSIONS.write();
if let Some(mut session_data) = sessions.remove(&uuid) {
if let Some(session) = sessions.remove(&uuid) {
// Verify session belongs to this repository
if session.name != name {
tracing::warn!(
session_name = %session.name,
request_name = %name,
"SECURITY: upload finalization name mismatch"
);
return (
StatusCode::BAD_REQUEST,
"Session does not belong to this repository",
)
.into_response();
}
// Chunked upload: append any final body data and use session
let mut session_data = session.data;
if !body.is_empty() {
session_data.extend_from_slice(&body);
}
@@ -337,6 +476,40 @@ async fn upload_blob(
}
};
// Only sha256 digests are supported for verification
if !digest.starts_with("sha256:") {
return (
StatusCode::BAD_REQUEST,
"Only sha256 digests are supported for blob uploads",
)
.into_response();
}
// Verify digest matches uploaded content (Docker Distribution Spec)
{
use sha2::Digest as _;
let computed = format!("sha256:{:x}", sha2::Sha256::digest(&data));
if computed != *digest {
tracing::warn!(
expected = %digest,
computed = %computed,
name = %name,
"SECURITY: blob digest mismatch — rejecting upload"
);
return (
StatusCode::BAD_REQUEST,
Json(json!({
"errors": [{
"code": "DIGEST_INVALID",
"message": "provided digest did not match uploaded content",
"detail": { "expected": digest, "computed": computed }
}]
})),
)
.into_response();
}
}
let key = format!("docker/{}/blobs/{}", name, digest);
match state.storage.put(&key, &data).await {
Ok(()) => {
@@ -619,6 +792,7 @@ async fn list_tags(State(state): State<Arc<AppState>>, Path(name): Path<String>)
.and_then(|t| t.strip_suffix(".json"))
.map(String::from)
})
.filter(|t| !t.ends_with(".meta") && !t.contains(".meta."))
.collect();
(StatusCode::OK, Json(json!({"name": name, "tags": tags}))).into_response()
}
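The digest check added in this file only verifies the `sha256:` prefix before hashing. A stricter well-formedness check (the length and hex validation here are an added assumption, not in the source) would look like:

```rust
/// Sketch: a Docker content digest accepted here should be "sha256:" plus
/// 64 hex characters. The source only checks the prefix; the length/hex
/// validation is an added assumption.
fn is_well_formed_sha256(digest: &str) -> bool {
    match digest.strip_prefix("sha256:") {
        Some(hex) => hex.len() == 64 && hex.bytes().all(|b| b.is_ascii_hexdigit()),
        None => false,
    }
}

fn main() {
    let good = format!("sha256:{}", "a".repeat(64));
    assert!(is_well_formed_sha256(&good));
    assert!(!is_well_formed_sha256("sha512:abc"));
    assert!(!is_well_formed_sha256("sha256:short"));
    println!("ok");
}
```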


@@ -3,7 +3,7 @@
//! In-memory repository index with lazy rebuild on invalidation.
//!
//! Design (designed for efficiency):
//! Design:
//! - Rebuild happens ONLY on write operations, not TTL
//! - Double-checked locking prevents duplicate rebuilds
//! - Arc<Vec> for zero-cost reads
@@ -361,3 +361,164 @@ pub fn paginate<T: Clone>(data: &[T], page: usize, limit: usize) -> (Vec<T>, usi
let end = (start + limit).min(total);
(data[start..end].to_vec(), total)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_paginate_first_page() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let (page, total) = paginate(&data, 1, 3);
assert_eq!(page, vec![1, 2, 3]);
assert_eq!(total, 10);
}
#[test]
fn test_paginate_second_page() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let (page, total) = paginate(&data, 2, 3);
assert_eq!(page, vec![4, 5, 6]);
assert_eq!(total, 10);
}
#[test]
fn test_paginate_last_page_partial() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let (page, total) = paginate(&data, 4, 3);
assert_eq!(page, vec![10]);
assert_eq!(total, 10);
}
#[test]
fn test_paginate_beyond_range() {
let data = vec![1, 2, 3];
let (page, total) = paginate(&data, 5, 3);
assert!(page.is_empty());
assert_eq!(total, 3);
}
#[test]
fn test_paginate_empty_data() {
let data: Vec<i32> = vec![];
let (page, total) = paginate(&data, 1, 10);
assert!(page.is_empty());
assert_eq!(total, 0);
}
#[test]
fn test_paginate_page_zero() {
// page 0 with saturating_sub becomes 0, so start = 0
let data = vec![1, 2, 3];
let (page, _) = paginate(&data, 0, 2);
assert_eq!(page, vec![1, 2]);
}
#[test]
fn test_paginate_large_limit() {
let data = vec![1, 2, 3];
let (page, total) = paginate(&data, 1, 100);
assert_eq!(page, vec![1, 2, 3]);
assert_eq!(total, 3);
}
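A reconstruction of `paginate` consistent with the tail shown earlier (`end = (start + limit).min(total)`) and with the page-0 `saturating_sub` behavior these tests rely on:

```rust
/// Sketch of paginate: 1-indexed pages, page 0 saturates to page 1,
/// out-of-range pages yield an empty slice; total is always data.len().
fn paginate<T: Clone>(data: &[T], page: usize, limit: usize) -> (Vec<T>, usize) {
    let total = data.len();
    let start = page.saturating_sub(1).saturating_mul(limit).min(total);
    let end = (start + limit).min(total);
    (data[start..end].to_vec(), total)
}

fn main() {
    let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    assert_eq!(paginate(&data, 1, 3).0, vec![1, 2, 3]);
    assert_eq!(paginate(&data, 4, 3).0, vec![10]);
    assert!(paginate(&data, 9, 3).0.is_empty());
    assert_eq!(paginate(&data, 0, 2).0, vec![1, 2]);
    println!("ok");
}
```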
#[test]
fn test_registry_index_new() {
let idx = RegistryIndex::new();
assert_eq!(idx.count(), 0);
assert!(idx.is_dirty());
}
#[test]
fn test_registry_index_invalidate() {
let idx = RegistryIndex::new();
// Initially dirty
assert!(idx.is_dirty());
// Set data clears dirty
idx.set(vec![RepoInfo {
name: "test".to_string(),
versions: 1,
size: 100,
updated: "2026-01-01".to_string(),
}]);
assert!(!idx.is_dirty());
assert_eq!(idx.count(), 1);
// Invalidate makes it dirty again
idx.invalidate();
assert!(idx.is_dirty());
}
#[test]
fn test_registry_index_get_cached() {
let idx = RegistryIndex::new();
idx.set(vec![
RepoInfo {
name: "a".to_string(),
versions: 2,
size: 200,
updated: "today".to_string(),
},
RepoInfo {
name: "b".to_string(),
versions: 1,
size: 100,
updated: "yesterday".to_string(),
},
]);
let cached = idx.get_cached();
assert_eq!(cached.len(), 2);
assert_eq!(cached[0].name, "a");
}
#[test]
fn test_registry_index_default() {
let idx = RegistryIndex::default();
assert_eq!(idx.count(), 0);
}
#[test]
fn test_repo_index_new() {
let idx = RepoIndex::new();
let (d, m, n, c, p) = idx.counts();
assert_eq!((d, m, n, c, p), (0, 0, 0, 0, 0));
}
#[test]
fn test_repo_index_invalidate() {
let idx = RepoIndex::new();
// Should not panic for any registry
idx.invalidate("docker");
idx.invalidate("maven");
idx.invalidate("npm");
idx.invalidate("cargo");
idx.invalidate("pypi");
idx.invalidate("unknown"); // should be a no-op
}
#[test]
fn test_repo_index_default() {
let idx = RepoIndex::default();
let (d, m, n, c, p) = idx.counts();
assert_eq!((d, m, n, c, p), (0, 0, 0, 0, 0));
}
#[test]
fn test_to_sorted_vec() {
let mut map = std::collections::HashMap::new();
map.insert("zebra".to_string(), (3usize, 100u64, 0u64));
map.insert("alpha".to_string(), (1, 50, 1700000000));
let result = to_sorted_vec(map);
assert_eq!(result.len(), 2);
assert_eq!(result[0].name, "alpha");
assert_eq!(result[0].versions, 1);
assert_eq!(result[0].size, 50);
assert_ne!(result[0].updated, "N/A");
assert_eq!(result[1].name, "zebra");
assert_eq!(result[1].versions, 3);
assert_eq!(result[1].updated, "N/A"); // modified = 0
}
}


@@ -333,9 +333,9 @@ pub fn render_mount_points_table(
format!(
r##"
<tr class="border-b border-slate-700">
<td class="py-3 text-slate-300">{}</td>
<td class="py-3 font-mono text-blue-400">{}</td>
<td class="py-3 text-slate-400">{}</td>
<td class="px-4 py-3 text-slate-300">{}</td>
<td class="px-4 py-3 font-mono text-blue-400">{}</td>
<td class="px-4 py-3 text-slate-400">{}</td>
</tr>
"##,
registry, mount_path, proxy_display
@@ -358,7 +358,7 @@ pub fn render_mount_points_table(
<th class="px-4 py-2">{}</th>
</tr>
</thead>
<tbody class="px-4">
<tbody>
{}
</tbody>
</table>
@@ -388,11 +388,11 @@ pub fn render_activity_row(
format!(
r##"
<tr class="border-b border-slate-700/50 text-sm">
<td class="py-2 text-slate-500">{}</td>
<td class="py-2 font-medium {}"><span class="px-2 py-0.5 bg-slate-700 rounded">{}</span></td>
<td class="py-2 text-slate-300 font-mono text-xs">{}</td>
<td class="py-2 text-slate-400">{}</td>
<td class="py-2 text-slate-500">{}</td>
<td class="px-4 py-2 text-slate-500">{}</td>
<td class="px-4 py-2 font-medium {}"><span class="px-2 py-0.5 bg-slate-700 rounded">{}</span></td>
<td class="px-4 py-2 text-slate-300 font-mono text-xs">{}</td>
<td class="px-4 py-2 text-slate-400">{}</td>
<td class="px-4 py-2 text-slate-500">{}</td>
</tr>
"##,
timestamp,
@@ -424,7 +424,7 @@ pub fn render_activity_log(rows: &str, t: &Translations) -> String {
<th class="px-4 py-2">{}</th>
</tr>
</thead>
<tbody class="px-4">
<tbody>
{}
</tbody>
</table>


@@ -74,16 +74,56 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
t.no_activity
)
} else {
data.activity
// Group consecutive identical entries (same action+artifact+registry+source)
struct GroupedActivity {
time: String,
action: String,
artifact: String,
registry: String,
source: String,
count: usize,
}
let mut grouped: Vec<GroupedActivity> = Vec::new();
for entry in &data.activity {
let action = entry.action.to_string();
let is_repeat = grouped.last().is_some_and(|last| {
last.action == action
&& last.artifact == entry.artifact
&& last.registry == entry.registry
&& last.source == entry.source
});
if is_repeat {
if let Some(last) = grouped.last_mut() {
last.count += 1;
}
} else {
grouped.push(GroupedActivity {
time: format_relative_time(&entry.timestamp),
action,
artifact: entry.artifact.clone(),
registry: entry.registry.clone(),
source: entry.source.clone(),
count: 1,
});
}
}
grouped
.iter()
.map(|entry| {
let time_ago = format_relative_time(&entry.timestamp);
.map(|g| {
let display_artifact = if g.count > 1 {
format!("{} (x{})", g.artifact, g.count)
} else {
g.artifact.clone()
};
render_activity_row(
&time_ago,
&entry.action.to_string(),
&entry.artifact,
&entry.registry,
&entry.source,
&g.time,
&g.action,
&display_artifact,
&g.registry,
&g.source,
)
})
.collect()


@@ -1,28 +0,0 @@
[package]
name = "nora-storage"
version.workspace = true
edition.workspace = true
license.workspace = true
authors.workspace = true
repository.workspace = true
homepage.workspace = true
description = "S3-compatible storage server for NORA"
[[bin]]
name = "nora-storage"
path = "src/main.rs"
[dependencies]
tokio.workspace = true
axum.workspace = true
serde.workspace = true
serde_json.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true
toml = "1.0"
uuid = { version = "1", features = ["v4"] }
sha2 = "0.10"
base64 = "0.22"
httpdate = "1"
chrono = { version = "0.4", features = ["serde"] }
quick-xml = { version = "0.39", features = ["serialize"] }

File diff suppressed because it is too large


@@ -1,47 +0,0 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use serde::{Deserialize, Serialize};
use std::fs;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub server: ServerConfig,
pub storage: StorageConfig,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServerConfig {
pub host: String,
pub port: u16,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StorageConfig {
pub data_dir: String,
pub max_body_size: usize,
}
impl Config {
pub fn load() -> Self {
fs::read_to_string("config.toml")
.ok()
.and_then(|content| toml::from_str(&content).ok())
.unwrap_or_default()
}
}
impl Default for Config {
fn default() -> Self {
Self {
server: ServerConfig {
host: String::from("127.0.0.1"),
port: 3000,
},
storage: StorageConfig {
data_dir: String::from("data"),
max_body_size: 1024 * 1024 * 1024, // 1GB
},
}
}
}


@@ -1,311 +0,0 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
mod config;
use axum::extract::DefaultBodyLimit;
use axum::{
body::Bytes,
extract::{Path, State},
http::StatusCode,
response::{IntoResponse, Response},
routing::{delete, get, put},
Router,
};
use chrono::Utc;
use config::Config;
use quick_xml::se::to_string as to_xml;
use serde::Serialize;
use std::fs;
use std::sync::Arc;
use tracing::info;
pub struct AppState {
pub config: Config,
}
#[derive(Serialize)]
#[serde(rename = "ListAllMyBucketsResult")]
struct ListBucketsResult {
#[serde(rename = "Buckets")]
buckets: Buckets,
}
#[derive(Serialize)]
struct Buckets {
#[serde(rename = "Bucket")]
bucket: Vec<BucketInfo>,
}
#[derive(Serialize)]
struct BucketInfo {
#[serde(rename = "Name")]
name: String,
#[serde(rename = "CreationDate")]
creation_date: String,
}
#[derive(Serialize)]
#[serde(rename = "ListBucketResult")]
struct ListObjectsResult {
#[serde(rename = "Name")]
name: String,
#[serde(rename = "Contents")]
contents: Vec<ObjectInfo>,
}
#[derive(Serialize)]
struct ObjectInfo {
#[serde(rename = "Key")]
key: String,
#[serde(rename = "Size")]
size: u64,
#[serde(rename = "LastModified")]
last_modified: String,
}
#[derive(Serialize)]
#[serde(rename = "Error")]
struct S3Error {
#[serde(rename = "Code")]
code: String,
#[serde(rename = "Message")]
message: String,
}
fn xml_response<T: Serialize>(data: T) -> Response {
let xml = format!(
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n{}",
to_xml(&data).unwrap_or_default()
);
(
StatusCode::OK,
[(axum::http::header::CONTENT_TYPE, "application/xml")],
xml,
)
.into_response()
}
fn error_response(status: StatusCode, code: &str, message: &str) -> Response {
let error = S3Error {
code: code.to_string(),
message: message.to_string(),
};
let xml = format!(
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n{}",
to_xml(&error).unwrap_or_default()
);
(
status,
[(axum::http::header::CONTENT_TYPE, "application/xml")],
xml,
)
.into_response()
}
#[tokio::main]
async fn main() {
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("nora_storage=info".parse().expect("valid directive")),
)
.init();
let config = Config::load();
fs::create_dir_all(&config.storage.data_dir).expect("Failed to create data directory");
let state = Arc::new(AppState {
config: config.clone(),
});
let app = Router::new()
.route("/", get(list_buckets))
.route("/{bucket}", get(list_objects))
.route("/{bucket}", put(create_bucket))
.route("/{bucket}", delete(delete_bucket))
.route("/{bucket}/{*key}", put(put_object))
.route("/{bucket}/{*key}", get(get_object))
.route("/{bucket}/{*key}", delete(delete_object))
.layer(DefaultBodyLimit::max(config.storage.max_body_size))
.with_state(state);
let addr = format!("{}:{}", config.server.host, config.server.port);
let listener = tokio::net::TcpListener::bind(&addr)
.await
.expect("Failed to bind to address");
info!("nora-storage (S3 compatible) running on http://{}", addr);
axum::serve(listener, app).await.expect("Server error");
}
async fn list_buckets(State(state): State<Arc<AppState>>) -> Response {
let data_dir = &state.config.storage.data_dir;
let entries = match fs::read_dir(data_dir) {
Ok(e) => e,
Err(_) => {
return error_response(
StatusCode::INTERNAL_SERVER_ERROR,
"InternalError",
"Failed to read data",
)
}
};
let bucket_list: Vec<BucketInfo> = entries
.filter_map(|e| e.ok())
.filter(|e| e.path().is_dir())
.filter_map(|e| {
let name = e.file_name().into_string().ok()?;
let modified = e.metadata().ok()?.modified().ok()?;
let datetime: chrono::DateTime<Utc> = modified.into();
Some(BucketInfo {
name,
creation_date: datetime.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
})
})
.collect();
xml_response(ListBucketsResult {
buckets: Buckets {
bucket: bucket_list,
},
})
}
async fn list_objects(State(state): State<Arc<AppState>>, Path(bucket): Path<String>) -> Response {
let bucket_path = format!("{}/{}", state.config.storage.data_dir, bucket);
if !std::path::Path::new(&bucket_path).is_dir() {
return error_response(
StatusCode::NOT_FOUND,
"NoSuchBucket",
"The specified bucket does not exist",
);
}
let objects = collect_files(std::path::Path::new(&bucket_path), "");
xml_response(ListObjectsResult {
name: bucket,
contents: objects,
})
}
fn collect_files(dir: &std::path::Path, prefix: &str) -> Vec<ObjectInfo> {
let mut objects = Vec::new();
if let Ok(entries) = fs::read_dir(dir) {
for entry in entries.filter_map(|e| e.ok()) {
let path = entry.path();
let name = entry.file_name().into_string().unwrap_or_default();
let key = if prefix.is_empty() {
name.clone()
} else {
format!("{}/{}", prefix, name)
};
if path.is_dir() {
objects.extend(collect_files(&path, &key));
} else if let Ok(metadata) = entry.metadata() {
if let Ok(modified) = metadata.modified() {
let datetime: chrono::DateTime<Utc> = modified.into();
objects.push(ObjectInfo {
key,
size: metadata.len(),
last_modified: datetime.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
});
}
}
}
}
objects
}
async fn create_bucket(State(state): State<Arc<AppState>>, Path(bucket): Path<String>) -> Response {
let bucket_path = format!("{}/{}", state.config.storage.data_dir, bucket);
match fs::create_dir(&bucket_path) {
Ok(_) => (StatusCode::OK, "").into_response(),
Err(_) => error_response(
StatusCode::CONFLICT,
"BucketAlreadyExists",
"Bucket already exists",
),
}
}
async fn put_object(
State(state): State<Arc<AppState>>,
Path((bucket, key)): Path<(String, String)>,
body: Bytes,
) -> Response {
let file_path = format!("{}/{}/{}", state.config.storage.data_dir, bucket, key);
if let Some(parent) = std::path::Path::new(&file_path).parent() {
let _ = fs::create_dir_all(parent);
}
match fs::write(&file_path, &body) {
Ok(_) => {
println!("PUT {}/{} ({} bytes)", bucket, key, body.len());
(StatusCode::OK, "").into_response()
}
Err(e) => {
println!("ERROR writing {}/{}: {}", bucket, key, e);
error_response(
StatusCode::INTERNAL_SERVER_ERROR,
"InternalError",
"Failed to write object",
)
}
}
}
async fn get_object(
State(state): State<Arc<AppState>>,
Path((bucket, key)): Path<(String, String)>,
) -> Response {
let file_path = format!("{}/{}/{}", state.config.storage.data_dir, bucket, key);
match fs::read(&file_path) {
Ok(data) => (StatusCode::OK, data).into_response(),
Err(_) => error_response(
StatusCode::NOT_FOUND,
"NoSuchKey",
"The specified key does not exist",
),
}
}
async fn delete_object(
State(state): State<Arc<AppState>>,
Path((bucket, key)): Path<(String, String)>,
) -> Response {
let file_path = format!("{}/{}/{}", state.config.storage.data_dir, bucket, key);
match fs::remove_file(&file_path) {
Ok(_) => {
println!("DELETE {}/{}", bucket, key);
(StatusCode::NO_CONTENT, "").into_response()
}
Err(_) => error_response(
StatusCode::NOT_FOUND,
"NoSuchKey",
"The specified key does not exist",
),
}
}
async fn delete_bucket(State(state): State<Arc<AppState>>, Path(bucket): Path<String>) -> Response {
let bucket_path = format!("{}/{}", state.config.storage.data_dir, bucket);
match fs::remove_dir(&bucket_path) {
Ok(_) => {
println!("DELETE bucket {}", bucket);
(StatusCode::NO_CONTENT, "").into_response()
}
Err(_) => error_response(
StatusCode::CONFLICT,
"BucketNotEmpty",
"The bucket is not empty",
),
}
}

tests/e2e/.gitignore vendored Normal file

@@ -0,0 +1,3 @@
node_modules/
test-results/
playwright-report/

tests/e2e/package-lock.json generated Normal file

@@ -0,0 +1,76 @@
{
"name": "nora-e2e",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "nora-e2e",
"devDependencies": {
"@playwright/test": "^1.50.0"
}
},
"node_modules/@playwright/test": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.58.2.tgz",
"integrity": "sha512-akea+6bHYBBfA9uQqSYmlJXn61cTa+jbO87xVLCWbTqbWadRVmhxlXATaOjOgcBaWU4ePo0wB41KMFv3o35IXA==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright": "1.58.2"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=18"
}
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/playwright": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/playwright/-/playwright-1.58.2.tgz",
"integrity": "sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright-core": "1.58.2"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=18"
},
"optionalDependencies": {
"fsevents": "2.3.2"
}
},
"node_modules/playwright-core": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.58.2.tgz",
"integrity": "sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"playwright-core": "cli.js"
},
"engines": {
"node": ">=18"
}
}
}
}

tests/e2e/package.json Normal file

@@ -0,0 +1,11 @@
{
"name": "nora-e2e",
"private": true,
"scripts": {
"test": "npx playwright test",
"test:ui": "npx playwright test --ui"
},
"devDependencies": {
"@playwright/test": "^1.50.0"
}
}


@@ -0,0 +1,18 @@
import { defineConfig } from '@playwright/test';
export default defineConfig({
testDir: './tests',
timeout: 30000,
retries: 1,
use: {
baseURL: process.env.NORA_URL || 'http://localhost:4000',
screenshot: 'only-on-failure',
trace: 'retain-on-failure',
},
projects: [
{
name: 'chromium',
use: { browserName: 'chromium' },
},
],
});


@@ -0,0 +1,82 @@
import { test, expect } from '@playwright/test';
test.describe('NORA Dashboard', () => {
test('dashboard page loads and shows title', async ({ page }) => {
await page.goto('/ui/');
await expect(page).toHaveTitle(/NORA|nora/i);
});
test('dashboard shows registry sections', async ({ page }) => {
await page.goto('/ui/');
// All registry types should be visible
await expect(page.getByText(/Docker/i).first()).toBeVisible();
await expect(page.getByText(/npm/i).first()).toBeVisible();
await expect(page.getByText(/Maven/i).first()).toBeVisible();
await expect(page.getByText(/PyPI/i).first()).toBeVisible();
await expect(page.getByText(/Cargo/i).first()).toBeVisible();
});
test('dashboard shows non-zero npm count after proxy fetch', async ({ page, request }) => {
// Trigger npm proxy cache by fetching a package
await request.get('/npm/chalk');
await request.get('/npm/chalk/-/chalk-5.4.1.tgz');
// Wait a moment for index rebuild
await page.waitForTimeout(1000);
await page.goto('/ui/');
// npm section should show at least 1 package
// Look for a number > 0 near npm section
const statsResponse = await request.get('/api/ui/stats');
const stats = await statsResponse.json();
expect(stats.npm).toBeGreaterThan(0);
// Verify it's actually rendered on the page
await page.goto('/ui/');
await page.waitForTimeout(500);
// The page should contain the package count somewhere
const content = await page.textContent('body');
expect(content).not.toBeNull();
// Should not show all zeros for npm
expect(content).toContain('npm');
});
test('dashboard shows Docker images after proxy fetch', async ({ page, request }) => {
// Check stats API
const statsResponse = await request.get('/api/ui/stats');
const stats = await statsResponse.json();
// Docker count should be accessible (may be 0 if no images pulled yet)
expect(stats).toHaveProperty('docker');
});
test('health endpoint returns healthy', async ({ request }) => {
const response = await request.get('/health');
expect(response.ok()).toBeTruthy();
const health = await response.json();
expect(health.status).toBe('healthy');
expect(health.registries.npm).toBe('ok');
expect(health.registries.docker).toBe('ok');
expect(health.registries.maven).toBe('ok');
expect(health.registries.pypi).toBe('ok');
expect(health.registries.cargo).toBe('ok');
});
test('OpenAPI docs endpoint accessible', async ({ request }) => {
const response = await request.get('/api-docs', { maxRedirects: 0 });
// api-docs redirects to swagger UI
expect([200, 303]).toContain(response.status());
});
test('metrics endpoint returns prometheus format', async ({ request }) => {
const response = await request.get('/metrics');
expect(response.ok()).toBeTruthy();
const text = await response.text();
expect(text).toContain('nora_http_request_duration_seconds');
});
});


@@ -0,0 +1,74 @@
import { test, expect } from '@playwright/test';
test.describe('Docker Registry', () => {
test('v2 check returns empty JSON', async ({ request }) => {
const response = await request.get('/v2/');
expect(response.ok()).toBeTruthy();
const body = await response.json();
expect(body).toEqual({});
});
test('catalog endpoint returns 200', async ({ request }) => {
const response = await request.get('/v2/_catalog');
expect(response.ok()).toBeTruthy();
});
test('put and get manifest works', async ({ request }) => {
// Push a simple blob
const blobData = 'test-blob-content';
const crypto = require('crypto');
const blobDigest = 'sha256:' + crypto.createHash('sha256').update(blobData).digest('hex');
await request.post(`/v2/e2e-test/blobs/uploads/?digest=${blobDigest}`, {
data: blobData,
headers: { 'Content-Type': 'application/octet-stream' },
});
// Push config blob
const configData = '{}';
const configDigest = 'sha256:' + crypto.createHash('sha256').update(configData).digest('hex');
await request.post(`/v2/e2e-test/blobs/uploads/?digest=${configDigest}`, {
data: configData,
headers: { 'Content-Type': 'application/octet-stream' },
});
// Push manifest
const manifest = {
schemaVersion: 2,
mediaType: 'application/vnd.oci.image.manifest.v1+json',
config: {
mediaType: 'application/vnd.oci.image.config.v1+json',
digest: configDigest,
size: configData.length,
},
layers: [
{
mediaType: 'application/vnd.oci.image.layer.v1.tar+gzip',
digest: blobDigest,
size: blobData.length,
},
],
};
const putResponse = await request.put('/v2/e2e-test/manifests/1.0.0', {
data: manifest,
headers: { 'Content-Type': 'application/vnd.oci.image.manifest.v1+json' },
});
expect(putResponse.status()).toBe(201);
// Pull manifest back
const getResponse = await request.get('/v2/e2e-test/manifests/1.0.0');
expect(getResponse.ok()).toBeTruthy();
const pulled = await getResponse.json();
expect(pulled.schemaVersion).toBe(2);
expect(pulled.layers).toHaveLength(1);
});
test('tags list returns pushed tags', async ({ request }) => {
const response = await request.get('/v2/e2e-test/tags/list');
// May or may not have tags depending on test order
expect([200, 404]).toContain(response.status());
});
});
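The manifest test above derives blob digests as `'sha256:' + hex(sha256(payload))`, the OCI content-addressing scheme. The same computation in a stdlib sketch (the helper name is illustrative; the digest rule itself matches what the test sends to `/v2/.../blobs/uploads/`):

```python
import hashlib
import json

def content_digest(data: bytes) -> str:
    """OCI-style content digest: 'sha256:' + hex SHA-256 of the payload."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# An empty JSON config blob, as pushed by the e2e test.
config = b"{}"
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": content_digest(config),
        "size": len(config),
    },
    "layers": [],
}
# Registries address the manifest itself by the digest of its exact bytes,
# so serialization must be byte-stable before digesting.
manifest_bytes = json.dumps(manifest, separators=(",", ":")).encode()
print(content_digest(manifest_bytes))
```

Because the registry re-verifies digests against the bytes it receives, any whitespace or key-order change in the serialized manifest yields a different digest; the tests sidestep this by letting Playwright serialize the body once.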


@@ -0,0 +1,132 @@
import { test, expect } from '@playwright/test';
test.describe('npm Proxy', () => {
test('metadata proxy returns rewritten tarball URLs', async ({ request }) => {
const response = await request.get('/npm/chalk');
expect(response.ok()).toBeTruthy();
const metadata = await response.json();
expect(metadata.name).toBe('chalk');
expect(metadata.versions).toBeDefined();
// Tarball URL must point to NORA, not npmjs.org
const version = metadata.versions['5.4.1'];
expect(version).toBeDefined();
expect(version.dist.tarball).not.toContain('registry.npmjs.org');
expect(version.dist.tarball).toContain('/npm/chalk/-/chalk-5.4.1.tgz');
});
test('scoped package @babel/parser works', async ({ request }) => {
const response = await request.get('/npm/@babel/parser');
expect(response.ok()).toBeTruthy();
const metadata = await response.json();
expect(metadata.name).toBe('@babel/parser');
// Check tarball URL rewriting for scoped package
const versions = Object.keys(metadata.versions);
expect(versions.length).toBeGreaterThan(0);
const firstVersion = metadata.versions[versions[0]];
if (firstVersion?.dist?.tarball) {
expect(firstVersion.dist.tarball).toContain('/npm/@babel/parser/-/');
expect(firstVersion.dist.tarball).not.toContain('registry.npmjs.org');
}
});
test('tarball download returns gzip data', async ({ request }) => {
// Ensure metadata is cached first
await request.get('/npm/chalk');
const response = await request.get('/npm/chalk/-/chalk-5.4.1.tgz');
expect(response.ok()).toBeTruthy();
expect(response.headers()['content-type']).toBe('application/octet-stream');
const body = await response.body();
expect(body.length).toBeGreaterThan(100);
// gzip magic bytes
expect(body[0]).toBe(0x1f);
expect(body[1]).toBe(0x8b);
});
test('npm publish creates package', async ({ request }) => {
const pkgName = `e2e-pub-${Date.now()}`;
const publishBody = {
name: pkgName,
versions: {
'1.0.0': {
name: pkgName,
version: '1.0.0',
dist: {},
},
},
'dist-tags': { latest: '1.0.0' },
_attachments: {
[`${pkgName}-1.0.0.tgz`]: {
data: 'dGVzdA==',
content_type: 'application/octet-stream',
},
},
};
const response = await request.put(`/npm/${pkgName}`, {
data: publishBody,
headers: { 'Content-Type': 'application/json' },
});
expect(response.status()).toBe(201);
// Verify published package is accessible
const getResponse = await request.get(`/npm/${pkgName}`);
expect(getResponse.ok()).toBeTruthy();
const metadata = await getResponse.json();
expect(metadata.name).toBe(pkgName);
expect(metadata.versions['1.0.0']).toBeDefined();
});
test('npm publish rejects duplicate version (409)', async ({ request }) => {
const pkgName = `e2e-dupe-${Date.now()}`;
const body = {
name: pkgName,
versions: { '1.0.0': { name: pkgName, version: '1.0.0', dist: {} } },
'dist-tags': { latest: '1.0.0' },
_attachments: { [`${pkgName}-1.0.0.tgz`]: { data: 'dGVzdA==' } },
};
await request.put(`/npm/${pkgName}`, {
data: body,
headers: { 'Content-Type': 'application/json' },
});
// Publish same version again
const response = await request.put(`/npm/${pkgName}`, {
data: body,
headers: { 'Content-Type': 'application/json' },
});
expect(response.status()).toBe(409);
});
test('npm publish rejects name mismatch (400)', async ({ request }) => {
const response = await request.put('/npm/legitimate-pkg', {
data: {
name: 'evil-pkg',
versions: { '1.0.0': {} },
_attachments: { 'a.tgz': { data: 'dGVzdA==' } },
},
headers: { 'Content-Type': 'application/json' },
});
expect(response.status()).toBe(400);
});
test('npm publish rejects path traversal filename (400)', async ({ request }) => {
const response = await request.put('/npm/safe-pkg', {
data: {
name: 'safe-pkg',
versions: { '1.0.0': {} },
_attachments: { '../../etc/passwd': { data: 'dGVzdA==' } },
},
headers: { 'Content-Type': 'application/json' },
});
expect(response.status()).toBe(400);
});
});
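The metadata tests assert that upstream tarball URLs are rewritten from `registry.npmjs.org` to the proxy's own `/npm` mount. A minimal sketch of that rewrite, assuming the `/npm` mount point shown in the tests (function names here are illustrative, not NORA's internals):

```python
from urllib.parse import urlsplit

def rewrite_tarball(url: str, public_base: str) -> str:
    """Point an upstream npmjs tarball URL at the local proxy.

    'https://registry.npmjs.org/chalk/-/chalk-5.4.1.tgz'
    -> '<public_base>/npm/chalk/-/chalk-5.4.1.tgz'
    """
    path = urlsplit(url).path  # '/chalk/-/chalk-5.4.1.tgz'
    return public_base.rstrip("/") + "/npm" + path

def rewrite_metadata(metadata: dict, public_base: str) -> dict:
    """Rewrite every version's dist.tarball in npm registry metadata."""
    for version in metadata.get("versions", {}).values():
        dist = version.get("dist", {})
        if "tarball" in dist:
            dist["tarball"] = rewrite_tarball(dist["tarball"], public_base)
    return metadata
```

Keeping only the URL path preserves scoped names (`/@babel/parser/-/...`) unchanged, which is exactly what the `@babel/parser` test checks for.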


@@ -0,0 +1,51 @@
import { test, expect } from '@playwright/test';
test.describe('Maven Proxy', () => {
test('download Maven artifact', async ({ request }) => {
const response = await request.get(
'/maven2/org/apache/commons/commons-lang3/3.17.0/commons-lang3-3.17.0.pom'
);
expect(response.ok()).toBeTruthy();
const text = await response.text();
expect(text).toContain('commons-lang3');
});
test('Maven upload works', async ({ request }) => {
const response = await request.put('/maven2/com/test/smoke/1.0/smoke-1.0.jar', {
data: 'test-jar-content',
});
expect(response.status()).toBe(201);
});
});
test.describe('PyPI Proxy', () => {
test('simple index returns HTML', async ({ request }) => {
const response = await request.get('/simple/');
expect(response.ok()).toBeTruthy();
const text = await response.text();
expect(text).toContain('Simple Index');
});
test('package page returns links', async ({ request }) => {
const response = await request.get('/simple/requests/');
expect(response.ok()).toBeTruthy();
const text = await response.text();
expect(text).toContain('requests');
});
});
test.describe('Raw Storage', () => {
test('upload and download file', async ({ request }) => {
const data = 'raw-e2e-test-content-' + Date.now();
const putResponse = await request.put('/raw/e2e/test.txt', {
data: data,
});
expect(putResponse.status()).toBe(201);
const getResponse = await request.get('/raw/e2e/test.txt');
expect(getResponse.ok()).toBeTruthy();
const body = await getResponse.text();
expect(body).toBe(data);
});
});

tests/smoke.sh Executable file

@@ -0,0 +1,210 @@
#!/usr/bin/env bash
set -euo pipefail
# NORA E2E Smoke Test
# Starts NORA, runs real-world scenarios, verifies results.
# Exit code 0 = all passed, non-zero = failures.
NORA_BIN="${NORA_BIN:-./target/release/nora}"
PORT="${NORA_TEST_PORT:-14000}"
BASE="http://localhost:${PORT}"
STORAGE_DIR=$(mktemp -d)
PASSED=0
FAILED=0
NORA_PID=""
cleanup() {
[ -n "$NORA_PID" ] && kill "$NORA_PID" 2>/dev/null || true
rm -rf "$STORAGE_DIR"
}
trap cleanup EXIT
fail() {
echo " FAIL: $1"
FAILED=$((FAILED + 1))
}
pass() {
echo " PASS: $1"
PASSED=$((PASSED + 1))
}
check() {
local desc="$1"
shift
if "$@" >/dev/null 2>&1; then
pass "$desc"
else
fail "$desc"
fi
}
echo "=== NORA Smoke Test ==="
echo "Binary: $NORA_BIN"
echo "Port: $PORT"
echo "Storage: $STORAGE_DIR"
echo ""
# Start NORA
NORA_HOST=127.0.0.1 \
NORA_PORT=$PORT \
NORA_STORAGE_PATH="$STORAGE_DIR" \
NORA_RATE_LIMIT_ENABLED=false \
NORA_PUBLIC_URL="$BASE" \
"$NORA_BIN" serve &
NORA_PID=$!
# Wait for startup
for i in $(seq 1 20); do
curl -sf "$BASE/health" >/dev/null 2>&1 && break
sleep 0.5
done
echo "--- Health & Monitoring ---"
check "GET /health returns healthy" \
curl -sf "$BASE/health"
check "GET /ready returns 200" \
curl -sf "$BASE/ready"
check "GET /metrics returns prometheus" \
curl -sf "$BASE/metrics"
echo ""
echo "--- npm Proxy ---"
# Fetch metadata — triggers proxy cache
METADATA=$(curl -sf "$BASE/npm/chalk" 2>/dev/null || echo "{}")
check "npm metadata returns 200" \
curl -sf "$BASE/npm/chalk"
# URL rewriting check
TARBALL_URL=$(echo "$METADATA" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('versions',{}).get('5.4.1',{}).get('dist',{}).get('tarball',''))" 2>/dev/null || echo "")
if echo "$TARBALL_URL" | grep -q "localhost:${PORT}/npm"; then
pass "npm tarball URL rewritten to NORA"
else
fail "npm tarball URL not rewritten: $TARBALL_URL"
fi
# Fetch tarball
check "npm tarball download" \
curl -sf "$BASE/npm/chalk/-/chalk-5.4.1.tgz" -o /dev/null
# Scoped package
check "npm scoped package @babel/parser" \
curl -sf "$BASE/npm/@babel/parser"
# Publish
PUBLISH_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
-H "Content-Type: application/json" \
-d '{"name":"smoke-test-pkg","versions":{"1.0.0":{"name":"smoke-test-pkg","version":"1.0.0","dist":{}}},"dist-tags":{"latest":"1.0.0"},"_attachments":{"smoke-test-pkg-1.0.0.tgz":{"data":"dGVzdA==","content_type":"application/octet-stream"}}}' \
"$BASE/npm/smoke-test-pkg")
if [ "$PUBLISH_RESULT" = "201" ]; then
pass "npm publish returns 201"
else
fail "npm publish returned $PUBLISH_RESULT"
fi
# Version immutability
DUPE_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
-H "Content-Type: application/json" \
-d '{"name":"smoke-test-pkg","versions":{"1.0.0":{"name":"smoke-test-pkg","version":"1.0.0","dist":{}}},"dist-tags":{"latest":"1.0.0"},"_attachments":{"smoke-test-pkg-1.0.0.tgz":{"data":"dGVzdA==","content_type":"application/octet-stream"}}}' \
"$BASE/npm/smoke-test-pkg")
if [ "$DUPE_RESULT" = "409" ]; then
pass "npm version immutability (409 on duplicate)"
else
fail "npm duplicate publish returned $DUPE_RESULT, expected 409"
fi
# Security: name mismatch
MISMATCH_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
-H "Content-Type: application/json" \
-d '{"name":"evil-pkg","versions":{"1.0.0":{}},"_attachments":{"a.tgz":{"data":"dGVzdA=="}}}' \
"$BASE/npm/lodash")
if [ "$MISMATCH_RESULT" = "400" ]; then
pass "npm name mismatch rejected (400)"
else
fail "npm name mismatch returned $MISMATCH_RESULT, expected 400"
fi
# Security: path traversal
TRAVERSAL_RESULT=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
-H "Content-Type: application/json" \
-d '{"name":"test-pkg","versions":{"1.0.0":{}},"_attachments":{"../../etc/passwd":{"data":"dGVzdA=="}}}' \
"$BASE/npm/test-pkg")
if [ "$TRAVERSAL_RESULT" = "400" ]; then
pass "npm path traversal rejected (400)"
else
fail "npm path traversal returned $TRAVERSAL_RESULT, expected 400"
fi
echo ""
echo "--- Maven ---"
check "Maven proxy download" \
curl -sf "$BASE/maven2/org/apache/commons/commons-lang3/3.17.0/commons-lang3-3.17.0.pom" -o /dev/null
echo ""
echo "--- PyPI ---"
check "PyPI simple index" \
curl -sf "$BASE/simple/"
check "PyPI package page" \
curl -sf "$BASE/simple/requests/"
echo ""
echo "--- Docker ---"
check "Docker v2 check" \
curl -sf "$BASE/v2/"
echo ""
echo "--- Raw ---"
echo "raw-test-data" | curl -sf -X PUT --data-binary @- "$BASE/raw/smoke/test.txt" >/dev/null 2>&1
check "Raw upload" \
curl -sf "$BASE/raw/smoke/test.txt" -o /dev/null
echo ""
echo "--- UI & API ---"
check "UI dashboard loads" \
curl -sf "$BASE/ui/"
check "OpenAPI docs" \
curl -sf "$BASE/api-docs" -o /dev/null
# Dashboard stats — check npm count > 0 after proxy fetches
sleep 1
STATS=$(curl -sf "$BASE/ui/api/stats" 2>/dev/null || echo "{}")
NPM_COUNT=$(echo "$STATS" | python3 -c "import sys,json; print(json.load(sys.stdin).get('npm',0))" 2>/dev/null || echo "0")
if [ "$NPM_COUNT" -gt 0 ] 2>/dev/null; then
pass "Dashboard npm count > 0 (got $NPM_COUNT)"
else
fail "Dashboard npm count is $NPM_COUNT, expected > 0"
fi
echo ""
echo "--- Mirror CLI ---"
# Create a minimal lockfile
LOCKFILE=$(mktemp)
cat > "$LOCKFILE" << 'EOF'
{
"lockfileVersion": 3,
"packages": {
"": { "name": "test" },
"node_modules/chalk": { "version": "5.4.1" }
}
}
EOF
MIRROR_RESULT=$("$NORA_BIN" mirror --registry "$BASE" npm --lockfile "$LOCKFILE" 2>&1)
if echo "$MIRROR_RESULT" | grep -q "Failed: 0"; then
pass "nora mirror npm --lockfile (0 failures)"
else
fail "nora mirror: $MIRROR_RESULT"
fi
rm -f "$LOCKFILE"
echo ""
echo "================================"
echo "Results: $PASSED passed, $FAILED failed"
echo "================================"
[ "$FAILED" -eq 0 ] && exit 0 || exit 1
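The mirror step above feeds `nora mirror` an npm `lockfileVersion: 3` file, whose `packages` map keys entries by install path. A sketch of extracting (name, version) pairs from that layout (the helper name is illustrative; the key format is npm's documented v3 scheme):

```python
import json

def lockfile_packages(lock_text: str):
    """Yield (name, version) from an npm lockfileVersion>=3 'packages' map.

    Keys look like 'node_modules/chalk' or 'node_modules/@babel/parser';
    the empty key '' is the root project itself.
    """
    lock = json.loads(lock_text)
    for key, meta in lock.get("packages", {}).items():
        if not key or "version" not in meta:
            continue  # skip the root entry and entries without a version
        # The name is everything after the last 'node_modules/' segment,
        # which keeps scoped names like '@babel/parser' intact and handles
        # nested paths like 'node_modules/a/node_modules/b'.
        name = key.rsplit("node_modules/", 1)[-1]
        yield name, meta["version"]

lock = '{"lockfileVersion":3,"packages":{"":{"name":"test"},"node_modules/chalk":{"version":"5.4.1"}}}'
print(list(lockfile_packages(lock)))
# → [('chalk', '5.4.1')]
```

This matches the smoke test's expectation: the minimal lockfile yields exactly one package (`chalk@5.4.1`) for the mirror to fetch, so "Failed: 0" means that single download succeeded.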