18 Commits

e415f0f1ce fix: Docker dashboard for namespaced images, library/ auto-prepend for Hub official images (v0.2.32)
Docker dashboard:
- build_docker_index now finds manifests segment by position, not fixed index
- Correctly indexes library/alpine, grafana/grafana, and other namespaced images

Docker proxy:
- Auto-prepend library/ for single-segment names when upstream returns 404
- Applies to both manifests and blobs
- nginx, alpine, node now work without explicit library/ prefix
- Cached under original name for future local hits
2026-03-18 08:07:53 +00:00
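The `library/` auto-prepend fallback described in the commit above can be sketched roughly as follows; the function name and shape are illustrative, not NORA's actual proxy code:

```rust
// Sketch of the library/ auto-prepend fallback. Docker Hub stores
// official images under the `library/` namespace, so a single-segment
// name like "alpine" is retried as "library/alpine" when the upstream
// returns 404. Illustrative only; NORA's real code differs.

/// Return the `library/`-prefixed form of a single-segment image name,
/// or None if the name is already namespaced (no retry needed).
fn with_library_prefix(name: &str) -> Option<String> {
    // Only single-segment names qualify: "alpine" yes, "grafana/grafana" no.
    if name.contains('/') {
        None
    } else {
        Some(format!("library/{name}"))
    }
}

fn main() {
    assert_eq!(with_library_prefix("alpine").as_deref(), Some("library/alpine"));
    assert_eq!(with_library_prefix("grafana/grafana"), None);
    println!("ok");
}
```

On a 404 the proxy would retry the upstream request with the prefixed name, then cache the result under the original name so future pulls of `alpine` hit the cache directly.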
aa86633a04 security: add cargo-fuzz targets and ClusterFuzzLite config
Fuzz targets:
- fuzz_validation: storage key, Docker name, digest, reference validators
- fuzz_docker_manifest: Docker/OCI manifest media type detection

Infrastructure:
- lib.rs exposing validation module and docker_fuzz for fuzz harnesses
- ClusterFuzzLite project config (libfuzzer + ASan)
2026-03-17 11:20:17 +00:00
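A validation-style fuzz target of the kind listed above wraps a pure validator so the fuzzer can feed it arbitrary bytes and check that it never panics. A rough sketch, with a hypothetical validator whose rules only approximate the Docker repository-name grammar (NORA's actual checks may differ):

```rust
// Illustrative validator of the kind fuzz_validation exercises.
// Rules approximate the Docker name grammar; not NORA's exact code.

/// Accept names like "library/alpine": slash-separated segments of
/// lowercase alphanumerics, with '.', '_', '-' allowed only inside
/// a segment, never at its edges.
fn is_valid_docker_name(name: &str) -> bool {
    !name.is_empty()
        && name.split('/').all(|seg| {
            !seg.is_empty()
                && seg.chars().all(|c| {
                    c.is_ascii_lowercase() || c.is_ascii_digit() || "._-".contains(c)
                })
                && !seg.starts_with(|c: char| "._-".contains(c))
                && !seg.ends_with(|c: char| "._-".contains(c))
        })
}

fn main() {
    // A cargo-fuzz harness would wrap this roughly as:
    //   fuzz_target!(|data: &[u8]| {
    //       if let Ok(s) = std::str::from_utf8(data) {
    //           let _ = is_valid_docker_name(s); // must never panic
    //       }
    //   });
    assert!(is_valid_docker_name("library/alpine"));
    assert!(!is_valid_docker_name("../etc/passwd"));
    println!("ok");
}
```

The fuzzer's job here is not to check the verdict but to prove the validator is total: any byte sequence either validates or is rejected, with no panic or hang.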
31afa1f70b fix: use tags for scorecard webapp verification 2026-03-17 11:04:48 +00:00
f36abd82ef fix: use scorecard-action by tag for webapp verification 2026-03-17 11:02:14 +00:00
ea6a86b0f1 docs: add OpenSSF Scorecard badge 2026-03-17 10:41:00 +00:00
638f99d8dc Merge pull request #32 from getnora-io/dependabot/cargo/tracing-subscriber-0.3.23
chore(deps): bump tracing-subscriber from 0.3.22 to 0.3.23
2026-03-17 13:38:22 +03:00
c55307a3af Merge pull request #31 from getnora-io/dependabot/cargo/clap-4.6.0
chore(deps): bump clap from 4.5.60 to 4.6.0
2026-03-17 13:38:20 +03:00
cc416f3adf Merge pull request #30 from getnora-io/dependabot/cargo/tempfile-3.27.0
chore(deps): bump tempfile from 3.26.0 to 3.27.0
2026-03-17 13:38:17 +03:00
30aedac238 Merge pull request #33 from getnora-io/security/scorecard-hardening
security: OpenSSF Scorecard hardening
2026-03-17 13:36:40 +03:00
34e85acd6e security: harden OpenSSF Scorecard compliance
- Pin all GitHub Actions by SHA hash (Pinned-Dependencies)
- Add top-level permissions: read-all (Token-Permissions)
- Add explicit job-level permissions (least privilege)
- Add OpenSSF Scorecard workflow with weekly schedule
- Publish scorecard results to scorecard.dev and GitHub Security tab
2026-03-17 10:30:15 +00:00
dependabot[bot]
41eefdd90d chore(deps): bump tracing-subscriber from 0.3.22 to 0.3.23
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.3.22 to 0.3.23.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.22...tracing-subscriber-0.3.23)

---
updated-dependencies:
- dependency-name: tracing-subscriber
  dependency-version: 0.3.23
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:25:13 +00:00
dependabot[bot]
94ca418155 chore(deps): bump clap from 4.5.60 to 4.6.0
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.60 to 4.6.0.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.60...clap_complete-v4.6.0)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:25:03 +00:00
dependabot[bot]
e72648a6c4 chore(deps): bump tempfile from 3.26.0 to 3.27.0
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.26.0 to 3.27.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.26.0...v3.27.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-version: 3.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:24:55 +00:00
18e93d23a9 docs: Russian documentation — admin guide, user guide, technical spec (Минцифры) 2026-03-16 13:58:17 +00:00
db05adb060 docs: add CycloneDX SBOM (238 components, 0 vulnerabilities) 2026-03-16 13:29:59 +00:00
a57de6690e feat: nora mirror CLI + systemd + install script
nora mirror:
- Pre-fetch dependencies through NORA proxy cache
- npm: --lockfile (v1/v2/v3) and --packages with --all-versions
- pip: requirements.txt parser
- cargo: Cargo.lock parser
- maven: dependency:list output parser
- Concurrent downloads (--concurrency, default 8)
- Progress bar with indicatif
- Health check before start

dist/:
- nora.service — systemd unit with security hardening
- nora.env.example — environment configuration template
- install.sh — automated install (binary + user + systemd + config)

Tested: 103 tests pass, 0 clippy warnings, cargo audit clean.
Smoke: mirrored 70 npm packages from real lockfile in 5.4s.
2026-03-16 13:27:37 +00:00
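As a rough illustration of the lockfile-parsing side of `nora mirror`, a minimal requirements.txt parser might look like this; real requirements syntax (extras, environment markers, VCS URLs) is much richer, and this is not NORA's actual parser:

```rust
// Minimal sketch of pip lockfile parsing for mirroring. Only handles
// exact `pkg==1.2.3` pins; illustrative, not NORA's real parser.

/// Extract (name, version) pairs from simple `pkg==version` lines,
/// skipping comments and blank lines.
fn parse_requirements(text: &str) -> Vec<(String, String)> {
    text.lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with('#'))
        .filter_map(|l| {
            let (name, ver) = l.split_once("==")?;
            Some((name.trim().to_string(), ver.trim().to_string()))
        })
        .collect()
}

fn main() {
    let reqs = parse_requirements("# deps\nrequests==2.31.0\n\nflask==3.0.0\n");
    assert_eq!(reqs.len(), 2);
    assert_eq!(reqs[0], ("requests".into(), "2.31.0".into()));
    println!("ok");
}
```

Each parsed pair would then be fetched through the NORA proxy (warming the cache) by a pool of concurrent workers, bounded by `--concurrency`.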
d3439ae33d docs: changelog v0.2.31 2026-03-16 12:51:10 +00:00
b3b74b8b2d feat: npm full proxy — URL rewriting, scoped packages, publish, integrity cache (v0.2.31)
npm proxy:
- Rewrite tarball URLs in metadata to point to NORA (was broken — tarballs bypassed NORA)
- Scoped packages (@scope/package) full support in handler and repo index
- Metadata cache TTL (NORA_NPM_METADATA_TTL, default 300s) with stale-while-revalidate
- proxy_auth now wired into fetch_from_proxy (was configured but unused)

npm publish:
- PUT /npm/{package} — accepts standard npm publish payload
- Version immutability — 409 Conflict on duplicate version
- Tarball URL rewriting in published metadata

Security:
- SHA256 integrity verification on cached tarballs (immutable cache)
- Attachment filename validation (path traversal protection)
- Package name mismatch detection (URL vs payload)

Config:
- npm.metadata_ttl — configurable cache TTL (env: NORA_NPM_METADATA_TTL)
2026-03-16 12:32:16 +00:00
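The URL-rewriting fix above exists because metadata fetched from the upstream registry points tarball URLs at npmjs.org, so clients would download tarballs directly and bypass the cache. A minimal sketch of the rewrite; hostnames, the `/npm` mount point, and the function shape are assumptions for illustration, not NORA's actual implementation:

```rust
// Sketch of npm tarball URL rewriting: upstream URLs in package
// metadata are rerouted through the proxy. Illustrative values only.

/// Rewrite an upstream tarball URL to route through the proxy's
/// public URL; URLs from other hosts are left untouched.
fn rewrite_tarball_url(url: &str, upstream: &str, public_url: &str) -> String {
    match url.strip_prefix(upstream) {
        // "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz"
        //   becomes "<public_url>/npm/lodash/-/lodash-4.17.21.tgz"
        Some(path) => format!("{public_url}/npm{path}"),
        None => url.to_string(),
    }
}

fn main() {
    let out = rewrite_tarball_url(
        "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
        "https://registry.npmjs.org",
        "http://localhost:4000",
    );
    assert_eq!(out, "http://localhost:4000/npm/lodash/-/lodash-4.17.21.tgz");
    println!("ok");
}
```

In the real proxy this rewrite runs over every version's `dist.tarball` field in the metadata document, for both plain and `@scope/package` names.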
32 changed files with 30382 additions and 111 deletions


@@ -0,0 +1,9 @@
FROM rust:1.87-slim
RUN apt-get update && apt-get install -y build-essential pkg-config && rm -rf /var/lib/apt/lists/*
RUN cargo install cargo-fuzz
COPY . /src
WORKDIR /src
RUN cd fuzz && cargo fuzz build 2>/dev/null || true


@@ -0,0 +1,5 @@
language: rust
fuzzing_engines:
- libfuzzer
sanitizers:
- address


@@ -6,18 +6,20 @@ on:
pull_request:
branches: [main]
permissions: read-all
jobs:
test:
name: Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
- name: Check formatting
run: cargo fmt --check
@@ -33,18 +35,18 @@ jobs:
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write # for uploading SARIF to GitHub Security tab
security-events: write
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0 # full history required for gitleaks
fetch-depth: 0
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
# ── Secrets ────────────────────────────────────────────────────────────
- name: Gitleaks — scan for hardcoded secrets
@@ -58,11 +60,11 @@ jobs:
run: cargo install cargo-audit --locked
- name: cargo audit — RustSec advisory database
run: cargo audit --ignore RUSTSEC-2025-0119 # known: number_prefix via indicatif
run: cargo audit --ignore RUSTSEC-2025-0119
# ── Licenses, banned crates, supply chain policy ────────────────────────
- name: cargo deny — licenses and banned crates
uses: EmbarkStudios/cargo-deny-action@v2
uses: EmbarkStudios/cargo-deny-action@82eb9f621fbc699dd0918f3ea06864c14cc84246 # v2
with:
command: check
arguments: --all-features
@@ -70,17 +72,17 @@ jobs:
# ── CVE scan of source tree and Cargo.lock ──────────────────────────────
- name: Trivy — filesystem scan (Cargo.lock + source)
if: always()
uses: aquasecurity/trivy-action@0.35.0
uses: aquasecurity/trivy-action@57a97c7e7821a5776cebc9bb87c984fa69cba8f1 # 0.35.0
with:
scan-type: fs
scan-ref: .
format: sarif
output: trivy-fs.sarif
severity: HIGH,CRITICAL
exit-code: 1 # block pipeline on HIGH/CRITICAL vulnerabilities
exit-code: 1
- name: Upload Trivy fs results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v4
uses: github/codeql-action/upload-sarif@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
if: always()
with:
sarif_file: trivy-fs.sarif
@@ -92,18 +94,17 @@ jobs:
needs: test
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Cache cargo
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@42dc69e1aa15d09112580998cf2ef0119e2e91ae # v2
- name: Build NORA
run: cargo build --release --package nora-registry
# -- Start NORA --
- name: Start NORA
run: |
NORA_STORAGE_PATH=/tmp/nora-data ./target/release/nora &
@@ -112,7 +113,6 @@ jobs:
done
curl -sf http://localhost:4000/health | jq .
# -- Docker push/pull --
- name: Configure Docker for insecure registry
run: |
echo '{"insecure-registries": ["localhost:4000"]}' | sudo tee /etc/docker/daemon.json
@@ -133,38 +133,35 @@ jobs:
curl -sf http://localhost:4000/v2/_catalog | jq .
curl -sf http://localhost:4000/v2/test/alpine/tags/list | jq .
# -- npm (read-only proxy, no publish support yet) --
- name: npm — verify registry endpoint
run: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4000/npm/lodash)
echo "npm endpoint returned: $STATUS"
[ "$STATUS" != "000" ] && echo "npm endpoint OK" || (echo "npm endpoint unreachable" && exit 1)
# -- Maven deploy/download --
- name: Maven — deploy and download artifact
run: |
echo "test-artifact-content-$(date +%s)" > /tmp/test-artifact.jar
CHECKSUM=$(sha256sum /tmp/test-artifact.jar | cut -d' ' -f1)
curl -sf -X PUT --data-binary @/tmp/test-artifact.jar http://localhost:4000/maven2/com/example/test-lib/1.0.0/test-lib-1.0.0.jar
curl -sf -o /tmp/downloaded.jar http://localhost:4000/maven2/com/example/test-lib/1.0.0/test-lib-1.0.0.jar
curl -sf -X PUT --data-binary @/tmp/test-artifact.jar \
http://localhost:4000/maven2/com/example/test-lib/1.0.0/test-lib-1.0.0.jar
curl -sf -o /tmp/downloaded.jar \
http://localhost:4000/maven2/com/example/test-lib/1.0.0/test-lib-1.0.0.jar
DOWNLOAD_CHECKSUM=$(sha256sum /tmp/downloaded.jar | cut -d' ' -f1)
[ "$CHECKSUM" = "$DOWNLOAD_CHECKSUM" ] && echo "Maven deploy/download OK" || (echo "Checksum mismatch!" && exit 1)
# -- PyPI (read-only proxy, no upload support yet) --
- name: PyPI — verify simple index
run: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4000/simple/)
echo "PyPI simple index returned: $STATUS"
[ "$STATUS" = "200" ] && echo "PyPI endpoint OK" || (echo "Expected 200, got $STATUS" && exit 1)
# -- Cargo (read-only proxy, no publish support yet) --
- name: Cargo — verify registry API responds
run: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4000/cargo/api/v1/crates/serde)
echo "Cargo API returned: $STATUS"
[ "$STATUS" != "000" ] && echo "Cargo endpoint OK" || (echo "Cargo endpoint unreachable" && exit 1)
# -- API checks --
- name: API — health, ready, metrics
run: |
curl -sf http://localhost:4000/health | jq .status


@@ -4,6 +4,8 @@ on:
push:
tags: ['v*']
permissions: read-all
env:
REGISTRY: ghcr.io
NORA: localhost:5000
@@ -18,7 +20,7 @@ jobs:
packages: write
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Rust
run: |
@@ -32,19 +34,19 @@ jobs:
cp target/x86_64-unknown-linux-musl/release/nora ./nora
- name: Upload binary artifact
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: nora-binary-${{ github.run_id }}
path: ./nora
retention-days: 1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v4
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4
with:
driver-opts: network=host
- name: Log in to GitHub Container Registry
uses: docker/login-action@v4
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -53,7 +55,7 @@ jobs:
# ── Alpine ───────────────────────────────────────────────────────────────
- name: Extract metadata (alpine)
id: meta-alpine
uses: docker/metadata-action@v6
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6
with:
images: |
${{ env.NORA }}/${{ env.IMAGE_NAME }}
@@ -64,7 +66,7 @@ jobs:
type=raw,value=latest
- name: Build and push (alpine)
uses: docker/build-push-action@v7
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
with:
context: .
file: Dockerfile
@@ -78,7 +80,7 @@ jobs:
# ── RED OS ───────────────────────────────────────────────────────────────
- name: Extract metadata (redos)
id: meta-redos
uses: docker/metadata-action@v6
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6
with:
images: |
${{ env.NORA }}/${{ env.IMAGE_NAME }}
@@ -90,7 +92,7 @@ jobs:
type=raw,value=redos
- name: Build and push (redos)
uses: docker/build-push-action@v7
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
with:
context: .
file: Dockerfile.redos
@@ -104,7 +106,7 @@ jobs:
# ── Astra Linux SE ───────────────────────────────────────────────────────
- name: Extract metadata (astra)
id: meta-astra
uses: docker/metadata-action@v6
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6
with:
images: |
${{ env.NORA }}/${{ env.IMAGE_NAME }}
@@ -116,7 +118,7 @@ jobs:
type=raw,value=astra
- name: Build and push (astra)
uses: docker/build-push-action@v7
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7
with:
context: .
file: Dockerfile.astra
@@ -165,7 +167,7 @@ jobs:
run: echo "tag=${GITHUB_REF_NAME#v}" >> $GITHUB_OUTPUT
- name: Trivy — image scan (${{ matrix.name }})
uses: aquasecurity/trivy-action@0.35.0
uses: aquasecurity/trivy-action@57a97c7e7821a5776cebc9bb87c984fa69cba8f1 # 0.35.0
with:
scan-type: image
image-ref: ${{ env.NORA }}/${{ env.IMAGE_NAME }}:${{ steps.ver.outputs.tag }}${{ matrix.suffix }}
@@ -175,7 +177,7 @@ jobs:
exit-code: 1
- name: Upload Trivy image results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v4
uses: github/codeql-action/upload-sarif@a60c4df7a135c7317c1e9ddf9b5a9b07a910dda9 # v4
if: always()
with:
sarif_file: trivy-image-${{ matrix.name }}.sarif
@@ -190,14 +192,14 @@ jobs:
packages: read
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set version tag (strip leading v)
id: ver
run: echo "tag=${GITHUB_REF_NAME#v}" >> $GITHUB_OUTPUT
- name: Download binary artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: nora-binary-${{ github.run_id }}
path: ./artifacts
@@ -211,21 +213,21 @@ jobs:
cat nora-linux-amd64.sha256
- name: Generate SBOM (SPDX)
uses: anchore/sbom-action@v0
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0
with:
image: ${{ env.NORA }}/${{ env.IMAGE_NAME }}:${{ steps.ver.outputs.tag }}
format: spdx-json
output-file: nora-${{ github.ref_name }}.sbom.spdx.json
- name: Generate SBOM (CycloneDX)
uses: anchore/sbom-action@v0
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0
with:
image: ${{ env.NORA }}/${{ env.IMAGE_NAME }}:${{ steps.ver.outputs.tag }}
format: cyclonedx-json
output-file: nora-${{ github.ref_name }}.sbom.cdx.json
- name: Create Release
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
generate_release_notes: true
files: |

.github/workflows/scorecard.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: OpenSSF Scorecard
on:
push:
branches: [main]
schedule:
- cron: '0 6 * * 1' # every Monday at 06:00 UTC
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
security-events: write
id-token: write
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
persist-credentials: false
- name: Run OpenSSF Scorecard
uses: ossf/scorecard-action@v2.4.3
with:
results_file: results.sarif
results_format: sarif
publish_results: true
- name: Upload Scorecard results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v4
with:
sarif_file: results.sarif
category: scorecard


@@ -1,5 +1,33 @@
# Changelog
## [0.2.31] - 2026-03-16
### Added / Добавлено
- **npm URL rewriting**: Tarball URLs in proxied metadata now rewritten to point to NORA (previously tarballs bypassed NORA and downloaded directly from npmjs.org)
- **npm scoped packages**: Full support for `@scope/package` in proxy handler and repository index
- **npm publish**: `PUT /npm/{package}` accepts standard npm publish payload with base64-encoded tarballs
- **npm metadata TTL**: Configurable cache TTL (`NORA_NPM_METADATA_TTL`, default 300s) with stale-while-revalidate fallback
- **Immutable cache**: SHA256 integrity verification on cached npm tarballs — detects tampering on cache hit
- **npm URL rewriting**: Tarball URL в проксированных метаданных теперь переписываются на NORA (ранее тарболы шли напрямую из npmjs.org)
- **npm scoped packages**: Полная поддержка `@scope/package` в прокси-хендлере и индексе репозитория
- **npm publish**: `PUT /npm/{package}` принимает стандартный npm publish payload с base64-тарболами
- **npm metadata TTL**: Настраиваемый TTL кеша (`NORA_NPM_METADATA_TTL`, default 300s) с stale-while-revalidate
- **Immutable cache**: SHA256 проверка целостности npm-тарболов — обнаружение подмены при отдаче из кеша
### Security / Безопасность
- **Path traversal protection**: Attachment filename validation in npm publish (rejects `../`, `/`, `\`)
- **Package name mismatch**: npm publish rejects payloads where URL path doesn't match `name` field (anti-spoofing)
- **Version immutability**: npm publish returns 409 Conflict on duplicate version
- **Защита от path traversal**: Валидация имён файлов в npm publish (отклоняет `../`, `/`, `\`)
- **Проверка имени пакета**: npm publish отклоняет payload если имя в URL не совпадает с полем `name` (anti-spoofing)
- **Иммутабельность версий**: npm publish возвращает 409 Conflict при попытке перезаписать версию
### Fixed / Исправлено
- **npm proxy_auth**: `proxy_auth` field was configured but not wired into `fetch_from_proxy` — now sends Basic Auth header to upstream
- **npm proxy_auth**: Поле `proxy_auth` было в конфиге, но не передавалось в `fetch_from_proxy` — теперь отправляет Basic Auth в upstream
All notable changes to NORA will be documented in this file.
---

Cargo.lock (generated, 80 lines changed)

@@ -34,9 +34,9 @@ dependencies = [
[[package]]
name = "anstream"
version = "0.6.21"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
checksum = "824a212faf96e9acacdbd09febd34438f8f711fb84e09a8916013cd7815ca28d"
dependencies = [
"anstyle",
"anstyle-parse",
@@ -55,9 +55,9 @@ checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
[[package]]
name = "anstyle-parse"
version = "0.2.7"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
checksum = "52ce7f38b242319f7cabaa6813055467063ecdc9d355bbb4ce0c68908cd8130e"
dependencies = [
"utf8parse",
]
@@ -251,6 +251,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "47b26a0954ae34af09b50f0de26458fa95369a0d478d8236d3f93082b219bd29"
dependencies = [
"find-msvc-tools",
"jobserver",
"libc",
"shlex",
]
@@ -292,9 +294,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.60"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2797f34da339ce31042b27d23607e051786132987f595b02ba4f6a6dffb7030a"
checksum = "b193af5b67834b676abd72466a96c1024e6a6ad978a1f484bd90b85c94041351"
dependencies = [
"clap_builder",
"clap_derive",
@@ -302,9 +304,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.60"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24a241312cea5059b13574bb9b3861cabf758b879c15190b37b6d6fd63ab6876"
checksum = "714a53001bf66416adb0e2ef5ac857140e7dc3a0c48fb28b2f10762fc4b5069f"
dependencies = [
"anstream",
"anstyle",
@@ -314,9 +316,9 @@ dependencies = [
[[package]]
name = "clap_derive"
version = "4.5.55"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a92793da1a46a5f2a02a6f4c46c6496b28c43638adea8306fcb0caa1634f24e5"
checksum = "1110bd8a634a1ab8cb04345d8d878267d57c3cf1b38d91b71af6686408bbca6a"
dependencies = [
"heck",
"proc-macro2",
@@ -473,7 +475,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]
@@ -1103,6 +1105,16 @@ version = "1.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
[[package]]
name = "jobserver"
version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9afb3de4395d6b3e67a780b6de64b51c978ecf11cb9a462c66be7d4ca9039d33"
dependencies = [
"getrandom 0.3.4",
"libc",
]
[[package]]
name = "js-sys"
version = "0.3.85"
@@ -1131,6 +1143,16 @@ version = "0.2.182"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6800badb6cb2082ffd7b6a67e6125bb39f18782f793520caee8cb8846be06112"
[[package]]
name = "libfuzzer-sys"
version = "0.4.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f12a681b7dd8ce12bff52488013ba614b869148d54dd79836ab85aafdd53f08d"
dependencies = [
"arbitrary",
"cc",
]
[[package]]
name = "libredox"
version = "0.1.12"
@@ -1247,7 +1269,7 @@ checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
[[package]]
name = "nora-cli"
version = "0.2.30"
version = "0.2.31"
dependencies = [
"clap",
"flate2",
@@ -1259,9 +1281,17 @@ dependencies = [
"tokio",
]
[[package]]
name = "nora-fuzz"
version = "0.0.0"
dependencies = [
"libfuzzer-sys",
"nora-registry",
]
[[package]]
name = "nora-registry"
version = "0.2.30"
version = "0.2.31"
dependencies = [
"async-trait",
"axum",
@@ -1299,7 +1329,7 @@ dependencies = [
[[package]]
name = "nora-storage"
version = "0.2.30"
version = "0.2.31"
dependencies = [
"axum",
"base64",
@@ -1577,9 +1607,9 @@ dependencies = [
[[package]]
name = "quote"
version = "1.0.44"
version = "1.0.45"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21b2ebcf727b7760c461f091f9f0f539b77b8e87f2fd88131e7f1b433b3cece4"
checksum = "41f2619966050689382d2b44f664f4bc593e129785a36d6ee376ddf37259b924"
dependencies = [
"proc-macro2",
]
@@ -1779,7 +1809,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]
@@ -2018,9 +2048,9 @@ checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
[[package]]
name = "syn"
version = "2.0.114"
version = "2.0.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4d107df263a3013ef9b1879b0df87d706ff80f65a86ea879bd9c31f9b307c2a"
checksum = "e665b8803e7b1d2a727f4023456bbbbe74da67099c585258af0ad9c5013b9b99"
dependencies = [
"proc-macro2",
"quote",
@@ -2060,15 +2090,15 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.26.0"
version = "3.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "82a72c767771b47409d2345987fda8628641887d5466101319899796367354a0"
checksum = "32497e9a4c7b38532efcdebeef879707aa9f794296a4f0244f6f69e9bc8574bd"
dependencies = [
"fastrand",
"getrandom 0.4.1",
"once_cell",
"rustix",
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]
@@ -2397,9 +2427,9 @@ dependencies = [
[[package]]
name = "tracing-subscriber"
version = "0.3.22"
version = "0.3.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e"
checksum = "cb7f578e5945fb242538965c2d0b04418d38ec25c79d160cd279bf0731c8d319"
dependencies = [
"matchers",
"nu-ansi-term",
@@ -2741,7 +2771,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.60.2",
"windows-sys 0.61.2",
]
[[package]]


@@ -4,10 +4,11 @@ members = [
"nora-registry",
"nora-storage",
"nora-cli",
"fuzz",
]
[workspace.package]
version = "0.2.30"
version = "0.2.32"
edition = "2021"
license = "MIT"
authors = ["DevITWay <devitway@gmail.com>"]


@@ -6,6 +6,7 @@
[![Rust](https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white)](https://www.rust-lang.org/)
[![Docs](https://img.shields.io/badge/docs-getnora.dev-green?logo=gitbook)](https://getnora.dev)
[![Telegram](https://img.shields.io/badge/Telegram-Community-blue?logo=telegram)](https://t.me/getnora)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/getnora-io/nora/badge)](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora)
> **Multi-protocol artifact registry that doesn't suck.**
>

dist/install.sh (new executable file, 111 lines)

@@ -0,0 +1,111 @@
#!/usr/bin/env bash
set -euo pipefail
# NORA Artifact Registry — install script
# Usage: curl -fsSL https://getnora.io/install.sh | bash
VERSION="${NORA_VERSION:-latest}"
ARCH=$(uname -m)
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
INSTALL_DIR="/usr/local/bin"
CONFIG_DIR="/etc/nora"
DATA_DIR="/var/lib/nora"
LOG_DIR="/var/log/nora"
case "$ARCH" in
x86_64|amd64) ARCH="x86_64" ;;
aarch64|arm64) ARCH="aarch64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
echo "Installing NORA ($OS/$ARCH)..."
# Download binary
if [ "$VERSION" = "latest" ]; then
DOWNLOAD_URL="https://github.com/getnora-io/nora/releases/latest/download/nora-${OS}-${ARCH}"
else
DOWNLOAD_URL="https://github.com/getnora-io/nora/releases/download/${VERSION}/nora-${OS}-${ARCH}"
fi
echo "Downloading from $DOWNLOAD_URL..."
if command -v curl &>/dev/null; then
curl -fsSL -o /tmp/nora "$DOWNLOAD_URL"
elif command -v wget &>/dev/null; then
wget -qO /tmp/nora "$DOWNLOAD_URL"
else
echo "Error: curl or wget required"; exit 1
fi
chmod +x /tmp/nora
sudo mv /tmp/nora "$INSTALL_DIR/nora"
# Create system user
if ! id nora &>/dev/null; then
sudo useradd --system --shell /usr/sbin/nologin --home-dir "$DATA_DIR" --create-home nora
echo "Created system user: nora"
fi
# Create directories
sudo mkdir -p "$CONFIG_DIR" "$DATA_DIR" "$LOG_DIR"
sudo chown nora:nora "$DATA_DIR" "$LOG_DIR"
# Install default config if not exists
if [ ! -f "$CONFIG_DIR/nora.env" ]; then
cat > /tmp/nora.env << 'ENVEOF'
NORA_HOST=0.0.0.0
NORA_PORT=4000
NORA_STORAGE_PATH=/var/lib/nora
ENVEOF
sudo mv /tmp/nora.env "$CONFIG_DIR/nora.env"
sudo chmod 600 "$CONFIG_DIR/nora.env"
sudo chown nora:nora "$CONFIG_DIR/nora.env"
echo "Created default config: $CONFIG_DIR/nora.env"
fi
# Install systemd service
if [ -d /etc/systemd/system ]; then
cat > /tmp/nora.service << 'SVCEOF'
[Unit]
Description=NORA Artifact Registry
Documentation=https://getnora.dev
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=nora
Group=nora
ExecStart=/usr/local/bin/nora serve
WorkingDirectory=/etc/nora
Restart=on-failure
RestartSec=5
LimitNOFILE=65535
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/nora /var/log/nora
PrivateTmp=true
EnvironmentFile=-/etc/nora/nora.env
[Install]
WantedBy=multi-user.target
SVCEOF
sudo mv /tmp/nora.service /etc/systemd/system/nora.service
sudo systemctl daemon-reload
sudo systemctl enable nora
echo "Installed systemd service: nora"
fi
echo ""
echo "NORA installed successfully!"
echo ""
echo " Binary: $INSTALL_DIR/nora"
echo " Config: $CONFIG_DIR/nora.env"
echo " Data: $DATA_DIR"
echo " Version: $(nora --version 2>/dev/null || echo 'unknown')"
echo ""
echo "Next steps:"
echo " 1. Edit $CONFIG_DIR/nora.env"
echo " 2. sudo systemctl start nora"
echo " 3. curl http://localhost:4000/health"
echo ""

dist/nora.cdx.json (new file, 9031 lines; diff suppressed because it is too large)

dist/nora.env.example (new file, 31 lines)

@@ -0,0 +1,31 @@
# NORA configuration — environment variables
# Copy to /etc/nora/nora.env and adjust
# Server
NORA_HOST=0.0.0.0
NORA_PORT=4000
# NORA_PUBLIC_URL=https://registry.example.com
# Storage
NORA_STORAGE_PATH=/var/lib/nora
# NORA_STORAGE_MODE=s3
# NORA_STORAGE_S3_URL=http://minio:9000
# NORA_STORAGE_BUCKET=registry
# Auth (optional)
# NORA_AUTH_ENABLED=true
# NORA_AUTH_HTPASSWD_FILE=/etc/nora/users.htpasswd
# Rate limiting
# NORA_RATE_LIMIT_ENABLED=true
# npm proxy
# NORA_NPM_PROXY=https://registry.npmjs.org
# NORA_NPM_METADATA_TTL=300
# NORA_NPM_PROXY_AUTH=user:pass
# PyPI proxy
# NORA_PYPI_PROXY=https://pypi.org/simple/
# Docker upstreams
# NORA_DOCKER_UPSTREAMS=https://registry-1.docker.io

dist/nora.service (new file, 28 lines)

@@ -0,0 +1,28 @@
[Unit]
Description=NORA Artifact Registry
Documentation=https://getnora.dev
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=nora
Group=nora
ExecStart=/usr/local/bin/nora serve
WorkingDirectory=/etc/nora
Restart=on-failure
RestartSec=5
LimitNOFILE=65535
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/nora /var/log/nora
PrivateTmp=true
# Environment
EnvironmentFile=-/etc/nora/nora.env
[Install]
WantedBy=multi-user.target

docs-ru/README.md (new file, 13 lines)

@@ -0,0 +1,13 @@
# Документация NORA для Росреестра
## Структура
- `ТУ.md` — Технические условия
- `Руководство.md` — Руководство пользователя
- `Руководство_администратора.md` — Руководство администратора
- `SBOM.md` — Перечень компонентов (Software Bill of Materials)
## Статус
Подготовка документации для включения в Единый реестр российских программ
для электронных вычислительных машин и баз данных (Минцифры РФ).

docs-ru/admin-guide.md (new file, 301 lines)

@@ -0,0 +1,301 @@
# NORA Administrator Guide
**Version:** 1.0
**Date:** 2026-03-16
**Rights holder:** ООО «ТАИАРС» (trademark АРТАИС)
---
## 1. Overview
NORA is a multi-protocol artifact registry for storing, caching, and distributing software components. It provides centralized dependency management for software development and build pipelines.
### 1.1. Purpose
- Storing and serving artifacts over the Docker (OCI), npm, Maven, PyPI, Cargo, Helm OCI, and Raw protocols.
- Proxying and caching external repositories to speed up builds and keep dependencies available without Internet access.
- Artifact integrity control via SHA-256 verification.
- Auditing and logging of all operations.
### 1.2. System requirements
| Parameter | Minimum | Recommended |
|-----------|---------|-------------|
| OS | Linux (amd64, arm64) | Ubuntu 22.04+, RHEL 8+ |
| CPU | 1 core | 2+ cores |
| RAM | 256 MB | 1+ GB |
| Disk | 1 GB | 50+ GB (depends on stored artifact volume) |
| Network | One TCP port (4000 by default) | — |
### 1.3. Dependencies
The program ships as a single statically linked executable with no external dependencies. The libraries bundled into the binary are listed in `nora.cdx.json` (CycloneDX format).
---
## 2. Installation
### 2.1. Automated installation
```bash
curl -fsSL https://getnora.io/install.sh | bash
```
The script performs the following steps:
1. Detects the CPU architecture (amd64 or arm64).
2. Downloads the executable from GitHub Releases.
3. Creates the `nora` system user.
4. Creates the directories `/etc/nora/`, `/var/lib/nora/`, and `/var/log/nora/`.
5. Installs the configuration file `/etc/nora/nora.env`.
6. Installs and enables the systemd service.
### 2.2. Manual installation
```bash
# Download
wget https://github.com/getnora-io/nora/releases/download/v1.0.0/nora-linux-x86_64
chmod +x nora-linux-x86_64
mv nora-linux-x86_64 /usr/local/bin/nora
# Create the user
useradd --system --shell /usr/sbin/nologin --home-dir /var/lib/nora --create-home nora
# Create the directories
mkdir -p /etc/nora /var/lib/nora /var/log/nora
chown nora:nora /var/lib/nora /var/log/nora
# Install the systemd service
cp dist/nora.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable nora
```
### 2.3. Running from the Docker image
```bash
docker run -d \
  --name nora \
  -p 4000:4000 \
  -v nora-data:/data \
  ghcr.io/getnora-io/nora:latest
```
---
## 3. Configuration
Configuration is supplied via environment variables, a `config.toml` file, or a combination of the two. Precedence: environment variables > `config.toml` > built-in defaults.
### 3.1. Core parameters
| Variable | Description | Default |
|----------|-------------|---------|
| `NORA_HOST` | Bind address | `127.0.0.1` |
| `NORA_PORT` | Port | `4000` |
| `NORA_PUBLIC_URL` | External URL (used when generating links) | — |
| `NORA_STORAGE_PATH` | Storage directory path | `data/storage` |
| `NORA_STORAGE_MODE` | Storage backend: `local` or `s3` | `local` |
| `NORA_BODY_LIMIT_MB` | Maximum request body size (MB) | `2048` |
### 3.2. Authentication
| Variable | Description | Default |
|----------|-------------|---------|
| `NORA_AUTH_ENABLED` | Enable authentication | `false` |
| `NORA_AUTH_HTPASSWD_FILE` | Path to the htpasswd file | `users.htpasswd` |
Creating a user:
```bash
htpasswd -Bc /etc/nora/users.htpasswd admin
```
Roles: `admin` (full access), `write` (read and write), `read` (read-only, the default).
### 3.3. Proxying external repositories
| Variable | Description | Default |
|----------|-------------|---------|
| `NORA_NPM_PROXY` | npm registry URL | `https://registry.npmjs.org` |
| `NORA_NPM_PROXY_AUTH` | Credentials (`user:pass`) | — |
| `NORA_NPM_METADATA_TTL` | Metadata cache TTL (seconds) | `300` |
| `NORA_PYPI_PROXY` | PyPI registry URL | `https://pypi.org/simple/` |
| `NORA_MAVEN_PROXIES` | Comma-separated list of Maven repositories | `https://repo1.maven.org/maven2` |
| `NORA_DOCKER_UPSTREAMS` | Docker registries, format: `url\|auth,url2` | `https://registry-1.docker.io` |
### 3.4. Rate limiting
| Variable | Description | Default |
|----------|-------------|---------|
| `NORA_RATE_LIMIT_ENABLED` | Enable rate limiting | `true` |
| `NORA_RATE_LIMIT_GENERAL_RPS` | Requests per second (general) | `100` |
| `NORA_RATE_LIMIT_AUTH_RPS` | Requests per second (authentication) | `1` |
| `NORA_RATE_LIMIT_UPLOAD_RPS` | Requests per second (uploads) | `200` |
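Putting the tables above together, a minimal `/etc/nora/nora.env` might look like this. The values are illustrative, not defaults; only variables listed in the tables are used:

```ini
# /etc/nora/nora.env — illustrative example combining the settings above
NORA_HOST=0.0.0.0
NORA_PORT=4000
NORA_PUBLIC_URL=https://nora.example.com
NORA_STORAGE_PATH=/var/lib/nora/storage
NORA_STORAGE_MODE=local
NORA_AUTH_ENABLED=true
NORA_AUTH_HTPASSWD_FILE=/etc/nora/users.htpasswd
NORA_NPM_PROXY=https://registry.npmjs.org
NORA_NPM_METADATA_TTL=600
NORA_RATE_LIMIT_GENERAL_RPS=200
```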
---
## 4. Service management
### 4.1. Start and stop
```bash
systemctl start nora      # Start
systemctl stop nora       # Stop
systemctl restart nora    # Restart
systemctl status nora     # Status
journalctl -u nora -f     # Follow the log
```
### 4.2. Health check
```bash
curl http://localhost:4000/health
```
Response under normal operation:
```json
{
  "status": "healthy",
  "version": "1.0.0",
  "storage": { "backend": "local", "reachable": true },
  "registries": { "docker": "ok", "npm": "ok", "maven": "ok", "cargo": "ok", "pypi": "ok" }
}
```
### 4.3. Metrics (Prometheus)
```
GET /metrics
```
Exported: request counts, latency, and uploads/downloads per protocol.
---
## 5. Backup and restore
### 5.1. Creating a backup
```bash
nora backup --output /backup/nora-$(date +%Y%m%d).tar.gz
```
### 5.2. Restoring
```bash
nora restore --input /backup/nora-20260316.tar.gz
```
### 5.3. Garbage collection
```bash
nora gc --dry-run   # Preview (no deletion)
nora gc             # Delete orphaned blobs
```
---
## 6. Pre-caching (nora mirror)
The `nora mirror` command pre-fetches dependencies through the NORA proxy cache. This keeps artifacts available in isolated environments with no Internet access.
### 6.1. Caching from a lockfile
```bash
nora mirror npm --lockfile package-lock.json --registry http://localhost:4000
nora mirror pip --lockfile requirements.txt --registry http://localhost:4000
nora mirror cargo --lockfile Cargo.lock --registry http://localhost:4000
```
### 6.2. Caching from a package list
```bash
nora mirror npm --packages lodash,express --registry http://localhost:4000
nora mirror npm --packages lodash --all-versions --registry http://localhost:4000
```
### 6.3. Options
| Flag | Description | Default |
|------|-------------|---------|
| `--registry` | NORA instance URL | `http://localhost:4000` |
| `--concurrency` | Number of parallel downloads | `8` |
| `--all-versions` | Fetch all versions (only with `--packages`) | — |
---
## 7. Storage migration
Moving artifacts between local storage and S3:
```bash
nora migrate --from local --to s3 --dry-run   # Preview
nora migrate --from local --to s3             # Execute
```
---
## 8. Security
### 8.1. Integrity control
When proxying npm packages, NORA computes and stores a SHA-256 checksum for every tarball. On each cache hit the checksum is re-verified; on mismatch the request is rejected and a SECURITY-level warning is written to the log.
### 8.2. Protection against package tampering
- File-name validation on publish (protection against path traversal).
- Verification that the package name in the URL matches the request body.
- Version immutability: re-publishing an existing version is rejected.
### 8.3. Auditing
All operations (uploads, downloads, cache accesses, errors) are recorded in the `audit.jsonl` file in the storage directory. The format is JSON Lines: one record per line.
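Because the audit log is JSON Lines, it is easy to post-process with a short script. A minimal sketch in Python; the `op` field name is an assumption for illustration, not a documented part of the audit schema:

```python
import json

def summarize_audit(lines):
    """Count audit records per operation type.

    NOTE: the `op` field name is assumed purely for illustration;
    consult the actual audit.jsonl for the real schema.
    """
    counts = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        record = json.loads(line)
        op = record.get("op", "unknown")
        counts[op] = counts.get(op, 0) + 1
    return counts

sample = [
    '{"op": "download", "path": "npm/lodash"}',
    '{"op": "download", "path": "npm/express"}',
    '{"op": "upload", "path": "raw/build.tar.gz"}',
]
print(summarize_audit(sample))
```

The same one-record-per-line property makes the file safe to rotate or `tail -f` without any parsing state.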
### 8.4. systemd hardening
The service file includes the following security settings:
- `NoNewPrivileges=true` — prevents privilege escalation.
- `ProtectSystem=strict` — read-only filesystem except the listed paths.
- `ProtectHome=true` — denies access to home directories.
- `PrivateTmp=true` — isolated temporary-file directory.
---
## 9. Endpoints
| Protocol | Endpoint | Description |
|----------|----------|-------------|
| Docker / OCI | `/v2/` | Docker Registry V2 API |
| npm | `/npm/` | npm registry (proxy + publishing) |
| Maven | `/maven2/` | Maven repository |
| PyPI | `/simple/` | Python Simple API (PEP 503) |
| Cargo | `/cargo/` | Cargo registry |
| Helm | `/v2/` (OCI) | Helm charts over the OCI protocol |
| Raw | `/raw/` | Arbitrary files |
| Monitoring | `/health`, `/ready`, `/metrics` | Health checks and metrics |
| UI | `/ui/` | Web management interface |
| API docs | `/api-docs` | OpenAPI (Swagger UI) |
---
## 10. Troubleshooting
### The service does not start
```bash
journalctl -u nora --no-pager -n 50
```
Common causes: the port is already in use, the storage directory is inaccessible, or the configuration is invalid.
### The proxy cache does not work
1. Check that the upstream registry is reachable: `curl https://registry.npmjs.org/lodash`.
2. Make sure `NORA_NPM_PROXY` is set correctly.
3. For a private upstream registry, set `NORA_NPM_PROXY_AUTH`.
### Integrity check failed
The checksum of a cached tarball no longer matches the stored one. Possible causes: filesystem corruption or unauthorized modification of the file. Delete the corrupted file from the storage directory; NORA will re-fetch it from the upstream registry.
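The check behind this error is plain content addressing: a cached tarball is valid only if its SHA-256 digest equals the stored value. A minimal sketch of such a check in Python (file paths and where the expected digest comes from are up to the operator):

```python
import hashlib

def sha256_of(path):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_intact(path, expected_hex):
    """True if the file's digest matches the stored checksum."""
    return sha256_of(path) == expected_hex
```

Streaming in 64 KiB chunks keeps memory flat even for multi-gigabyte artifacts.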

docs-ru/technical-spec.md Normal file

@@ -0,0 +1,165 @@
# Technical Specification
## “NORA — Artifact Registry”
**Document version:** 1.0
**Date:** 2026-03-16
**Rights holder:** ООО «ТАИАРС» (trademark АРТАИС)
---
## 1. Name and designation
**Full name:** NORA, a multi-protocol artifact registry.
**Short name:** NORA.
**Designation:** nora-registry.
---
## 2. Purpose
The program stores, caches, and distributes software components (artifacts) used when developing, building, and deploying software.
### 2.1. Scope of application
- Hosting internal repositories of software components.
- Proxying and caching public repositories (npmjs.org, PyPI, Maven Central, Docker Hub, crates.io).
- Keeping dependencies available in isolated (air-gapped) environments with no Internet access.
- Integrity control and software supply-chain security.
### 2.2. Software class
Tooling software for development and DevOps.
OKPD2 code: 62.01 — Computer software development.
---
## 3. Functional characteristics
### 3.1. Supported protocols
| Protocol | Standard | Purpose |
|----------|----------|---------|
| Docker / OCI | OCI Distribution Spec v1.0 | Container images, Helm charts |
| npm | npm Registry API | JavaScript / TypeScript libraries |
| Maven | Maven Repository Layout | Java / Kotlin libraries |
| PyPI | PEP 503 (Simple API) | Python libraries |
| Cargo | Cargo Registry Protocol | Rust libraries |
| Raw | HTTP PUT/GET | Arbitrary files |
### 3.2. Operating modes
1. **Hosted:** accepting and storing artifacts published by users.
2. **Proxy cache:** transparent proxying of requests to external repositories with local caching.
3. **Combined:** hosted and proxy modes at once (lookups hit local storage first, then the external repository).
### 3.3. Access control
- htpasswd-based authentication (bcrypt).
- Role model: `read` (read-only), `write` (read and write), `admin` (full access).
- Access tokens with a limited lifetime.
### 3.4. Security
- Integrity control of cached artifacts (SHA-256).
- Path-traversal protection on publish.
- Verification that the package name in the URL matches the request body.
- Immutability of published versions.
- Auditing of all operations in JSON Lines format.
- TLS support when deployed behind a reverse proxy.
### 3.5. Operations
- Pre-caching of dependencies (`nora mirror`) from lockfiles.
- Garbage collection (`nora gc`): removal of orphaned blobs.
- Backup and restore (`nora backup`, `nora restore`).
- Migration between local storage and S3-compatible object storage.
- Monitoring: `/health`, `/ready`, `/metrics` endpoints (Prometheus format).
- Web UI for browsing registry contents.
- API documentation in OpenAPI 3.0 format.
---
## 4. Technical characteristics
### 4.1. Runtime environment
| Parameter | Value |
|-----------|-------|
| Implementation language | Rust |
| Delivery format | Single executable (static linking) |
| Supported OS | Linux (kernel 4.15+) |
| Architectures | x86_64 (amd64), aarch64 (arm64) |
| Containerization | Docker image based on `scratch` |
| System integration | systemd (service file included) |
### 4.2. Data storage
| Parameter | Value |
|-----------|-------|
| Local storage | Filesystem (ext4, XFS, ZFS) |
| Object storage | S3-compatible API (MinIO, Yandex Object Storage, Selectel S3) |
| Layout | Hierarchical: `{protocol}/{package}/{artifact}` |
| Audit | Append-only JSONL file |
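The hierarchical layout can be sketched as a simple key builder. This only illustrates the `{protocol}/{package}/{artifact}` shape and the kind of traversal check the spec describes; the registry's exact on-disk naming and validation rules are internal:

```python
def storage_key(protocol, package, artifact):
    """Build a hierarchical storage key: {protocol}/{package}/{artifact}.

    Rejects empty, absolute, and path-traversal segments, mirroring
    the validation described in the spec (exact rules are internal).
    """
    parts = [protocol, package, artifact]
    for p in parts:
        if not p or p.startswith("/") or ".." in p.split("/"):
            raise ValueError(f"invalid segment: {p!r}")
    return "/".join(parts)

print(storage_key("npm", "lodash", "lodash-4.17.21.tgz"))
```

Scoped names like `@babel/core` pass through unchanged, since only whole `..` segments are rejected.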
### 4.3. Configuration
| Source | Precedence |
|--------|------------|
| Environment variables (`NORA_*`) | Highest |
| `config.toml` file | Medium |
| Built-in defaults | Lowest |
### 4.4. Performance
| Parameter | Value |
|-----------|-------|
| Startup time | < 100 ms |
| Cache serving | < 2 ms (metadata), < 10 ms (artifacts up to 1 MB) |
| Concurrency | Asynchronous I/O (tokio runtime) |
| Rate limiting | Configurable (100 requests/s by default) |
---
## 5. Licensing
| Component | License |
|-----------|---------|
| NORA (core) | MIT License |
| NORA Enterprise | Proprietary |
The full list of licenses for bundled libraries is provided in the SBOM file (CycloneDX format).
---
## 6. Delivery contents
| Component | Description |
|-----------|-------------|
| `nora` | Executable |
| `nora.service` | systemd service file |
| `nora.env.example` | Configuration template |
| `install.sh` | Installation script |
| `nora.cdx.json` | SBOM in CycloneDX format |
| Administrator guide | Part of this documentation set |
| User guide | Part of this documentation set |
| Technical specification | This document |
---
## 7. Contact information
**Rights holder:** ООО «ТАИАРС»
**Trademark:** АРТАИС
**Product site:** https://getnora.io
**Documentation:** https://getnora.dev
**Source code:** https://github.com/getnora-io/nora
**Support:** https://t.me/getnora

docs-ru/user-guide.md Normal file

@@ -0,0 +1,221 @@
# NORA User Guide
**Version:** 1.0
**Date:** 2026-03-16
**Rights holder:** ООО «ТАИАРС» (trademark АРТАИС)
---
## 1. Overview
NORA is an artifact registry for development teams. It stores and caches libraries, Docker images, and other software components used when building applications.
This guide is intended for developers who use NORA as a dependency source.
---
## 2. Setting up your environment
### 2.1. npm / Node.js
Point npm at NORA as its registry:
```bash
npm config set registry http://nora.example.com:4000/npm
```
Or create an `.npmrc` file in the project root:
```
registry=http://nora.example.com:4000/npm
```
From then on, every `npm install` fetches packages through NORA. On first access NORA downloads the package from the upstream registry (npmjs.org) and stores it in the cache; subsequent requests are served from the cache.
### 2.2. Docker
```bash
docker login nora.example.com:4000
docker pull nora.example.com:4000/library/nginx:latest
docker push nora.example.com:4000/myteam/myapp:1.0.0
```
### 2.3. Maven
Add the repository to `settings.xml`:
```xml
<mirrors>
  <mirror>
    <id>nora</id>
    <mirrorOf>central</mirrorOf>
    <url>http://nora.example.com:4000/maven2</url>
  </mirror>
</mirrors>
```
### 2.4. Python / pip
```bash
pip install --index-url http://nora.example.com:4000/simple flask
```
Or in `pip.conf`:
```ini
[global]
index-url = http://nora.example.com:4000/simple
```
### 2.5. Cargo / Rust
Configure `~/.cargo/config.toml`:
```toml
[registries.nora]
index = "sparse+http://nora.example.com:4000/cargo/"

[source.crates-io]
replace-with = "nora"
```
### 2.6. Helm
Helm uses the OCI protocol via the Docker Registry API:
```bash
helm push mychart-0.1.0.tgz oci://nora.example.com:4000/helm
helm pull oci://nora.example.com:4000/helm/mychart --version 0.1.0
```
---
---
## 3. Публикация пакетов
### 3.1. npm
```bash
npm publish --registry http://nora.example.com:4000/npm
```
Требования:
- Файл `package.json` с полями `name` и `version`.
- Каждая версия публикуется однократно. Повторная публикация той же версии запрещена.
### 3.2. Docker
```bash
docker tag myapp:latest nora.example.com:4000/myteam/myapp:1.0.0
docker push nora.example.com:4000/myteam/myapp:1.0.0
```
### 3.3. Maven
```bash
mvn deploy -DaltDeploymentRepository=nora::default::http://nora.example.com:4000/maven2
```
### 3.4. Raw (произвольные файлы)
```bash
# Загрузка
curl -X PUT --data-binary @release.tar.gz http://nora.example.com:4000/raw/builds/release-1.0.tar.gz
# Скачивание
curl -O http://nora.example.com:4000/raw/builds/release-1.0.tar.gz
```
---
## 4. Работа в изолированной среде
Если сборочный сервер не имеет доступа к сети Интернет, используйте предварительное кэширование.
### 4.1. Кэширование зависимостей проекта
На машине с доступом к Интернету и NORA выполните:
```bash
nora mirror npm --lockfile package-lock.json --registry http://nora.example.com:4000
```
После этого все зависимости из lockfile будут доступны через NORA, даже если связь с внешними реестрами отсутствует.
### 4.2. Кэширование всех версий пакета
```bash
nora mirror npm --packages lodash,express --all-versions --registry http://nora.example.com:4000
```
Эта команда загрузит все опубликованные версии указанных пакетов.
---
## 5. Web interface
NORA provides a web UI for browsing registry contents:
```
http://nora.example.com:4000/ui/
```
Available features:
- Browsing artifacts by protocol.
- Version counts and size of each package.
- A log of recent operations.
- Download metrics.
---
## 6. API documentation
Interactive API documentation is available at:
```
http://nora.example.com:4000/api-docs
```
Format: OpenAPI 3.0 (Swagger UI).
---
---
## 7. Аутентификация
Если администратор включил аутентификацию, для операций записи требуется токен.
### 7.1. Получение токена
```bash
curl -u admin:password http://nora.example.com:4000/auth/token
```
### 7.2. Использование токена
```bash
# npm
npm config set //nora.example.com:4000/npm/:_authToken TOKEN
# Docker
docker login nora.example.com:4000
# curl
curl -H "Authorization: Bearer TOKEN" http://nora.example.com:4000/npm/my-package
```
Операции чтения по умолчанию не требуют аутентификации (роль `read` назначается автоматически).
---
## 8. Frequently asked questions
**Q: What happens if the upstream registry (npmjs.org) becomes unavailable?**
A: NORA keeps serving requests from its cache. Packages that have never been requested remain unavailable until connectivity is restored. Use `nora mirror` to avoid this situation.
**Q: Can I publish private packages?**
A: Yes. Packages published via `npm publish` or `docker push` are stored in NORA's local storage and are available to all users of that instance.
**Q: How do I refresh the metadata cache?**
A: The npm metadata cache refreshes automatically when the TTL expires (5 minutes by default). To refresh immediately, delete the `metadata.json` file from the storage directory.
**Q: Are scoped npm packages (@scope/package) supported?**
A: Yes, fully. For example: `npm install @babel/core --registry http://nora.example.com:4000/npm`.
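One detail behind scoped-package support: in npm registry URLs, the slash inside `@scope/name` is percent-encoded as `%2F` (npm registry convention). A small sketch of building such a metadata URL; the helper name is ours, not part of NORA:

```python
from urllib.parse import quote

def npm_metadata_url(base, package):
    """Metadata URL for an npm package.

    The '/' inside a scoped name is percent-encoded as %2F, per npm
    registry convention; '@' is left as-is.
    """
    return f"{base.rstrip('/')}/{quote(package, safe='@')}"

# prints http://nora.example.com:4000/npm/@babel%2Fcore
print(npm_metadata_url("http://nora.example.com:4000/npm", "@babel/core"))
```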

fuzz/Cargo.toml Normal file

@@ -0,0 +1,22 @@
[package]
name = "nora-fuzz"
version = "0.0.0"
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
[dependencies]
libfuzzer-sys = "0.4"
nora-registry = { path = "../nora-registry" }
[[bin]]
name = "fuzz_validation"
path = "fuzz_targets/fuzz_validation.rs"
doc = false
[[bin]]
name = "fuzz_docker_manifest"
path = "fuzz_targets/fuzz_docker_manifest.rs"
doc = false


@@ -0,0 +1,8 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use nora_registry::docker_fuzz::detect_manifest_media_type;
fuzz_target!(|data: &[u8]| {
// Fuzz Docker manifest parser — must never panic on any input
let _ = detect_manifest_media_type(data);
});


@@ -0,0 +1,13 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use nora_registry::validation::{
validate_digest, validate_docker_name, validate_docker_reference, validate_storage_key,
};
fuzz_target!(|data: &str| {
// Fuzz all validators — they must never panic on any input
let _ = validate_storage_key(data);
let _ = validate_docker_name(data);
let _ = validate_digest(data);
let _ = validate_docker_reference(data);
});

nora-cli/nora-cli.cdx.json Normal file

File diff suppressed because it is too large


@@ -10,6 +10,10 @@ description = "Cloud-Native Artifact Registry - Fast, lightweight, multi-protoco
keywords = ["registry", "docker", "artifacts", "cloud-native", "devops"]
categories = ["command-line-utilities", "development-tools", "web-programming"]
[lib]
name = "nora_registry"
path = "src/lib.rs"
[[bin]]
name = "nora"
path = "src/main.rs"

File diff suppressed because it is too large


@@ -112,6 +112,9 @@ pub struct NpmConfig {
pub proxy_auth: Option<String>, // "user:pass" for basic auth
#[serde(default = "default_timeout")]
pub proxy_timeout: u64,
/// Metadata cache TTL in seconds (default: 300 = 5 min). Set to 0 to cache forever.
#[serde(default = "default_metadata_ttl")]
pub metadata_ttl: u64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -215,6 +218,10 @@ fn default_timeout() -> u64 {
30
}
fn default_metadata_ttl() -> u64 {
300 // 5 minutes
}
impl Default for MavenConfig {
fn default() -> Self {
Self {
@@ -232,6 +239,7 @@ impl Default for NpmConfig {
proxy: Some("https://registry.npmjs.org".to_string()),
proxy_auth: None,
proxy_timeout: 30,
metadata_ttl: 300,
}
}
}
@@ -486,6 +494,11 @@ impl Config {
self.npm.proxy_timeout = timeout;
}
}
if let Ok(val) = env::var("NORA_NPM_METADATA_TTL") {
if let Ok(ttl) = val.parse() {
self.npm.metadata_ttl = ttl;
}
}
// npm proxy auth
if let Ok(val) = env::var("NORA_NPM_PROXY_AUTH") {

nora-registry/src/lib.rs Normal file

@@ -0,0 +1,28 @@
//! NORA Registry — library interface for fuzzing and testing
pub mod validation;
/// Re-export Docker manifest parsing for fuzz targets
pub mod docker_fuzz {
pub fn detect_manifest_media_type(data: &[u8]) -> String {
let Ok(value) = serde_json::from_slice::<serde_json::Value>(data) else {
return "application/octet-stream".to_string();
};
if let Some(mt) = value.get("mediaType").and_then(|v| v.as_str()) {
return mt.to_string();
}
if value.get("manifests").is_some() {
return "application/vnd.oci.image.index.v1+json".to_string();
}
if value.get("schemaVersion").and_then(|v| v.as_i64()) == Some(2) {
if value.get("layers").is_some() {
return "application/vnd.oci.image.manifest.v1+json".to_string();
}
return "application/vnd.docker.distribution.manifest.v2+json".to_string();
}
if value.get("schemaVersion").and_then(|v| v.as_i64()) == Some(1) {
return "application/vnd.docker.distribution.manifest.v1+json".to_string();
}
"application/vnd.docker.distribution.manifest.v2+json".to_string()
}
}
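The detection order above (explicit `mediaType` wins, then `manifests` implies an OCI index, then `schemaVersion` 2 with `layers` implies an OCI manifest, plain `schemaVersion` 2 a Docker v2 manifest, `schemaVersion` 1 a v1 manifest) can be cross-checked from a script. This Python sketch re-implements that logic for illustration; it is not the shipped code:

```python
import json

def detect_manifest_media_type(data: bytes) -> str:
    """Re-implementation of the Rust detection order, for cross-checking."""
    try:
        value = json.loads(data)
    except (ValueError, UnicodeDecodeError):
        return "application/octet-stream"  # not JSON at all
    if not isinstance(value, dict):
        # non-object JSON falls through to the default, as in the Rust code
        return "application/vnd.docker.distribution.manifest.v2+json"
    mt = value.get("mediaType")
    if isinstance(mt, str):
        return mt  # explicit mediaType always wins
    if "manifests" in value:
        return "application/vnd.oci.image.index.v1+json"
    if value.get("schemaVersion") == 2:
        if "layers" in value:
            return "application/vnd.oci.image.manifest.v1+json"
        return "application/vnd.docker.distribution.manifest.v2+json"
    if value.get("schemaVersion") == 1:
        return "application/vnd.docker.distribution.manifest.v1+json"
    return "application/vnd.docker.distribution.manifest.v2+json"
```

Like the fuzz target requires of the Rust version, this never raises on arbitrary byte input.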


@@ -12,6 +12,7 @@ mod gc;
mod health;
mod metrics;
mod migrate;
mod mirror;
mod openapi;
mod rate_limit;
mod registry;
@@ -82,6 +83,17 @@ enum Commands {
#[arg(long, default_value = "false")]
dry_run: bool,
},
/// Pre-fetch dependencies through NORA proxy cache
Mirror {
#[command(subcommand)]
format: mirror::MirrorFormat,
/// NORA registry URL
#[arg(long, default_value = "http://localhost:4000", global = true)]
registry: String,
/// Max concurrent downloads
#[arg(long, default_value = "8", global = true)]
concurrency: usize,
},
}
pub struct AppState {
@@ -164,6 +176,16 @@ async fn main() {
println!("\nRun without --dry-run to delete orphaned blobs.");
}
}
Some(Commands::Mirror {
format,
registry,
concurrency,
}) => {
if let Err(e) = mirror::run_mirror(format, &registry, concurrency).await {
error!("Mirror failed: {}", e);
std::process::exit(1);
}
}
Some(Commands::Migrate { from, to, dry_run }) => {
let source = match from.as_str() {
"local" => Storage::new_local(&config.storage.path),


@@ -0,0 +1,325 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! `nora mirror` — pre-fetch dependencies through NORA proxy cache.
mod npm;
use clap::Subcommand;
use indicatif::{ProgressBar, ProgressStyle};
use std::path::PathBuf;
use std::time::Instant;
#[derive(Subcommand)]
pub enum MirrorFormat {
/// Mirror npm packages
Npm {
/// Path to package-lock.json (v1/v2/v3)
#[arg(long, conflicts_with = "packages")]
lockfile: Option<PathBuf>,
/// Comma-separated package names
#[arg(long, conflicts_with = "lockfile", value_delimiter = ',')]
packages: Option<Vec<String>>,
/// Fetch all versions (only with --packages)
#[arg(long)]
all_versions: bool,
},
/// Mirror Python packages
Pip {
/// Path to requirements.txt
#[arg(long)]
lockfile: PathBuf,
},
/// Mirror Cargo crates
Cargo {
/// Path to Cargo.lock
#[arg(long)]
lockfile: PathBuf,
},
/// Mirror Maven artifacts
Maven {
/// Path to dependency list (mvn dependency:list output)
#[arg(long)]
lockfile: PathBuf,
},
}
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
pub struct MirrorTarget {
pub name: String,
pub version: String,
}
pub struct MirrorResult {
pub total: usize,
pub fetched: usize,
pub failed: usize,
pub bytes: u64,
}
pub fn create_progress_bar(total: u64) -> ProgressBar {
let pb = ProgressBar::new(total);
pb.set_style(
ProgressStyle::default_bar()
.template(
"{spinner:.green} [{elapsed_precise}] [{bar:40.cyan/blue}] {pos}/{len} ({eta}) {msg}",
)
.unwrap()
.progress_chars("=>-"),
);
pb
}
pub async fn run_mirror(
format: MirrorFormat,
registry: &str,
concurrency: usize,
) -> Result<(), String> {
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(120))
.build()
.map_err(|e| format!("Failed to create HTTP client: {}", e))?;
// Health check
let health_url = format!("{}/health", registry.trim_end_matches('/'));
match client.get(&health_url).send().await {
Ok(r) if r.status().is_success() => {}
_ => {
return Err(format!(
"Cannot connect to NORA at {}. Is `nora serve` running?",
registry
))
}
}
let start = Instant::now();
let result = match format {
MirrorFormat::Npm {
lockfile,
packages,
all_versions,
} => {
npm::run_npm_mirror(
&client,
registry,
lockfile,
packages,
all_versions,
concurrency,
)
.await?
}
MirrorFormat::Pip { lockfile } => {
mirror_lockfile(&client, registry, "pip", &lockfile).await?
}
MirrorFormat::Cargo { lockfile } => {
mirror_lockfile(&client, registry, "cargo", &lockfile).await?
}
MirrorFormat::Maven { lockfile } => {
mirror_lockfile(&client, registry, "maven", &lockfile).await?
}
};
let elapsed = start.elapsed();
println!("\nMirror complete:");
println!(" Total: {}", result.total);
println!(" Fetched: {}", result.fetched);
println!(" Failed: {}", result.failed);
println!(" Size: {:.1} MB", result.bytes as f64 / 1_048_576.0);
println!(" Time: {:.1}s", elapsed.as_secs_f64());
if result.failed > 0 {
Err(format!("{} packages failed to mirror", result.failed))
} else {
Ok(())
}
}
async fn mirror_lockfile(
client: &reqwest::Client,
registry: &str,
format: &str,
lockfile: &PathBuf,
) -> Result<MirrorResult, String> {
let content = std::fs::read_to_string(lockfile)
.map_err(|e| format!("Cannot read {}: {}", lockfile.display(), e))?;
let targets = match format {
"pip" => parse_requirements_txt(&content),
"cargo" => parse_cargo_lock(&content)?,
"maven" => parse_maven_deps(&content),
_ => vec![],
};
if targets.is_empty() {
println!("No packages found in {}", lockfile.display());
return Ok(MirrorResult {
total: 0,
fetched: 0,
failed: 0,
bytes: 0,
});
}
let pb = create_progress_bar(targets.len() as u64);
let base = registry.trim_end_matches('/');
let mut fetched = 0;
let mut failed = 0;
let mut bytes = 0u64;
for target in &targets {
let url = match format {
"pip" => format!("{}/simple/{}/", base, target.name),
"cargo" => format!(
"{}/cargo/api/v1/crates/{}/{}/download",
base, target.name, target.version
),
"maven" => {
let parts: Vec<&str> = target.name.split(':').collect();
if parts.len() == 2 {
let group_path = parts[0].replace('.', "/");
format!(
"{}/maven2/{}/{}/{}/{}-{}.jar",
base, group_path, parts[1], target.version, parts[1], target.version
)
} else {
pb.inc(1);
failed += 1;
continue;
}
}
_ => continue,
};
match client.get(&url).send().await {
Ok(r) if r.status().is_success() => {
if let Ok(body) = r.bytes().await {
bytes += body.len() as u64;
}
fetched += 1;
}
_ => failed += 1,
}
pb.set_message(format!("{}@{}", target.name, target.version));
pb.inc(1);
}
pb.finish_with_message("done");
Ok(MirrorResult {
total: targets.len(),
fetched,
failed,
bytes,
})
}
fn parse_requirements_txt(content: &str) -> Vec<MirrorTarget> {
content
.lines()
.filter(|l| !l.trim().is_empty() && !l.starts_with('#') && !l.starts_with('-'))
.filter_map(|line| {
let line = line.split('#').next().unwrap().trim();
if let Some((name, version)) = line.split_once("==") {
Some(MirrorTarget {
name: name.trim().to_string(),
version: version.trim().to_string(),
})
} else {
let name = line.split(['>', '<', '=', '!', '~', ';']).next()?.trim();
if name.is_empty() {
None
} else {
Some(MirrorTarget {
name: name.to_string(),
version: "latest".to_string(),
})
}
}
})
.collect()
}
fn parse_cargo_lock(content: &str) -> Result<Vec<MirrorTarget>, String> {
let lock: toml::Value =
toml::from_str(content).map_err(|e| format!("Invalid Cargo.lock: {}", e))?;
let packages = lock
.get("package")
.and_then(|p| p.as_array())
.cloned()
.unwrap_or_default();
Ok(packages
.iter()
.filter(|p| {
p.get("source")
.and_then(|s| s.as_str())
.map(|s| s.starts_with("registry+"))
.unwrap_or(false)
})
.filter_map(|p| {
let name = p.get("name")?.as_str()?.to_string();
let version = p.get("version")?.as_str()?.to_string();
Some(MirrorTarget { name, version })
})
.collect())
}
fn parse_maven_deps(content: &str) -> Vec<MirrorTarget> {
content
.lines()
.filter_map(|line| {
let line = line.trim().trim_start_matches("[INFO]").trim();
let parts: Vec<&str> = line.split(':').collect();
if parts.len() >= 4 {
let name = format!("{}:{}", parts[0], parts[1]);
let version = parts[3].to_string();
Some(MirrorTarget { name, version })
} else {
None
}
})
.collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_requirements_txt() {
let content = "flask==2.3.0\nrequests>=2.28.0\n# comment\nnumpy==1.24.3\n";
let targets = parse_requirements_txt(content);
assert_eq!(targets.len(), 3);
assert_eq!(targets[0].name, "flask");
assert_eq!(targets[0].version, "2.3.0");
assert_eq!(targets[1].name, "requests");
assert_eq!(targets[1].version, "latest");
}
#[test]
fn test_parse_cargo_lock() {
let content = "\
[[package]]
name = \"serde\"
version = \"1.0.197\"
source = \"registry+https://github.com/rust-lang/crates.io-index\"
[[package]]
name = \"my-local-crate\"
version = \"0.1.0\"
";
let targets = parse_cargo_lock(content).unwrap();
assert_eq!(targets.len(), 1);
assert_eq!(targets[0].name, "serde");
}
#[test]
fn test_parse_maven_deps() {
let content = "[INFO] org.apache.commons:commons-lang3:jar:3.12.0:compile\n";
let targets = parse_maven_deps(content);
assert_eq!(targets.len(), 1);
assert_eq!(targets[0].name, "org.apache.commons:commons-lang3");
assert_eq!(targets[0].version, "3.12.0");
}
}


@@ -0,0 +1,323 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! npm lockfile parser + mirror logic.
use super::{create_progress_bar, MirrorResult, MirrorTarget};
use std::collections::HashSet;
use std::path::PathBuf;
use tokio::sync::Semaphore;
/// Entry point for npm mirroring
pub async fn run_npm_mirror(
client: &reqwest::Client,
registry: &str,
lockfile: Option<PathBuf>,
packages: Option<Vec<String>>,
all_versions: bool,
concurrency: usize,
) -> Result<MirrorResult, String> {
let targets = if let Some(path) = lockfile {
let content = std::fs::read_to_string(&path)
.map_err(|e| format!("Cannot read {}: {}", path.display(), e))?;
parse_npm_lockfile(&content)?
} else if let Some(names) = packages {
resolve_npm_packages(client, registry, &names, all_versions).await?
} else {
return Err("Specify --lockfile or --packages".to_string());
};
if targets.is_empty() {
println!("No npm packages to mirror");
return Ok(MirrorResult {
total: 0,
fetched: 0,
failed: 0,
bytes: 0,
});
}
println!(
"Mirroring {} npm packages via {}...",
targets.len(),
registry
);
mirror_npm_packages(client, registry, &targets, concurrency).await
}
/// Parse package-lock.json (v1, v2, v3)
fn parse_npm_lockfile(content: &str) -> Result<Vec<MirrorTarget>, String> {
let json: serde_json::Value =
serde_json::from_str(content).map_err(|e| format!("Invalid JSON: {}", e))?;
let version = json
.get("lockfileVersion")
.and_then(|v| v.as_u64())
.unwrap_or(1);
let mut seen = HashSet::new();
let mut targets = Vec::new();
if version >= 2 {
// v2/v3: use "packages" object
if let Some(packages) = json.get("packages").and_then(|p| p.as_object()) {
for (key, pkg) in packages {
if key.is_empty() {
continue; // root package
}
if let Some(name) = extract_package_name(key) {
if let Some(ver) = pkg.get("version").and_then(|v| v.as_str()) {
let pair = (name.to_string(), ver.to_string());
if seen.insert(pair.clone()) {
targets.push(MirrorTarget {
name: pair.0,
version: pair.1,
});
}
}
}
}
}
}
if version == 1 || targets.is_empty() {
// v1 fallback: recursive "dependencies"
if let Some(deps) = json.get("dependencies").and_then(|d| d.as_object()) {
parse_v1_deps(deps, &mut targets, &mut seen);
}
}
Ok(targets)
}
/// Extract package name from lockfile key like "node_modules/@babel/core"
fn extract_package_name(key: &str) -> Option<&str> {
// Handle nested: "node_modules/foo/node_modules/@scope/bar" → "@scope/bar"
let last_nm = key.rfind("node_modules/")?;
let after = &key[last_nm + "node_modules/".len()..];
let name = after.trim_end_matches('/');
if name.is_empty() {
None
} else {
Some(name)
}
}
/// Recursively parse v1 lockfile "dependencies"
fn parse_v1_deps(
deps: &serde_json::Map<String, serde_json::Value>,
targets: &mut Vec<MirrorTarget>,
seen: &mut HashSet<(String, String)>,
) {
for (name, pkg) in deps {
if let Some(ver) = pkg.get("version").and_then(|v| v.as_str()) {
let pair = (name.clone(), ver.to_string());
if seen.insert(pair.clone()) {
targets.push(MirrorTarget {
name: pair.0,
version: pair.1,
});
}
}
// Recurse into nested dependencies
if let Some(nested) = pkg.get("dependencies").and_then(|d| d.as_object()) {
parse_v1_deps(nested, targets, seen);
}
}
}
/// Resolve --packages list by fetching metadata from NORA
async fn resolve_npm_packages(
client: &reqwest::Client,
registry: &str,
names: &[String],
all_versions: bool,
) -> Result<Vec<MirrorTarget>, String> {
let base = registry.trim_end_matches('/');
let mut targets = Vec::new();
for name in names {
let url = format!("{}/npm/{}", base, name);
let resp = client.get(&url).send().await.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
eprintln!("Warning: {} not found (HTTP {})", name, resp.status());
continue;
}
let json: serde_json::Value = resp.json().await.map_err(|e| e.to_string())?;
if all_versions {
if let Some(versions) = json.get("versions").and_then(|v| v.as_object()) {
for ver in versions.keys() {
targets.push(MirrorTarget {
name: name.clone(),
version: ver.clone(),
});
}
}
} else {
// Just latest
let latest = json
.get("dist-tags")
.and_then(|d| d.get("latest"))
.and_then(|v| v.as_str())
.unwrap_or("latest");
targets.push(MirrorTarget {
name: name.clone(),
version: latest.to_string(),
});
}
}
Ok(targets)
}
/// Fetch packages through NORA (triggers proxy cache)
async fn mirror_npm_packages(
client: &reqwest::Client,
registry: &str,
targets: &[MirrorTarget],
concurrency: usize,
) -> Result<MirrorResult, String> {
let base = registry.trim_end_matches('/');
let pb = create_progress_bar(targets.len() as u64);
let sem = std::sync::Arc::new(Semaphore::new(concurrency));
// Deduplicate metadata fetches (one per package name)
let unique_names: HashSet<&str> = targets.iter().map(|t| t.name.as_str()).collect();
pb.set_message("fetching metadata...");
for name in &unique_names {
let url = format!("{}/npm/{}", base, name);
let _ = client.get(&url).send().await; // trigger metadata cache
}
// Fetch tarballs concurrently
let fetched = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
let failed = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
let bytes = std::sync::Arc::new(std::sync::atomic::AtomicU64::new(0));
let mut handles = Vec::new();
for target in targets {
let permit = sem.clone().acquire_owned().await.unwrap();
let client = client.clone();
let pb = pb.clone();
let fetched = fetched.clone();
let failed = failed.clone();
let bytes = bytes.clone();
let short_name = target.name.split('/').next_back().unwrap_or(&target.name);
let tarball_url = format!(
"{}/npm/{}/-/{}-{}.tgz",
base, target.name, short_name, target.version
);
let label = format!("{}@{}", target.name, target.version);
handles.push(tokio::spawn(async move {
let _permit = permit;
match client.get(&tarball_url).send().await {
Ok(r) if r.status().is_success() => {
if let Ok(body) = r.bytes().await {
bytes.fetch_add(body.len() as u64, std::sync::atomic::Ordering::Relaxed);
}
fetched.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
_ => {
failed.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
pb.set_message(label);
pb.inc(1);
}));
}
for h in handles {
let _ = h.await;
}
pb.finish_with_message("done");
Ok(MirrorResult {
total: targets.len(),
fetched: fetched.load(std::sync::atomic::Ordering::Relaxed),
failed: failed.load(std::sync::atomic::Ordering::Relaxed),
bytes: bytes.load(std::sync::atomic::Ordering::Relaxed),
})
}
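`mirror_npm_packages` derives the tarball filename from the last path segment of the package name, which is what makes scoped packages resolve correctly. A standalone sketch of that URL shape (`npm_tarball_url` is a hypothetical helper; it assumes the `/npm/{name}/-/{file}.tgz` layout used above):

```rust
// Hypothetical sketch of the tarball URL built in mirror_npm_packages:
// for "@babel/core" the file segment uses only "core".
fn npm_tarball_url(base: &str, name: &str, version: &str) -> String {
    let short = name.split('/').next_back().unwrap_or(name);
    format!(
        "{}/npm/{}/-/{}-{}.tgz",
        base.trim_end_matches('/'),
        name,
        short,
        version
    )
}

fn main() {
    assert_eq!(
        npm_tarball_url("http://nora:5000/", "@babel/core", "7.26.0"),
        "http://nora:5000/npm/@babel/core/-/core-7.26.0.tgz"
    );
    assert_eq!(
        npm_tarball_url("http://nora:5000", "lodash", "4.17.21"),
        "http://nora:5000/npm/lodash/-/lodash-4.17.21.tgz"
    );
    println!("ok");
}
```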
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_extract_package_name() {
assert_eq!(extract_package_name("node_modules/lodash"), Some("lodash"));
assert_eq!(
extract_package_name("node_modules/@babel/core"),
Some("@babel/core")
);
assert_eq!(
extract_package_name("node_modules/foo/node_modules/bar"),
Some("bar")
);
assert_eq!(
extract_package_name("node_modules/foo/node_modules/@types/node"),
Some("@types/node")
);
assert_eq!(extract_package_name(""), None);
}
#[test]
fn test_parse_lockfile_v3() {
let content = r#"{
"lockfileVersion": 3,
"packages": {
"": { "name": "test" },
"node_modules/lodash": { "version": "4.17.21" },
"node_modules/@babel/core": { "version": "7.26.0" },
"node_modules/@babel/core/node_modules/semver": { "version": "6.3.1" }
}
}"#;
let targets = parse_npm_lockfile(content).unwrap();
assert_eq!(targets.len(), 3);
let names: HashSet<&str> = targets.iter().map(|t| t.name.as_str()).collect();
assert!(names.contains("lodash"));
assert!(names.contains("@babel/core"));
assert!(names.contains("semver"));
}
#[test]
fn test_parse_lockfile_v1() {
let content = r#"{
"lockfileVersion": 1,
"dependencies": {
"express": {
"version": "4.18.2",
"dependencies": {
"accepts": { "version": "1.3.8" }
}
}
}
}"#;
let targets = parse_npm_lockfile(content).unwrap();
assert_eq!(targets.len(), 2);
assert_eq!(targets[0].name, "express");
assert_eq!(targets[1].name, "accepts");
}
#[test]
fn test_deduplication() {
let content = r#"{
"lockfileVersion": 3,
"packages": {
"": {},
"node_modules/debug": { "version": "4.3.4" },
"node_modules/express/node_modules/debug": { "version": "4.3.4" }
}
}"#;
let targets = parse_npm_lockfile(content).unwrap();
assert_eq!(targets.len(), 1); // deduplicated
assert_eq!(targets[0].name, "debug");
}
}


@@ -214,6 +214,38 @@ async fn download_blob(
}
}
// Auto-prepend library/ for single-segment names (Docker Hub official images)
if !name.contains('/') {
let library_name = format!("library/{}", name);
for upstream in &state.config.docker.upstreams {
if let Ok(data) = fetch_blob_from_upstream(
&state.http_client,
&upstream.url,
&library_name,
&digest,
&state.docker_auth,
state.config.docker.proxy_timeout,
upstream.auth.as_deref(),
)
.await
{
let storage = state.storage.clone();
let key_clone = key.clone();
let data_clone = data.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &data_clone).await;
});
return (
StatusCode::OK,
[(header::CONTENT_TYPE, "application/octet-stream")],
Bytes::from(data),
)
.into_response();
}
}
}
StatusCode::NOT_FOUND.into_response()
}
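The single-segment check above mirrors Docker Hub's convention that official images live under the `library/` namespace. A minimal sketch of the rule (`hub_fallback_name` is a hypothetical helper, not part of the handler, which inlines this logic):

```rust
// Hypothetical sketch: only single-segment names (no '/') get the
// library/ prefix; namespaced names are left alone.
fn hub_fallback_name(name: &str) -> Option<String> {
    if name.contains('/') {
        None
    } else {
        Some(format!("library/{}", name))
    }
}

fn main() {
    assert_eq!(hub_fallback_name("nginx").as_deref(), Some("library/nginx"));
    assert_eq!(hub_fallback_name("alpine").as_deref(), Some("library/alpine"));
    assert_eq!(hub_fallback_name("grafana/grafana"), None);
    println!("ok");
}
```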
@@ -453,6 +485,57 @@ async fn get_manifest(
}
}
// Auto-prepend library/ for single-segment names (Docker Hub official images)
// e.g., "nginx" -> "library/nginx", "alpine" -> "library/alpine"
if !name.contains('/') {
let library_name = format!("library/{}", name);
for upstream in &state.config.docker.upstreams {
if let Ok((data, content_type)) = fetch_manifest_from_upstream(
&state.http_client,
&upstream.url,
&library_name,
&reference,
&state.docker_auth,
state.config.docker.proxy_timeout,
upstream.auth.as_deref(),
)
.await
{
state.metrics.record_download("docker");
state.metrics.record_cache_miss();
state.activity.push(ActivityEntry::new(
ActionType::ProxyFetch,
format!("{}:{}", name, reference),
"docker",
"PROXY",
));
use sha2::Digest;
let digest = format!("sha256:{:x}", sha2::Sha256::digest(&data));
// Cache under original name for future local hits
let storage = state.storage.clone();
let key_clone = key.clone();
let data_clone = data.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &data_clone).await;
});
state.repo_index.invalidate("docker");
return (
StatusCode::OK,
[
(header::CONTENT_TYPE, content_type),
(HeaderName::from_static("docker-content-digest"), digest),
],
Bytes::from(data),
)
.into_response();
}
}
}
StatusCode::NOT_FOUND.into_response()
}


@@ -10,21 +10,64 @@ use axum::{
extract::{Path, State},
http::{header, StatusCode},
response::{IntoResponse, Response},
routing::get,
routing::{get, put},
Router,
};
use base64::Engine;
use sha2::Digest;
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
- Router::new().route("/npm/{*path}", get(handle_request))
Router::new()
.route("/npm/{*path}", get(handle_request))
.route("/npm/{*path}", put(handle_publish))
}
/// Build NORA base URL from config (for URL rewriting)
fn nora_base_url(state: &AppState) -> String {
state.config.server.public_url.clone().unwrap_or_else(|| {
format!(
"http://{}:{}",
state.config.server.host, state.config.server.port
)
})
}
/// Rewrite tarball URLs in npm metadata to point to NORA.
///
/// Replaces upstream registry URLs (e.g. `https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz`)
/// with NORA URLs (e.g. `http://nora:5000/npm/lodash/-/lodash-4.17.21.tgz`).
fn rewrite_tarball_urls(data: &[u8], nora_base: &str, upstream_url: &str) -> Result<Vec<u8>, ()> {
let mut json: serde_json::Value = serde_json::from_slice(data).map_err(|_| ())?;
let upstream_trimmed = upstream_url.trim_end_matches('/');
let nora_npm_base = format!("{}/npm", nora_base.trim_end_matches('/'));
if let Some(versions) = json.get_mut("versions").and_then(|v| v.as_object_mut()) {
for (_ver, version_data) in versions.iter_mut() {
if let Some(tarball_url) = version_data
.get("dist")
.and_then(|d| d.get("tarball"))
.and_then(|t| t.as_str())
.map(|s| s.to_string())
{
let rewritten = tarball_url.replace(upstream_trimmed, &nora_npm_base);
if let Some(dist) = version_data.get_mut("dist") {
dist["tarball"] = serde_json::Value::String(rewritten);
}
}
}
}
serde_json::to_vec(&json).map_err(|_| ())
}
async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<String>) -> Response {
let is_tarball = path.contains("/-/");
let key = if is_tarball {
- let parts: Vec<&str> = path.split("/-/").collect();
let parts: Vec<&str> = path.splitn(2, "/-/").collect();
if parts.len() == 2 {
format!("npm/{}/tarballs/{}", parts[0], parts[1])
} else {
@@ -40,23 +83,60 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
path.clone()
};
// --- Cache hit path ---
if let Ok(data) = state.storage.get(&key).await {
- if is_tarball {
- state.metrics.record_download("npm");
- state.metrics.record_cache_hit();
- state.activity.push(ActivityEntry::new(
- ActionType::CacheHit,
- package_name,
- "npm",
- "CACHE",
- ));
- state
- .audit
- .log(AuditEntry::new("cache_hit", "api", "", "npm", ""));
- }
// Metadata TTL: if stale, try to refetch from upstream
if !is_tarball {
let ttl = state.config.npm.metadata_ttl;
if ttl > 0 {
if let Some(meta) = state.storage.stat(&key).await {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs())
.unwrap_or(0);
if now.saturating_sub(meta.modified) > ttl {
if let Some(fresh) = refetch_metadata(&state, &path, &key).await {
return with_content_type(false, fresh.into()).into_response();
}
// Upstream failed — serve stale cache
}
}
}
return with_content_type(false, data).into_response();
}
- return with_content_type(is_tarball, data).into_response();
// Tarball: integrity check if hash exists
let hash_key = format!("{}.sha256", key);
if let Ok(stored_hash) = state.storage.get(&hash_key).await {
let computed = format!("{:x}", sha2::Sha256::digest(&data));
let expected = String::from_utf8_lossy(&stored_hash);
if computed != expected.as_ref() {
tracing::error!(
key = %key,
expected = %expected,
computed = %computed,
"SECURITY: npm tarball integrity check FAILED — possible tampering"
);
return (StatusCode::INTERNAL_SERVER_ERROR, "Integrity check failed")
.into_response();
}
}
state.metrics.record_download("npm");
state.metrics.record_cache_hit();
state.activity.push(ActivityEntry::new(
ActionType::CacheHit,
package_name,
"npm",
"CACHE",
));
state
.audit
.log(AuditEntry::new("cache_hit", "api", "", "npm", ""));
return with_content_type(true, data).into_response();
}
// --- Proxy fetch path ---
if let Some(proxy_url) = &state.config.npm.proxy {
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
@@ -68,7 +148,18 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
)
.await
{
let data_to_cache;
let data_to_serve;
if is_tarball {
// Compute and store sha256
let hash = format!("{:x}", sha2::Sha256::digest(&data));
let hash_key = format!("{}.sha256", key);
let storage = state.storage.clone();
tokio::spawn(async move {
let _ = storage.put(&hash_key, hash.as_bytes()).await;
});
state.metrics.record_download("npm");
state.metrics.record_cache_miss();
state.activity.push(ActivityEntry::new(
@@ -80,26 +171,254 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "npm", ""));
data_to_cache = data.clone();
data_to_serve = data;
} else {
// Metadata: rewrite tarball URLs to point to NORA
let nora_base = nora_base_url(&state);
let rewritten = rewrite_tarball_urls(&data, &nora_base, proxy_url)
.unwrap_or_else(|_| data.clone());
data_to_cache = rewritten.clone();
data_to_serve = rewritten;
}
// Cache in background
let storage = state.storage.clone();
let key_clone = key.clone();
- let data_clone = data.clone();
tokio::spawn(async move {
- let _ = storage.put(&key_clone, &data_clone).await;
let _ = storage.put(&key_clone, &data_to_cache).await;
});
if is_tarball {
state.repo_index.invalidate("npm");
}
- return with_content_type(is_tarball, data.into()).into_response();
return with_content_type(is_tarball, data_to_serve.into()).into_response();
}
}
StatusCode::NOT_FOUND.into_response()
}
/// Refetch metadata from upstream, rewrite URLs, update cache.
/// Returns None if upstream is unavailable (caller serves stale cache).
async fn refetch_metadata(state: &Arc<AppState>, path: &str, key: &str) -> Option<Vec<u8>> {
let proxy_url = state.config.npm.proxy.as_ref()?;
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
let data = fetch_from_proxy(
&state.http_client,
&url,
state.config.npm.proxy_timeout,
state.config.npm.proxy_auth.as_deref(),
)
.await
.ok()?;
let nora_base = nora_base_url(state);
let rewritten =
rewrite_tarball_urls(&data, &nora_base, proxy_url).unwrap_or_else(|_| data.clone());
let storage = state.storage.clone();
let key_clone = key.to_string();
let cache_data = rewritten.clone();
tokio::spawn(async move {
let _ = storage.put(&key_clone, &cache_data).await;
});
Some(rewritten)
}
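The TTL comparison that drives this refetch can be sketched in isolation (hypothetical helper; `modified` and `now` are unix seconds, and `ttl = 0` disables the check, as in the cache-hit path above):

```rust
// Hypothetical staleness rule matching the cache-hit path: metadata is
// stale when it is older than the configured TTL (0 = TTL disabled).
fn is_stale(modified: u64, now: u64, ttl: u64) -> bool {
    ttl > 0 && now.saturating_sub(modified) > ttl
}

fn main() {
    assert!(is_stale(0, 7_200, 3_600)); // 2h old, 1h TTL -> stale
    assert!(!is_stale(7_000, 7_200, 3_600)); // 200s old -> fresh
    assert!(!is_stale(0, 7_200, 0)); // TTL disabled
    println!("ok");
}
```

`saturating_sub` also keeps the check safe if a clock skew makes `modified` newer than `now`.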
// ============================================================================
// npm publish
// ============================================================================
/// Validate attachment filename: only safe characters, no path traversal.
fn is_valid_attachment_name(name: &str) -> bool {
!name.is_empty()
&& !name.contains("..")
&& !name.contains('/')
&& !name.contains('\\')
&& !name.contains('\0')
&& name
.chars()
.all(|c| c.is_ascii_alphanumeric() || matches!(c, '.' | '-' | '_' | '@'))
}
async fn handle_publish(
State(state): State<Arc<AppState>>,
Path(path): Path<String>,
body: Bytes,
) -> Response {
let package_name = path;
let payload: serde_json::Value = match serde_json::from_slice(&body) {
Ok(v) => v,
Err(e) => return (StatusCode::BAD_REQUEST, format!("Invalid JSON: {}", e)).into_response(),
};
// Security: verify payload name matches URL path
if let Some(payload_name) = payload.get("name").and_then(|n| n.as_str()) {
if payload_name != package_name {
tracing::warn!(
url_name = %package_name,
payload_name = %payload_name,
"SECURITY: npm publish name mismatch — possible spoofing attempt"
);
return (
StatusCode::BAD_REQUEST,
"Package name in URL does not match payload",
)
.into_response();
}
}
let attachments = match payload.get("_attachments").and_then(|a| a.as_object()) {
Some(a) => a,
None => return (StatusCode::BAD_REQUEST, "Missing _attachments").into_response(),
};
let new_versions = match payload.get("versions").and_then(|v| v.as_object()) {
Some(v) => v,
None => return (StatusCode::BAD_REQUEST, "Missing versions").into_response(),
};
// Load or create metadata
let metadata_key = format!("npm/{}/metadata.json", package_name);
let mut metadata = if let Ok(existing) = state.storage.get(&metadata_key).await {
serde_json::from_slice::<serde_json::Value>(&existing)
.unwrap_or_else(|_| serde_json::json!({}))
} else {
serde_json::json!({})
};
// Version immutability
if let Some(existing_versions) = metadata.get("versions").and_then(|v| v.as_object()) {
for ver in new_versions.keys() {
if existing_versions.contains_key(ver) {
return (
StatusCode::CONFLICT,
format!("Version {} already exists", ver),
)
.into_response();
}
}
}
// Store tarballs
for (filename, attachment_data) in attachments {
if !is_valid_attachment_name(filename) {
tracing::warn!(
filename = %filename,
package = %package_name,
"SECURITY: npm publish rejected — invalid attachment filename"
);
return (StatusCode::BAD_REQUEST, "Invalid attachment filename").into_response();
}
let base64_data = match attachment_data.get("data").and_then(|d| d.as_str()) {
Some(d) => d,
None => continue,
};
let tarball_bytes = match base64::engine::general_purpose::STANDARD.decode(base64_data) {
Ok(b) => b,
Err(_) => {
return (StatusCode::BAD_REQUEST, "Invalid base64 in attachment").into_response()
}
};
let tarball_key = format!("npm/{}/tarballs/{}", package_name, filename);
if state
.storage
.put(&tarball_key, &tarball_bytes)
.await
.is_err()
{
return StatusCode::INTERNAL_SERVER_ERROR.into_response();
}
// Store sha256
let hash = format!("{:x}", sha2::Sha256::digest(&tarball_bytes));
let hash_key = format!("{}.sha256", tarball_key);
let _ = state.storage.put(&hash_key, hash.as_bytes()).await;
}
// Merge versions
let meta_obj = metadata.as_object_mut().unwrap();
let stored_versions = meta_obj.entry("versions").or_insert(serde_json::json!({}));
if let Some(sv) = stored_versions.as_object_mut() {
for (ver, ver_data) in new_versions {
sv.insert(ver.clone(), ver_data.clone());
}
}
// Copy standard fields
for field in &["name", "_id", "description", "readme", "license"] {
if let Some(val) = payload.get(*field) {
meta_obj.insert(field.to_string(), val.clone());
}
}
// Merge dist-tags
if let Some(new_dist_tags) = payload.get("dist-tags").and_then(|d| d.as_object()) {
let stored_dist_tags = meta_obj.entry("dist-tags").or_insert(serde_json::json!({}));
if let Some(sdt) = stored_dist_tags.as_object_mut() {
for (tag, ver) in new_dist_tags {
sdt.insert(tag.clone(), ver.clone());
}
}
}
// Rewrite tarball URLs for published packages
let nora_base = nora_base_url(&state);
if let Some(versions) = metadata.get_mut("versions").and_then(|v| v.as_object_mut()) {
for (ver, ver_data) in versions.iter_mut() {
if let Some(dist) = ver_data.get_mut("dist") {
let short_name = package_name.split('/').next_back().unwrap_or(&package_name);
let tarball_url = format!(
"{}/npm/{}/-/{}-{}.tgz",
nora_base.trim_end_matches('/'),
package_name,
short_name,
ver
);
dist["tarball"] = serde_json::Value::String(tarball_url);
}
}
}
// Store metadata
match serde_json::to_vec(&metadata) {
Ok(bytes) => {
if state.storage.put(&metadata_key, &bytes).await.is_err() {
return StatusCode::INTERNAL_SERVER_ERROR.into_response();
}
}
Err(_) => return StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
state.metrics.record_upload("npm");
state.activity.push(ActivityEntry::new(
ActionType::Push,
package_name,
"npm",
"LOCAL",
));
state
.audit
.log(AuditEntry::new("push", "api", "", "npm", ""));
state.repo_index.invalidate("npm");
StatusCode::CREATED.into_response()
}
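`handle_publish` rejects any publish that reuses an already-stored version with 409 Conflict. The check reduces to a set intersection, sketched here with a plain `HashSet` (`conflicts` is a hypothetical helper, not code from the handler):

```rust
use std::collections::HashSet;

// Hypothetical reduction of the immutability check: collect incoming
// versions that already exist; any hit means the publish is rejected.
fn conflicts<'a>(existing: &HashSet<&'a str>, incoming: &[&'a str]) -> Vec<&'a str> {
    incoming
        .iter()
        .copied()
        .filter(|v| existing.contains(v))
        .collect()
}

fn main() {
    let existing: HashSet<&str> = ["1.0.0", "1.1.0"].into_iter().collect();
    assert_eq!(conflicts(&existing, &["1.1.0", "2.0.0"]), vec!["1.1.0"]);
    assert!(conflicts(&existing, &["2.0.0"]).is_empty());
    println!("ok");
}
```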
// ============================================================================
// Helpers
// ============================================================================
async fn fetch_from_proxy(
client: &reqwest::Client,
url: &str,
@@ -131,3 +450,129 @@ fn with_content_type(
(StatusCode::OK, [(header::CONTENT_TYPE, content_type)], data)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_rewrite_tarball_urls_regular_package() {
let metadata = serde_json::json!({
"name": "lodash",
"versions": {
"4.17.21": {
"dist": {
"tarball": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"shasum": "abc123"
}
}
}
});
let data = serde_json::to_vec(&metadata).unwrap();
let result =
rewrite_tarball_urls(&data, "http://nora:5000", "https://registry.npmjs.org").unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(
json["versions"]["4.17.21"]["dist"]["tarball"],
"http://nora:5000/npm/lodash/-/lodash-4.17.21.tgz"
);
assert_eq!(json["versions"]["4.17.21"]["dist"]["shasum"], "abc123");
}
#[test]
fn test_rewrite_tarball_urls_scoped_package() {
let metadata = serde_json::json!({
"name": "@babel/core",
"versions": {
"7.26.0": {
"dist": {
"tarball": "https://registry.npmjs.org/@babel/core/-/core-7.26.0.tgz",
"integrity": "sha512-test"
}
}
}
});
let data = serde_json::to_vec(&metadata).unwrap();
let result =
rewrite_tarball_urls(&data, "http://nora:5000", "https://registry.npmjs.org").unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(
json["versions"]["7.26.0"]["dist"]["tarball"],
"http://nora:5000/npm/@babel/core/-/core-7.26.0.tgz"
);
}
#[test]
fn test_rewrite_tarball_urls_multiple_versions() {
let metadata = serde_json::json!({
"name": "express",
"versions": {
"4.18.2": { "dist": { "tarball": "https://registry.npmjs.org/express/-/express-4.18.2.tgz" } },
"4.19.0": { "dist": { "tarball": "https://registry.npmjs.org/express/-/express-4.19.0.tgz" } }
}
});
let data = serde_json::to_vec(&metadata).unwrap();
let result = rewrite_tarball_urls(
&data,
"https://demo.getnora.io",
"https://registry.npmjs.org",
)
.unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(
json["versions"]["4.18.2"]["dist"]["tarball"],
"https://demo.getnora.io/npm/express/-/express-4.18.2.tgz"
);
assert_eq!(
json["versions"]["4.19.0"]["dist"]["tarball"],
"https://demo.getnora.io/npm/express/-/express-4.19.0.tgz"
);
}
#[test]
fn test_rewrite_tarball_urls_no_versions() {
let metadata = serde_json::json!({ "name": "empty-pkg" });
let data = serde_json::to_vec(&metadata).unwrap();
let result =
rewrite_tarball_urls(&data, "http://nora:5000", "https://registry.npmjs.org").unwrap();
let json: serde_json::Value = serde_json::from_slice(&result).unwrap();
assert_eq!(json["name"], "empty-pkg");
}
#[test]
fn test_rewrite_invalid_json() {
assert!(rewrite_tarball_urls(
b"not json",
"http://nora:5000",
"https://registry.npmjs.org"
)
.is_err());
}
#[test]
fn test_valid_attachment_names() {
assert!(is_valid_attachment_name("lodash-4.17.21.tgz"));
assert!(is_valid_attachment_name("core-7.26.0.tgz"));
assert!(is_valid_attachment_name("my_package-1.0.0.tgz"));
assert!(is_valid_attachment_name("@scope-pkg-1.0.0.tgz"));
}
#[test]
fn test_path_traversal_attachment_names() {
assert!(!is_valid_attachment_name("../../etc/passwd"));
assert!(!is_valid_attachment_name(
"../docker/nginx/manifests/latest.json"
));
assert!(!is_valid_attachment_name("foo/bar.tgz"));
assert!(!is_valid_attachment_name("foo\\bar.tgz"));
}
#[test]
fn test_empty_and_null_attachment_names() {
assert!(!is_valid_attachment_name(""));
assert!(!is_valid_attachment_name("foo\0bar.tgz"));
}
}


@@ -173,12 +173,14 @@ async fn build_docker_index(storage: &Storage) -> Vec<RepoInfo> {
}
if let Some(rest) = key.strip_prefix("docker/") {
- // Support namespaced repos: docker/{ns}/{name}/manifests/{tag}.json
- // and flat repos: docker/{name}/manifests/{tag}.json
- if let Some(manifests_pos) = rest.find("/manifests/") {
- let name = rest[..manifests_pos].to_string();
- let after_manifests = &rest[manifests_pos + "/manifests/".len()..];
- if !after_manifests.is_empty() && key.ends_with(".json") {
// Support both single-segment and namespaced images:
// docker/alpine/manifests/latest.json → name="alpine"
// docker/library/alpine/manifests/latest.json → name="library/alpine"
let parts: Vec<_> = rest.split('/').collect();
let manifest_pos = parts.iter().position(|&p| p == "manifests");
if let Some(pos) = manifest_pos {
if pos >= 1 && key.ends_with(".json") {
let name = parts[..pos].join("/");
let entry = repos.entry(name).or_insert((0, 0, 0));
entry.0 += 1;
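The position-based lookup above treats everything before the `manifests` segment as the image name, so namespaced images keep their prefix. A standalone sketch of that rule (`image_name` is a hypothetical helper mirroring the indexing logic):

```rust
// Hypothetical sketch: the image name is every path segment before
// "manifests"; a key with no leading segment is rejected.
fn image_name(rest: &str) -> Option<String> {
    let parts: Vec<&str> = rest.split('/').collect();
    let pos = parts.iter().position(|&p| p == "manifests")?;
    if pos >= 1 {
        Some(parts[..pos].join("/"))
    } else {
        None
    }
}

fn main() {
    assert_eq!(image_name("alpine/manifests/latest.json").as_deref(), Some("alpine"));
    assert_eq!(
        image_name("library/alpine/manifests/latest.json").as_deref(),
        Some("library/alpine")
    );
    assert_eq!(image_name("manifests/latest.json"), None);
    println!("ok");
}
```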
@@ -244,14 +246,20 @@ async fn build_npm_index(storage: &Storage) -> Vec<RepoInfo> {
let keys = storage.list("npm/").await;
let mut packages: HashMap<String, (usize, u64, u64)> = HashMap::new();
- // Count tarballs first, then fall back to metadata.json for proxy-cached packages
// Count tarballs instead of parsing metadata.json (faster than parsing JSON)
for key in &keys {
if let Some(rest) = key.strip_prefix("npm/") {
// Pattern: npm/{package}/tarballs/{file}.tgz
// Scoped: npm/@scope/package/tarballs/{file}.tgz
if rest.contains("/tarballs/") && key.ends_with(".tgz") {
- // Pattern: npm/{package}/tarballs/{file}.tgz
let parts: Vec<_> = rest.split('/').collect();
if !parts.is_empty() {
- let name = parts[0].to_string();
// Scoped packages: @scope/package → parts[0]="@scope", parts[1]="package"
let name = if parts[0].starts_with('@') && parts.len() >= 4 {
format!("{}/{}", parts[0], parts[1])
} else {
parts[0].to_string()
};
let entry = packages.entry(name).or_insert((0, 0, 0));
entry.0 += 1;
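The scoped-package branch above assumes keys shaped `npm/{package}/tarballs/{file}.tgz`, where a scoped name contributes two segments. A standalone sketch (`package_from_key` is a hypothetical helper mirroring the indexing logic):

```rust
// Hypothetical sketch: a key segment starting with '@' plus a deep
// enough path means a scoped package spanning two segments.
fn package_from_key(rest: &str) -> Option<String> {
    let parts: Vec<&str> = rest.split('/').collect();
    if parts.is_empty() {
        return None;
    }
    if parts[0].starts_with('@') && parts.len() >= 4 {
        Some(format!("{}/{}", parts[0], parts[1]))
    } else {
        Some(parts[0].to_string())
    }
}

fn main() {
    assert_eq!(
        package_from_key("lodash/tarballs/lodash-4.17.21.tgz").as_deref(),
        Some("lodash")
    );
    assert_eq!(
        package_from_key("@types/node/tarballs/node-20.0.0.tgz").as_deref(),
        Some("@types/node")
    );
    println!("ok");
}
```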
@@ -262,21 +270,6 @@ async fn build_npm_index(storage: &Storage) -> Vec<RepoInfo> {
}
}
}
- } else if rest.ends_with("/metadata.json") {
- // Proxy-cached package: npm/{package}/metadata.json
- // Show package in list but don't inflate version count from upstream metadata
- if let Some(name) = rest.strip_suffix("/metadata.json") {
- if !name.contains('/') {
- packages.entry(name.to_string()).or_insert((0, 0, 0));
- if let Some(stat) = storage.stat(key).await {
- let entry = packages.get_mut(name).unwrap();
- entry.1 += stat.size;
- if stat.modified > entry.2 {
- entry.2 = stat.modified;
- }
- }
- }
- }
- }
}
}

File diff suppressed because it is too large