73 Commits

Author SHA1 Message Date
a968016815 release: bump version to v0.2.34 2026-03-20 19:46:42 +00:00
281cc0418b ui: fix table cell padding alignment
Add px-4 to all td cells in Mount Points and Activity tables
to match th header padding. Remove non-functional px-4 from
tbody elements (CSS padding does not apply to tbody).
2026-03-20 19:44:14 +00:00
4ec95fed43 fix(deps): update tar 0.4.44 -> 0.4.45
Fixes CVE-2026-33055 (PAX size header bypass) and
CVE-2026-33056 (symlink chmod directory traversal).
2026-03-20 19:32:46 +00:00
23d79e2465 ui: group consecutive identical activity entries
Repeated cache hits for the same artifact now show as
"artifact (x4)" instead of 4 identical rows.
Reduces visual noise in dashboard activity log.
2026-03-20 19:23:41 +00:00
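A minimal sketch of the grouping described in the commit above, assuming an illustrative `GroupedRow` struct rather than NORA's actual activity types: consecutive entries for the same artifact collapse into one row carrying a repeat count, which the dashboard can render as `artifact (x4)`.

```rust
/// Collapse consecutive identical artifact names into one row with a repeat count.
/// `GroupedRow` is a hypothetical display struct, not NORA's own type.
struct GroupedRow {
    artifact: String,
    count: usize,
}

fn group_consecutive(entries: &[&str]) -> Vec<GroupedRow> {
    let mut rows: Vec<GroupedRow> = Vec::new();
    for &artifact in entries {
        if let Some(last) = rows.last_mut() {
            if last.artifact == artifact {
                last.count += 1;
                continue;
            }
        }
        rows.push(GroupedRow { artifact: artifact.to_string(), count: 1 });
    }
    rows
}

/// Format a row the way the dashboard shows it.
fn label(row: &GroupedRow) -> String {
    if row.count > 1 {
        format!("{} (x{})", row.artifact, row.count)
    } else {
        row.artifact.clone()
    }
}

fn main() {
    let rows = group_consecutive(&["alpine:3.20", "alpine:3.20", "alpine:3.20", "alpine:3.20", "lodash"]);
    assert_eq!(label(&rows[0]), "alpine:3.20 (x4)");
    assert_eq!(label(&rows[1]), "lodash");
}
```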
1d31fddc6b docs: remove hardcoded scorecard version from README 2026-03-20 11:35:14 +00:00
de3dae5d51 docs: restructure README for conversion
- Move badges from top to Security & Trust section
- Add dashboard GIF (EN/RU crossfade) as first visual
- Add "Why NORA" section with key differentiators
- Add "Used by" production reference
- Add binary install option
- Add Supported Registries table with mount points
- Streamline features into scannable list
- Remove emoji from footer
- Add comparison link placeholder
2026-03-20 11:25:32 +00:00
5517789300 test: add 82 unit tests across 7 modules
Coverage targets:
- activity_log: ActionType display, ActivityLog push/recent/all/bounded
- audit: AuditEntry, AuditLog write/read with tempdir
- config: defaults for all sub-configs, env overrides, TOML parsing
- dashboard_metrics: record_download/upload, cache_hit_rate, persistence
- error: constructors, Display, IntoResponse for all variants
- metrics: detect_registry for all protocol paths
- repo_index: paginate, RegistryIndex basics, RepoIndex invalidate

Total tests: 103 -> 185
2026-03-20 10:08:49 +00:00
4aedba9f9f ci: add test coverage with tarpaulin and dynamic badge via gist 2026-03-20 09:32:22 +00:00
97c356fb36 chore: remove internal QA scripts from public repo 2026-03-19 12:42:53 +00:00
f4c9d1419e chore: add CODEOWNERS, CHANGELOG v0.2.33, SLSA provenance, QA scripts 2026-03-19 12:39:58 +00:00
206bc06927 fix: update cosign-installer SHA to v3.8.0 2026-03-19 11:42:53 +00:00
32a0d97b2a release: bump version to v0.2.33 (#46) 2026-03-19 11:41:06 +00:00
6fa5dfd534 release: bump version to v0.2.33 2026-03-19 11:08:51 +00:00
26e1e12e64 fix: use tag for codeql-action in scorecard (webapp rejects SHA pins) 2026-03-19 10:42:14 +00:00
29516f4ea3 fix: add repo_token and permissions to scorecard workflow 2026-03-19 10:35:57 +00:00
28ff719508 fix: revert scorecard-action to tag (Docker action incompatible with SHA pin) 2026-03-19 10:33:27 +00:00
d260ff8b5e fix: use commit SHA for scorecard-action (not tag SHA) 2026-03-19 09:21:29 +00:00
578cdd7dd6 fix: correct scorecard-action SHA pin for v2.4.3 2026-03-19 09:19:41 +00:00
186855e892 ci: retrigger scorecard workflow 2026-03-19 09:18:00 +00:00
78dd91795d ci: improve OpenSSF Scorecard rating (#45)
- Add CodeQL workflow for SAST analysis (Actions language)
- Pin scorecard-action and codeql-action by SHA in scorecard.yml
- Add cargo-audit SARIF upload for security tab integration
2026-03-19 11:51:11 +03:00
c1f6430aa9 security: harden Docker registry and container runtime
- Verify blob digest (SHA256) on upload, reject mismatches (DIGEST_INVALID)
- Reject sha512 digests (only sha256 supported)
- Add upload session limits: max 100 concurrent, 2GB per session, 30min TTL
- Bind upload sessions to repository name (prevent session fixation)
- Filter .meta.json from Docker tag list (fix ArgoCD Image Updater recursion)
- Fix catalog to show namespaced images (library/alpine instead of library)
- Add security headers: CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy
- Run containers as non-root user (USER nora) in all 3 Dockerfiles
- Add configurable NORA_MAX_UPLOAD_SESSIONS and NORA_MAX_UPLOAD_SESSION_SIZE_MB
2026-03-19 08:29:28 +00:00
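A minimal sketch of the blob digest check from the commit above, assuming only the `sha2` crate (error strings are illustrative; the real handler answers with the registry error code DIGEST_INVALID):

```rust
use sha2::{Digest, Sha256};

/// Verify that uploaded blob bytes match the client-supplied digest.
/// Only "sha256:<hex>" is accepted; anything else (including sha512) is rejected.
fn verify_blob_digest(data: &[u8], expected: &str) -> Result<(), &'static str> {
    let Some(hex) = expected.strip_prefix("sha256:") else {
        return Err("DIGEST_INVALID: only sha256 digests are supported");
    };
    // Hex-encode the computed hash without pulling in an extra crate.
    let actual: String = Sha256::digest(data)
        .iter()
        .map(|b| format!("{b:02x}"))
        .collect();
    if actual.eq_ignore_ascii_case(hex) {
        Ok(())
    } else {
        Err("DIGEST_INVALID: uploaded data does not match digest")
    }
}
```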
52e59a8272 fix: pin ClusterFuzzLite base image by SHA, fix Docker tag double-suffix 2026-03-18 13:20:35 +00:00
8b1b9c8401 fix: use project gitleaks config in CI, relax rules for documentation examples 2026-03-18 12:48:05 +00:00
62027c44dc docs: add public roadmap, cosign verification in install script 2026-03-18 12:36:51 +00:00
68365dfe98 community: add issue/PR templates, code of conduct, update contributing guide 2026-03-18 12:22:10 +00:00
59cdd4530b chore: remove unused crates and demo traffic scripts
- Remove nora-cli (unimplemented stub)
- Remove nora-storage (standalone S3 server, not used)
- Remove demo traffic generator and systemd service
2026-03-18 12:19:58 +00:00
1cc5c8cc86 security: simplify public gitleaks config to generic network rules only 2026-03-18 11:57:21 +00:00
e2919b83de security: extend leak detection — dev process patterns, soft warnings for borderline content 2026-03-18 11:49:25 +00:00
c035561fd2 security: custom gitleaks rules for internal information leak prevention 2026-03-18 11:42:52 +00:00
1a38902b0c style: clean up code comments 2026-03-18 11:23:11 +00:00
3b9b2ee0a0 chore: repo cleanup — remove dead crates from workspace, stale files, duplicate assets
- Remove nora-cli and nora-storage from workspace (stub crates, not used)
- Remove root install.sh (duplicate of dist/install.sh)
- Remove root logo.jpg (duplicate of ui/logo.jpg)
- Remove committed SBOM .cdx.json files (generated by CI in release)
- Remove stale .githooks/ (real hook is in .git/hooks/)
- Update version in docs-ru to 0.2.32
- Add *.cdx.json to .gitignore
2026-03-18 11:20:22 +00:00
b7cb458edf test: E2E smoke tests + Playwright browser tests (23 tests)
smoke.sh:
- Full E2E smoke test: health, npm proxy/publish/security, Maven, PyPI, Docker, Raw, UI, mirror CLI
- Self-contained: starts NORA, runs tests, cleans up

Playwright (tests/e2e/):
- Dashboard: page load, registry sections visible, npm count > 0, Docker stats
- npm: URL rewriting, scoped packages, tarball download, publish, immutability, security
- Docker: v2 check, catalog, manifest push/pull, tags list
- Maven: proxy download, upload
- PyPI: simple index, package page
- Raw: upload and download
- Health, metrics, OpenAPI endpoints

All 23 tests pass in 4.7s against a live NORA instance.
2026-03-18 11:04:19 +00:00
e1a1d80a77 docs: add CII Best Practices passing badge 2026-03-18 10:46:51 +00:00
b50dd6386e security: pin Docker base images by SHA, cosign signing in release, branch protection
- Pin alpine:3.20 by SHA digest in all Dockerfiles (Pinned-Dependencies)
- Add cosign keyless signing for Docker images and binary (Signed-Releases)
- Enable branch protection: strict status checks, linear history, no force push
- Add .sig and .pem to GitHub Release assets
2026-03-18 09:49:45 +00:00
6b5a397862 docs: changelog v0.2.32 2026-03-18 09:43:49 +00:00
6b4d627fa2 fix: allow NCSA license for libfuzzer-sys in cargo-deny 2026-03-18 09:27:30 +00:00
659e7730de fix: add MIT license to nora-fuzz crate (cargo-deny compliance) 2026-03-18 09:23:31 +00:00
d0441f31d1 fix: correct cargo-deny key for unused license allowance 2026-03-18 09:19:50 +00:00
1956401932 fix: allow unused license entries in cargo-deny config 2026-03-18 09:15:25 +00:00
e415f0f1ce fix: Docker dashboard for namespaced images, library/ auto-prepend for Hub official images (v0.2.32)
Docker dashboard:
- build_docker_index now finds manifests segment by position, not fixed index
- Correctly indexes library/alpine, grafana/grafana, and other namespaced images

Docker proxy:
- Auto-prepend library/ for single-segment names when upstream returns 404
- Applies to both manifests and blobs
- nginx, alpine, node now work without explicit library/ prefix
- Cached under original name for future local hits
2026-03-18 08:07:53 +00:00
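A sketch of the `library/` fallback described above. It covers only the manifest path with plain GETs; the real proxy also handles blobs, sends the Docker Accept headers and Bearer auth, and caches the result under the originally requested name.

```rust
use reqwest::{Client, StatusCode};

/// Fetch a manifest; if a single-segment name ("alpine") gets 404 upstream,
/// retry it as "library/alpine" — Docker Hub's namespace for official images.
async fn fetch_manifest_with_fallback(
    client: &Client,
    upstream: &str,
    name: &str,
    reference: &str,
) -> Result<Vec<u8>, StatusCode> {
    let url = format!("{upstream}/v2/{name}/manifests/{reference}");
    let resp = client.get(url).send().await.map_err(|_| StatusCode::BAD_GATEWAY)?;

    // Single-segment name + 404: retry under library/ before giving up.
    let resp = if resp.status() == StatusCode::NOT_FOUND && !name.contains('/') {
        let retry_url = format!("{upstream}/v2/library/{name}/manifests/{reference}");
        client.get(retry_url).send().await.map_err(|_| StatusCode::BAD_GATEWAY)?
    } else {
        resp
    };

    if resp.status().is_success() {
        resp.bytes().await.map(|b| b.to_vec()).map_err(|_| StatusCode::BAD_GATEWAY)
    } else {
        Err(resp.status())
    }
}
```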
aa86633a04 security: add cargo-fuzz targets and ClusterFuzzLite config
Fuzz targets:
- fuzz_validation: storage key, Docker name, digest, reference validators
- fuzz_docker_manifest: Docker/OCI manifest media type detection

Infrastructure:
- lib.rs exposing validation module and docker_fuzz for fuzz harnesses
- ClusterFuzzLite project config (libfuzzer + ASan)
2026-03-17 11:20:17 +00:00
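A minimal libfuzzer-style harness of the kind this commit adds. The validator below is a simplified stand-in, not NORA's actual `fuzz_validation` target; the point is only that the function under test must never panic on arbitrary input.

```rust
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    if let Ok(s) = std::str::from_utf8(data) {
        // The validator must never panic, whatever the input looks like.
        let _ = validate_docker_name(s);
    }
});

/// Hypothetical, simplified Docker repository-name validator.
fn validate_docker_name(name: &str) -> bool {
    !name.is_empty()
        && name.len() <= 255
        && name
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || "._-/".contains(c))
}
```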
31afa1f70b fix: use tags for scorecard webapp verification 2026-03-17 11:04:48 +00:00
f36abd82ef fix: use scorecard-action by tag for webapp verification 2026-03-17 11:02:14 +00:00
ea6a86b0f1 docs: add OpenSSF Scorecard badge 2026-03-17 10:41:00 +00:00
638f99d8dc Merge pull request #32 from getnora-io/dependabot/cargo/tracing-subscriber-0.3.23
chore(deps): bump tracing-subscriber from 0.3.22 to 0.3.23
2026-03-17 13:38:22 +03:00
c55307a3af Merge pull request #31 from getnora-io/dependabot/cargo/clap-4.6.0
chore(deps): bump clap from 4.5.60 to 4.6.0
2026-03-17 13:38:20 +03:00
cc416f3adf Merge pull request #30 from getnora-io/dependabot/cargo/tempfile-3.27.0
chore(deps): bump tempfile from 3.26.0 to 3.27.0
2026-03-17 13:38:17 +03:00
30aedac238 Merge pull request #33 from getnora-io/security/scorecard-hardening
security: OpenSSF Scorecard hardening
2026-03-17 13:36:40 +03:00
34e85acd6e security: harden OpenSSF Scorecard compliance
- Pin all GitHub Actions by SHA hash (Pinned-Dependencies)
- Add top-level permissions: read-all (Token-Permissions)
- Add explicit job-level permissions (least privilege)
- Add OpenSSF Scorecard workflow with weekly schedule
- Publish scorecard results to scorecard.dev and GitHub Security tab
2026-03-17 10:30:15 +00:00
dependabot[bot]
41eefdd90d chore(deps): bump tracing-subscriber from 0.3.22 to 0.3.23
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.3.22 to 0.3.23.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.22...tracing-subscriber-0.3.23)

---
updated-dependencies:
- dependency-name: tracing-subscriber
  dependency-version: 0.3.23
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:25:13 +00:00
dependabot[bot]
94ca418155 chore(deps): bump clap from 4.5.60 to 4.6.0
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.60 to 4.6.0.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.60...clap_complete-v4.6.0)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:25:03 +00:00
dependabot[bot]
e72648a6c4 chore(deps): bump tempfile from 3.26.0 to 3.27.0
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.26.0 to 3.27.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.26.0...v3.27.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-version: 3.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-17 04:24:55 +00:00
18e93d23a9 docs: Russian documentation — admin guide, user guide, technical spec (Минцифры, the Russian Ministry of Digital Development) 2026-03-16 13:58:17 +00:00
db05adb060 docs: add CycloneDX SBOM (238 components, 0 vulnerabilities) 2026-03-16 13:29:59 +00:00
a57de6690e feat: nora mirror CLI + systemd + install script
nora mirror:
- Pre-fetch dependencies through NORA proxy cache
- npm: --lockfile (v1/v2/v3) and --packages with --all-versions
- pip: requirements.txt parser
- cargo: Cargo.lock parser
- maven: dependency:list output parser
- Concurrent downloads (--concurrency, default 8)
- Progress bar with indicatif
- Health check before start

dist/:
- nora.service — systemd unit with security hardening
- nora.env.example — environment configuration template
- install.sh — automated install (binary + user + systemd + config)

Tested: 103 tests pass, 0 clippy warnings, cargo audit clean.
Smoke: mirrored 70 npm packages from a real lockfile in 5.4s.
2026-03-16 13:27:37 +00:00
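A sketch of the bounded-concurrency pre-fetch loop at the core of `nora mirror`, assuming the `futures` and `reqwest` crates (the coarse success/failure counting is illustrative; the real command also parses lockfiles, checks health first, and draws a progress bar with indicatif):

```rust
use futures::stream::{self, StreamExt};

/// Warm the NORA cache by fetching every artifact URL through the proxy,
/// keeping at most `concurrency` requests in flight. Returns (ok, failed).
async fn mirror(urls: Vec<String>, concurrency: usize) -> (usize, usize) {
    let client = reqwest::Client::new();
    let results: Vec<bool> = stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                // Fetching through the proxy populates NORA's cache as a side effect.
                client
                    .get(&url)
                    .send()
                    .await
                    .and_then(|r| r.error_for_status())
                    .is_ok()
            }
        })
        .buffer_unordered(concurrency)
        .collect()
        .await;
    let ok = results.iter().filter(|&&ok| ok).count();
    (ok, results.len() - ok)
}
```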
d3439ae33d docs: changelog v0.2.31 2026-03-16 12:51:10 +00:00
b3b74b8b2d feat: npm full proxy — URL rewriting, scoped packages, publish, integrity cache (v0.2.31)
npm proxy:
- Rewrite tarball URLs in metadata to point to NORA (was broken — tarballs bypassed NORA)
- Scoped packages (@scope/package) full support in handler and repo index
- Metadata cache TTL (NORA_NPM_METADATA_TTL, default 300s) with stale-while-revalidate
- proxy_auth now wired into fetch_from_proxy (was configured but unused)

npm publish:
- PUT /npm/{package} — accepts standard npm publish payload
- Version immutability — 409 Conflict on duplicate version
- Tarball URL rewriting in published metadata

Security:
- SHA256 integrity verification on cached tarballs (immutable cache)
- Attachment filename validation (path traversal protection)
- Package name mismatch detection (URL vs payload)

Config:
- npm.metadata_ttl — configurable cache TTL (env: NORA_NPM_METADATA_TTL)
2026-03-16 12:32:16 +00:00
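A sketch of the tarball URL rewriting mentioned above, assuming `serde_json` for the metadata document. The path handling is deliberately naive (keep the upstream path, swap the origin for NORA's `/npm/` mount); the real handler also deals with scoped-package encoding and the publish flow.

```rust
use serde_json::Value;

/// Rewrite every "dist.tarball" URL in npm package metadata so installs go
/// back through NORA instead of hitting the upstream registry directly.
fn rewrite_tarball_urls(metadata: &mut Value, nora_base: &str) {
    let Some(versions) = metadata.get_mut("versions").and_then(Value::as_object_mut) else {
        return;
    };
    for pkg in versions.values_mut() {
        let Some(tarball) = pkg.pointer_mut("/dist/tarball") else { continue };
        let Some(upstream) = tarball.as_str().map(str::to_owned) else { continue };
        // "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz"
        //   -> "<nora_base>/npm/lodash/-/lodash-4.17.21.tgz"
        if let Some(path) = upstream.splitn(4, '/').nth(3) {
            *tarball = Value::String(format!("{}/npm/{}", nora_base.trim_end_matches('/'), path));
        }
    }
}
```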
d41b55fa3a style: cargo fmt 2026-03-16 08:58:27 +00:00
5a68bfd695 fix: dashboard — docker namespaced repos, npm proxy cache, upstream display (v0.2.30) 2026-03-16 08:55:33 +00:00
9c8fee5a5d docs: rewrite README — new slogan, roadmap, trim TLS/FSTEC, fix config example 2026-03-16 07:39:43 +00:00
bbff337b4c fix: clean up stale smoke test container before run 2026-03-15 22:25:37 +00:00
a73335c549 docs: trim README, link to docs site, fix website URL 2026-03-15 22:21:08 +00:00
ad6aba46b2 chore: remove internal release runbook from public repo 2026-03-15 21:57:05 +00:00
095270d113 fix: smoke test port mapping (4000, not 5000) 2026-03-15 21:54:13 +00:00
769f5fb01d docs: update CHANGELOG and README for v0.2.29 upstream auth 2026-03-15 21:50:14 +00:00
53884e143b v0.2.29: upstream auth, remove dead code, version bump
- Remove unused DockerAuth::fetch_with_auth() method
- Fix basic_auth_header docstring
- Bump to v0.2.29
2026-03-15 21:42:49 +00:00
0eb26f24f7 refactor: extract basic_auth_header helper, add plaintext credential warnings
- basic_auth_header() in config.rs replaces 6 inline STANDARD.encode calls
- warn_plaintext_credentials() logs warning at startup if auth is in config.toml
- All protocol handlers use shared helper instead of duplicating base64 logic
2026-03-15 21:37:51 +00:00
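A sketch of the shared helper this commit extracts, using the base64 `STANDARD` engine the message refers to (the exact signature and module layout in config.rs may differ):

```rust
use base64::{engine::general_purpose::STANDARD, Engine};

/// Build an HTTP Basic auth header value from "user:pass" credentials.
pub fn basic_auth_header(credentials: &str) -> String {
    format!("Basic {}", STANDARD.encode(credentials))
}
```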
fa962b2d6e feat: upstream auth for all protocols (Docker, Maven, npm, PyPI)
Wire up basic auth credentials for upstream registry proxying:
- Docker: pass configured auth to Bearer token requests
- Maven: support url|auth format in NORA_MAVEN_PROXIES env var
- npm: add NORA_NPM_PROXY_AUTH env var
- PyPI: add NORA_PYPI_PROXY_AUTH env var
- Mask credentials in logs (never log plaintext passwords)

Config examples:
  NORA_DOCKER_UPSTREAMS="https://registry.corp.com|user:pass"
  NORA_MAVEN_PROXIES="https://nexus.corp.com/maven2|user:pass"
  NORA_NPM_PROXY_AUTH="user:pass"
  NORA_PYPI_PROXY_AUTH="user:pass"
2026-03-15 21:29:20 +00:00
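A sketch of parsing the `url|user:pass` entries shown in the config examples above, plus the kind of masking used to keep passwords out of logs (struct and function names are illustrative):

```rust
/// One upstream entry from NORA_MAVEN_PROXIES / NORA_DOCKER_UPSTREAMS-style values.
#[derive(Debug)]
struct UpstreamEntry {
    url: String,
    auth: Option<String>, // "user:pass", later turned into a Basic auth header
}

fn parse_upstream(entry: &str) -> UpstreamEntry {
    match entry.split_once('|') {
        Some((url, auth)) => UpstreamEntry {
            url: url.trim().to_string(),
            auth: Some(auth.trim().to_string()),
        },
        None => UpstreamEntry {
            url: entry.trim().to_string(),
            auth: None,
        },
    }
}

/// Keep the user, hide the password when logging.
fn mask_credentials(credentials: &str) -> String {
    match credentials.split_once(':') {
        Some((user, _)) => format!("{user}:***"),
        None => "***".to_string(),
    }
}

fn main() {
    let e = parse_upstream("https://nexus.corp.com/maven2|user:pass");
    assert_eq!(e.url, "https://nexus.corp.com/maven2");
    assert_eq!(mask_credentials(e.auth.as_deref().unwrap()), "user:***");
}
```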
a1da4fff1e fix: integration tests match actual protocol support
- Docker, Maven: full push/pull (write support exists)
- npm, PyPI, Cargo: endpoint checks only (read-only proxy, no publish yet)
2026-03-15 19:58:36 +00:00
868c4feca7 feat: add Maven, PyPI, Cargo integration tests
- Maven: PUT artifact, GET and verify checksum
- PyPI: twine upload + pip install
- Cargo: API endpoint check
- Now testing all 5 protocols: Docker, npm, Maven, PyPI, Cargo
2026-03-15 19:53:27 +00:00
5b4cba1392 fix: add npm auth token for integration test publish 2026-03-15 19:49:54 +00:00
ad890be56a feat: add integration tests, release runbook, cache fallback
- CI: integration job — build NORA, docker push/pull, npm publish/install, API checks
- release: cache-from with ignore-error=true (no dependency on localhost:5000)
- RELEASE_RUNBOOK.md: rollback procedure, deploy order, verification steps
2026-03-15 19:36:38 +00:00
3b9ea37b0e fix: cargo fmt, add .gitleaks.toml allowlist for doc examples
- remove extra blank lines in openapi.rs and secrets/mod.rs
- allowlist commit 92155cf (curl -u admin:yourpassword in README)
2026-03-15 19:27:36 +00:00
26 changed files with 258 additions and 1531 deletions

Binary file not shown (image: 124 KiB before, 2.2 MiB after).

View File

@@ -144,7 +144,7 @@ jobs:
- name: Smoke test — verify alpine image starts and responds
run: |
docker rm -f nora-smoke 2>/dev/null || echo "WARNING: attestation failed, continuing without provenance"
docker rm -f nora-smoke 2>/dev/null || true
docker run --rm -d --name nora-smoke -p 5555:4000 -e NORA_HOST=0.0.0.0 \
${{ env.NORA }}/${{ env.IMAGE_NAME }}:latest
for i in $(seq 1 10); do
@@ -226,7 +226,7 @@ jobs:
cat nora-linux-amd64.sha256
- name: Generate SLSA provenance
uses: slsa-framework/slsa-github-generator/.github/actions/generate-builder@f7dd8c54c2067bafc12ca7a55595d5ee9b75204a # v2.1.0
uses: slsa-framework/slsa-github-generator/.github/actions/generate-builder@v2.1.0
id: provenance-generate
continue-on-error: true
@@ -234,7 +234,7 @@ jobs:
if: always()
run: |
# Generate provenance using gh attestation (built-in GitHub feature)
gh attestation create ./nora-linux-amd64 --repo ${{ github.repository }} --signer-workflow ${{ github.server_url }}/${{ github.repository }}/.github/workflows/release.yml 2>/dev/null || echo "WARNING: attestation failed, continuing without provenance"
gh attestation create ./nora-linux-amd64 --repo ${{ github.repository }} --signer-workflow ${{ github.server_url }}/${{ github.repository }}/.github/workflows/release.yml 2>/dev/null || true
# Also create a simple provenance file for scorecard
cat > nora-v${{ github.ref_name }}.provenance.json << 'PROVEOF'
{

.gitignore (vendored) — 17 changed lines
View File

@@ -4,14 +4,31 @@ data/
.env
.env.*
*.log
internal config
# Backup files
*.bak
# Internal files
SESSION*.md
TODO.md
docs-site/
docs/
*.txt
## Internal files
.internal/
examples/
# Generated by CI
*.cdx.json
# Dead crates (kept in repo for reference but excluded from workspace)
# nora-cli/ and nora-storage/ remain in git but are not built
# Playwright / Node
node_modules/
package.json
package-lock.json
/tmp/
scripts/

View File

@@ -3,11 +3,31 @@
title = "NORA gitleaks rules"
# Internal infrastructure — private IPs and domains
[[rules]]
id = "private-network"
description = "Private network addresses and internal domains"
regex = '''(10\.25\.1\.\d+|10\.0\.\d+\.\d+)'''
tags = ["network"]
[rules.allowlist]
regexTarget = "match"
regexes = ['''10\.0\.0\.0''']
[[rules]]
id = "internal-domains"
description = "Internal domain names"
regex = '''[a-z0-9]+\.(lab|internal|local)\b'''
tags = ["network"]
[[rules]]
id = "tailscale-hostnames"
description = "Tailscale MagicDNS hostnames"
regex = '''[a-z0-9]+\.tail[a-z0-9]+\.ts\.net'''
tags = ["network"]
[allowlist]
description = "Global allowlist for false positives"
description = "Allowlist for false positives"
paths = [
'''\.gitleaks\.toml$''',
'''\.gitignore$''',
]
regexTarget = "match"
# Test placeholder tokens (e.g. nra_00112233...)
regexes = ['''nra_0{2}[0-9a-f]{30}''']

View File

@@ -1,16 +1,5 @@
# Changelog
## [0.2.35] - 2026-03-20
### Added
- **Anonymous read mode** (`NORA_AUTH_ANONYMOUS_READ=true`): allow pull/download without credentials while requiring auth for push. Use case: public demo registries, read-only mirrors.
### Fixed
- Pin slsa-github-generator and codeql-action by SHA instead of tag
- Replace anonymous tuple with named struct in activity grouping (readability)
- Replace unwrap() with if-let pattern in activity grouping (safety)
- Add warning message on SLSA attestation failure instead of silent suppression
## [0.2.34] - 2026-03-20
### Fixed

View File

@@ -2,34 +2,6 @@
Thank you for your interest in contributing to NORA!
## Developer Certificate of Origin (DCO)
By submitting a pull request, you agree to the [Developer Certificate of Origin](https://developercertificate.org/).
Your contribution will be licensed under the [MIT License](LICENSE).
You confirm that you have the right to submit the code and that it does not violate any third-party rights.
## Project Governance
NORA uses a **Benevolent Dictator** governance model:
- **Maintainer:** [@devitway](https://github.com/devitway) — final decisions on features, releases, and architecture
- **Contributors:** anyone who submits issues, PRs, or docs improvements
- **Decision process:** proposals via GitHub Issues → discussion → maintainer decision
- **Release authority:** maintainer only
### Roles and Responsibilities
| Role | Person | Responsibilities |
|------|--------|-----------------|
| Maintainer | @devitway | Code review, releases, roadmap, security response |
| Contributor | anyone | Issues, PRs, documentation, testing |
| Dependabot | automated | Dependency updates |
### Continuity
The GitHub organization [getnora-io](https://github.com/getnora-io) has multiple admin accounts to ensure project continuity. Source code is MIT-licensed, enabling anyone to fork and continue the project.
## Getting Started
1. Fork the repository

Cargo.lock (generated) — 73 changed lines
View File

@@ -68,7 +68,7 @@ version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc"
dependencies = [
"windows-sys 0.61.2",
"windows-sys 0.60.2",
]
[[package]]
@@ -79,7 +79,7 @@ checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d"
dependencies = [
"anstyle",
"once_cell_polyfill",
"windows-sys 0.61.2",
"windows-sys 0.60.2",
]
[[package]]
@@ -97,18 +97,6 @@ dependencies = [
"derive_arbitrary",
]
[[package]]
name = "argon2"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c3610892ee6e0cbce8ae2700349fcf8f98adb0dbfbee85aec3c9179d29cc072"
dependencies = [
"base64ct",
"blake2",
"cpufeatures",
"password-hash",
]
[[package]]
name = "assert-json-diff"
version = "2.0.2"
@@ -200,12 +188,6 @@ version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "base64ct"
version = "1.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2af50177e190e07a26ab74f8b1efbfe2ef87da2116221318cb1c2e82baf7de06"
[[package]]
name = "bcrypt"
version = "0.19.0"
@@ -225,15 +207,6 @@ version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
[[package]]
name = "blake2"
version = "0.10.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46502ad458c9a52b69d4d4d32775c788b7a1b85e8bc9d482d92250fc0e3f8efe"
dependencies = [
"digest",
]
[[package]]
name = "block-buffer"
version = "0.10.4"
@@ -502,7 +475,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]
@@ -1304,9 +1277,8 @@ dependencies = [
[[package]]
name = "nora-registry"
version = "0.3.0"
version = "0.2.34"
dependencies = [
"argon2",
"async-trait",
"axum",
"base64",
@@ -1321,7 +1293,6 @@ dependencies = [
"indicatif",
"lazy_static",
"parking_lot",
"percent-encoding",
"prometheus",
"reqwest",
"serde",
@@ -1349,7 +1320,7 @@ version = "0.50.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5"
dependencies = [
"windows-sys 0.61.2",
"windows-sys 0.60.2",
]
[[package]]
@@ -1406,17 +1377,6 @@ dependencies = [
"windows-link",
]
[[package]]
name = "password-hash"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "346f04948ba92c43e8469c1ee6736c7563d71012b17d40745260fe106aac2166"
dependencies = [
"base64ct",
"rand_core 0.6.4",
"subtle",
]
[[package]]
name = "percent-encoding"
version = "2.3.2"
@@ -1625,7 +1585,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
dependencies = [
"rand_chacha",
"rand_core 0.9.5",
"rand_core",
]
[[package]]
@@ -1635,16 +1595,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core 0.9.5",
]
[[package]]
name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
dependencies = [
"getrandom 0.2.17",
"rand_core",
]
[[package]]
@@ -1816,7 +1767,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]
@@ -1845,9 +1796,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
version = "0.103.10"
version = "0.103.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
checksum = "d7df23109aa6c1567d1c575b9952556388da57401e4ace1d15f79eedad0d8f53"
dependencies = [
"ring",
"rustls-pki-types",
@@ -2105,7 +2056,7 @@ dependencies = [
"getrandom 0.4.1",
"once_cell",
"rustix",
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]
@@ -2778,7 +2729,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.61.2",
"windows-sys 0.52.0",
]
[[package]]

View File

@@ -6,7 +6,7 @@ members = [
]
[workspace.package]
version = "0.3.0"
version = "0.2.34"
edition = "2021"
license = "MIT"
authors = ["DevITWay <devitway@gmail.com>"]

View File

@@ -15,14 +15,10 @@ Open [http://localhost:4000/ui/](http://localhost:4000/ui/) — your registry is
## Why NORA
- **Zero-config** — single 32 MB binary, no database, no dependencies. `docker run` and it works.
- **Production-tested** — Docker, Maven, npm, PyPI, Cargo, Go, Raw. Used in real CI/CD with ArgoCD, Buildx cache, and air-gapped environments.
- **Production-tested** — Docker, Maven, npm, PyPI, Cargo, Raw. Used in real CI/CD with ArgoCD, Buildx cache, and air-gapped environments.
- **Secure by default** — [OpenSSF Scorecard](https://scorecard.dev/viewer/?uri=github.com/getnora-io/nora), signed releases, SBOM, fuzz testing, 200+ unit tests.
[![Release](https://img.shields.io/github/v/release/getnora-io/nora)](https://github.com/getnora-io/nora/releases)
[![Image Size](https://ghcr-badge.egpl.dev/getnora-io/nora/size?label=image)](https://github.com/getnora-io/nora/pkgs/container/nora)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
**32 MB** binary | **< 100 MB** RAM | **3s** startup | **7** registries
**32 MB** binary | **< 100 MB** RAM | **3s** startup | **6** registries
> Used in production at [DevIT Academy](https://github.com/devitway) since January 2026 for Docker images, Maven artifacts, and npm packages.
@@ -31,11 +27,10 @@ Open [http://localhost:4000/ui/](http://localhost:4000/ui/) — your registry is
| Registry | Mount Point | Upstream Proxy | Auth |
|----------|------------|----------------|------|
| Docker Registry v2 | `/v2/` | Docker Hub, GHCR, any OCI | ✓ |
| Maven | `/maven2/` | Maven Central, custom | proxy-only |
| Maven | `/maven2/` | Maven Central, custom | |
| npm | `/npm/` | npmjs.org, custom | ✓ |
| Cargo | `/cargo/` | — | ✓ |
| PyPI | `/simple/` | pypi.org, custom | ✓ |
| Go Modules | `/go/` | proxy.golang.org, custom | ✓ |
| Raw files | `/raw/` | — | ✓ |
## Quick Start
@@ -87,12 +82,6 @@ npm config set registry http://localhost:4000/npm/
npm publish
```
### Go Modules
```bash
GOPROXY=http://localhost:4000/go go get golang.org/x/text@latest
```
## Features
- **Web UI** — dashboard with search, browse, i18n (EN/RU)
@@ -165,9 +154,6 @@ proxy_timeout = 60
[[docker.upstreams]]
url = "https://registry-1.docker.io"
[go]
proxy = "https://proxy.golang.org"
```
## CLI Commands
@@ -195,7 +181,6 @@ nora mirror # Sync packages for offline use
| `/npm/` | npm |
| `/cargo/` | Cargo |
| `/simple/` | PyPI |
| `/go/` | Go Modules |
## TLS / HTTPS
@@ -235,6 +220,7 @@ See [CHANGELOG.md](CHANGELOG.md) for release history.
[![CII Best Practices](https://www.bestpractices.dev/projects/12207/badge)](https://www.bestpractices.dev/projects/12207)
[![Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/devitway/0f0538f1ed16d5d9951e4f2d3f79b699/raw/nora-coverage.json)](https://github.com/getnora-io/nora/actions/workflows/ci.yml)
[![CI](https://img.shields.io/github/actions/workflow/status/getnora-io/nora/ci.yml?label=CI)](https://github.com/getnora-io/nora/actions)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
- **Signed releases** — every release is signed with [cosign](https://github.com/sigstore/cosign)
- **SBOM** — SPDX + CycloneDX in every release

View File

@@ -50,6 +50,4 @@ When deploying NORA:
## Acknowledgments
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities in our release notes and CHANGELOG, unless the reporter requests anonymity.
If you have previously reported a vulnerability and would like to be credited, please let us know.
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities.

View File

@@ -49,9 +49,7 @@ tower_governor = "0.8"
governor = "0.10"
parking_lot = "0.12"
zeroize = { version = "1.8", features = ["derive"] }
argon2 = { version = "0.5", features = ["std", "rand"] }
tower-http = { version = "0.6", features = ["set-header"] }
percent-encoding = "2"
[dev-dependencies]
tempfile = "3"

View File

@@ -94,16 +94,6 @@ pub async fn auth_middleware(
return next.run(request).await;
}
// Allow anonymous read if configured
let is_read_method = matches!(
*request.method(),
axum::http::Method::GET | axum::http::Method::HEAD
);
if state.config.auth.anonymous_read && is_read_method {
// Read requests allowed without auth
return next.run(request).await;
}
// Extract Authorization header
let auth_header = request
.headers()

View File

@@ -26,8 +26,6 @@ pub struct Config {
#[serde(default)]
pub docker: DockerConfig,
#[serde(default)]
pub go: GoConfig,
#[serde(default)]
pub raw: RawConfig,
#[serde(default)]
pub auth: AuthConfig,
@@ -129,48 +127,6 @@ pub struct PypiConfig {
pub proxy_timeout: u64,
}
/// Go module proxy configuration (GOPROXY protocol)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GoConfig {
/// Upstream Go module proxy URL (default: https://proxy.golang.org)
#[serde(default = "default_go_proxy")]
pub proxy: Option<String>,
#[serde(default)]
pub proxy_auth: Option<String>, // "user:pass" for basic auth
#[serde(default = "default_timeout")]
pub proxy_timeout: u64,
/// Separate timeout for .zip downloads (default: 120s, zips can be large)
#[serde(default = "default_go_zip_timeout")]
pub proxy_timeout_zip: u64,
/// Maximum module zip size in bytes (default: 100MB)
#[serde(default = "default_go_max_zip_size")]
pub max_zip_size: u64,
}
fn default_go_proxy() -> Option<String> {
Some("https://proxy.golang.org".to_string())
}
fn default_go_zip_timeout() -> u64 {
120
}
fn default_go_max_zip_size() -> u64 {
104_857_600 // 100MB
}
impl Default for GoConfig {
fn default() -> Self {
Self {
proxy: default_go_proxy(),
proxy_auth: None,
proxy_timeout: 30,
proxy_timeout_zip: 120,
max_zip_size: 104_857_600,
}
}
}
/// Docker registry configuration with upstream proxy support
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DockerConfig {
@@ -244,9 +200,6 @@ fn default_max_file_size() -> u64 {
pub struct AuthConfig {
#[serde(default)]
pub enabled: bool,
/// Allow anonymous read access (pull/download without auth, push requires auth)
#[serde(default)]
pub anonymous_read: bool,
#[serde(default = "default_htpasswd_file")]
pub htpasswd_file: String,
#[serde(default = "default_token_storage")]
@@ -326,7 +279,6 @@ impl Default for AuthConfig {
fn default() -> Self {
Self {
enabled: false,
anonymous_read: false,
htpasswd_file: "users.htpasswd".to_string(),
token_storage: "data/tokens".to_string(),
}
@@ -431,10 +383,6 @@ impl Config {
);
}
}
// Go
if self.go.proxy_auth.is_some() && std::env::var("NORA_GO_PROXY_AUTH").is_err() {
tracing::warn!("Go proxy credentials in config.toml are plaintext — consider NORA_GO_PROXY_AUTH env var");
}
// npm
if self.npm.proxy_auth.is_some() && std::env::var("NORA_NPM_PROXY_AUTH").is_err() {
tracing::warn!("npm proxy credentials in config.toml are plaintext — consider NORA_NPM_PROXY_AUTH env var");
@@ -509,9 +457,6 @@ impl Config {
if let Ok(val) = env::var("NORA_AUTH_ENABLED") {
self.auth.enabled = val.to_lowercase() == "true" || val == "1";
}
if let Ok(val) = env::var("NORA_AUTH_ANONYMOUS_READ") {
self.auth.anonymous_read = val.to_lowercase() == "true" || val == "1";
}
if let Ok(val) = env::var("NORA_AUTH_HTPASSWD_FILE") {
self.auth.htpasswd_file = val;
}
@@ -677,7 +622,6 @@ impl Default for Config {
maven: MavenConfig::default(),
npm: NpmConfig::default(),
pypi: PypiConfig::default(),
go: GoConfig::default(),
docker: DockerConfig::default(),
raw: RawConfig::default(),
auth: AuthConfig::default(),
@@ -797,40 +741,10 @@ mod tests {
fn test_auth_config_default() {
let a = AuthConfig::default();
assert!(!a.enabled);
assert!(!a.anonymous_read);
assert_eq!(a.htpasswd_file, "users.htpasswd");
assert_eq!(a.token_storage, "data/tokens");
}
#[test]
fn test_auth_anonymous_read_from_toml() {
let toml = r#"
[server]
host = "127.0.0.1"
port = 4000
[storage]
mode = "local"
[auth]
enabled = true
anonymous_read = true
"#;
let config: Config = toml::from_str(toml).unwrap();
assert!(config.auth.enabled);
assert!(config.auth.anonymous_read);
}
#[test]
fn test_env_override_anonymous_read() {
let mut config = Config::default();
std::env::set_var("NORA_AUTH_ANONYMOUS_READ", "true");
config.apply_env_overrides();
assert!(config.auth.anonymous_read);
std::env::remove_var("NORA_AUTH_ANONYMOUS_READ");
}
#[test]
fn test_maven_proxy_entry_simple() {
let entry = MavenProxyEntry::Simple("https://repo.example.com".to_string());

View File

@@ -326,8 +326,7 @@ async fn run_server(config: Config, storage: Storage) {
.merge(registry::npm_routes())
.merge(registry::cargo_routes())
.merge(registry::pypi_routes())
.merge(registry::raw_routes())
.merge(registry::go_routes());
.merge(registry::raw_routes());
// Routes WITHOUT rate limiting (health, metrics, UI)
let public_routes = Router::new()

View File

@@ -1,522 +0,0 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
//! Go module proxy (GOPROXY protocol).
//!
//! Implements the 5 required endpoints:
//! GET /go/{module}/@v/list — list known versions
//! GET /go/{module}/@v/{ver}.info — version metadata (JSON)
//! GET /go/{module}/@v/{ver}.mod — go.mod file
//! GET /go/{module}/@v/{ver}.zip — module zip archive
//! GET /go/{module}/@latest — latest version info
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::{proxy_fetch, proxy_fetch_text, ProxyError};
use crate::AppState;
use axum::{
extract::{Path, State},
http::{header, HeaderValue, StatusCode},
response::{IntoResponse, Response},
routing::get,
Router,
};
use percent_encoding::percent_decode;
use std::sync::Arc;
pub fn routes() -> Router<Arc<AppState>> {
Router::new().route("/go/{*path}", get(handle))
}
/// Main handler — parses the wildcard path and dispatches to the right logic.
async fn handle(State(state): State<Arc<AppState>>, Path(path): Path<String>) -> Response {
// URL-decode the path: Go client sends %21 for !, Axum wildcard may not decode it
let path = percent_decode(path.as_bytes())
.decode_utf8()
.map(|s| s.into_owned())
.unwrap_or(path);
tracing::debug!(path = %path, "Go proxy request");
// Validate path: no traversal, no null bytes
if !is_safe_path(&path) {
tracing::debug!(path = %path, "Go proxy: unsafe path");
return StatusCode::BAD_REQUEST.into_response();
}
// Split: "github.com/!azure/sdk/@v/v1.0.0.info" → module + file
let (module_encoded, file) = match split_go_path(&path) {
Some(parts) => parts,
None => {
tracing::debug!(path = %path, "Go proxy: cannot split path");
return StatusCode::NOT_FOUND.into_response();
}
};
let storage_key = format!("go/{}", path);
let content_type = content_type_for(&file);
// Mutable endpoints: @v/list and @latest can be refreshed from upstream
let is_mutable = file == "@v/list" || file == "@latest";
// Immutable: .info, .mod, .zip — once cached, never overwrite
let is_immutable = !is_mutable;
// 1. Try local cache (for immutable files, this is authoritative)
if let Ok(data) = state.storage.get(&storage_key).await {
state.metrics.record_download("go");
state.metrics.record_cache_hit();
state.activity.push(ActivityEntry::new(
ActionType::CacheHit,
format_artifact(&module_encoded, &file),
"go",
"CACHE",
));
return with_content_type(data.to_vec(), content_type);
}
// 2. Try upstream proxy
let proxy_url = match &state.config.go.proxy {
Some(url) => url.clone(),
None => return StatusCode::NOT_FOUND.into_response(),
};
// Validate module path encoding (but keep encoded for upstream — proxy.golang.org expects ! encoding)
if decode_module_path(&module_encoded).is_err() {
return StatusCode::BAD_REQUEST.into_response();
}
let upstream_url = format!(
"{}/{}",
proxy_url.trim_end_matches('/'),
format_upstream_path(&module_encoded, &file)
);
// Use longer timeout for .zip files
let timeout = if file.ends_with(".zip") {
state.config.go.proxy_timeout_zip
} else {
state.config.go.proxy_timeout
};
// Fetch: binary for .zip, text for everything else
let data = if file.ends_with(".zip") {
proxy_fetch(
&state.http_client,
&upstream_url,
timeout,
state.config.go.proxy_auth.as_deref(),
)
.await
} else {
proxy_fetch_text(
&state.http_client,
&upstream_url,
timeout,
state.config.go.proxy_auth.as_deref(),
None,
)
.await
.map(|s| s.into_bytes())
};
match data {
Ok(bytes) => {
// Enforce size limit for .zip
if file.ends_with(".zip") && bytes.len() as u64 > state.config.go.max_zip_size {
tracing::warn!(
module = module_encoded,
size = bytes.len(),
limit = state.config.go.max_zip_size,
"Go module zip exceeds size limit"
);
return StatusCode::PAYLOAD_TOO_LARGE.into_response();
}
state.metrics.record_download("go");
state.metrics.record_cache_miss();
state.activity.push(ActivityEntry::new(
ActionType::ProxyFetch,
format_artifact(&module_encoded, &file),
"go",
"PROXY",
));
state
.audit
.log(AuditEntry::new("proxy_fetch", "api", "", "go", ""));
// Background cache: immutable = put_if_absent, mutable = always overwrite
let storage = state.storage.clone();
let key = storage_key.clone();
let data_clone = bytes.clone();
tokio::spawn(async move {
if is_immutable {
// Only write if not already cached (immutability guarantee)
if storage.stat(&key).await.is_none() {
let _ = storage.put(&key, &data_clone).await;
}
} else {
let _ = storage.put(&key, &data_clone).await;
}
});
state.repo_index.invalidate("go");
with_content_type(bytes, content_type)
}
Err(ProxyError::NotFound) => StatusCode::NOT_FOUND.into_response(),
Err(e) => {
tracing::debug!(
module = module_encoded,
file = file,
error = ?e,
"Go upstream proxy error"
);
StatusCode::BAD_GATEWAY.into_response()
}
}
}
// ============================================================================
// Module path encoding/decoding
// ============================================================================
/// Decode Go module path: `!x` → `X`
///
/// Go module proxy spec requires uppercase letters to be encoded as `!`
/// followed by the lowercase letter. Raw uppercase in encoded path is invalid.
fn decode_module_path(encoded: &str) -> Result<String, ()> {
let mut result = String::with_capacity(encoded.len());
let mut chars = encoded.chars();
while let Some(c) = chars.next() {
if c == '!' {
match chars.next() {
Some(next) if next.is_ascii_lowercase() => {
result.push(next.to_ascii_uppercase());
}
_ => return Err(()),
}
} else if c.is_ascii_uppercase() {
// Raw uppercase in encoded path is invalid per spec
return Err(());
} else {
result.push(c);
}
}
Ok(result)
}
/// Encode Go module path: `X` → `!x`
#[cfg(test)]
fn encode_module_path(path: &str) -> String {
let mut result = String::with_capacity(path.len() + 8);
for c in path.chars() {
if c.is_ascii_uppercase() {
result.push('!');
result.push(c.to_ascii_lowercase());
} else {
result.push(c);
}
}
result
}
// ============================================================================
// Path parsing helpers
// ============================================================================
/// Split Go path into (encoded_module, file).
///
/// Examples:
/// "github.com/user/repo/@v/v1.0.0.info" → ("github.com/user/repo", "@v/v1.0.0.info")
/// "github.com/user/repo/v2/@v/list" → ("github.com/user/repo/v2", "@v/list")
/// "github.com/user/repo/@latest" → ("github.com/user/repo", "@latest")
fn split_go_path(path: &str) -> Option<(String, String)> {
// Try @latest first (it's simpler)
if let Some(pos) = path.rfind("/@latest") {
let module = &path[..pos];
if !module.is_empty() {
return Some((module.to_string(), "@latest".to_string()));
}
}
// Try @v/ — find the last occurrence (handles /v2/@v/ correctly)
if let Some(pos) = path.rfind("/@v/") {
let module = &path[..pos];
let file = &path[pos + 1..]; // "@v/..."
if !module.is_empty() && !file.is_empty() {
return Some((module.to_string(), file.to_string()));
}
}
None
}
/// Path validation: no traversal attacks
fn is_safe_path(path: &str) -> bool {
!path.contains("..")
&& !path.starts_with('/')
&& !path.contains("//")
&& !path.contains('\0')
&& !path.is_empty()
}
/// Content-Type for Go proxy responses
fn content_type_for(file: &str) -> &'static str {
if file.ends_with(".info") || file == "@latest" {
"application/json"
} else if file.ends_with(".zip") {
"application/zip"
} else {
// .mod, @v/list
"text/plain; charset=utf-8"
}
}
/// Build upstream URL path (uses decoded module path)
fn format_upstream_path(module_decoded: &str, file: &str) -> String {
format!("{}/{}", module_decoded, file)
}
/// Human-readable artifact name for activity log
fn format_artifact(module: &str, file: &str) -> String {
if file == "@v/list" || file == "@latest" {
format!("{} {}", module, file)
} else if let Some(version_file) = file.strip_prefix("@v/") {
// "v1.0.0.info" → "module@v1.0.0"
let version = version_file
.rsplit_once('.')
.map(|(v, _ext)| v)
.unwrap_or(version_file);
format!("{}@{}", module, version)
} else {
format!("{}/{}", module, file)
}
}
/// Build response with Content-Type header
fn with_content_type(data: Vec<u8>, content_type: &'static str) -> Response {
(
StatusCode::OK,
[(header::CONTENT_TYPE, HeaderValue::from_static(content_type))],
data,
)
.into_response()
}
// ============================================================================
// Tests
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
// ── Encoding/decoding ───────────────────────────────────────────────
#[test]
fn test_decode_azure() {
assert_eq!(
decode_module_path("github.com/!azure/sdk").unwrap(),
"github.com/Azure/sdk"
);
}
#[test]
fn test_decode_multiple_uppercase() {
assert_eq!(
decode_module_path("!google!cloud!platform/foo").unwrap(),
"GoogleCloudPlatform/foo"
);
}
#[test]
fn test_decode_no_uppercase() {
assert_eq!(
decode_module_path("github.com/user/repo").unwrap(),
"github.com/user/repo"
);
}
#[test]
fn test_decode_invalid_bang_at_end() {
assert!(decode_module_path("foo!").is_err());
}
#[test]
fn test_decode_invalid_bang_followed_by_uppercase() {
assert!(decode_module_path("foo!A").is_err());
}
#[test]
fn test_decode_raw_uppercase_is_invalid() {
assert!(decode_module_path("github.com/Azure/sdk").is_err());
}
#[test]
fn test_encode_roundtrip() {
let original = "github.com/Azure/azure-sdk-for-go";
let encoded = encode_module_path(original);
assert_eq!(encoded, "github.com/!azure/azure-sdk-for-go");
assert_eq!(decode_module_path(&encoded).unwrap(), original);
}
#[test]
fn test_encode_no_change() {
assert_eq!(
encode_module_path("github.com/user/repo"),
"github.com/user/repo"
);
}
// ── Path splitting ──────────────────────────────────────────────────
#[test]
fn test_split_version_info() {
let (module, file) = split_go_path("github.com/user/repo/@v/v1.0.0.info").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/v1.0.0.info");
}
#[test]
fn test_split_version_list() {
let (module, file) = split_go_path("github.com/user/repo/@v/list").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/list");
}
#[test]
fn test_split_latest() {
let (module, file) = split_go_path("github.com/user/repo/@latest").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@latest");
}
#[test]
fn test_split_major_version_suffix() {
let (module, file) = split_go_path("github.com/user/repo/v2/@v/list").unwrap();
assert_eq!(module, "github.com/user/repo/v2");
assert_eq!(file, "@v/list");
}
#[test]
fn test_split_incompatible_version() {
let (module, file) =
split_go_path("github.com/user/repo/@v/v4.1.2+incompatible.info").unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/v4.1.2+incompatible.info");
}
#[test]
fn test_split_pseudo_version() {
let (module, file) =
split_go_path("github.com/user/repo/@v/v0.0.0-20210101000000-abcdef123456.info")
.unwrap();
assert_eq!(module, "github.com/user/repo");
assert_eq!(file, "@v/v0.0.0-20210101000000-abcdef123456.info");
}
#[test]
fn test_split_no_at() {
assert!(split_go_path("github.com/user/repo/v1.0.0").is_none());
}
#[test]
fn test_split_empty_module() {
assert!(split_go_path("/@v/list").is_none());
}
// ── Path safety ─────────────────────────────────────────────────────
#[test]
fn test_safe_path_normal() {
assert!(is_safe_path("github.com/user/repo/@v/list"));
}
#[test]
fn test_reject_traversal() {
assert!(!is_safe_path("../../etc/passwd"));
}
#[test]
fn test_reject_absolute() {
assert!(!is_safe_path("/etc/passwd"));
}
#[test]
fn test_reject_double_slash() {
assert!(!is_safe_path("github.com//evil/@v/list"));
}
#[test]
fn test_reject_null() {
assert!(!is_safe_path("github.com/\0evil/@v/list"));
}
#[test]
fn test_reject_empty() {
assert!(!is_safe_path(""));
}
// ── Content-Type ────────────────────────────────────────────────────
#[test]
fn test_content_type_info() {
assert_eq!(content_type_for("@v/v1.0.0.info"), "application/json");
}
#[test]
fn test_content_type_latest() {
assert_eq!(content_type_for("@latest"), "application/json");
}
#[test]
fn test_content_type_zip() {
assert_eq!(content_type_for("@v/v1.0.0.zip"), "application/zip");
}
#[test]
fn test_content_type_mod() {
assert_eq!(
content_type_for("@v/v1.0.0.mod"),
"text/plain; charset=utf-8"
);
}
#[test]
fn test_content_type_list() {
assert_eq!(content_type_for("@v/list"), "text/plain; charset=utf-8");
}
// ── Artifact formatting ─────────────────────────────────────────────
#[test]
fn test_format_artifact_version() {
assert_eq!(
format_artifact("github.com/user/repo", "@v/v1.0.0.info"),
"github.com/user/repo@v1.0.0"
);
}
#[test]
fn test_format_artifact_list() {
assert_eq!(
format_artifact("github.com/user/repo", "@v/list"),
"github.com/user/repo @v/list"
);
}
#[test]
fn test_format_artifact_latest() {
assert_eq!(
format_artifact("github.com/user/repo", "@latest"),
"github.com/user/repo @latest"
);
}
#[test]
fn test_format_artifact_zip() {
assert_eq!(
format_artifact("github.com/user/repo", "@v/v1.0.0.zip"),
"github.com/user/repo@v1.0.0"
);
}
}

View File

@@ -3,7 +3,7 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::proxy_fetch;
use crate::config::basic_auth_header;
use crate::AppState;
use axum::{
body::Bytes,
@@ -14,6 +14,7 @@ use axum::{
Router,
};
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
@@ -52,7 +53,7 @@ async fn download(State(state): State<Arc<AppState>>, Path(path): Path<String>)
for proxy in &state.config.maven.proxies {
let url = format!("{}/{}", proxy.url().trim_end_matches('/'), path);
match proxy_fetch(
match fetch_from_proxy(
&state.http_client,
&url,
state.config.maven.proxy_timeout,
@@ -127,6 +128,25 @@ async fn upload(
}
}
async fn fetch_from_proxy(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
) -> Result<Vec<u8>, ()> {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
let response = request.send().await.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.bytes().await.map(|b| b.to_vec()).map_err(|_| ())
}
fn with_content_type(
path: &str,
data: Bytes,

View File

@@ -4,7 +4,6 @@
mod cargo_registry;
pub mod docker;
pub mod docker_auth;
mod go;
mod maven;
mod npm;
mod pypi;
@@ -13,132 +12,7 @@ mod raw;
pub use cargo_registry::routes as cargo_routes;
pub use docker::routes as docker_routes;
pub use docker_auth::DockerAuth;
pub use go::routes as go_routes;
pub use maven::routes as maven_routes;
pub use npm::routes as npm_routes;
pub use pypi::routes as pypi_routes;
pub use raw::routes as raw_routes;
use crate::config::basic_auth_header;
use std::time::Duration;
/// Fetch from upstream proxy with timeout and 1 retry.
///
/// On transient errors (timeout, connection reset), retries once after a short delay.
/// Non-retryable errors (4xx) fail immediately.
pub(crate) async fn proxy_fetch(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
) -> Result<Vec<u8>, ProxyError> {
for attempt in 0..2 {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
match request.send().await {
Ok(response) => {
if response.status().is_success() {
return response
.bytes()
.await
.map(|b| b.to_vec())
.map_err(|e| ProxyError::Network(e.to_string()));
}
let status = response.status().as_u16();
// Don't retry client errors (4xx)
if (400..500).contains(&status) {
return Err(ProxyError::NotFound);
}
// Server error (5xx) — retry
if attempt == 0 {
tracing::debug!(url, status, "upstream 5xx, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Upstream(status));
}
Err(e) => {
if attempt == 0 {
tracing::debug!(url, error = %e, "upstream error, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Network(e.to_string()));
}
}
}
Err(ProxyError::Network("max retries exceeded".into()))
}
#[derive(Debug)]
#[allow(dead_code)]
pub(crate) enum ProxyError {
NotFound,
Upstream(u16),
Network(String),
}
/// Fetch text content from upstream proxy with timeout and 1 retry.
/// Same as proxy_fetch but returns String (for HTML pages like PyPI simple index).
pub(crate) async fn proxy_fetch_text(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
extra_headers: Option<(&str, &str)>,
) -> Result<String, ProxyError> {
for attempt in 0..2 {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
if let Some((key, val)) = extra_headers {
request = request.header(key, val);
}
match request.send().await {
Ok(response) => {
if response.status().is_success() {
return response
.text()
.await
.map_err(|e| ProxyError::Network(e.to_string()));
}
let status = response.status().as_u16();
if (400..500).contains(&status) {
return Err(ProxyError::NotFound);
}
if attempt == 0 {
tracing::debug!(url, status, "upstream 5xx, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Upstream(status));
}
Err(e) => {
if attempt == 0 {
tracing::debug!(url, error = %e, "upstream error, retrying in 1s");
tokio::time::sleep(Duration::from_secs(1)).await;
continue;
}
return Err(ProxyError::Network(e.to_string()));
}
}
}
Err(ProxyError::Network("max retries exceeded".into()))
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_proxy_fetch_invalid_url() {
let client = reqwest::Client::new();
let result = proxy_fetch(&client, "http://127.0.0.1:1/nonexistent", 2, None).await;
assert!(matches!(result, Err(ProxyError::Network(_))));
}
}

View File

@@ -3,7 +3,7 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::proxy_fetch;
use crate::config::basic_auth_header;
use crate::AppState;
use axum::{
body::Bytes,
@@ -16,6 +16,7 @@ use axum::{
use base64::Engine;
use sha2::Digest;
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
@@ -139,7 +140,7 @@ async fn handle_request(State(state): State<Arc<AppState>>, Path(path): Path<Str
if let Some(proxy_url) = &state.config.npm.proxy {
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
if let Ok(data) = proxy_fetch(
if let Ok(data) = fetch_from_proxy(
&state.http_client,
&url,
state.config.npm.proxy_timeout,
@@ -207,7 +208,7 @@ async fn refetch_metadata(state: &Arc<AppState>, path: &str, key: &str) -> Optio
let proxy_url = state.config.npm.proxy.as_ref()?;
let url = format!("{}/{}", proxy_url.trim_end_matches('/'), path);
let data = proxy_fetch(
let data = fetch_from_proxy(
&state.http_client,
&url,
state.config.npm.proxy_timeout,
@@ -418,6 +419,25 @@ async fn handle_publish(
// Helpers
// ============================================================================
async fn fetch_from_proxy(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
) -> Result<Vec<u8>, ()> {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
let response = request.send().await.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.bytes().await.map(|b| b.to_vec()).map_err(|_| ())
}
fn with_content_type(
is_tarball: bool,
data: Bytes,

View File

@@ -3,7 +3,7 @@
use crate::activity_log::{ActionType, ActivityEntry};
use crate::audit::AuditEntry;
use crate::registry::{proxy_fetch, proxy_fetch_text};
use crate::config::basic_auth_header;
use crate::AppState;
use axum::{
extract::{Path, State},
@@ -13,6 +13,7 @@ use axum::{
Router,
};
use std::sync::Arc;
use std::time::Duration;
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
@@ -86,12 +87,11 @@ async fn package_versions(
if let Some(proxy_url) = &state.config.pypi.proxy {
let url = format!("{}/{}/", proxy_url.trim_end_matches('/'), normalized);
if let Ok(html) = proxy_fetch_text(
if let Ok(html) = fetch_package_page(
&state.http_client,
&url,
state.config.pypi.proxy_timeout,
state.config.pypi.proxy_auth.as_deref(),
Some(("Accept", "text/html")),
)
.await
{
@@ -142,18 +142,17 @@ async fn download_file(
// First, fetch the package page to find the actual download URL
let page_url = format!("{}/{}/", proxy_url.trim_end_matches('/'), normalized);
if let Ok(html) = proxy_fetch_text(
if let Ok(html) = fetch_package_page(
&state.http_client,
&page_url,
state.config.pypi.proxy_timeout,
state.config.pypi.proxy_auth.as_deref(),
Some(("Accept", "text/html")),
)
.await
{
// Find the URL for this specific file
if let Some(file_url) = find_file_url(&html, &filename) {
if let Ok(data) = proxy_fetch(
if let Ok(data) = fetch_file(
&state.http_client,
&file_url,
state.config.pypi.proxy_timeout,
@@ -206,6 +205,49 @@ fn normalize_name(name: &str) -> String {
name.to_lowercase().replace(['-', '_', '.'], "-")
}
/// Fetch package page from upstream
async fn fetch_package_page(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
) -> Result<String, ()> {
let mut request = client
.get(url)
.timeout(Duration::from_secs(timeout_secs))
.header("Accept", "text/html");
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
let response = request.send().await.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.text().await.map_err(|_| ())
}
/// Fetch file from upstream
async fn fetch_file(
client: &reqwest::Client,
url: &str,
timeout_secs: u64,
auth: Option<&str>,
) -> Result<Vec<u8>, ()> {
let mut request = client.get(url).timeout(Duration::from_secs(timeout_secs));
if let Some(credentials) = auth {
request = request.header("Authorization", basic_auth_header(credentials));
}
let response = request.send().await.map_err(|_| ())?;
if !response.status().is_success() {
return Err(());
}
response.bytes().await.map(|b| b.to_vec()).map_err(|_| ())
}
/// Rewrite PyPI links to point to our registry
fn rewrite_pypi_links(html: &str, package_name: &str) -> String {
// Simple regex-free approach: find href="..." and rewrite

View File
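
Note: the body of rewrite_pypi_links is truncated above; its comment only says it finds href="..." and rewrites without regular expressions. Below is an editor's sketch of one regex-free way to do such a rewrite, not the project's actual code: rewrite_hrefs is a hypothetical helper and the "/pypi/file/" prefix is an assumed local route used only for illustration.

// Rewrite upstream hrefs to a local route by scanning for href="..."
// without regular expressions (illustrative sketch only).
fn rewrite_hrefs(html: &str, local_prefix: &str) -> String {
    let mut out = String::with_capacity(html.len());
    let mut rest = html;
    while let Some(start) = rest.find("href=\"") {
        let after = &rest[start + 6..];
        match after.find('"') {
            Some(end) => {
                let url = &after[..end];
                // Keep only the filename portion and point it at the local route.
                let file = url.rsplit('/').next().unwrap_or(url);
                out.push_str(&rest[..start]);
                out.push_str("href=\"");
                out.push_str(local_prefix);
                out.push_str(file);
                out.push('"');
                rest = &after[end + 1..];
            }
            None => break,
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let html = r#"<a href="https://files.pythonhosted.org/packages/ab/cd/requests-2.31.0.tar.gz#sha256=abc">requests</a>"#;
    // Prints the same anchor with the href pointed at /pypi/file/requests-2.31.0.tar.gz#sha256=abc
    println!("{}", rewrite_hrefs(html, "/pypi/file/"));
}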

@@ -80,8 +80,6 @@ pub struct RepoIndex {
pub npm: RegistryIndex,
pub cargo: RegistryIndex,
pub pypi: RegistryIndex,
pub go: RegistryIndex,
pub raw: RegistryIndex,
}
impl RepoIndex {
@@ -92,8 +90,6 @@ impl RepoIndex {
npm: RegistryIndex::new(),
cargo: RegistryIndex::new(),
pypi: RegistryIndex::new(),
go: RegistryIndex::new(),
raw: RegistryIndex::new(),
}
}
@@ -105,8 +101,6 @@ impl RepoIndex {
"npm" => self.npm.invalidate(),
"cargo" => self.cargo.invalidate(),
"pypi" => self.pypi.invalidate(),
"go" => self.go.invalidate(),
"raw" => self.raw.invalidate(),
_ => {}
}
}
@@ -119,8 +113,6 @@ impl RepoIndex {
"npm" => &self.npm,
"cargo" => &self.cargo,
"pypi" => &self.pypi,
"go" => &self.go,
"raw" => &self.raw,
_ => return Arc::new(Vec::new()),
};
@@ -140,8 +132,6 @@ impl RepoIndex {
"npm" => build_npm_index(storage).await,
"cargo" => build_cargo_index(storage).await,
"pypi" => build_pypi_index(storage).await,
"go" => build_go_index(storage).await,
"raw" => build_raw_index(storage).await,
_ => Vec::new(),
};
info!(registry = registry, count = data.len(), "Index rebuilt");
@@ -152,15 +142,13 @@ impl RepoIndex {
}
/// Get counts for stats (no rebuild, just current state)
pub fn counts(&self) -> (usize, usize, usize, usize, usize, usize, usize) {
pub fn counts(&self) -> (usize, usize, usize, usize, usize) {
(
self.docker.count(),
self.maven.count(),
self.npm.count(),
self.cargo.count(),
self.pypi.count(),
self.go.count(),
self.raw.count(),
)
}
}
@@ -341,57 +329,6 @@ async fn build_pypi_index(storage: &Storage) -> Vec<RepoInfo> {
to_sorted_vec(packages)
}
async fn build_go_index(storage: &Storage) -> Vec<RepoInfo> {
let keys = storage.list("go/").await;
let mut modules: HashMap<String, (usize, u64, u64)> = HashMap::new();
for key in &keys {
if let Some(rest) = key.strip_prefix("go/") {
// Pattern: go/{module}/@v/{version}.zip
// Count .zip files as versions (authoritative artifacts)
if rest.contains("/@v/") && key.ends_with(".zip") {
// Extract module path: everything before /@v/
if let Some(pos) = rest.rfind("/@v/") {
let module = &rest[..pos];
let entry = modules.entry(module.to_string()).or_insert((0, 0, 0));
entry.0 += 1;
if let Some(meta) = storage.stat(key).await {
entry.1 += meta.size;
if meta.modified > entry.2 {
entry.2 = meta.modified;
}
}
}
}
}
}
to_sorted_vec(modules)
}
async fn build_raw_index(storage: &Storage) -> Vec<RepoInfo> {
let keys = storage.list("raw/").await;
let mut files: HashMap<String, (usize, u64, u64)> = HashMap::new();
for key in &keys {
if let Some(rest) = key.strip_prefix("raw/") {
// Group by top-level directory
let group = rest.split('/').next().unwrap_or(rest).to_string();
let entry = files.entry(group).or_insert((0, 0, 0));
entry.0 += 1;
if let Some(meta) = storage.stat(key).await {
entry.1 += meta.size;
if meta.modified > entry.2 {
entry.2 = meta.modified;
}
}
}
}
to_sorted_vec(files)
}
/// Convert HashMap to sorted Vec<RepoInfo>
fn to_sorted_vec(map: HashMap<String, (usize, u64, u64)>) -> Vec<RepoInfo> {
let mut result: Vec<_> = map
@@ -545,8 +482,8 @@ mod tests {
#[test]
fn test_repo_index_new() {
let idx = RepoIndex::new();
let (d, m, n, c, p, g, r) = idx.counts();
assert_eq!((d, m, n, c, p, g, r), (0, 0, 0, 0, 0, 0, 0));
let (d, m, n, c, p) = idx.counts();
assert_eq!((d, m, n, c, p), (0, 0, 0, 0, 0));
}
#[test]
@@ -558,15 +495,14 @@ mod tests {
idx.invalidate("npm");
idx.invalidate("cargo");
idx.invalidate("pypi");
idx.invalidate("raw");
idx.invalidate("unknown"); // should be a no-op
}
#[test]
fn test_repo_index_default() {
let idx = RepoIndex::default();
let (d, m, n, c, p, g, r) = idx.counts();
assert_eq!((d, m, n, c, p, g, r), (0, 0, 0, 0, 0, 0, 0));
let (d, m, n, c, p) = idx.counts();
assert_eq!((d, m, n, c, p), (0, 0, 0, 0, 0));
}
#[test]

View File

@@ -1,14 +1,9 @@
// Copyright (c) 2026 Volkov Pavel | DevITWay
// SPDX-License-Identifier: MIT
use argon2::{
password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
Argon2,
};
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};
use std::time::{SystemTime, UNIX_EPOCH};
use thiserror::Error;
@@ -71,12 +66,8 @@ pub struct TokenStore {
impl TokenStore {
/// Create a new token store
pub fn new(storage_path: &Path) -> Self {
// Ensure directory exists with restricted permissions
// Ensure directory exists
let _ = fs::create_dir_all(storage_path);
#[cfg(unix)]
{
let _ = fs::set_permissions(storage_path, fs::Permissions::from_mode(0o700));
}
Self {
storage_path: storage_path.to_path_buf(),
}
@@ -96,9 +87,7 @@ impl TokenStore {
TOKEN_PREFIX,
Uuid::new_v4().to_string().replace("-", "")
);
let token_hash = hash_token_argon2(&raw_token)?;
// Use SHA256 of token as filename (deterministic, for lookup)
let file_id = sha256_hex(&raw_token);
let token_hash = hash_token(&raw_token);
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
@@ -108,7 +97,7 @@ impl TokenStore {
let expires_at = now + (ttl_days * 24 * 60 * 60);
let info = TokenInfo {
token_hash,
token_hash: token_hash.clone(),
user: user.to_string(),
created_at: now,
expires_at,
@@ -117,12 +106,13 @@ impl TokenStore {
role,
};
// Save to file with restricted permissions
let file_path = self.storage_path.join(format!("{}.json", &file_id[..16]));
// Save to file
let file_path = self
.storage_path
.join(format!("{}.json", &token_hash[..16]));
let json =
serde_json::to_string_pretty(&info).map_err(|e| TokenError::Storage(e.to_string()))?;
fs::write(&file_path, &json).map_err(|e| TokenError::Storage(e.to_string()))?;
set_file_permissions_600(&file_path);
fs::write(&file_path, json).map_err(|e| TokenError::Storage(e.to_string()))?;
Ok(raw_token)
}
@@ -133,43 +123,22 @@ impl TokenStore {
return Err(TokenError::InvalidFormat);
}
let file_id = sha256_hex(token);
let file_path = self.storage_path.join(format!("{}.json", &file_id[..16]));
let token_hash = hash_token(token);
let file_path = self
.storage_path
.join(format!("{}.json", &token_hash[..16]));
// TOCTOU fix: read directly, handle NotFound from IO error
let content = match fs::read_to_string(&file_path) {
Ok(c) => c,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
return Err(TokenError::NotFound);
}
Err(e) => return Err(TokenError::Storage(e.to_string())),
};
if !file_path.exists() {
return Err(TokenError::NotFound);
}
let content =
fs::read_to_string(&file_path).map_err(|e| TokenError::Storage(e.to_string()))?;
let mut info: TokenInfo =
serde_json::from_str(&content).map_err(|e| TokenError::Storage(e.to_string()))?;
// Verify hash: try Argon2id first, fall back to legacy SHA256
let hash_valid = if info.token_hash.starts_with("$argon2") {
verify_token_argon2(token, &info.token_hash)
} else {
// Legacy SHA256 hash (no salt) — verify and migrate
let legacy_hash = sha256_hex(token);
if info.token_hash == legacy_hash {
// Migrate to Argon2id
if let Ok(new_hash) = hash_token_argon2(token) {
info.token_hash = new_hash;
if let Ok(json) = serde_json::to_string_pretty(&info) {
let _ = fs::write(&file_path, &json);
set_file_permissions_600(&file_path);
}
}
true
} else {
false
}
};
if !hash_valid {
// Verify hash matches
if info.token_hash != token_hash {
return Err(TokenError::NotFound);
}
@@ -186,8 +155,7 @@ impl TokenStore {
// Update last_used
info.last_used = Some(now);
if let Ok(json) = serde_json::to_string_pretty(&info) {
let _ = fs::write(&file_path, &json);
set_file_permissions_600(&file_path);
let _ = fs::write(&file_path, json);
}
Ok((info.user, info.role))
@@ -217,12 +185,13 @@ impl TokenStore {
pub fn revoke_token(&self, hash_prefix: &str) -> Result<(), TokenError> {
let file_path = self.storage_path.join(format!("{}.json", hash_prefix));
// TOCTOU fix: try remove directly
match fs::remove_file(&file_path) {
Ok(()) => Ok(()),
Err(e) if e.kind() == std::io::ErrorKind::NotFound => Err(TokenError::NotFound),
Err(e) => Err(TokenError::Storage(e.to_string())),
if !file_path.exists() {
return Err(TokenError::NotFound);
}
fs::remove_file(&file_path).map_err(|e| TokenError::Storage(e.to_string()))?;
Ok(())
}
/// Revoke all tokens for a user
@@ -245,41 +214,13 @@ impl TokenStore {
}
}
/// Hash a token using Argon2id with random salt
fn hash_token_argon2(token: &str) -> Result<String, TokenError> {
let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();
argon2
.hash_password(token.as_bytes(), &salt)
.map(|h| h.to_string())
.map_err(|e| TokenError::Storage(format!("hash error: {e}")))
}
/// Verify a token against an Argon2id hash
fn verify_token_argon2(token: &str, hash: &str) -> bool {
match PasswordHash::new(hash) {
Ok(parsed) => Argon2::default()
.verify_password(token.as_bytes(), &parsed)
.is_ok(),
Err(_) => false,
}
}
/// SHA256 hex digest (used for file naming and legacy hash verification)
fn sha256_hex(input: &str) -> String {
/// Hash a token using SHA256
fn hash_token(token: &str) -> String {
let mut hasher = Sha256::new();
hasher.update(input.as_bytes());
hasher.update(token.as_bytes());
format!("{:x}", hasher.finalize())
}
/// Set file permissions to 600 (owner read/write only)
fn set_file_permissions_600(path: &Path) {
#[cfg(unix)]
{
let _ = fs::set_permissions(path, fs::Permissions::from_mode(0o600));
}
}
#[derive(Debug, Error)]
pub enum TokenError {
#[error("Invalid token format")]
@@ -313,19 +254,6 @@ mod tests {
assert_eq!(token.len(), 4 + 32); // prefix + uuid without dashes
}
#[test]
fn test_token_hash_is_argon2() {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let tokens = store.list_tokens("testuser");
assert!(tokens[0].token_hash.starts_with("$argon2"));
}
#[test]
fn test_verify_valid_token() {
let temp_dir = TempDir::new().unwrap();
@@ -363,80 +291,24 @@ mod tests {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
// Create token and manually set it as expired
let token = store
.create_token("testuser", 1, None, Role::Write)
.unwrap();
let file_id = sha256_hex(&token);
let file_path = temp_dir.path().join(format!("{}.json", &file_id[..16]));
let token_hash = hash_token(&token);
let file_path = temp_dir.path().join(format!("{}.json", &token_hash[..16]));
// Read and modify the token to be expired
let content = std::fs::read_to_string(&file_path).unwrap();
let mut info: TokenInfo = serde_json::from_str(&content).unwrap();
info.expires_at = 0;
info.expires_at = 0; // Set to epoch (definitely expired)
std::fs::write(&file_path, serde_json::to_string(&info).unwrap()).unwrap();
// Token should now be expired
let result = store.verify_token(&token);
assert!(matches!(result, Err(TokenError::Expired)));
}
#[test]
fn test_legacy_sha256_migration() {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
// Simulate a legacy token with SHA256 hash
let raw_token = "nra_00112233445566778899aabbccddeeff";
let legacy_hash = sha256_hex(raw_token);
let file_id = sha256_hex(raw_token);
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let info = TokenInfo {
token_hash: legacy_hash.clone(),
user: "legacyuser".to_string(),
created_at: now,
expires_at: now + 86400,
last_used: None,
description: None,
role: Role::Read,
};
let file_path = temp_dir.path().join(format!("{}.json", &file_id[..16]));
fs::write(&file_path, serde_json::to_string_pretty(&info).unwrap()).unwrap();
// Verify should work with legacy hash
let (user, role) = store.verify_token(raw_token).unwrap();
assert_eq!(user, "legacyuser");
assert_eq!(role, Role::Read);
// After verification, hash should be migrated to Argon2id
let content = fs::read_to_string(&file_path).unwrap();
let updated: TokenInfo = serde_json::from_str(&content).unwrap();
assert!(updated.token_hash.starts_with("$argon2"));
}
#[test]
fn test_file_permissions() {
let temp_dir = TempDir::new().unwrap();
let store = TokenStore::new(temp_dir.path());
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let file_id = sha256_hex(&token);
let file_path = temp_dir.path().join(format!("{}.json", &file_id[..16]));
#[cfg(unix)]
{
let metadata = fs::metadata(&file_path).unwrap();
let mode = metadata.permissions().mode() & 0o777;
assert_eq!(mode, 0o600);
}
}
#[test]
fn test_list_tokens() {
let temp_dir = TempDir::new().unwrap();
@@ -464,13 +336,16 @@ mod tests {
let token = store
.create_token("testuser", 30, None, Role::Write)
.unwrap();
let file_id = sha256_hex(&token);
let hash_prefix = &file_id[..16];
let token_hash = hash_token(&token);
let hash_prefix = &token_hash[..16];
// Verify token works
assert!(store.verify_token(&token).is_ok());
// Revoke
store.revoke_token(hash_prefix).unwrap();
// Verify token no longer works
let result = store.verify_token(&token);
assert!(matches!(result, Err(TokenError::NotFound)));
}
@@ -509,8 +384,10 @@ mod tests {
.create_token("testuser", 30, None, Role::Write)
.unwrap();
// First verification
store.verify_token(&token).unwrap();
// Check last_used is set
let tokens = store.list_tokens("testuser");
assert!(tokens[0].last_used.is_some());
}

View File
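
Note: one side of this diff stores the SHA-256 hex digest of the token and names the JSON file after its first 16 hex characters, while the other side keeps an Argon2 hash and a separate SHA-256 file id. A self-contained sketch of the digest-to-filename derivation, using the same sha2 calls that appear in the diff; the token value only mirrors the shape of the test fixture:

use sha2::{Digest, Sha256};

fn hash_token(token: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(token.as_bytes());
    format!("{:x}", hasher.finalize())
}

fn main() {
    let token = "nra_00112233445566778899aabbccddeeff"; // same shape as the test fixture
    let hash = hash_token(token);
    let file_name = format!("{}.json", &hash[..16]);
    println!("stored hash: {hash}");
    println!("file name:   {file_name}");
}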

@@ -23,8 +23,6 @@ pub struct RegistryStats {
pub npm: usize,
pub cargo: usize,
pub pypi: usize,
pub go: usize,
pub raw: usize,
}
#[derive(Serialize)]
@@ -116,18 +114,14 @@ pub async fn api_stats(State(state): State<Arc<AppState>>) -> Json<RegistryStats
let _ = state.repo_index.get("npm", &state.storage).await;
let _ = state.repo_index.get("cargo", &state.storage).await;
let _ = state.repo_index.get("pypi", &state.storage).await;
let _ = state.repo_index.get("go", &state.storage).await;
let _ = state.repo_index.get("raw", &state.storage).await;
let (docker, maven, npm, cargo, pypi, go, raw) = state.repo_index.counts();
let (docker, maven, npm, cargo, pypi) = state.repo_index.counts();
Json(RegistryStats {
docker,
maven,
npm,
cargo,
pypi,
go,
raw,
})
}
@@ -138,8 +132,6 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
let npm_repos = state.repo_index.get("npm", &state.storage).await;
let cargo_repos = state.repo_index.get("cargo", &state.storage).await;
let pypi_repos = state.repo_index.get("pypi", &state.storage).await;
let go_repos = state.repo_index.get("go", &state.storage).await;
let raw_repos = state.repo_index.get("raw", &state.storage).await;
// Calculate sizes from cached index
let docker_size: u64 = docker_repos.iter().map(|r| r.size).sum();
@@ -147,10 +139,7 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
let npm_size: u64 = npm_repos.iter().map(|r| r.size).sum();
let cargo_size: u64 = cargo_repos.iter().map(|r| r.size).sum();
let pypi_size: u64 = pypi_repos.iter().map(|r| r.size).sum();
let go_size: u64 = go_repos.iter().map(|r| r.size).sum();
let raw_size: u64 = raw_repos.iter().map(|r| r.size).sum();
let total_storage =
docker_size + maven_size + npm_size + cargo_size + pypi_size + go_size + raw_size;
let total_storage = docker_size + maven_size + npm_size + cargo_size + pypi_size;
// Count total versions/tags, not just repositories
let docker_versions: usize = docker_repos.iter().map(|r| r.versions).sum();
@@ -158,15 +147,8 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
let npm_versions: usize = npm_repos.iter().map(|r| r.versions).sum();
let cargo_versions: usize = cargo_repos.iter().map(|r| r.versions).sum();
let pypi_versions: usize = pypi_repos.iter().map(|r| r.versions).sum();
let go_versions: usize = go_repos.iter().map(|r| r.versions).sum();
let raw_versions: usize = raw_repos.iter().map(|r| r.versions).sum();
let total_artifacts = docker_versions
+ maven_versions
+ npm_versions
+ cargo_versions
+ pypi_versions
+ go_versions
+ raw_versions;
let total_artifacts =
docker_versions + maven_versions + npm_versions + cargo_versions + pypi_versions;
let global_stats = GlobalStats {
downloads: state.metrics.downloads.load(Ordering::Relaxed),
@@ -212,20 +194,6 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
uploads: 0,
size_bytes: pypi_size,
},
RegistryCardStats {
name: "go".to_string(),
artifact_count: go_versions,
downloads: state.metrics.get_registry_downloads("go"),
uploads: 0,
size_bytes: go_size,
},
RegistryCardStats {
name: "raw".to_string(),
artifact_count: raw_versions,
downloads: state.metrics.get_registry_downloads("raw"),
uploads: state.metrics.get_registry_uploads("raw"),
size_bytes: raw_size,
},
];
let mount_points = vec![
@@ -259,16 +227,6 @@ pub async fn api_dashboard(State(state): State<Arc<AppState>>) -> Json<Dashboard
mount_path: "/simple/".to_string(),
proxy_upstream: state.config.pypi.proxy.clone(),
},
MountPoint {
registry: "Go".to_string(),
mount_path: "/go/".to_string(),
proxy_upstream: state.config.go.proxy.clone(),
},
MountPoint {
registry: "Raw".to_string(),
mount_path: "/raw/".to_string(),
proxy_upstream: None,
},
];
let activity = state.activity.recent(20);
@@ -415,32 +373,12 @@ pub async fn get_registry_stats(storage: &Storage) -> RegistryStats {
.collect::<HashSet<_>>()
.len();
let go = all_keys
.iter()
.filter(|k| k.starts_with("go/") && k.ends_with(".zip"))
.filter_map(|k| {
let rest = k.strip_prefix("go/")?;
let pos = rest.rfind("/@v/")?;
Some(rest[..pos].to_string())
})
.collect::<HashSet<_>>()
.len();
let raw = all_keys
.iter()
.filter(|k| k.starts_with("raw/"))
.filter_map(|k| k.strip_prefix("raw/")?.split('/').next())
.collect::<HashSet<_>>()
.len();
RegistryStats {
docker,
maven,
npm,
cargo,
pypi,
go,
raw,
}
}
@@ -936,32 +874,6 @@ pub async fn get_pypi_detail(storage: &Storage, name: &str) -> PackageDetail {
PackageDetail { versions }
}
pub async fn get_go_detail(storage: &Storage, module: &str) -> PackageDetail {
let prefix = format!("go/{}/@v/", module);
let keys = storage.list(&prefix).await;
let mut versions = Vec::new();
for key in keys.iter().filter(|k| k.ends_with(".zip")) {
if let Some(rest) = key.strip_prefix(&prefix) {
if let Some(version) = rest.strip_suffix(".zip") {
let (size, published) = if let Some(meta) = storage.stat(key).await {
(meta.size, format_timestamp(meta.modified))
} else {
(0, "N/A".to_string())
};
versions.push(VersionInfo {
version: version.to_string(),
size,
published,
});
}
}
}
versions.sort_by(|a, b| b.version.cmp(&a.version));
PackageDetail { versions }
}
fn extract_pypi_version(name: &str, filename: &str) -> Option<String> {
// Handle both .tar.gz and .whl files
let clean_name = name.replace('-', "_");
@@ -985,26 +897,3 @@ fn extract_pypi_version(name: &str, filename: &str) -> Option<String> {
None
}
}
pub async fn get_raw_detail(storage: &Storage, group: &str) -> PackageDetail {
let prefix = format!("raw/{}/", group);
let keys = storage.list(&prefix).await;
let mut versions = Vec::new();
for key in &keys {
if let Some(filename) = key.strip_prefix(&prefix) {
let (size, published) = if let Some(meta) = storage.stat(key).await {
(meta.size, format_timestamp(meta.modified))
} else {
(0, "N/A".to_string())
};
versions.push(VersionInfo {
version: filename.to_string(),
size,
published,
});
}
}
PackageDetail { versions }
}

View File
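
Note: on the side of this diff without the go and raw fields, RegistryStats serializes to a five-key JSON object. A minimal sketch of that payload shape; the struct is redeclared locally so the snippet stands alone, and the counts are made-up example values:

use serde::Serialize;

#[derive(Serialize)]
struct RegistryStats {
    docker: usize,
    maven: usize,
    npm: usize,
    cargo: usize,
    pypi: usize,
}

fn main() {
    let stats = RegistryStats { docker: 3, maven: 1, npm: 12, cargo: 2, pypi: 5 };
    // Prints: {"docker":3,"maven":1,"npm":12,"cargo":2,"pypi":5}
    println!("{}", serde_json::to_string(&stats).unwrap());
}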

@@ -90,7 +90,7 @@ fn sidebar_dark(active_page: Option<&str>, t: &Translations) -> String {
let docker_icon = r#"<path fill="currentColor" d="M13.983 11.078h2.119a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.119a.185.185 0 00-.185.185v1.888c0 .102.083.185.185.185m-2.954-5.43h2.118a.186.186 0 00.186-.186V3.574a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.186m0 2.716h2.118a.187.187 0 00.186-.186V6.29a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.887c0 .102.082.185.185.186m-2.93 0h2.12a.186.186 0 00.184-.186V6.29a.185.185 0 00-.185-.185H8.1a.185.185 0 00-.185.185v1.887c0 .102.083.185.185.186m-2.964 0h2.119a.186.186 0 00.185-.186V6.29a.185.185 0 00-.185-.185H5.136a.186.186 0 00-.186.185v1.887c0 .102.084.185.186.186m5.893 2.715h2.118a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.185m-2.93 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.083.185.185.185m-2.964 0h2.119a.185.185 0 00.185-.185V9.006a.185.185 0 00-.185-.186h-2.12a.186.186 0 00-.185.186v1.887c0 .102.084.185.186.185m-2.92 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.082.185.185.185M23.763 9.89c-.065-.051-.672-.51-1.954-.51-.338.001-.676.03-1.01.087-.248-1.7-1.653-2.53-1.716-2.566l-.344-.199-.226.327c-.284.438-.49.922-.612 1.43-.23.97-.09 1.882.403 2.661-.595.332-1.55.413-1.744.42H.751a.751.751 0 00-.75.748 11.376 11.376 0 00.692 4.062c.545 1.428 1.355 2.48 2.41 3.124 1.18.723 3.1 1.137 5.275 1.137.983.003 1.963-.086 2.93-.266a12.248 12.248 0 003.823-1.389c.98-.567 1.86-1.288 2.61-2.136 1.252-1.418 1.998-2.997 2.553-4.4h.221c1.372 0 2.215-.549 2.68-1.009.309-.293.55-.65.707-1.046l.098-.288Z"/>"#;
let maven_icon = r#"<path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm-1 17.93c-3.95-.49-7-3.85-7-7.93 0-.62.08-1.21.21-1.79L9 15v1c0 1.1.9 2 2 2v1.93zm6.9-2.54c-.26-.81-1-1.39-1.9-1.39h-1v-3c0-.55-.45-1-1-1H8v-2h2c.55 0 1-.45 1-1V7h2c1.1 0 2-.9 2-2v-.41c2.93 1.19 5 4.06 5 7.41 0 2.08-.8 3.97-2.1 5.39z"/>"#;
let npm_icon = r#"<path fill="currentColor" d="M0 7.334v8h6.666v1.332H12v-1.332h12v-8H0zm6.666 6.664H5.334v-4H3.999v4H1.335V8.667h5.331v5.331zm4 0v1.336H8.001V8.667h5.334v5.332h-2.669v-.001zm12.001 0h-1.33v-4h-1.336v4h-1.335v-4h-1.33v4h-2.671V8.667h8.002v5.331zM10.665 10H12v2.667h-1.335V10z"/>"#;
let cargo_icon = r#"<path fill="currentColor" d="M6 2h12a1 1 0 011 1v8a1 1 0 01-1 1H6a1 1 0 01-1-1V3a1 1 0 011-1zm0 2v2h12V4H6zm0 3v2h12V7H6zM2 14h8a1 1 0 011 1v6a1 1 0 01-1 1H2a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16H2zM14 14h8a1 1 0 011 1v6a1 1 0 01-1 1h-8a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16h-8z"/>"#;
let cargo_icon = r#"<path fill="currentColor" d="M20 8h-3V4H3c-1.1 0-2 .9-2 2v11h2c0 1.66 1.34 3 3 3s3-1.34 3-3h6c0 1.66 1.34 3 3 3s3-1.34 3-3h2v-5l-3-4zM6 18.5c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5zm13.5-9l1.96 2.5H17V9.5h2.5zm-1.5 9c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5z"/>"#;
let pypi_icon = r#"<path fill="currentColor" d="M14.25.18l.9.2.73.26.59.3.45.32.34.34.25.34.16.33.1.3.04.26.02.2-.01.13V8.5l-.05.63-.13.55-.21.46-.26.38-.3.31-.33.25-.35.19-.35.14-.33.1-.3.07-.26.04-.21.02H8.83l-.69.05-.59.14-.5.22-.41.27-.33.32-.27.35-.2.36-.15.37-.1.35-.07.32-.04.27-.02.21v3.06H3.23l-.21-.03-.28-.07-.32-.12-.35-.18-.36-.26-.36-.36-.35-.46-.32-.59-.28-.73-.21-.88-.14-1.05L0 11.97l.06-1.22.16-1.04.24-.87.32-.71.36-.57.4-.44.42-.33.42-.24.4-.16.36-.1.32-.05.24-.01h.16l.06.01h8.16v-.83H6.24l-.01-2.75-.02-.37.05-.34.11-.31.17-.28.25-.26.31-.23.38-.2.44-.18.51-.15.58-.12.64-.1.71-.06.77-.04.84-.02 1.27.05 1.07.13zm-6.3 1.98l-.23.33-.08.41.08.41.23.34.33.22.41.09.41-.09.33-.22.23-.34.08-.41-.08-.41-.23-.33-.33-.22-.41-.09-.41.09-.33.22zM21.1 6.11l.28.06.32.12.35.18.36.27.36.35.35.47.32.59.28.73.21.88.14 1.04.05 1.23-.06 1.23-.16 1.04-.24.86-.32.71-.36.57-.4.45-.42.33-.42.24-.4.16-.36.09-.32.05-.24.02-.16-.01h-8.22v.82h5.84l.01 2.76.02.36-.05.34-.11.31-.17.29-.25.25-.31.24-.38.2-.44.17-.51.15-.58.13-.64.09-.71.07-.77.04-.84.01-1.27-.04-1.07-.14-.9-.2-.73-.25-.59-.3-.45-.33-.34-.34-.25-.34-.16-.33-.1-.3-.04-.25-.02-.2.01-.13v-5.34l.05-.64.13-.54.21-.46.26-.38.3-.32.33-.24.35-.2.35-.14.33-.1.3-.06.26-.04.21-.02.13-.01h5.84l.69-.05.59-.14.5-.21.41-.28.33-.32.27-.35.2-.36.15-.36.1-.35.07-.32.04-.28.02-.21V6.07h2.09l.14.01.21.03zm-6.47 14.25l-.23.33-.08.41.08.41.23.33.33.23.41.08.41-.08.33-.23.23-.33.08-.41-.08-.41-.23-.33-.33-.23-.41-.08-.41.08-.33.23z"/>"#;
// Dashboard label is translated, registry names stay as-is
@@ -109,13 +109,6 @@ fn sidebar_dark(active_page: Option<&str>, t: &Translations) -> String {
("npm", "/ui/npm", "npm", npm_icon, false),
("cargo", "/ui/cargo", "Cargo", cargo_icon, false),
("pypi", "/ui/pypi", "PyPI", pypi_icon, false),
(
"go",
"/ui/go",
"Go",
r#"<path fill="currentColor" d="M2.64 9.56s.24-.14.65-.38c.41-.24.97-.5 1.63-.7A7.85 7.85 0 017.53 8c.86 0 1.67.17 2.37.52.7.35 1.26.87 1.63 1.51.37.64.54 1.41.54 2.27v.2h-2.7v-.16c0-.47-.09-.86-.28-1.15a1.7 1.7 0 00-.77-.67 2.7 2.7 0 00-1.14-.22c-.56 0-1.06.13-1.46.4-.41.27-.72.66-.93 1.16-.21.5-.31 1.1-.31 1.8 0 .69.1 1.28.32 1.78.21.5.53.88.94 1.15.41.27.9.4 1.47.4.38 0 .73-.06 1.04-.17.31-.12.56-.29.74-.52.19-.23.29-.51.29-.84v-.14H7.15v-1.76h5.07v1.3c0 .8-.17 1.48-.52 2.04a3.46 3.46 0 01-1.5 1.3c-.66.3-1.44.45-2.35.45-.99 0-1.87-.18-2.63-.55a4.2 4.2 0 01-1.77-1.59C3.15 14.82 3 13.94 3 12.89v-.28c0-1.04.16-1.93.48-2.65a3.08 3.08 0 01-.84-.4zm12.1-1.34c.92 0 1.74.18 2.44.55a3.96 3.96 0 011.66 1.59c.4.7.6 1.54.6 2.53v.28c0 .99-.2 1.83-.6 2.53a3.96 3.96 0 01-1.66 1.59c-.7.37-1.52.55-2.44.55s-1.74-.18-2.44-.55a3.96 3.96 0 01-1.66-1.59c-.4-.7-.6-1.54-.6-2.53v-.28c0-.99.2-1.83.6-2.53a3.96 3.96 0 011.66-1.59c.7-.37 1.52-.55 2.44-.55zm0 2.12c-.44 0-.82.12-1.14.37-.32.24-.56.6-.73 1.06-.17.46-.26 1.01-.26 1.65v.28c0 .64.09 1.19.26 1.65.17.46.41.82.73 1.06.32.25.7.37 1.14.37.44 0 .82-.12 1.14-.37.32-.24.56-.6.73-1.06.17-.46.26-1.01.26-1.65v-.28c0-.64-.09-1.19-.26-1.65a2.17 2.17 0 00-.73-1.06 1.78 1.78 0 00-1.14-.37z"/>"#,
false,
),
];
let nav_html: String = nav_items.iter().map(|(id, href, label, icon_path, is_stroke)| {
@@ -284,15 +277,15 @@ pub fn render_registry_card(
) -> String {
format!(
r##"
<a href="{}" id="registry-{}" class="block bg-[#1e293b] rounded-lg border border-slate-700 p-3 hover:border-blue-400 transition-all">
<div class="flex items-center justify-between mb-2">
<svg class="w-6 h-6 text-slate-400" fill="currentColor" viewBox="0 0 24 24">
<a href="{}" id="registry-{}" class="block bg-[#1e293b] rounded-lg border border-slate-700 p-4 md:p-6 hover:border-blue-400 transition-all">
<div class="flex items-center justify-between mb-3">
<svg class="w-8 h-8 text-slate-400" fill="currentColor" viewBox="0 0 24 24">
{}
</svg>
<span class="text-[10px] font-medium text-green-400 bg-green-400/10 px-1.5 py-0.5 rounded-full">{}</span>
<span class="text-xs font-medium text-green-400 bg-green-400/10 px-2 py-1 rounded-full">{}</span>
</div>
<div class="text-sm font-semibold text-slate-200 mb-2">{}</div>
<div class="grid grid-cols-2 gap-1 text-xs">
<div class="text-lg font-semibold text-slate-200 mb-2">{}</div>
<div class="grid grid-cols-2 gap-2 text-sm">
<div>
<span class="text-slate-500">{}</span>
<div class="text-slate-300 font-medium">{}</div>
@@ -497,7 +490,7 @@ fn sidebar(active_page: Option<&str>) -> String {
let docker_icon = r#"<path fill="currentColor" d="M13.983 11.078h2.119a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.119a.185.185 0 00-.185.185v1.888c0 .102.083.185.185.185m-2.954-5.43h2.118a.186.186 0 00.186-.186V3.574a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.186m0 2.716h2.118a.187.187 0 00.186-.186V6.29a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.887c0 .102.082.185.185.186m-2.93 0h2.12a.186.186 0 00.184-.186V6.29a.185.185 0 00-.185-.185H8.1a.185.185 0 00-.185.185v1.887c0 .102.083.185.185.186m-2.964 0h2.119a.186.186 0 00.185-.186V6.29a.185.185 0 00-.185-.185H5.136a.186.186 0 00-.186.185v1.887c0 .102.084.185.186.186m5.893 2.715h2.118a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.185m-2.93 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.083.185.185.185m-2.964 0h2.119a.185.185 0 00.185-.185V9.006a.185.185 0 00-.185-.186h-2.12a.186.186 0 00-.185.186v1.887c0 .102.084.185.186.185m-2.92 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.082.185.185.185M23.763 9.89c-.065-.051-.672-.51-1.954-.51-.338.001-.676.03-1.01.087-.248-1.7-1.653-2.53-1.716-2.566l-.344-.199-.226.327c-.284.438-.49.922-.612 1.43-.23.97-.09 1.882.403 2.661-.595.332-1.55.413-1.744.42H.751a.751.751 0 00-.75.748 11.376 11.376 0 00.692 4.062c.545 1.428 1.355 2.48 2.41 3.124 1.18.723 3.1 1.137 5.275 1.137.983.003 1.963-.086 2.93-.266a12.248 12.248 0 003.823-1.389c.98-.567 1.86-1.288 2.61-2.136 1.252-1.418 1.998-2.997 2.553-4.4h.221c1.372 0 2.215-.549 2.68-1.009.309-.293.55-.65.707-1.046l.098-.288Z"/>"#;
let maven_icon = r#"<path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm-1 17.93c-3.95-.49-7-3.85-7-7.93 0-.62.08-1.21.21-1.79L9 15v1c0 1.1.9 2 2 2v1.93zm6.9-2.54c-.26-.81-1-1.39-1.9-1.39h-1v-3c0-.55-.45-1-1-1H8v-2h2c.55 0 1-.45 1-1V7h2c1.1 0 2-.9 2-2v-.41c2.93 1.19 5 4.06 5 7.41 0 2.08-.8 3.97-2.1 5.39z"/>"#;
let npm_icon = r#"<path fill="currentColor" d="M0 7.334v8h6.666v1.332H12v-1.332h12v-8H0zm6.666 6.664H5.334v-4H3.999v4H1.335V8.667h5.331v5.331zm4 0v1.336H8.001V8.667h5.334v5.332h-2.669v-.001zm12.001 0h-1.33v-4h-1.336v4h-1.335v-4h-1.33v4h-2.671V8.667h8.002v5.331zM10.665 10H12v2.667h-1.335V10z"/>"#;
let cargo_icon = r#"<path fill="currentColor" d="M6 2h12a1 1 0 011 1v8a1 1 0 01-1 1H6a1 1 0 01-1-1V3a1 1 0 011-1zm0 2v2h12V4H6zm0 3v2h12V7H6zM2 14h8a1 1 0 011 1v6a1 1 0 01-1 1H2a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16H2zM14 14h8a1 1 0 011 1v6a1 1 0 01-1 1h-8a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16h-8z"/>"#;
let cargo_icon = r#"<path fill="currentColor" d="M20 8h-3V4H3c-1.1 0-2 .9-2 2v11h2c0 1.66 1.34 3 3 3s3-1.34 3-3h6c0 1.66 1.34 3 3 3s3-1.34 3-3h2v-5l-3-4zM6 18.5c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5zm13.5-9l1.96 2.5H17V9.5h2.5zm-1.5 9c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5z"/>"#;
let pypi_icon = r#"<path fill="currentColor" d="M14.25.18l.9.2.73.26.59.3.45.32.34.34.25.34.16.33.1.3.04.26.02.2-.01.13V8.5l-.05.63-.13.55-.21.46-.26.38-.3.31-.33.25-.35.19-.35.14-.33.1-.3.07-.26.04-.21.02H8.83l-.69.05-.59.14-.5.22-.41.27-.33.32-.27.35-.2.36-.15.37-.1.35-.07.32-.04.27-.02.21v3.06H3.23l-.21-.03-.28-.07-.32-.12-.35-.18-.36-.26-.36-.36-.35-.46-.32-.59-.28-.73-.21-.88-.14-1.05L0 11.97l.06-1.22.16-1.04.24-.87.32-.71.36-.57.4-.44.42-.33.42-.24.4-.16.36-.1.32-.05.24-.01h.16l.06.01h8.16v-.83H6.24l-.01-2.75-.02-.37.05-.34.11-.31.17-.28.25-.26.31-.23.38-.2.44-.18.51-.15.58-.12.64-.1.71-.06.77-.04.84-.02 1.27.05 1.07.13zm-6.3 1.98l-.23.33-.08.41.08.41.23.34.33.22.41.09.41-.09.33-.22.23-.34.08-.41-.08-.41-.23-.33-.33-.22-.41-.09-.41.09-.33.22zM21.1 6.11l.28.06.32.12.35.18.36.27.36.35.35.47.32.59.28.73.21.88.14 1.04.05 1.23-.06 1.23-.16 1.04-.24.86-.32.71-.36.57-.4.45-.42.33-.42.24-.4.16-.36.09-.32.05-.24.02-.16-.01h-8.22v.82h5.84l.01 2.76.02.36-.05.34-.11.31-.17.29-.25.25-.31.24-.38.2-.44.17-.51.15-.58.13-.64.09-.71.07-.77.04-.84.01-1.27-.04-1.07-.14-.9-.2-.73-.25-.59-.3-.45-.33-.34-.34-.25-.34-.16-.33-.1-.3-.04-.25-.02-.2.01-.13v-5.34l.05-.64.13-.54.21-.46.26-.38.3-.32.33-.24.35-.2.35-.14.33-.1.3-.06.26-.04.21-.02.13-.01h5.84l.69-.05.59-.14.5-.21.41-.28.33-.32.27-.35.2-.36.15-.36.1-.35.07-.32.04-.28.02-.21V6.07h2.09l.14.01.21.03zm-6.47 14.25l-.23.33-.08.41.08.41.23.33.33.23.41.08.41-.08.33-.23.23-.33.08-.41-.08-.41-.23-.33-.33-.23-.41-.08-.41.08-.33.23z"/>"#;
let nav_items = [
@@ -513,13 +506,6 @@ fn sidebar(active_page: Option<&str>) -> String {
("npm", "/ui/npm", "npm", npm_icon, false),
("cargo", "/ui/cargo", "Cargo", cargo_icon, false),
("pypi", "/ui/pypi", "PyPI", pypi_icon, false),
(
"go",
"/ui/go",
"Go",
r#"<path fill="currentColor" d="M2.64 9.56s.24-.14.65-.38c.41-.24.97-.5 1.63-.7A7.85 7.85 0 017.53 8c.86 0 1.67.17 2.37.52.7.35 1.26.87 1.63 1.51.37.64.54 1.41.54 2.27v.2h-2.7v-.16c0-.47-.09-.86-.28-1.15a1.7 1.7 0 00-.77-.67 2.7 2.7 0 00-1.14-.22c-.56 0-1.06.13-1.46.4-.41.27-.72.66-.93 1.16-.21.5-.31 1.1-.31 1.8 0 .69.1 1.28.32 1.78.21.5.53.88.94 1.15.41.27.9.4 1.47.4.38 0 .73-.06 1.04-.17.31-.12.56-.29.74-.52.19-.23.29-.51.29-.84v-.14H7.15v-1.76h5.07v1.3c0 .8-.17 1.48-.52 2.04a3.46 3.46 0 01-1.5 1.3c-.66.3-1.44.45-2.35.45-.99 0-1.87-.18-2.63-.55a4.2 4.2 0 01-1.77-1.59C3.15 14.82 3 13.94 3 12.89v-.28c0-1.04.16-1.93.48-2.65a3.08 3.08 0 01-.84-.4zm12.1-1.34c.92 0 1.74.18 2.44.55a3.96 3.96 0 011.66 1.59c.4.7.6 1.54.6 2.53v.28c0 .99-.2 1.83-.6 2.53a3.96 3.96 0 01-1.66 1.59c-.7.37-1.52.55-2.44.55s-1.74-.18-2.44-.55a3.96 3.96 0 01-1.66-1.59c-.4-.7-.6-1.54-.6-2.53v-.28c0-.99.2-1.83.6-2.53a3.96 3.96 0 011.66-1.59c.7-.37 1.52-.55 2.44-.55zm0 2.12c-.44 0-.82.12-1.14.37-.32.24-.56.6-.73 1.06-.17.46-.26 1.01-.26 1.65v.28c0 .64.09 1.19.26 1.65.17.46.41.82.73 1.06.32.25.7.37 1.14.37.44 0 .82-.12 1.14-.37.32-.24.56-.6.73-1.06.17-.46.26-1.01.26-1.65v-.28c0-.64-.09-1.19-.26-1.65a2.17 2.17 0 00-.73-1.06 1.78 1.78 0 00-1.14-.37z"/>"#,
false,
),
];
let nav_html: String = nav_items.iter().map(|(id, href, label, icon_path, is_stroke)| {
@@ -627,9 +613,7 @@ pub mod icons {
pub const DOCKER: &str = r#"<path fill="currentColor" d="M13.983 11.078h2.119a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.119a.185.185 0 00-.185.185v1.888c0 .102.083.185.185.185m-2.954-5.43h2.118a.186.186 0 00.186-.186V3.574a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.186m0 2.716h2.118a.187.187 0 00.186-.186V6.29a.186.186 0 00-.186-.185h-2.118a.185.185 0 00-.185.185v1.887c0 .102.082.185.185.186m-2.93 0h2.12a.186.186 0 00.184-.186V6.29a.185.185 0 00-.185-.185H8.1a.185.185 0 00-.185.185v1.887c0 .102.083.185.185.186m-2.964 0h2.119a.186.186 0 00.185-.186V6.29a.185.185 0 00-.185-.185H5.136a.186.186 0 00-.186.185v1.887c0 .102.084.185.186.186m5.893 2.715h2.118a.186.186 0 00.186-.185V9.006a.186.186 0 00-.186-.186h-2.118a.185.185 0 00-.185.185v1.888c0 .102.082.185.185.185m-2.93 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.083.185.185.185m-2.964 0h2.119a.185.185 0 00.185-.185V9.006a.185.185 0 00-.185-.186h-2.12a.186.186 0 00-.185.186v1.887c0 .102.084.185.186.185m-2.92 0h2.12a.185.185 0 00.184-.185V9.006a.185.185 0 00-.184-.186h-2.12a.185.185 0 00-.184.185v1.888c0 .102.082.185.185.185M23.763 9.89c-.065-.051-.672-.51-1.954-.51-.338.001-.676.03-1.01.087-.248-1.7-1.653-2.53-1.716-2.566l-.344-.199-.226.327c-.284.438-.49.922-.612 1.43-.23.97-.09 1.882.403 2.661-.595.332-1.55.413-1.744.42H.751a.751.751 0 00-.75.748 11.376 11.376 0 00.692 4.062c.545 1.428 1.355 2.48 2.41 3.124 1.18.723 3.1 1.137 5.275 1.137.983.003 1.963-.086 2.93-.266a12.248 12.248 0 003.823-1.389c.98-.567 1.86-1.288 2.61-2.136 1.252-1.418 1.998-2.997 2.553-4.4h.221c1.372 0 2.215-.549 2.68-1.009.309-.293.55-.65.707-1.046l.098-.288Z"/>"#;
pub const MAVEN: &str = r#"<path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm-1 17.93c-3.95-.49-7-3.85-7-7.93 0-.62.08-1.21.21-1.79L9 15v1c0 1.1.9 2 2 2v1.93zm6.9-2.54c-.26-.81-1-1.39-1.9-1.39h-1v-3c0-.55-.45-1-1-1H8v-2h2c.55 0 1-.45 1-1V7h2c1.1 0 2-.9 2-2v-.41c2.93 1.19 5 4.06 5 7.41 0 2.08-.8 3.97-2.1 5.39z"/>"#;
pub const NPM: &str = r#"<path fill="currentColor" d="M0 7.334v8h6.666v1.332H12v-1.332h12v-8H0zm6.666 6.664H5.334v-4H3.999v4H1.335V8.667h5.331v5.331zm4 0v1.336H8.001V8.667h5.334v5.332h-2.669v-.001zm12.001 0h-1.33v-4h-1.336v4h-1.335v-4h-1.33v4h-2.671V8.667h8.002v5.331zM10.665 10H12v2.667h-1.335V10z"/>"#;
pub const CARGO: &str = r#"<path fill="currentColor" d="M6 2h12a1 1 0 011 1v8a1 1 0 01-1 1H6a1 1 0 01-1-1V3a1 1 0 011-1zm0 2v2h12V4H6zm0 3v2h12V7H6zM2 14h8a1 1 0 011 1v6a1 1 0 01-1 1H2a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16H2zM14 14h8a1 1 0 011 1v6a1 1 0 01-1 1h-8a1 1 0 01-1-1v-6a1 1 0 011-1zm0 2v1.5h8V16h-8z"/>"#;
pub const GO: &str = r#"<path fill="currentColor" d="M2.64 9.56s.24-.14.65-.38c.41-.24.97-.5 1.63-.7A7.85 7.85 0 017.53 8c.86 0 1.67.17 2.37.52.7.35 1.26.87 1.63 1.51.37.64.54 1.41.54 2.27v.2h-2.7v-.16c0-.47-.09-.86-.28-1.15a1.7 1.7 0 00-.77-.67 2.7 2.7 0 00-1.14-.22c-.56 0-1.06.13-1.46.4-.41.27-.72.66-.93 1.16-.21.5-.31 1.1-.31 1.8 0 .69.1 1.28.32 1.78.21.5.53.88.94 1.15.41.27.9.4 1.47.4.38 0 .73-.06 1.04-.17.31-.12.56-.29.74-.52.19-.23.29-.51.29-.84v-.14H7.15v-1.76h5.07v1.3c0 .8-.17 1.48-.52 2.04a3.46 3.46 0 01-1.5 1.3c-.66.3-1.44.45-2.35.45-.99 0-1.87-.18-2.63-.55a4.2 4.2 0 01-1.77-1.59C3.15 14.82 3 13.94 3 12.89v-.28c0-1.04.16-1.93.48-2.65a3.08 3.08 0 01-.84-.4zm12.1-1.34c.92 0 1.74.18 2.44.55a3.96 3.96 0 011.66 1.59c.4.7.6 1.54.6 2.53v.28c0 .99-.2 1.83-.6 2.53a3.96 3.96 0 01-1.66 1.59c-.7.37-1.52.55-2.44.55s-1.74-.18-2.44-.55a3.96 3.96 0 01-1.66-1.59c-.4-.7-.6-1.54-.6-2.53v-.28c0-.99.2-1.83.6-2.53a3.96 3.96 0 011.66-1.59c.7-.37 1.52-.55 2.44-.55zm0 2.12c-.44 0-.82.12-1.14.37-.32.24-.56.6-.73 1.06-.17.46-.26 1.01-.26 1.65v.28c0 .64.09 1.19.26 1.65.17.46.41.82.73 1.06.32.25.7.37 1.14.37.44 0 .82-.12 1.14-.37.32-.24.56-.6.73-1.06.17-.46.26-1.01.26-1.65v-.28c0-.64-.09-1.19-.26-1.65a2.17 2.17 0 00-.73-1.06 1.78 1.78 0 00-1.14-.37z"/>"#;
pub const RAW: &str = r#"<path fill="currentColor" d="M14 2H6a2 2 0 00-2 2v16a2 2 0 002 2h12a2 2 0 002-2V8l-6-6zm4 18H6V4h7v5h5v11z"/>"#;
pub const CARGO: &str = r#"<path fill="currentColor" d="M20 8h-3V4H3c-1.1 0-2 .9-2 2v11h2c0 1.66 1.34 3 3 3s3-1.34 3-3h6c0 1.66 1.34 3 3 3s3-1.34 3-3h2v-5l-3-4zM6 18.5c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5zm13.5-9l1.96 2.5H17V9.5h2.5zm-1.5 9c-.83 0-1.5-.67-1.5-1.5s.67-1.5 1.5-1.5 1.5.67 1.5 1.5-.67 1.5-1.5 1.5z"/>"#;
pub const PYPI: &str = r#"<path fill="currentColor" d="M14.25.18l.9.2.73.26.59.3.45.32.34.34.25.34.16.33.1.3.04.26.02.2-.01.13V8.5l-.05.63-.13.55-.21.46-.26.38-.3.31-.33.25-.35.19-.35.14-.33.1-.3.07-.26.04-.21.02H8.83l-.69.05-.59.14-.5.22-.41.27-.33.32-.27.35-.2.36-.15.37-.1.35-.07.32-.04.27-.02.21v3.06H3.23l-.21-.03-.28-.07-.32-.12-.35-.18-.36-.26-.36-.36-.35-.46-.32-.59-.28-.73-.21-.88-.14-1.05L0 11.97l.06-1.22.16-1.04.24-.87.32-.71.36-.57.4-.44.42-.33.42-.24.4-.16.36-.1.32-.05.24-.01h.16l.06.01h8.16v-.83H6.24l-.01-2.75-.02-.37.05-.34.11-.31.17-.28.25-.26.31-.23.38-.2.44-.18.51-.15.58-.12.64-.1.71-.06.77-.04.84-.02 1.27.05 1.07.13zm-6.3 1.98l-.23.33-.08.41.08.41.23.34.33.22.41.09.41-.09.33-.22.23-.34.08-.41-.08-.41-.23-.33-.33-.22-.41-.09-.41.09-.33.22zM21.1 6.11l.28.06.32.12.35.18.36.27.36.35.35.47.32.59.28.73.21.88.14 1.04.05 1.23-.06 1.23-.16 1.04-.24.86-.32.71-.36.57-.4.45-.42.33-.42.24-.4.16-.36.09-.32.05-.24.02-.16-.01h-8.22v.82h5.84l.01 2.76.02.36-.05.34-.11.31-.17.29-.25.25-.31.24-.38.2-.44.17-.51.15-.58.13-.64.09-.71.07-.77.04-.84.01-1.27-.04-1.07-.14-.9-.2-.73-.25-.59-.3-.45-.33-.34-.34-.25-.34-.16-.33-.1-.3-.04-.25-.02-.2.01-.13v-5.34l.05-.64.13-.54.21-.46.26-.38.3-.32.33-.24.35-.2.35-.14.33-.1.3-.06.26-.04.21-.02.13-.01h5.84l.69-.05.59-.14.5-.21.41-.28.33-.32.27-.35.2-.36.15-.36.1-.35.07-.32.04-.28.02-.21V6.07h2.09l.14.01.21.03zm-6.47 14.25l-.23.33-.08.41.08.41.23.33.33.23.41.08.41-.08.33-.23.23-.33.08-.41-.08-.41-.23-.33-.33-.23-.41-.08-.41.08-.33.23z"/>"#;
}

View File

@@ -87,10 +87,6 @@ pub fn routes() -> Router<Arc<AppState>> {
.route("/ui/cargo/{name}", get(cargo_detail))
.route("/ui/pypi", get(pypi_list))
.route("/ui/pypi/{name}", get(pypi_detail))
.route("/ui/go", get(go_list))
.route("/ui/go/{*name}", get(go_detail))
.route("/ui/raw", get(raw_list))
.route("/ui/raw/{*name}", get(raw_detail))
// API endpoints for HTMX
.route("/api/ui/stats", get(api_stats))
.route("/api/ui/dashboard", get(api_dashboard))
@@ -302,79 +298,3 @@ async fn pypi_detail(
let detail = get_pypi_detail(&state.storage, &name).await;
Html(render_package_detail("pypi", &name, &detail, lang))
}
// Go pages
async fn go_list(
State(state): State<Arc<AppState>>,
Query(query): Query<ListQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang_from_list(&query, headers.get("cookie").and_then(|v| v.to_str().ok()));
let page = query.page.unwrap_or(1).max(1);
let limit = query.limit.unwrap_or(DEFAULT_PAGE_SIZE).min(100);
let all_modules = state.repo_index.get("go", &state.storage).await;
let (modules, total) = paginate(&all_modules, page, limit);
Html(render_registry_list_paginated(
"go",
"Go Modules",
&modules,
page,
limit,
total,
lang,
))
}
async fn go_detail(
State(state): State<Arc<AppState>>,
Path(name): Path<String>,
Query(query): Query<LangQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang(
&Query(query),
headers.get("cookie").and_then(|v| v.to_str().ok()),
);
let detail = get_go_detail(&state.storage, &name).await;
Html(render_package_detail("go", &name, &detail, lang))
}
// Raw pages
async fn raw_list(
State(state): State<Arc<AppState>>,
Query(query): Query<ListQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang_from_list(&query, headers.get("cookie").and_then(|v| v.to_str().ok()));
let page = query.page.unwrap_or(1).max(1);
let limit = query.limit.unwrap_or(DEFAULT_PAGE_SIZE).min(100);
let all_files = state.repo_index.get("raw", &state.storage).await;
let (files, total) = paginate(&all_files, page, limit);
Html(render_registry_list_paginated(
"raw",
"Raw Storage",
&files,
page,
limit,
total,
lang,
))
}
async fn raw_detail(
State(state): State<Arc<AppState>>,
Path(name): Path<String>,
Query(query): Query<LangQuery>,
headers: axum::http::HeaderMap,
) -> impl IntoResponse {
let lang = extract_lang(
&Query(query),
headers.get("cookie").and_then(|v| v.to_str().ok()),
);
let detail = get_raw_detail(&state.storage, &name).await;
Html(render_package_detail("raw", &name, &detail, lang))
}

View File

@@ -24,8 +24,22 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
.registry_stats
.iter()
.map(|r| {
let icon = get_registry_icon(&r.name);
let display_name = get_registry_title(&r.name);
let icon = match r.name.as_str() {
"docker" => icons::DOCKER,
"maven" => icons::MAVEN,
"npm" => icons::NPM,
"cargo" => icons::CARGO,
"pypi" => icons::PYPI,
_ => icons::DOCKER,
};
let display_name = match r.name.as_str() {
"docker" => "Docker",
"maven" => "Maven",
"npm" => "npm",
"cargo" => "Cargo",
"pypi" => "PyPI",
_ => &r.name,
};
render_registry_card(
display_name,
icon,
@@ -61,56 +75,43 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
)
} else {
// Group consecutive identical entries (same action+artifact+registry+source)
struct GroupedActivity {
time: String,
action: String,
artifact: String,
registry: String,
source: String,
count: usize,
}
let mut grouped: Vec<GroupedActivity> = Vec::new();
let mut grouped: Vec<(String, String, String, String, String, usize)> = Vec::new();
for entry in &data.activity {
let action = entry.action.to_string();
let is_repeat = grouped.last().is_some_and(|last| {
last.action == action
&& last.artifact == entry.artifact
&& last.registry == entry.registry
&& last.source == entry.source
});
let last_match = grouped
.last()
.map(|(_, a, art, reg, src, _)| {
*a == action
&& *art == entry.artifact
&& *reg == entry.registry
&& *src == entry.source
})
.unwrap_or(false);
if is_repeat {
if let Some(last) = grouped.last_mut() {
last.count += 1;
}
if last_match {
grouped.last_mut().unwrap().5 += 1;
} else {
grouped.push(GroupedActivity {
time: format_relative_time(&entry.timestamp),
let time_ago = format_relative_time(&entry.timestamp);
grouped.push((
time_ago,
action,
artifact: entry.artifact.clone(),
registry: entry.registry.clone(),
source: entry.source.clone(),
count: 1,
});
entry.artifact.clone(),
entry.registry.clone(),
entry.source.clone(),
1,
));
}
}
grouped
.iter()
.map(|g| {
let display_artifact = if g.count > 1 {
format!("{} (x{})", g.artifact, g.count)
.map(|(time, action, artifact, registry, source, count)| {
let display_artifact = if *count > 1 {
format!("{} (x{})", artifact, count)
} else {
g.artifact.clone()
artifact.clone()
};
render_activity_row(
&g.time,
&g.action,
&display_artifact,
&g.registry,
&g.source,
)
render_activity_row(time, action, &display_artifact, registry, source)
})
.collect()
};
@@ -141,7 +142,7 @@ pub fn render_dashboard(data: &DashboardResponse, lang: Lang) -> String {
{}
<div class="grid grid-cols-2 md:grid-cols-4 lg:grid-cols-7 gap-3 mb-6">
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-5 gap-4 mb-6">
{}
</div>
@@ -655,8 +656,6 @@ pub fn render_package_detail(
"pip install {} --index-url http://127.0.0.1:4000/simple",
name
),
"go" => format!("GOPROXY=http://127.0.0.1:4000/go go get {}", name),
"raw" => format!("curl -O http://127.0.0.1:4000/raw/{}/<file>", name),
_ => String::new(),
};
@@ -823,8 +822,6 @@ fn get_registry_icon(registry_type: &str) -> &'static str {
"npm" => icons::NPM,
"cargo" => icons::CARGO,
"pypi" => icons::PYPI,
"go" => icons::GO,
"raw" => icons::RAW,
_ => {
r#"<path fill="currentColor" d="M10 4H4c-1.1 0-1.99.9-1.99 2L2 18c0 1.1.9 2 2 2h16c1.1 0 2-.9 2-2V8c0-1.1-.9-2-2-2h-8l-2-2z"/>"#
}
@@ -838,8 +835,6 @@ fn get_registry_title(registry_type: &str) -> &'static str {
"npm" => "npm Registry",
"cargo" => "Cargo Registry",
"pypi" => "PyPI Repository",
"go" => "Go Modules",
"raw" => "Raw Storage",
_ => "Registry",
}
}

View File
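
Note: the activity-log rendering above collapses identical neighbouring rows into one row with a count, so repeated cache hits render as "artifact (x4)". A minimal standalone sketch of that run-length grouping, with entries simplified to (action, artifact) pairs and the sample names invented for illustration:

fn group_consecutive(entries: &[(&str, &str)]) -> Vec<(String, String, usize)> {
    let mut grouped: Vec<(String, String, usize)> = Vec::new();
    for &(action, artifact) in entries {
        match grouped.last_mut() {
            // Same action and artifact as the previous row: bump its count.
            Some(last) if last.0 == action && last.1 == artifact => last.2 += 1,
            // Otherwise start a new group.
            _ => grouped.push((action.to_string(), artifact.to_string(), 1)),
        }
    }
    grouped
}

fn main() {
    let rows = [
        ("cache_hit", "chalk-5.4.1.tgz"),
        ("cache_hit", "chalk-5.4.1.tgz"),
        ("cache_hit", "chalk-5.4.1.tgz"),
        ("cache_hit", "chalk-5.4.1.tgz"),
        ("download", "example-0.1.0.crate"),
    ];
    for (action, artifact, count) in group_consecutive(&rows) {
        let label = if count > 1 { format!("{artifact} (x{count})") } else { artifact };
        println!("{action}: {label}");
    }
}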

@@ -181,148 +181,6 @@ else
fail "Dashboard npm count is $NPM_COUNT, expected > 0"
fi
echo ""
echo "--- Docker Push/Pull + Digest Verify ---"
# Create a minimal Docker image, push, pull, verify digest
DOCKER_AVAILABLE=true
if ! docker info >/dev/null 2>&1; then
echo " SKIP: Docker daemon not available"
DOCKER_AVAILABLE=false
fi
if [ "$DOCKER_AVAILABLE" = true ]; then
DOCKER_IMG="localhost:${PORT}/smoke-test/hello:v1"
# Create tiny image from scratch
DOCKER_BUILD_DIR=$(mktemp -d)
echo "FROM scratch" > "$DOCKER_BUILD_DIR/Dockerfile"
echo "smoke-test" > "$DOCKER_BUILD_DIR/data.txt"
echo "COPY data.txt /data.txt" >> "$DOCKER_BUILD_DIR/Dockerfile"
if docker build -t "$DOCKER_IMG" "$DOCKER_BUILD_DIR" >/dev/null 2>&1; then
pass "docker build smoke image"
else
fail "docker build smoke image"
fi
rm -rf "$DOCKER_BUILD_DIR"
# Push
if docker push "$DOCKER_IMG" >/dev/null 2>&1; then
pass "docker push to NORA"
else
fail "docker push to NORA"
fi
# Get digest from registry
MANIFEST=$(curl -sf -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
"$BASE/v2/smoke-test/hello/manifests/v1" 2>/dev/null || echo "")
if [ -n "$MANIFEST" ] && echo "$MANIFEST" | python3 -c "import sys,json; json.load(sys.stdin)" >/dev/null 2>&1; then
pass "docker manifest retrievable from NORA"
else
fail "docker manifest not retrievable"
fi
# Remove local image and pull back
docker rmi "$DOCKER_IMG" >/dev/null 2>&1 || true
if docker pull "$DOCKER_IMG" >/dev/null 2>&1; then
pass "docker pull from NORA"
else
fail "docker pull from NORA"
fi
# Verify digest matches: push digest == pull digest
PUSH_DIGEST=$(docker inspect "$DOCKER_IMG" --format='{{index .RepoDigests 0}}' 2>/dev/null | cut -d@ -f2)
if [ -n "$PUSH_DIGEST" ] && echo "$PUSH_DIGEST" | grep -q "^sha256:"; then
pass "docker digest verified (${PUSH_DIGEST:0:20}...)"
else
fail "docker digest verification failed"
fi
# Cleanup
docker rmi "$DOCKER_IMG" >/dev/null 2>&1 || true
fi
echo ""
echo "--- npm Install + Integrity Verify ---"
# Test real npm client against NORA (not just curl)
NPM_TEST_DIR=$(mktemp -d)
cd "$NPM_TEST_DIR"
# Create minimal package.json
cat > package.json << 'PKGJSON'
{
"name": "nora-smoke-test",
"version": "1.0.0",
"dependencies": {
"chalk": "5.4.1"
}
}
PKGJSON
# npm install using NORA as registry
if npm install --registry "$BASE/npm/" --prefer-online --no-audit --no-fund >/dev/null 2>&1; then
pass "npm install chalk via NORA registry"
else
fail "npm install chalk via NORA registry"
fi
# Verify package was installed
if [ -f "node_modules/chalk/package.json" ]; then
INSTALLED_VER=$(python3 -c "import json; print(json.load(open('node_modules/chalk/package.json'))['version'])" 2>/dev/null || echo "")
if [ "$INSTALLED_VER" = "5.4.1" ]; then
pass "npm installed correct version (5.4.1)"
else
fail "npm installed wrong version: $INSTALLED_VER"
fi
else
fail "npm node_modules/chalk not found"
fi
# Verify integrity: check that package-lock.json records a sha512 integrity hash
if [ -f "package-lock.json" ]; then
INTEGRITY=$(python3 -c "
import json
lock = json.load(open('package-lock.json'))
pkgs = lock.get('packages', {})
chalk = pkgs.get('node_modules/chalk', pkgs.get('chalk', {}))
print(chalk.get('integrity', ''))
" 2>/dev/null || echo "")
if echo "$INTEGRITY" | grep -q "^sha512-"; then
pass "npm integrity hash present (sha512)"
else
fail "npm integrity hash missing: $INTEGRITY"
fi
else
fail "npm package-lock.json not created"
fi
cd /tmp
rm -rf "$NPM_TEST_DIR"
echo ""
echo "--- Upstream Timeout Handling ---"
# Verify that requesting a non-existent package from upstream returns 404 quickly instead of hanging
TIMEOUT_START=$(date +%s)
TIMEOUT_RESULT=$(curl -s -o /dev/null -w "%{http_code}" --max-time 15 \
"$BASE/npm/@nora-smoke-test/nonexistent-package-xyz-12345")
TIMEOUT_END=$(date +%s)
TIMEOUT_DURATION=$((TIMEOUT_END - TIMEOUT_START))
if [ "$TIMEOUT_RESULT" = "404" ]; then
pass "upstream 404 returned correctly"
else
fail "upstream returned $TIMEOUT_RESULT, expected 404"
fi
if [ "$TIMEOUT_DURATION" -lt 10 ]; then
pass "upstream 404 returned in ${TIMEOUT_DURATION}s (< 10s)"
else
fail "upstream 404 took ${TIMEOUT_DURATION}s (too slow, retry may hang)"
fi
echo ""
echo "--- Mirror CLI ---"
# Create a minimal lockfile