quality: MSRV, tarpaulin config, proptest for parsers (#84)

* fix: proxy dedup, multi-registry GC, TOCTOU and credential hygiene

- Deduplicate proxy_fetch/proxy_fetch_text into generic proxy_fetch_core
  with response extractor closure (removes ~50 lines of copy-paste)
- GC now scans all registry prefixes, not just docker/
- Add tracing::warn to fire-and-forget cache writes in docker proxy
- Mark S3 credentials as skip_serializing to prevent accidental leaks
- Remove TOCTOU race in LocalStorage get/delete (redundant exists check)
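The extractor-closure shape from the first bullet can be sketched as below; `proxy_fetch_core`, the `Response` stub, and the synchronous signatures are illustrative stand-ins (the real functions are async HTTP calls), not the crate's actual API.

```rust
// Illustrative stub; the real code works with an HTTP response, not this struct.
struct Response {
    bytes: Vec<u8>,
}

// One generic core owns the shared fetch/caching logic; each caller passes a
// closure that extracts the representation it needs from the response.
fn proxy_fetch_core<T>(url: &str, extract: impl FnOnce(Response) -> T) -> T {
    // The real implementation performs the upstream request and cache write
    // here; stubbed (echoing the URL bytes) so the sketch is self-contained.
    let resp = Response {
        bytes: url.as_bytes().to_vec(),
    };
    extract(resp)
}

// The two former copy-paste twins collapse into thin wrappers.
fn proxy_fetch(url: &str) -> Vec<u8> {
    proxy_fetch_core(url, |r| r.bytes)
}

fn proxy_fetch_text(url: &str) -> String {
    proxy_fetch_core(url, |r| String::from_utf8_lossy(&r.bytes).into_owned())
}
```

The closure parameter is the only point of variation, so any future `proxy_fetch_json`-style variant is one wrapper, not another copy of the fetch logic.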

* chore: clean up root directory structure

- Move Dockerfile.astra and Dockerfile.redos to deploy/ (niche builds
  should not clutter the project root)
- Harden .gitignore to exclude session files, working notes, and
  internal review scripts

* refactor(metrics): replace 13 atomic fields with CounterMap

Per-registry download/upload counters were 13 individual AtomicU64
fields, each duplicated across new(), with_persistence(), save(),
record_download(), record_upload(), and get_registry_* (6 touch points
per counter). Adding a new registry required changes in 6+ places.

Now uses CounterMap (HashMap<String, AtomicU64>) for per-registry
counters. Adding a new registry = one entry in REGISTRIES const.
Added Go registry to REGISTRIES, gaining go metrics for free.
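A minimal sketch of that shape, assuming hypothetical method names (`incr`, `get`) and an illustrative registry list rather than the repo's actual `REGISTRIES` contents:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative registry list; the real const lives in the metrics module.
const REGISTRIES: &[&str] = &["docker", "npm", "pypi", "maven", "cargo", "go"];

// One map replaces N individual AtomicU64 fields.
struct CounterMap(HashMap<String, AtomicU64>);

impl CounterMap {
    fn new() -> Self {
        Self(
            REGISTRIES
                .iter()
                .map(|r| (r.to_string(), AtomicU64::new(0)))
                .collect(),
        )
    }

    fn incr(&self, registry: &str) {
        if let Some(c) = self.0.get(registry) {
            c.fetch_add(1, Ordering::Relaxed);
        }
    }

    fn get(&self, registry: &str) -> u64 {
        self.0.get(registry).map_or(0, |c| c.load(Ordering::Relaxed))
    }
}
```

Because the map is populated once at construction and only the `AtomicU64` values mutate afterward, shared references suffice and no lock is needed on the hot path.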

* quality: add MSRV, tarpaulin config, proptest for parsers

- Set rust-version = 1.75 in workspace Cargo.toml (MSRV policy)
- Add tarpaulin.toml: llvm engine, fail-under=25, json+html output
- Add coverage/ to .gitignore
- Update CI to use tarpaulin.toml instead of inline flags
- Add proptest dev-dependency and property tests:
  - validation.rs: 16 tests (never-panics + invariants for all 4 validators)
  - pypi.rs: 5 tests (extract_filename never-panics + format assertions)
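The core property those suites check is "never panics on arbitrary input". The real tests use proptest's `proptest!` macro over generated strings; the dependency-free sketch below illustrates the same property against a hypothetical stand-in validator, not the actual `validation.rs` code:

```rust
// Stand-in validator; the real validation.rs functions differ.
fn validate_storage_key(key: &str) -> Result<(), &'static str> {
    if key.is_empty() || key.len() > 1024 {
        return Err("bad length");
    }
    if key.split('/').any(|seg| seg.is_empty() || seg == "." || seg == "..") {
        return Err("bad segment");
    }
    Ok(())
}

// The never-panics property: every input yields Ok or Err, never a panic.
// proptest generates the inputs; here a fixed adversarial list stands in.
fn never_panics_on(inputs: &[&str]) -> bool {
    inputs.iter().all(|s| {
        std::panic::catch_unwind(|| {
            let _ = validate_storage_key(s);
        })
        .is_ok()
    })
}
```

proptest additionally shrinks any failing input to a minimal counterexample, which is the main payoff over a hand-rolled loop like this one.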

* test: add unit tests for 14 modules, coverage 21% → 30%

Add 149 new tests across auth, backup, gc, metrics, mirror parsers,
docker (manifest detection, session cleanup, metadata serde),
docker_auth (token cache), maven, npm, pypi (normalize, rewrite, extract),
raw (content-type guessing), request_id, and s3 (URI encoding).

Update tarpaulin.toml: raise fail-under to 30, exclude UI/main from
coverage reporting as they require integration tests.

* bench: add criterion benchmarks for validation and manifest parsing

Add parsing benchmark suite with 14 benchmarks covering:
- Storage key, Docker name, digest, and reference validation
- Docker manifest media type detection (v2, OCI index, minimal, invalid)

Run with: cargo bench --package nora-registry --bench parsing
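criterion wires benchmarks up via its `criterion_group!`/`criterion_main!` macros; as a crate-free illustration of what one such micro-benchmark measures, with a hypothetical digest validator standing in for the real one:

```rust
use std::time::Instant;

// Hypothetical stand-in for the digest validator being benchmarked.
fn validate_digest(s: &str) -> bool {
    s.strip_prefix("sha256:")
        .map_or(false, |h| h.len() == 64 && h.bytes().all(|b| b.is_ascii_hexdigit()))
}

// Poor man's bench loop; criterion layers warm-up, statistics, and outlier
// detection on top of the same basic idea.
fn bench(name: &str, iters: u32, f: impl Fn() -> bool) -> u128 {
    let start = Instant::now();
    for _ in 0..iters {
        std::hint::black_box(f()); // keep the call from being optimized away
    }
    let ns = start.elapsed().as_nanos() / u128::from(iters);
    println!("{name}: ~{ns} ns/iter");
    ns
}
```

`black_box` matters here: without it the optimizer may delete a pure call whose result is unused, and the loop would measure nothing.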

* test: add 48 integration tests via tower oneshot

Add integration tests for all HTTP handlers:
- health (3), raw (7), cargo (4), maven (4), request_id (2)
- pypi (5), npm (5), docker (12), auth (6)

Create test_helpers.rs with TestContext pattern.
Add tower and http-body-util dev-dependencies.
Raise tarpaulin fail-under from 30 to 40.

Coverage: 29.5% to 43.3% (2089/4825 lines)

* fix: clean clippy warnings in tests, fix flaky audit test

Add #[allow(clippy::unwrap_used)] to 18 test modules.
Fix 3 additional clippy lints: writeln_empty_string, needless_update,
unnecessary_get_then_check.
Fix flaky audit test: replace single sleep(50ms) with retry loop (max 1s).
Prefix unused token variable with underscore.

cargo clippy --all-targets now reports 0 warnings (was 245 errors).
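The de-flaking fix swaps a fixed sleep for a bounded polling loop. A generic sketch of that pattern (names are illustrative, not the repo's helper):

```rust
use std::time::{Duration, Instant};

// Poll `cond` with short sleeps until it holds or `timeout` elapses.
// Unlike a single fixed sleep, this passes as soon as the condition is
// true and only pays the full timeout in the failure case.
fn wait_until(timeout: Duration, mut cond: impl FnMut() -> bool) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if cond() {
            return true;
        }
        std::thread::sleep(Duration::from_millis(10));
    }
    cond() // one final check at the deadline
}
```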
Committed 2026-04-05 10:01:50 +03:00 via GitHub.
Parent 35a9e34a3e, commit ac3a8a7c43.
37 changed files with 3452 additions and 130 deletions


@@ -1322,3 +1322,599 @@ async fn update_metadata_on_pull(storage: Storage, meta_key: String) {
        let _ = storage.put(&meta_key, &json).await;
    }
}
#[cfg(test)]
#[allow(clippy::unwrap_used)]
mod tests {
    use super::*;

    #[test]
    fn test_image_metadata_default() {
        let meta = ImageMetadata::default();
        assert_eq!(meta.push_timestamp, 0);
        assert_eq!(meta.last_pulled, 0);
        assert_eq!(meta.downloads, 0);
        assert_eq!(meta.size_bytes, 0);
        assert_eq!(meta.os, "");
        assert_eq!(meta.arch, "");
        assert!(meta.variant.is_none());
        assert!(meta.layers.is_empty());
    }

    #[test]
    fn test_image_metadata_serialization() {
        let meta = ImageMetadata {
            push_timestamp: 1700000000,
            last_pulled: 1700001000,
            downloads: 42,
            size_bytes: 1024000,
            os: "linux".to_string(),
            arch: "amd64".to_string(),
            variant: None,
            layers: vec![LayerInfo {
                digest: "sha256:abc123".to_string(),
                size: 512000,
            }],
        };
        let json = serde_json::to_string(&meta).unwrap();
        assert!(json.contains("\"os\":\"linux\""));
        assert!(json.contains("\"arch\":\"amd64\""));
        assert!(!json.contains("variant")); // None => skipped
    }

    #[test]
    fn test_image_metadata_with_variant() {
        let meta = ImageMetadata {
            variant: Some("v8".to_string()),
            ..Default::default()
        };
        let json = serde_json::to_string(&meta).unwrap();
        assert!(json.contains("\"variant\":\"v8\""));
    }

    #[test]
    fn test_image_metadata_deserialization() {
        let json = r#"{
            "push_timestamp": 1700000000,
            "last_pulled": 0,
            "downloads": 5,
            "size_bytes": 2048,
            "os": "linux",
            "arch": "arm64",
            "variant": "v8",
            "layers": [
                {"digest": "sha256:aaa", "size": 1024},
                {"digest": "sha256:bbb", "size": 1024}
            ]
        }"#;
        let meta: ImageMetadata = serde_json::from_str(json).unwrap();
        assert_eq!(meta.os, "linux");
        assert_eq!(meta.arch, "arm64");
        assert_eq!(meta.variant, Some("v8".to_string()));
        assert_eq!(meta.layers.len(), 2);
        assert_eq!(meta.layers[0].digest, "sha256:aaa");
        assert_eq!(meta.layers[1].size, 1024);
    }

    #[test]
    fn test_layer_info_serialization_roundtrip() {
        let layer = LayerInfo {
            digest: "sha256:deadbeef".to_string(),
            size: 999999,
        };
        let json = serde_json::to_value(&layer).unwrap();
        let restored: LayerInfo = serde_json::from_value(json).unwrap();
        assert_eq!(layer.digest, restored.digest);
        assert_eq!(layer.size, restored.size);
    }

    #[test]
    fn test_cleanup_expired_sessions_empty() {
        let sessions: RwLock<HashMap<String, UploadSession>> = RwLock::new(HashMap::new());
        cleanup_expired_sessions(&sessions);
        assert_eq!(sessions.read().len(), 0);
    }

    #[test]
    fn test_cleanup_expired_sessions_fresh() {
        let sessions: RwLock<HashMap<String, UploadSession>> = RwLock::new(HashMap::new());
        sessions.write().insert(
            "uuid-1".to_string(),
            UploadSession {
                data: vec![1, 2, 3],
                name: "test/image".to_string(),
                created_at: std::time::Instant::now(),
            },
        );
        cleanup_expired_sessions(&sessions);
        assert_eq!(sessions.read().len(), 1); // not expired
    }

    #[test]
    fn test_max_upload_sessions_default() {
        // Without env var set, should return default
        let max = max_upload_sessions();
        assert!(max > 0);
        assert_eq!(max, DEFAULT_MAX_UPLOAD_SESSIONS);
    }

    #[test]
    fn test_max_session_size_default() {
        let max = max_session_size();
        assert_eq!(max, DEFAULT_MAX_SESSION_SIZE_MB * 1024 * 1024);
    }

    // --- detect_manifest_media_type tests ---

    #[test]
    fn test_detect_manifest_explicit_media_type() {
        let manifest = serde_json::json!({
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "schemaVersion": 2
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(
            result,
            "application/vnd.docker.distribution.manifest.v2+json"
        );
    }

    #[test]
    fn test_detect_manifest_oci_media_type() {
        let manifest = serde_json::json!({
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "schemaVersion": 2
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(result, "application/vnd.oci.image.manifest.v1+json");
    }

    #[test]
    fn test_detect_manifest_schema_v1() {
        let manifest = serde_json::json!({
            "schemaVersion": 1,
            "name": "test/image"
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(
            result,
            "application/vnd.docker.distribution.manifest.v1+json"
        );
    }

    #[test]
    fn test_detect_manifest_docker_v2_from_config() {
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "config": {
                "mediaType": "application/vnd.docker.container.image.v1+json",
                "digest": "sha256:abc"
            }
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(
            result,
            "application/vnd.docker.distribution.manifest.v2+json"
        );
    }

    #[test]
    fn test_detect_manifest_oci_from_config() {
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "config": {
                "mediaType": "application/vnd.oci.image.config.v1+json",
                "digest": "sha256:abc"
            }
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(result, "application/vnd.oci.image.manifest.v1+json");
    }

    #[test]
    fn test_detect_manifest_no_config_media_type() {
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "config": {
                "digest": "sha256:abc"
            }
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(
            result,
            "application/vnd.docker.distribution.manifest.v2+json"
        );
    }

    #[test]
    fn test_detect_manifest_index() {
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "manifests": [
                {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}}
            ]
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(result, "application/vnd.oci.image.index.v1+json");
    }

    #[test]
    fn test_detect_manifest_invalid_json() {
        let result = detect_manifest_media_type(b"not json at all");
        assert_eq!(
            result,
            "application/vnd.docker.distribution.manifest.v2+json"
        );
    }

    #[test]
    fn test_detect_manifest_empty() {
        let result = detect_manifest_media_type(b"{}");
        assert_eq!(
            result,
            "application/vnd.docker.distribution.manifest.v2+json"
        );
    }

    #[test]
    fn test_detect_manifest_helm_chart() {
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "config": {
                "mediaType": "application/vnd.cncf.helm.config.v1+json",
                "digest": "sha256:abc"
            }
        });
        let result = detect_manifest_media_type(manifest.to_string().as_bytes());
        assert_eq!(result, "application/vnd.oci.image.manifest.v1+json");
    }
}
#[cfg(test)]
#[allow(clippy::unwrap_used)]
mod integration_tests {
    use crate::test_helpers::{body_bytes, create_test_context, send};
    use axum::body::Body;
    use axum::http::{header, Method, StatusCode};
    use sha2::Digest;

    #[tokio::test]
    async fn test_docker_v2_check() {
        let ctx = create_test_context();
        let resp = send(&ctx.app, Method::GET, "/v2/", Body::empty()).await;
        assert_eq!(resp.status(), StatusCode::OK);
    }

    #[tokio::test]
    async fn test_docker_catalog_empty() {
        let ctx = create_test_context();
        let resp = send(&ctx.app, Method::GET, "/v2/_catalog", Body::empty()).await;
        assert_eq!(resp.status(), StatusCode::OK);
        let body = body_bytes(resp).await;
        let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
        assert!(json["repositories"].as_array().unwrap().is_empty());
    }

    #[tokio::test]
    async fn test_docker_put_get_manifest() {
        let ctx = create_test_context();
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "config": {
                "mediaType": "application/vnd.docker.container.image.v1+json",
                "size": 0,
                "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
            },
            "layers": []
        });
        let manifest_bytes = serde_json::to_vec(&manifest).unwrap();
        let put_resp = send(
            &ctx.app,
            Method::PUT,
            "/v2/alpine/manifests/latest",
            Body::from(manifest_bytes.clone()),
        )
        .await;
        assert_eq!(put_resp.status(), StatusCode::CREATED);
        let digest_header = put_resp
            .headers()
            .get("docker-content-digest")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        assert!(digest_header.starts_with("sha256:"));
        let get_resp = send(
            &ctx.app,
            Method::GET,
            "/v2/alpine/manifests/latest",
            Body::empty(),
        )
        .await;
        assert_eq!(get_resp.status(), StatusCode::OK);
        let get_digest = get_resp
            .headers()
            .get("docker-content-digest")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        assert_eq!(get_digest, digest_header);
        let body = body_bytes(get_resp).await;
        assert_eq!(body.as_ref(), manifest_bytes.as_slice());
    }

    #[tokio::test]
    async fn test_docker_list_tags() {
        let ctx = create_test_context();
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "config": {
                "mediaType": "application/vnd.docker.container.image.v1+json",
                "size": 0,
                "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
            },
            "layers": []
        });
        send(
            &ctx.app,
            Method::PUT,
            "/v2/alpine/manifests/latest",
            Body::from(serde_json::to_vec(&manifest).unwrap()),
        )
        .await;
        let list_resp = send(&ctx.app, Method::GET, "/v2/alpine/tags/list", Body::empty()).await;
        assert_eq!(list_resp.status(), StatusCode::OK);
        let body = body_bytes(list_resp).await;
        let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
        assert_eq!(json["name"], "alpine");
        let tags = json["tags"].as_array().unwrap();
        assert!(tags.contains(&serde_json::json!("latest")));
    }

    #[tokio::test]
    async fn test_docker_delete_manifest() {
        let ctx = create_test_context();
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "config": {
                "mediaType": "application/vnd.docker.container.image.v1+json",
                "size": 0,
                "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
            },
            "layers": []
        });
        let put_resp = send(
            &ctx.app,
            Method::PUT,
            "/v2/alpine/manifests/latest",
            Body::from(serde_json::to_vec(&manifest).unwrap()),
        )
        .await;
        let digest = put_resp
            .headers()
            .get("docker-content-digest")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        let del = send(
            &ctx.app,
            Method::DELETE,
            &format!("/v2/alpine/manifests/{}", digest),
            Body::empty(),
        )
        .await;
        assert_eq!(del.status(), StatusCode::ACCEPTED);
    }

    #[tokio::test]
    async fn test_docker_monolithic_upload() {
        let ctx = create_test_context();
        let blob_data = b"test blob data";
        let digest = format!("sha256:{}", hex::encode(sha2::Sha256::digest(blob_data)));
        let post_resp = send(
            &ctx.app,
            Method::POST,
            "/v2/alpine/blobs/uploads/",
            Body::empty(),
        )
        .await;
        assert_eq!(post_resp.status(), StatusCode::ACCEPTED);
        let location = post_resp
            .headers()
            .get("location")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        let uuid = location.rsplit('/').next().unwrap();
        let put_url = format!("/v2/alpine/blobs/uploads/{}?digest={}", uuid, digest);
        let put_resp = send(&ctx.app, Method::PUT, &put_url, Body::from(&blob_data[..])).await;
        assert_eq!(put_resp.status(), StatusCode::CREATED);
    }

    #[tokio::test]
    async fn test_docker_chunked_upload() {
        let ctx = create_test_context();
        let blob_data = b"test chunked blob";
        let digest = format!("sha256:{}", hex::encode(sha2::Sha256::digest(blob_data)));
        let post_resp = send(
            &ctx.app,
            Method::POST,
            "/v2/alpine/blobs/uploads/",
            Body::empty(),
        )
        .await;
        assert_eq!(post_resp.status(), StatusCode::ACCEPTED);
        let location = post_resp
            .headers()
            .get("location")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        let uuid = location.rsplit('/').next().unwrap();
        let patch_url = format!("/v2/alpine/blobs/uploads/{}", uuid);
        let patch_resp = send(
            &ctx.app,
            Method::PATCH,
            &patch_url,
            Body::from(&blob_data[..]),
        )
        .await;
        assert_eq!(patch_resp.status(), StatusCode::ACCEPTED);
        let put_url = format!("/v2/alpine/blobs/uploads/{}?digest={}", uuid, digest);
        let put_resp = send(&ctx.app, Method::PUT, &put_url, Body::empty()).await;
        assert_eq!(put_resp.status(), StatusCode::CREATED);
    }

    #[tokio::test]
    async fn test_docker_check_blob() {
        let ctx = create_test_context();
        let blob_data = b"test blob for head";
        let digest = format!("sha256:{}", hex::encode(sha2::Sha256::digest(blob_data)));
        let post_resp = send(
            &ctx.app,
            Method::POST,
            "/v2/alpine/blobs/uploads/",
            Body::empty(),
        )
        .await;
        let location = post_resp
            .headers()
            .get("location")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        let uuid = location.rsplit('/').next().unwrap();
        let put_url = format!("/v2/alpine/blobs/uploads/{}?digest={}", uuid, digest);
        send(&ctx.app, Method::PUT, &put_url, Body::from(&blob_data[..])).await;
        let head_url = format!("/v2/alpine/blobs/{}", digest);
        let head_resp = send(&ctx.app, Method::HEAD, &head_url, Body::empty()).await;
        assert_eq!(head_resp.status(), StatusCode::OK);
        let cl = head_resp
            .headers()
            .get(header::CONTENT_LENGTH)
            .unwrap()
            .to_str()
            .unwrap()
            .parse::<usize>()
            .unwrap();
        assert_eq!(cl, blob_data.len());
    }

    #[tokio::test]
    async fn test_docker_download_blob() {
        let ctx = create_test_context();
        let blob_data = b"test blob for download";
        let digest = format!("sha256:{}", hex::encode(sha2::Sha256::digest(blob_data)));
        let post_resp = send(
            &ctx.app,
            Method::POST,
            "/v2/alpine/blobs/uploads/",
            Body::empty(),
        )
        .await;
        let location = post_resp
            .headers()
            .get("location")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        let uuid = location.rsplit('/').next().unwrap();
        let put_url = format!("/v2/alpine/blobs/uploads/{}?digest={}", uuid, digest);
        send(&ctx.app, Method::PUT, &put_url, Body::from(&blob_data[..])).await;
        let get_url = format!("/v2/alpine/blobs/{}", digest);
        let get_resp = send(&ctx.app, Method::GET, &get_url, Body::empty()).await;
        assert_eq!(get_resp.status(), StatusCode::OK);
        let body = body_bytes(get_resp).await;
        assert_eq!(body.as_ref(), &blob_data[..]);
    }

    #[tokio::test]
    async fn test_docker_blob_not_found() {
        let ctx = create_test_context();
        let fake_digest = "sha256:0000000000000000000000000000000000000000000000000000000000000000";
        let head_url = format!("/v2/alpine/blobs/{}", fake_digest);
        let resp = send(&ctx.app, Method::HEAD, &head_url, Body::empty()).await;
        assert_eq!(resp.status(), StatusCode::NOT_FOUND);
    }

    #[tokio::test]
    async fn test_docker_delete_blob() {
        let ctx = create_test_context();
        let blob_data = b"test blob for delete";
        let digest = format!("sha256:{}", hex::encode(sha2::Sha256::digest(blob_data)));
        let post_resp = send(
            &ctx.app,
            Method::POST,
            "/v2/alpine/blobs/uploads/",
            Body::empty(),
        )
        .await;
        let location = post_resp
            .headers()
            .get("location")
            .unwrap()
            .to_str()
            .unwrap()
            .to_string();
        let uuid = location.rsplit('/').next().unwrap();
        let put_url = format!("/v2/alpine/blobs/uploads/{}?digest={}", uuid, digest);
        send(&ctx.app, Method::PUT, &put_url, Body::from(&blob_data[..])).await;
        let delete_url = format!("/v2/alpine/blobs/{}", digest);
        let delete_resp = send(&ctx.app, Method::DELETE, &delete_url, Body::empty()).await;
        assert_eq!(delete_resp.status(), StatusCode::ACCEPTED);
    }

    #[tokio::test]
    async fn test_docker_namespaced_routes() {
        let ctx = create_test_context();
        let manifest = serde_json::json!({
            "schemaVersion": 2,
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "config": {
                "mediaType": "application/vnd.docker.container.image.v1+json",
                "size": 0,
                "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
            },
            "layers": []
        });
        let put_resp = send(
            &ctx.app,
            Method::PUT,
            "/v2/library/alpine/manifests/latest",
            Body::from(serde_json::to_vec(&manifest).unwrap()),
        )
        .await;
        assert_eq!(put_resp.status(), StatusCode::CREATED);
        assert!(put_resp
            .headers()
            .get("docker-content-digest")
            .unwrap()
            .to_str()
            .unwrap()
            .starts_with("sha256:"));
    }
}