Query params:
?family=biophysical|deep_learning|statistical
?dataset=dataset-xxx (models trained on this dataset)
?hypothesis=hypothesis-xxx (models testing this hypothesis)
?sort=quality_score|created_at|parameter_count
?limit=20&offset=0

Response:
{
"models": [
{
"id": "model-biophys-microglia-v3",
"title": "Microglial Activation ODE v3",
"model_family": "biophysical",
"version_number": 3,
"is_latest": true,
"quality_score": 0.78,
"evaluation_metrics": {"rmse": 0.023, "aic": -450},
"trained_on": "dataset-allen_brain-SEA-AD",
"produced_by": "SDA-2026-04-05-xxx",
"created_at": "2026-04-05T12:00:00"
}
],
"total": 15
}

GET /models/compare?ids=model-a,model-b renders a comparison page.

Provenance requirements:
trained_on_dataset_id → at least one dataset link
produced_by_analysis_id → the analysis that created it
tests_hypothesis_id → the hypothesis it tests
Models missing required provenance get quality_score capped at 0.3 and a warning flag.
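The capping rule above can be sketched as a small check. This is a minimal illustration, not the registry's actual code: the function name enforce_provenance, the dict shape, and the provenance_warning field are assumptions; per the notes, missing pinned input artifacts also trigger the cap, which is omitted here for brevity.

```python
# Hypothetical sketch of the provenance cap; field names follow the docs above,
# but enforce_provenance and provenance_warning are illustrative names.
REQUIRED_PROVENANCE = ("trained_on_dataset_id", "produced_by_analysis_id")

def enforce_provenance(model: dict) -> dict:
    """Cap quality_score at 0.3 and flag models missing required provenance links."""
    missing = [field for field in REQUIRED_PROVENANCE if not model.get(field)]
    if missing:
        model["quality_score"] = min(model.get("quality_score", 0.0), 0.3)
        model["provenance_warning"] = "missing: " + ", ".join(missing)
    return model
```

A fully linked model passes through unchanged; a model missing either link is capped even if its raw score was higher.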
Added scidex/atlas/model_registry.py with list_models, get_model_detail, compare_models, and render_model_compare_page, backed by the PostgreSQL artifacts, model_versions, and artifact_links tables. Added four routes to api.py: GET /api/models, GET /api/models/compare, GET /api/models/{model_id}, and GET /models/compare (HTML). Provenance enforcement caps quality_score at 0.3 for models missing trained_on_dataset_id, produced_by_analysis_id, or pinned input artifacts. Verified that list_models() returns 9 model artifacts and that metric comparison works end-to-end.
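The filter/sort/paging semantics of the query params can be illustrated with an in-memory sketch. The real list_models is SQL-backed per the notes above; this toy version only shows the intended behavior of family, sort, limit, and offset, and the descending sort order is an assumption.

```python
# Illustrative in-memory version of the list_models paging semantics;
# the production implementation queries PostgreSQL instead.
from operator import itemgetter

def list_models(rows, family=None, sort="quality_score", limit=20, offset=0):
    """Filter by model_family, sort descending on the chosen key, and page."""
    if family:
        rows = [r for r in rows if r.get("model_family") == family]
    rows = sorted(rows, key=itemgetter(sort), reverse=True)
    # total reflects the filtered set, matching the "total" field in the response
    return {"models": rows[offset:offset + limit], "total": len(rows)}
```

For example, family="biophysical" with the default sort returns the highest-scoring biophysical models first, with total counting all matches rather than just the current page.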