[Atlas] Build LLM extraction pipeline from paper abstracts and full text (status: done; coding: 7, reasoning: 6)

Build a Claude-powered pipeline that reads papers and produces structured experiment artifacts with full metadata.

## REOPENED TASK — CRITICAL CONTEXT

This task was previously marked 'done', but the audit could not verify that the work actually landed on main. The original work may have been:

- Lost to an orphan branch / failed push
- Only a spec-file edit (no code changes)
- Already addressed by other agents in the meantime
- Made obsolete by subsequent work

**Before doing anything else:**

1. **Re-evaluate the task in light of CURRENT main state.** Read the spec and the relevant files on origin/main NOW. The original task may have been written against a state of the code that no longer exists.
2. **Verify the task still advances SciDEX's aims.** If the system has evolved past the need for this work (different architecture, different priorities), close the task with reason "obsolete: " instead of doing it.
3. **Check if it's already done.** Run `git log --grep=''` and read the related commits. If real work landed, complete the task with `--no-sha-check --summary 'Already done in '`.
4. **Make sure your changes don't regress recent functionality.** Many agents have been working on this codebase. Before committing, run `git log --since='24 hours ago' -- ` to see what changed in your area, and verify you don't undo any of it.
5. **Stay scoped.** Only do what this specific task asks for. Do not refactor, do not "fix" unrelated issues, and do not add features that weren't requested. Scope creep at this point is regression risk.

If you cannot do this task safely (because it would regress, conflict with current direction, or the requirements no longer apply), escalate via `orchestra escalate` with a clear explanation instead of committing.

Completion Notes

Work verified on main. Code committed in 5e964d0b6 (281 lines). Squash-merged to main in 3517c2356. Spec work-log in 8861d5442. 428 extracted experiments, 647 experiment artifacts.

Git Commits (15)

Squash merge: orchestra/task/atl-ex-0-api-endpoints-for-experiment-browsing-se (7 commits) (2026-04-26)
[Atlas] Fix integration test assertions for experiment route smoke tests [task:atl-ex-08-API] (2026-04-26)
[Atlas] Fix replication route 500 + api.py route ordering + tests [task:atl-ex-08-API] (2026-04-26)
[Atlas] Fix experiment API route precedence [task:atl-ex-08-API] (2026-04-25)
[Atlas] Update spec work log for experiment API endpoints [task:atl-ex-08-API] (2026-04-25)
[Atlas] Add route-order regression test for experiment API endpoints [task:atl-ex-08-API] (2026-04-25)
[Atlas] Update spec work log for experiment API endpoints [task:atl-ex-08-API] (2026-04-25)
[Atlas] API endpoints for experiment browsing, search, and filtering [task:atl-ex-08-API] (2026-04-25)
Squash merge: atlas/atl-ex-04-QUAL-push (2 commits) (2026-04-26)
[Atlas] Update spec work log for extraction quality scoring [task:atl-ex-04-QUAL] (2026-04-25)
[Atlas] Extraction quality scoring and confidence calibration [task:atl-ex-04-QUAL] (2026-04-25)
Squash merge: orchestra/task/atl-ex-0-meta-analysis-support-aggregate-results (2 commits) (2026-04-25)
[Verify] Meta-analysis spec verified and updated — all criteria implemented [task:atl-ex-06-META] (2026-04-25)
[Atlas] Add meta-analysis module with pooled effect sizes and heterogeneity [task:atl-ex-06-META] (2026-04-25)
[Atlas] Replication tracking: clustering module + /api/experiments/replication/{entity} [task:atl-ex-05-REPL] (2026-04-25)
Spec File

Goal

Build an extraction pipeline that reads paper abstracts (and full text where available) and
produces structured experiment artifacts with complete metadata. The pipeline uses Claude to
parse scientific text into the structured schemas defined in atl-ex-01-SCHM.

Each extraction produces one or more experiment artifacts per paper — a single paper may
describe multiple experiments (e.g., a GWAS study followed by functional validation in mice).
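To make the 1:N paper-to-experiments mapping concrete, here is a hypothetical pair of records for such a paper. The field names and values below are illustrative stand-ins only; the authoritative field definitions live in the atl-ex-01-SCHM schemas.

```python
# Hypothetical result of extracting a paper that reports a GWAS followed by
# functional validation in mice. Field names are illustrative only; the
# authoritative schemas are defined in atl-ex-01-SCHM.
example_extraction = [
    {
        "experiment_type": "genetic_association",
        "species": "Homo sapiens",
        "entities": ["rs123456", "GENE1"],
        "confidence": 0.90,
    },
    {
        "experiment_type": "animal_model",
        "species": "Mus musculus",
        "entities": ["Gene1 knockout"],
        "confidence": 0.75,  # sparser reporting -> lower confidence
    },
]

# One paper yields a list of experiment records, not a single record.
assert isinstance(example_extraction, list) and len(example_extraction) == 2
```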

Acceptance Criteria

☐ experiment_extractor.py module with extract_experiments(pmid) function
☐ Uses Bedrock Claude Sonnet for extraction quality
☐ Outputs structured JSON matching experiment type schemas
☐ Handles multi-experiment papers (returns list of experiment records)
☐ Registers each experiment as an artifact via register_artifact(artifact_type='experiment')
☐ Creates derives_from link to paper artifact
☐ Stores extraction confidence score per experiment
☐ Handles missing data gracefully (marks fields as null with confidence reduction)
☐ Rate-limited to respect Bedrock quotas
☐ Batch mode: extract_from_all_papers(limit=N) for bulk processing
☐ Logging of extraction successes, failures, and confidence distribution
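The missing-data criterion can be read as a pure scoring rule. Here is a minimal sketch assuming a linear penalty per null field; the field list, base score, and penalty are illustrative assumptions, not the module's actual constants.

```python
# Illustrative guess at the expected top-level fields; not the atl-ex-01-SCHM list.
EXPECTED_FIELDS = ("experiment_type", "species", "tissue", "entities", "source")

def compute_confidence(record: dict, base: float = 1.0, penalty: float = 0.15) -> float:
    """Reduce confidence for each expected field that is missing or null.

    Sketch of the spec's 'null with confidence reduction' rule: subtract a
    fixed penalty per absent field, floored at 0.1 so sparse-but-valid
    records stay distinguishable from outright failures.
    """
    missing = sum(1 for f in EXPECTED_FIELDS if record.get(f) is None)
    return max(0.1, round(base - penalty * missing, 2))
```

For example, a record with `species` and `tissue` null scores 1.0 - 2 × 0.15 = 0.7, while a fully populated record keeps the base score of 1.0.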

Approach

  • Build prompt template that includes the target schema and paper text
  • Implement structured output parsing with validation
  • Handle the paper-to-experiments mapping (1:N)
  • Register artifacts and create provenance links
  • Add batch processing with progress tracking
  • Test on 20 papers manually, verify extraction quality
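The "structured output parsing with validation" step above can be sketched as follows. The assumptions here (the model wraps its JSON array in surrounding prose; `experiment_type` and `entities` are the minimum required keys) are illustrative, not the pipeline's actual contract.

```python
import json
import re

REQUIRED_KEYS = {"experiment_type", "entities"}  # illustrative minimum

def parse_experiments(model_output: str) -> list[dict]:
    """Extract the first JSON array from a model response and validate it.

    Sketch only: tolerates prose around the JSON and rejects records
    missing required keys rather than silently keeping them.
    """
    match = re.search(r"\[.*\]", model_output, re.DOTALL)
    if not match:
        raise ValueError("no JSON array found in model output")
    experiments = json.loads(match.group(0))
    for i, exp in enumerate(experiments):
        missing = REQUIRED_KEYS - exp.keys()
        if missing:
            raise ValueError(f"experiment {i} missing keys: {sorted(missing)}")
    return experiments

# Example: a model answer with prose around the JSON payload.
reply = 'Here are the experiments:\n[{"experiment_type": "gene_expression", "entities": ["GENE1"]}]\nDone.'
```

`parse_experiments(reply)` strips the prose and returns the single validated record.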
Dependencies

  • atl-ex-01-SCHM — Experiment schemas define the extraction target

Dependents

  • atl-ex-03-LINK — Extracted experiments need KG entity linking
  • atl-ex-04-QUAL — Quality scoring evaluates extraction output
  • atl-ex-07-BKFL — Backfill reuses the pipeline

Work Log

2026-04-15 — Slot 0

  • Read existing experiment_extractor.py (already existed with basic structure)
  • Added 'experiment' to ARTIFACT_TYPES in scidex/atlas/artifact_registry.py
  • Rewrote EXTRACTION_SYSTEM_PROMPT to match JSON schema field names from schemas/experiments/ (base_experiment.json, genetic_association.json, gene_expression.json, animal_model.json, cell_biology.json, clinical.json, protein_interaction.json)
  • Added _compute_extraction_confidence() for graceful missing-data handling (reduces confidence based on null fields per spec requirement)
  • Added quota-aware Bedrock rate limiting: _wait_for_bedrock_quota, _is_rate_limit_error, _record_bedrock_failure, _record_bedrock_success with exponential backoff
  • Fixed JSONDecodeError scope bug (content variable was undefined in except block)
  • Updated register_experiment_artifact to store full schema-aligned metadata (entities, source, tissue, species, extraction_metadata, ambiguities)
  • Verified syntax with py_compile and module import test
  • Committed and pushed to push-token remote
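The quota-aware backoff described in the work log can be sketched independently of Bedrock. The helper below is a simplified stand-in for `_wait_for_bedrock_quota` and friends, not the committed implementation; `RuntimeError` stands in for a real throttling exception.

```python
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0, sleep=time.sleep):
    """Retry `fn` with exponential backoff on a throttling error.

    Simplified stand-in for the quota-aware Bedrock helpers named in the
    work log; RuntimeError stands in for a real ThrottlingException.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise                              # quota never recovered
            sleep(base_delay * (2 ** attempt))     # 1s, 2s, 4s, ...
    raise RuntimeError("max_retries must be >= 1")

# Fake Bedrock call: throttled twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ThrottlingException")
    return "ok"
```

Calling `call_with_backoff(flaky, sleep=lambda s: None)` returns `"ok"` on the third attempt after two backed-off retries.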
Acceptance Criteria Status:

☑ experiment_extractor.py module with extract_experiments(pmid) function
☑ Uses Bedrock Claude Sonnet for extraction quality
☑ Outputs structured JSON matching experiment type schemas
☑ Handles multi-experiment papers (returns list of experiment records)
☑ Registers each experiment as an artifact via register_artifact(artifact_type='experiment')
☑ Creates derives_from link to paper artifact
☑ Stores extraction confidence score per experiment
☑ Handles missing data gracefully (marks fields as null with confidence reduction)
☑ Rate-limited to respect Bedrock quotas (quota-aware with exponential backoff)
☑ Batch mode: extract_from_all_papers(limit=N) for bulk processing
☑ Logging of extraction successes, failures, and confidence distribution
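As an illustration of the last criterion, the confidence distribution for the batch log could be summarized with a small binning helper. This is a sketch only, not the pipeline's actual log format.

```python
from collections import Counter

def confidence_histogram(confidences, num_bins: int = 5) -> dict:
    """Bucket confidence scores into fixed-width bins for a log summary.

    Sketch: five bins over [0, 1]; a score of exactly 1.0 falls into the
    top bin rather than opening a sixth.
    """
    bins = Counter()
    for c in confidences:
        idx = min(int(c * num_bins), num_bins - 1)
        lo = idx / num_bins
        bins[f"{lo:.1f}-{lo + 1 / num_bins:.1f}"] += 1
    return dict(bins)
```

For example, `confidence_histogram([0.25, 0.7, 0.9, 1.0])` places one score in each of the 0.2-0.4 and 0.6-0.8 bins and two in 0.8-1.0.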

Payload JSON

{
  "requirements": {
    "coding": 7,
    "reasoning": 6
  }
}
