Goal
Build an extraction pipeline that reads paper abstracts (and full text where available) and
produces structured experiment artifacts with complete metadata. The pipeline uses Claude to
parse scientific text into the structured schemas defined in atl-ex-01-SCHM.
Each extraction produces one or more experiment artifacts per paper — a single paper may
describe multiple experiments (e.g., a GWAS study followed by functional validation in mice).
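The 1:N paper-to-experiments mapping above can be sketched as follows. This is a minimal illustration of the output shape, not the real pipeline: `parse_experiment_records` is a hypothetical stand-in for the parsing step that follows the actual Bedrock Claude call, and the field names are assumptions drawn from the criteria below.

```python
import json

def parse_experiment_records(pmid: str, model_output: str) -> list[dict]:
    """Parse the model's JSON output into one record per experiment.

    A single paper may yield several experiments (e.g. a GWAS study
    followed by functional validation in mice), so the return value
    is always a list.
    """
    records = json.loads(model_output)
    return [
        {
            "pmid": pmid,
            "experiment_type": r.get("experiment_type"),
            "confidence": r.get("confidence"),
        }
        for r in records
    ]

# Example model output for a two-experiment paper:
output = json.dumps([
    {"experiment_type": "genetic_association", "confidence": 0.9},
    {"experiment_type": "animal_model", "confidence": 0.8},
])
records = parse_experiment_records("12345678", output)
```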
Acceptance Criteria
☐ experiment_extractor.py module with extract_experiments(pmid) function
☐ Uses Bedrock Claude Sonnet for extraction quality
☐ Outputs structured JSON matching experiment type schemas
☐ Handles multi-experiment papers (returns list of experiment records)
☐ Registers each experiment as an artifact via register_artifact(artifact_type='experiment')
☐ Creates derives_from link to paper artifact
☐ Stores extraction confidence score per experiment
☐ Handles missing data gracefully (marks fields as null with confidence reduction)
☐ Rate-limited to respect Bedrock quotas
☐ Batch mode: extract_from_all_papers(limit=N) for bulk processing
☐ Logging of extraction successes, failures, and confidence distribution
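The Bedrock rate-limiting criterion above could be met with a retry wrapper along these lines. This is a sketch only: `RateLimitError` stands in for whatever throttling exception the real client raises, and the delay constants are illustrative rather than tuned to actual Bedrock quotas.

```python
import time

class RateLimitError(Exception):
    """Placeholder for the client's throttling exception."""

def invoke_with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff on rate-limit errors.

    Delays grow as base_delay * 2**attempt (1s, 2s, 4s, ...); the final
    failure is re-raised so the caller can log it.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injected so tests can run without real delays; in production it defaults to `time.sleep`.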
Approach
Build prompt template that includes the target schema and paper text
Implement structured output parsing with validation
Handle the paper-to-experiments mapping (1:N)
Register artifacts and create provenance links
Add batch processing with progress tracking
Test on 20 papers manually, verify extraction quality
Dependencies
atl-ex-01-SCHM — Experiment schemas define the extraction target
Dependents
atl-ex-03-LINK — Extracted experiments need KG entity linking
atl-ex-04-QUAL — Quality scoring evaluates extraction output
atl-ex-07-BKFL — Backfill reuses the pipeline
Work Log
2026-04-15 — Slot 0
- Read existing experiment_extractor.py (already existed with basic structure)
- Added 'experiment' to ARTIFACT_TYPES in scidex/atlas/artifact_registry.py
- Rewrote EXTRACTION_SYSTEM_PROMPT to match JSON schema field names from schemas/experiments/ (base_experiment.json, genetic_association.json, gene_expression.json, animal_model.json, cell_biology.json, clinical.json, protein_interaction.json)
- Added _compute_extraction_confidence() for graceful missing-data handling (reduces confidence based on null fields per spec requirement)
- Added quota-aware Bedrock rate limiting: _wait_for_bedrock_quota, _is_rate_limit_error, _record_bedrock_failure, _record_bedrock_success with exponential backoff
- Fixed JSONDecodeError scope bug (content variable was undefined in except block)
- Updated register_experiment_artifact to store full schema-aligned metadata (entities, source, tissue, species, extraction_metadata, ambiguities)
- Verified syntax with py_compile and module import test
- Committed and pushed to push-token remote
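The `_compute_extraction_confidence()` behavior described above (confidence reduced based on null fields) might look roughly like this. The penalty weight and rounding are illustrative assumptions, not the production values.

```python
def compute_extraction_confidence(record: dict, required_fields: list[str],
                                  base: float = 1.0, penalty: float = 0.5) -> float:
    """Reduce a base confidence by the fraction of required fields left null.

    A fully populated record keeps the base confidence; a record with all
    required fields null loses `penalty` (here, half) of it.
    """
    if not required_fields:
        return base
    null_fraction = sum(record.get(f) is None for f in required_fields) / len(required_fields)
    return round(base * (1 - penalty * null_fraction), 3)
```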
Acceptance Criteria Status:
☑ experiment_extractor.py module with extract_experiments(pmid) function
☑ Uses Bedrock Claude Sonnet for extraction quality
☑ Outputs structured JSON matching experiment type schemas
☑ Handles multi-experiment papers (returns list of experiment records)
☑ Registers each experiment as an artifact via register_artifact(artifact_type='experiment')
☑ Creates derives_from link to paper artifact
☑ Stores extraction confidence score per experiment
☑ Handles missing data gracefully (marks fields as null with confidence reduction)
☑ Rate-limited to respect Bedrock quotas (quota-aware with exponential backoff)
☑ Batch mode: extract_from_all_papers(limit=N) for bulk processing
☑ Logging of extraction successes, failures, and confidence distribution