Quest: Resource Intelligence — Value-Based Allocation
ID: q-resource-intelligence
Layer: Senate
Priority: 88
Status: active
Core idea
Continually evaluate the relative value of every quest, task, hypothesis,
gap, and artifact. Allocate LLM tokens, CPU, disk, and agent slots
proportionally to expected value. Create a feedback loop: measure actual
impact per token spent, update the EV model, improve allocation over time.
EV formula
EV(item) = (importance × tractability × downstream_impact) / estimated_cost
| Signal | Source |
|---|---|
| importance | quest priority, task priority, composite_score, gap importance |
| tractability | feasibility_score, or default 0.7 for tasks |
| downstream_impact | dependent_hypothesis_count (gaps), log(1+composite×10) (hypotheses) |
| estimated_cost | historical avg tokens per similar completion from cost_ledger |
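As a sketch, the EV formula with the signals above could be computed like this. The field names and the 0.7 tractability default come from the table; the `Item` dataclass and the zero-cost guard are illustrative assumptions, not the delivered ev_scorer.py code:

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    importance: float         # quest/task priority, composite_score, or gap importance
    tractability: float       # feasibility_score; default 0.7 for tasks
    downstream_impact: float  # dependent_hypothesis_count, or log(1 + composite*10)
    estimated_cost: float     # historical avg tokens per similar completion (cost_ledger)

def expected_value(item: Item) -> float:
    """EV = importance x tractability x downstream_impact / estimated_cost."""
    if item.estimated_cost <= 0:
        # No cost estimate: treat as unrankable rather than dividing by zero.
        return 0.0
    return (item.importance * item.tractability * item.downstream_impact
            / item.estimated_cost)

# Example: a hypothesis with composite score 0.8 and a 12k-token cost estimate.
hyp = Item(
    importance=0.8,
    tractability=0.7,                        # task default from the table
    downstream_impact=math.log(1 + 0.8 * 10),
    estimated_cost=12_000,
)
ev = expected_value(hyp)
```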
Implementation
ev_scorer.py (delivered)
- score_all() → ranked list of all open items by EV
- score_tasks() / score_hypotheses() / score_gaps() — per-type scoring
- get_priority_queue(budget) → items with budget_share + tokens_allocated
- persist_allocations() → writes to the resource_allocations table
- CLI: python3 ev_scorer.py [budget_tokens]
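A minimal sketch of the proportional allocation behind get_priority_queue(budget). The function name and the budget_share / tokens_allocated fields come from this quest file; the EV-proportional rule shown here is an assumption about the implementation:

```python
def priority_queue(items, budget_tokens):
    """Allocate a token budget across items proportionally to EV.

    `items` is a list of (name, ev) pairs. Returns dicts carrying the
    budget_share and tokens_allocated fields named in this quest file.
    Sketch only, not the delivered ev_scorer.py code.
    """
    ranked = sorted(items, key=lambda pair: pair[1], reverse=True)
    total_ev = sum(ev for _, ev in ranked) or 1.0  # guard against all-zero EV
    return [
        {
            "item": name,
            "ev": ev,
            "budget_share": ev / total_ev,
            "tokens_allocated": int(budget_tokens * ev / total_ev),
        }
        for name, ev in ranked
    ]

# Example: a gap with 4x the EV of a task gets 4x the tokens.
queue = priority_queue([("gap-7", 4.0), ("task-12", 1.0)], budget_tokens=500_000)
```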
Pages
/senate/resource-allocation — budget allocation chart + priority queue table
/api/resources/ev-scores?budget=N&limit=M — JSON API
Feedback loop (future)
After task completion:
- Measure: hypotheses generated, KG edges, debate quality, market price Δ
- Compute impact-per-token ratio
- Update EV model — high-impact items get boosted next cycle
- Track via resource_allocations.efficiency_score

Acceptance criteria
☑ EV scorer ranks all open items
☑ Budget allocation proportional to EV
☑ /senate/resource-allocation dashboard
☑ JSON API endpoint
☐ Supervisor uses EV scores for task selection
☐ Feedback loop: actual impact updates EV model
☐ Per-quest budget caps derived from allocation
☐ Historical efficiency tracking over time