> ## Continuous-process anchor
>
> This spec describes an instance of one of the retired-script themes
> documented in docs/design/retired_scripts_patterns.md. Before
> implementing, read:
>
> 1. The "Design principles for continuous processes" section of that
> atlas — every principle is load-bearing. In particular:
> - LLMs for semantic judgment; rules for syntactic validation.
> - Gap-predicate driven, not calendar-driven.
> - Idempotent + version-stamped + observable.
> - No hardcoded entity lists, keyword lists, or canonical-name tables.
> - Three surfaces: FastAPI + orchestra + MCP.
> - Progressive improvement via outcome-feedback loop.
> 2. If a matching theme is not yet rebuilt as a continuous process,
> follow docs/planning/specs/rebuild_theme_template_spec.md to
> scaffold it BEFORE doing the per-instance work.
>
> **Specific scripts named below in this spec are retired and must not
> be rebuilt as one-offs.** Implement (or extend) the corresponding
> continuous process instead.
Quest cluster: Agent Ecosystem · Open Debates · Market Participants · Capital Markets · Atlas · Work Governance
Created: 2026-04-10
Status: open
Builds on: economics_participation_drivers_spec.md (v1) + economics_v2_credit_backprop_spec.md (v2)
> "Agents" throughout this doc means any actor — LLM personas,
> orchestra worker classes, market participants, human researchers.
> SciDEX is a hybrid collective; squad membership is uniform across
> kinds of contributors.
The v1 + v2 economics give the system the fuel — contribution credit,
discovery dividends, capability-based routing — but the *organizational
unit* of scientific work is still the atomic task. Real science doesn't
work by individuals picking up isolated tickets; it works by **small,
focused teams**.
A SciDEX gap, hypothesis, or challenge that's worth real attention
deserves a research squad: a transient, self-organizing,
pool-funded team that owns it for a few days, leaves a journal, and
bubbles its findings up to the core knowledge graph and exchanges.
This spec is the design + first implementation of that primitive.
A research squad is, conceptually, a **short-lived, agent-driven,
pool-funded sub-quest** with explicit membership and a bounded
lifespan. It composes with everything in v1 and v2:
```sql
CREATE TABLE research_squads (
  id TEXT PRIMARY KEY,
  name TEXT NOT NULL,
  charter TEXT NOT NULL,                -- one-paragraph goal
  parent_quest_id TEXT,                 -- bubbles findings up here
  target_artifact_type TEXT NOT NULL,   -- 'gap'|'hypothesis'|'challenge'|'analysis'
  target_artifact_id TEXT NOT NULL,
  status TEXT NOT NULL,                 -- 'forming'|'recruiting'|'active'|'harvesting'|'disbanded'
  success_criteria TEXT,                -- JSON list of measurable criteria
  formed_at TEXT NOT NULL,
  recruitment_closes_at TEXT,
  target_disband_at TEXT,               -- soft deadline
  disbanded_at TEXT,
  pool_balance INTEGER NOT NULL DEFAULT 0,
  pool_seed INTEGER NOT NULL DEFAULT 0,
  pool_external INTEGER NOT NULL DEFAULT 0,  -- contributions from agent wallets
  parent_squad_id TEXT,                 -- for forks
  created_by TEXT,
  workspace_branch TEXT,                -- git branch dedicated to this squad
  findings_bubbled INTEGER NOT NULL DEFAULT 0,
  metadata TEXT
);

CREATE TABLE squad_members (
  squad_id TEXT NOT NULL,
  agent_id TEXT NOT NULL,
  role TEXT NOT NULL,                   -- 'lead'|'researcher'|'reviewer'|'observer'
  joined_at TEXT NOT NULL,
  left_at TEXT,
  contribution_score REAL NOT NULL DEFAULT 0.0,
  journal_entries INTEGER NOT NULL DEFAULT 0,
  findings_authored INTEGER NOT NULL DEFAULT 0,
  PRIMARY KEY (squad_id, agent_id)
);

CREATE TABLE squad_journal (
  id TEXT PRIMARY KEY,
  squad_id TEXT NOT NULL,
  author_agent_id TEXT NOT NULL,
  entry_type TEXT NOT NULL,             -- 'progress'|'finding'|'block'|'decision'|'milestone'|'note'
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  references_artifacts TEXT,            -- JSON list
  created_at TEXT NOT NULL
);

CREATE TABLE squad_findings (
  id TEXT PRIMARY KEY,
  squad_id TEXT NOT NULL,
  finding_type TEXT NOT NULL,           -- 'hypothesis'|'evidence'|'kg_edge'|'wiki_edit'|'analysis'|'review'
  target_artifact_type TEXT,
  target_artifact_id TEXT,
  title TEXT NOT NULL,
  summary TEXT,
  confidence REAL NOT NULL DEFAULT 0.5,
  bubble_up_status TEXT NOT NULL DEFAULT 'pending',
                                        -- 'pending'|'reviewed'|'merged'|'rejected'
  reviewed_by TEXT,                     -- agent_id of reviewer
  bubbled_at TEXT,
  metadata TEXT,
  created_at TEXT NOT NULL
);

CREATE TABLE squad_grants (
  id TEXT PRIMARY KEY,
  squad_id TEXT NOT NULL,
  from_account TEXT NOT NULL,           -- 'system' | 'matching_pool' | agent_id
  amount INTEGER NOT NULL,
  purpose TEXT,                         -- 'seed'|'quadratic_funding'|'sponsorship'|'milestone_bonus'
  granted_at TEXT NOT NULL
);
```

All five tables get appropriate indexes on squad_id, status,
bubble_up_status, and (target_artifact_type, target_artifact_id).
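The index clause above can be made concrete. A hedged sketch follows (the index names and exact column sets are illustrative assumptions, not the shipped DDL); it also checks via SQLite's EXPLAIN QUERY PLAN that a status lookup actually uses the index rather than a full scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-in for the research_squads table defined above.
conn.executescript("""
CREATE TABLE research_squads (
  id TEXT PRIMARY KEY,
  status TEXT NOT NULL,
  target_artifact_type TEXT NOT NULL,
  target_artifact_id TEXT NOT NULL
);
-- Illustrative index DDL (assumed names, not the shipped schema):
CREATE INDEX idx_research_squads_status
  ON research_squads(status);
CREATE INDEX idx_research_squads_target
  ON research_squads(target_artifact_type, target_artifact_id);
""")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM research_squads WHERE status = ?",
    ("recruiting",),
).fetchall()
# The last column of each plan row is a human-readable detail string;
# it should name the status index instead of reporting a table scan.
assert any("idx_research_squads_status" in str(row[-1]) for row in plan)
```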
```
┌─────────┐
│ FORMING │ created by autoseed driver or by an agent
└────┬────┘
     │ (charter set, success criteria locked)
     ↓
┌────────────┐
│ RECRUITING │ open enrollment window (~24h),
└────┬───────┘ agents bid/apply for membership
     │
     ↓
┌────────┐
│ ACTIVE │ bounded execution window (3-7d),
└────┬───┘ members journal & produce findings
     │
     ↓
┌─────────────┐
│ HARVESTING  │ lead/judge reviews findings,
└────┬────────┘ marks ones to bubble up
     │
     ↓
┌───────────┐
│ DISBANDED │ pool distributed proportional to
└───────────┘ contribution_score; squad becomes
              historical record
```

A squad can also fork during the ACTIVE state — when a sub-question
emerges, the lead spawns a child squad with parent_squad_id set.
A squad can merge with another by transferring members + pool
into one of them.
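The lifecycle above can be sketched as a transition table. This helper is hypothetical (not code from the squad drivers) and just makes the legal moves explicit:

```python
# Legal squad lifecycle transitions, per the state diagram above.
# Forking and merging happen during ACTIVE and don't change the
# parent squad's own state.
TRANSITIONS = {
    "forming": {"recruiting"},
    "recruiting": {"active"},
    "active": {"harvesting"},
    "harvesting": {"disbanded"},
    "disbanded": set(),  # terminal: squad becomes a historical record
}

def can_transition(current: str, new: str) -> bool:
    """Return True if a squad may move from `current` to `new`."""
    return new in TRANSITIONS.get(current, set())

assert can_transition("recruiting", "active")
assert not can_transition("disbanded", "active")  # no resurrection
```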
Three mechanisms, in order of leverage:

1. **Pool seeding.** Each squad's pool is seeded in proportion to target
   importance — importance_score × 5000 tokens — so a P95-importance gap
   gets a correspondingly large seed.
2. **Quadratic funding.** Matching follows (Σ √cᵢ)², so 100 agents giving
   10 tokens each out-match a single whale contribution of the same total.
3. **Dividend multipliers.** Findings that bubble up through a squad carry
   a via_squad multiplier on future backprop dividends.

The combined effect: agents that look at the squad market see big
pools on important targets, with multiplied future dividends, and
naturally gravitate toward where the broader community has signaled
priority. **The market does the routing — no central scheduler
required.**
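The quadratic-funding claim is easy to verify numerically. A minimal sketch of the raw (Σ √cᵢ)² weight, ignoring the matching pool's budget cap:

```python
from math import sqrt

def qf_match_weight(contributions: list[float]) -> float:
    """Raw quadratic-funding weight: (sum of sqrt(c_i)) ** 2.
    Real matching pools scale this down to the available budget."""
    return sum(sqrt(c) for c in contributions) ** 2

crowd = qf_match_weight([10.0] * 100)  # 100 agents x 10 tokens
whale = qf_match_weight([1000.0])      # one agent, same total spend
# (100 * sqrt(10))**2 ≈ 100_000 vs (sqrt(1000))**2 ≈ 1_000:
# broad consensus out-raises a single whale by ~100x.
assert crowd > whale
```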
Each squad gets a long-lived git branch named
squad/{slug} (e.g. squad/gap-amyloid-clearance). Squad members
check out work into that branch instead of orchestra/task/.... When
a finding is marked merged by the bubble-up driver, the relevant
commits from the squad branch are cherry-picked or merged into main
with the squad provenance attached to the commit message.
This is identical to the existing orchestra worktree pattern,
just with a longer-lived branch and squad-scoped membership. No new
git infrastructure needed.
Squad drivers live in economics_drivers/squads/ as submodules:
autoseed.py, journal.py, bubble_up.py, recruit.py. CLI: python3 -m
economics_drivers.squads {form|join|log|status|bubble|disband}.

In one paragraph: **agents form temporary research squads around the
gaps and challenges that matter most to SciDEX, work in private
shared context for days, journal their thinking as they go, and
bubble polished findings into the core knowledge graph. The pool size
and dividend multipliers naturally pull agents toward high-priority
areas without anyone scheduling them. When a squad's findings later
turn out to be important, every contributing agent gets a backprop
windfall — including the human researchers who joined for an
afternoon and the LLM personas that ground out the routine debate
rounds.** This is the organizational primitive that lets a hybrid
agent/human collective do focused, accountable, and rewarding science
without a central PI.
- _now() returns an ISO string for TEXT column compatibility; replaced _credit_distribution() with _distribute_pool() using tl.transfer(); added _ensure_squad_account_funded() to unlock seed tokens; full table scan + Python filter avoids the corrupt idx_research_squads_status.
- datetime.max replaced with datetime.max.replace(tzinfo=timezone.utc) so None target_disband_at values sort correctly alongside timezone-aware parsed timestamps.
- python3 -m py_compile economics_drivers/squads/harvest.py passed; python3 -m economics_drivers.squads.harvest --dry-run ran cleanly, harvesting 10 squads.
- 71fbdf76f [task:3e1a8177-4e47-4067-8ae1-62102de6528d]
- … origin/main after the previous deploy attempt was blocked by a protected-main push.
- PYTHONPATH=. pytest -q tests/test_squad_qf.py tests/test_squad_qf_contribute.py passed; python3 -m py_compile economics_drivers/squads/qf.py passed.
- python3 -m pytest tests/test_squad_qf_contribute.py -q passed; qf_contribute --dry-run --limit 3 planned contributions correctly.
- python3 -m economics_drivers.squads.qf --dry-run --limit 5 failed when token_ledger.get_account() returned a Decimal balance and _ensure_matching_pool_account() added it to a float in the top-up logging path.
- qf --dry-run called the matching-pool top-up helper, so dry runs could create or mint the QF matching-pool account before reporting results.
- _ensure_matching_pool_account(dry_run=...) now normalizes balances to float, returns the usable balance, and avoids create_account/mint calls during dry runs; run() uses the simulated balance for cap calculations.
- Extended tests/test_squad_qf.py for Decimal top-up and missing-account dry-run behavior while preserving the existing PostgreSQL JSON serialization test.
- python3 -m pytest tests/test_squad_qf.py tests/test_squad_qf_contribute.py -q passed; python3 -m economics_drivers.squads.qf --dry-run --limit 5 now exits cleanly; python3 -m py_compile economics_drivers/squads/qf.py economics_drivers/squads/qf_contribute.py economics_drivers/squads/journal.py tests/test_squad_qf.py passed.
- Ran: scidex status; python3 -m pytest tests/test_squad_qf_contribute.py -q; qf_contribute --dry-run --limit 3; qf --dry-run --limit 10.
- qf_contribute --limit 50: 19 agents processed, 36 grants recorded, 824.07 tokens moved from agent wallets to squad pools.
- qf --limit 50: 50 squads processed, 5 squads matched, 335.64 tokens moved from qf_matching_pool to squad pools.
- qf --dry-run crashed when PostgreSQL returned qf_last_matched_at as a datetime, because CLI output used raw json.dumps().
- Fixed serialization in economics_drivers/squads/qf.py for PostgreSQL datetime and Decimal values; added tests/test_squad_qf.py regression coverage.
- Missing: the _agora_router import + app.include_router(_agora_router), and the close_thread_local_dbs() call in _market_consumer_loop — causing 404s on all Agora API routes and a pool-slot leak in the background thread.
- Restored the api_routes.agora registration and the close_thread_local_dbs() call.
- Verified: curl http://localhost:8000/api/status returns 200; _agora_router present at lines 963 and 969; close_thread_local_dbs() present at line 341.
- python3 -m pytest tests/test_squad_qf_contribute.py -v passes (2/2).
- python3 -m economics_drivers.squads.qf_contribute --dry-run --limit 3 runs cleanly.
- python3 -m economics_drivers.squads.qf --dry-run --limit 10 runs cleanly (no pending matches — contributions from the prior run already matched).
- Branch: orchestra/task/1b83a79e-quadratic-funding-for-squad-pools-extens.
- The via_squad multiplier was already implemented in backprop_credit.py on origin/main (applied at analysis, hypothesis, and debate_session artifact levels in _walk_provenance).
- Removed DB_PATH monkey-patching from test_backprop_credit_squad.py (leftover from the pre-PG migration era — _walk_provenance now accepts a conn parameter directly, no path override needed).
- Tests (test_squad_multiplier, test_multiple_contributions_squad, test_hypothesis_squad_multiplier) on origin/main.
- python3 -m economics_drivers.squads.qf_contribute --dry-run --limit 3 found eligible wallet contributions.
- python3 -m economics_drivers.squads.qf --dry-run --limit 10 ran cleanly with no pending matches before new contributions.
- python3 -m compileall -q economics_drivers/squads/qf.py economics_drivers/squads/qf_contribute.py economics_drivers/squads/journal.py economics_drivers/squads/schema.py passed.
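The datetime.max fix in the log above follows a reusable pattern: substitute a timezone-aware maximum as the sort key for NULL target_disband_at values, so they sort last next to aware timestamps. A sketch with an assumed row shape (not the harvest driver's exact code):

```python
from datetime import datetime, timezone

# A naive datetime.max raises TypeError when compared against
# timezone-aware parsed timestamps; an aware maximum sorts cleanly.
AWARE_MAX = datetime.max.replace(tzinfo=timezone.utc)

def disband_sort_key(row: dict) -> datetime:
    """Sort key: parsed target_disband_at, or aware datetime.max for NULLs."""
    value = row.get("target_disband_at")
    return datetime.fromisoformat(value) if value else AWARE_MAX

rows = [
    {"id": "sq-a", "target_disband_at": None},
    {"id": "sq-b", "target_disband_at": "2026-04-12T00:00:00+00:00"},
    {"id": "sq-c", "target_disband_at": "2026-04-10T00:00:00+00:00"},
]
rows.sort(key=disband_sort_key)
# Squads without a soft deadline sort after all dated squads.
assert [r["id"] for r in rows] == ["sq-c", "sq-b", "sq-a"]
```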
- qf_contribute --limit 50: 16 agents processed, 33 grants recorded, 833.04 tokens moved from agent wallets to squad pools.
- qf --limit 100: 100 squads processed, 6 squads matched, 621.71 tokens moved from qf_matching_pool to squad pools.
- sq-1b8a98740e11 matched 205.37 tokens from 10 contributors; sq-afc7cfd0dae7 and sq-7c1e5d2d502a each matched 204.12 tokens from 9 contributors.
- A repeated qf_contribute invocation in the same hour could skip already-funded squads and fund the next three, exceeding the intended per-agent cycle cap.
- Updated qf_contribute.get_squads_for_agent() to subtract already-funded squads from MAX_SQUADS_PER_AGENT and return no squads once an agent has hit the cycle cap.
- Added tests/test_squad_qf_contribute.py regression coverage for full-cap and partial-cap retry behavior.
- python3 -m pytest tests/test_squad_qf_contribute.py -q passed; qf_contribute --dry-run --limit 3 returned no extra targets for the top already-funded agents.
- idx_research_squads_status is partially corrupt — WHERE status='recruiting' hits the index, returns the first matching row correctly, but crashes with "database disk image is malformed" on subsequent index entries. The corruption is in the index B-tree, not the table data itself.
- Replaced the WHERE status = 'recruiting' query with a full table scan (SELECT * FROM research_squads LIMIT 300), then filter in Python. This works because SQLite's LIMIT without ORDER BY does a straight table scan, never touching the corrupt index.
- Results are now sorted in Python by recruitment_closes_at, since the ORDER BY was removed from the SQL.
- economics_drivers/squads/recruit.py changed; pushed to branch orchestra/task/3b25ded3-squad-open-enrollment-recruitment-driver.
- python3 -m economics_drivers.squads.harvest → no-op: no squads ready to harvest — correct. harvest.py fully operational on origin/main; no code changes needed this cycle.
- python3 -m economics_drivers.squads.harvest → no-op: no squads ready to harvest — correct.
- setup_squad_harvest_cron.sh registers a cron entry every 1h at :30 for driver #22; ran bash setup_squad_harvest_cron.sh.
- run() SQL query: added a target_disband_at IS NULL condition so squads with …
- … harvesting→disbanded transition, proportional …
- python3 -m economics_drivers.squads.harvest --dry-run → clean no-op on live DB.
- Bug (qf.py): the proportionality cap used the current squad's last_matched for ALL remaining squads in all_raw; fixed to use each squad's own r["qf_last_matched_at"].
- matching_pool_balance split into _before and _after in run() output.
- setup_squad_qf_cron.sh is in the repo root for a human to run once sudo access is available.
- qf_contribute.py keyword pattern was %kw1%kw2%kw3% (requires a sequential match); switched to per-keyword matching, with test_capability_keyword_or_matching covering the fix.
- Pushed test_squads_qf.py, setup_squad_qf_cron.sh, and economics_drivers/squads/qf_contribute.py (fixed) to origin/main.
- python3 -m economics_drivers.squads.cli autoseed --limit 5 → {"squads_formed": 0} — correct idempotent behavior; added --dry-run to the CLI.
- bubble_up.py was already complete (213 lines): queries squad_findings with bubble_up_status='reviewed', inserts agent_contributions rows (type squad_finding) for every active squad member, marks findings merged, and increments research_squads.findings_bubbled. Fully idempotent via a reference_id metadata check.
- Wired the cli.py bubble subcommand to pass --dry-run through to bubble_mod.run().
- python3 -m economics_drivers.squads.cli bubble → clean no-op (no reviewed findings yet); python3 -m economics_drivers.squads.cli bubble --dry-run → clean no-op with dry-run.
- python3 -m pytest test_squads_qf.py -v; (qf.py, qf_contribute.py, __init__.py) confirmed in origin/main.
- test_squads_qf.py and setup_squad_qf_cron.sh committed locally (branch: senate/1b83a79e-squad-qf).
- test_squads_qf.py (39 tests, all passing):
  - TestQFMatchFormula — verifies the (Σ √cᵢ)² formula, consensus > whale amplification, dust filter
  - TestGetSquadContributionsSince — since-timestamp filtering, matching-pool exclusion, aggregation
  - TestQFRunDryRun — dry-run idempotency, match amount computation, inactive-squad exclusion
  - TestQFMatchCapFraction — MAX_MATCH_FRACTION sanity, multi-squad run stability
  - TestGetEligibleAgents — balance threshold, inactive-agent filter, ordering
  - TestComputeContributionAmount — scaling, cap, min guarantee
  - TestGetSquadsForAgent — member priority, cycle dedup, MAX_SQUADS_PER_AGENT bound
  - TestQFContributeRunDryRun — dry-run no writes, contribution detail shape
  - TestQFEndToEndFormula — 100-agent × 10-token amplification, whale vs crowd
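The corrupt-index workaround described above reads naturally in code. A sketch against a healthy in-memory database, with columns trimmed to the ones involved:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE research_squads (id TEXT, status TEXT, recruitment_closes_at TEXT)"
)
conn.executemany(
    "INSERT INTO research_squads VALUES (?, ?, ?)",
    [("sq-1", "active", "2026-04-11"),
     ("sq-2", "recruiting", "2026-04-13"),
     ("sq-3", "recruiting", "2026-04-12")],
)

# Plain LIMIT without ORDER BY is a straight table scan, so a corrupt
# status index would never be consulted; filtering and sorting move
# into Python instead.
rows = conn.execute("SELECT * FROM research_squads LIMIT 300").fetchall()
recruiting = sorted(
    (r for r in rows if r[1] == "recruiting"),
    key=lambda r: r[2],  # replaces the removed ORDER BY recruitment_closes_at
)
assert [r[0] for r in recruiting] == ["sq-3", "sq-2"]
```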
- setup_squad_qf_cron.sh:
  - qf_contribute (every 1h): agents auto-contribute to squads they care about
  - qf (every 6h at 00/06/12/18): QF matching distributes (Σ √cᵢ)² from the pool to squads
- economics_drivers/squads/qf_contribute.py (267 lines): uses journal_mod.contribute_to_squad() to record grants.
- Updated economics_drivers/squads/__init__.py to import qf and qf_contribute; extended economics_drivers/squads/cli.py with a qf-contribute subcommand.
- python3 -m economics_drivers.squads.qf_contribute --dry-run → 9 agents processed, contributions to 3 squads each.
- economics_drivers/squads/recruit.py (381 lines): adds a squad_join_bids table; processes bids once recruitment_closes_at passes.
- Updated economics_drivers/squads/__init__.py to import both harvest and recruit.
- research_squads_spec.md driver #21 status: dispatched → implemented.
- python3 -m economics_drivers.squads.recruit --dry-run → clean no-op (5 squads processed); python3 -m economics_drivers.squads.recruit.
- python3 -m economics_drivers.squads.cli bubble --limit 50 → findings_bubbled=0, members_credited_total=0 — clean no-op; skips merged findings, only acts on reviewed.
- python3 -m economics_drivers.squads.recruit (run across several cycles).
- test_backprop_credit_squad.py _setup_test_db() was missing the debate_sessions and debate_rounds tables; _walk_provenance() queries both when traversing analysis artifacts, causing all 3 tests to fail with sqlite3.OperationalError: no such table: debate_sessions.
- Added CREATE TABLE IF NOT EXISTS debate_sessions and CREATE TABLE IF NOT EXISTS debate_rounds to the test schema.
- python3 -m economics_drivers.test_backprop_credit_squad --verbose:
  - test_squad_multiplier: ratio=2.00x ✓
  - test_multiple_contributions_squad: ratio=2.20x ✓
  - test_hypothesis_squad_multiplier: squad agent in weights ✓
- Branch: task-46ae57a3-squad-fix.
- economics_drivers/backprop_credit.py contains the 2× squad multiplier in _walk_provenance(): it groups agent_contributions by agent_id and checks json_extract(metadata, '$.via_squad') — if non-null/non-empty, squad_multiplier = 2.0, else 1.0; then agent_weight += weight × squad_multiplier × (1.0 + min(0.5, 0.05 * (n-1))).
- economics_drivers/test_backprop_credit_squad.py has 3 passing tests:
  - test_squad_multiplier: squad agent gets exactly 2.0× weight vs normal (ratio=2.00× ✓)
  - test_multiple_contributions_squad: n=3 squad contributions → 2.2× ratio (2.0 × 1.1 bonus ✓)
  - test_hypothesis_squad_multiplier: multiplier applies at the hypothesis traversal level ✓
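Those ratios follow directly from composing the 2× multiplier with the volume bonus. A standalone sketch (not the backprop_credit.py code itself) reproduces them:

```python
def contribution_weight(base: float, via_squad: bool, n_contributions: int) -> float:
    """Weight for one agent: base x squad multiplier x volume bonus.
    The bonus adds 5% per extra contribution, capped at +50%."""
    squad_multiplier = 2.0 if via_squad else 1.0
    bonus = 1.0 + min(0.5, 0.05 * (n_contributions - 1))
    return base * squad_multiplier * bonus

solo = contribution_weight(1.0, via_squad=False, n_contributions=1)  # 1.0
squad = contribution_weight(1.0, via_squad=True, n_contributions=1)  # 2.0
multi = contribution_weight(1.0, via_squad=True, n_contributions=3)  # 2.2
assert squad / solo == 2.0             # matches test_squad_multiplier
assert round(multi / solo, 2) == 2.2   # matches test_multiple_contributions_squad
```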
- python3 -m economics_drivers.test_backprop_credit_squad --verbose.
- economics_drivers/squads/recruit.py verified correct on origin/main; python3 -m economics_drivers.squads.recruit --dry-run and python3 -m economics_drivers.squads.recruit ran; no code changes required this cycle.
- python3 -m economics_drivers.squads.cli bubble --limit 50 → findings_bubbled=0, members_credited_total=0 — clean no-op.
- python3 -m economics_drivers.squads.recruit — recruit.py on origin/main, no code changes needed.
- python3 -m economics_drivers.squads.cli bubble --limit 50 → {"findings_bubbled": 0, "members_credited_total": 0} — clean no-op; no reviewed findings; prior merged finding unchanged.
- python3 -m economics_drivers.squads.recruit — recruit.py on origin/main; driver fully operational, no code changes needed (repeated across two further cycles).
- python3 -m economics_drivers.squads.cli autoseed --limit 5 → {"squads_formed": 0} — clean no-op.
- The research_squads table shows 22 gaps with active squads (forming/recruiting/active/harvesting status).
- gap-debate-20260412-094630-d15b2ac1 is both in the autoseed candidate list and has an active squad — properly skipped by the _has_active_squad check.
- autoseed.run() fetched limit * 4 rows and filtered in Python; when all top-20 rows had active squads, 0 squads were formed despite 2638 eligible unattended gaps.
- Rewrote the run() query in economics_drivers/squads/autoseed.py with a NOT EXISTS subquery to fetch exactly limit unattended gaps directly.
- python3 -m economics_drivers.squads.cli autoseed --limit 5 → {"squads_formed": 5}.
- sqlite3.DatabaseError: database disk image is malformed when querying research_squads via the corrupt status index (tree 344, same root cause as the driver #21 fix).
- Replaced WHERE status IN (...) queries with full table scans (SELECT ... FROM research_squads LIMIT 300), then filter + sort in Python:
  - qf.py run(): replaced WHERE status IN (...) ORDER BY pool_balance DESC LIMIT ? with a full table scan + Python filter/sort.
  - qf_contribute.py get_squads_for_agent(): three status-based queries replaced with full table scans + Python filtering; added a _squad_matches_keywords() helper for capability matching.
- qf --dry-run and qf_contribute --dry-run now execute cleanly (5 squads processed, 3 agents processed). Branch: orchestra/task/1b83a79e-quadratic-funding-for-squad-pools-extens.
- qf_contribute: 11 agents × 3 squads = 33 contributions, 874.89 tokens moved from agent wallets to squad pools.
- qf match: 22 squads processed, 3 squads received QF matching funds.
- PG migration bugs:
  - conn.row_factory = sqlite3.Row fails on PGShimConnection (has __slots__, rejects arbitrary attributes).
  - token_ledger.get_db() called itself recursively (import cycle with scidex.core.database).
  - qf._ensure_schema_ext() caught sqlite3.OperationalError, which doesn't exist under psycopg; the failed transaction was not rolled back, leaving subsequent queries in an aborted state.
- Fixes:
  - _conn() functions now return a dict directly from economics_drivers._db.get_conn() (already uses _pg_row_factory).
  - token_ledger.get_db() now imports scidex.core.database.get_db locally to break the recursion cycle.
  - qf._ensure_schema_ext() now catches Exception and does rollback() on the specific "already exists" case; re-raises on unexpected errors.
- qf --dry-run → 100 squads processed; qf_contribute --dry-run → 13 agents processed; recruit --dry-run → 45 invites; bubble_up --dry-run → clean no-op; autoseed --dry-run → 5 new squads formed.
- Files touched: economics_drivers/squads/qf.py, qf_contribute.py, bubble_up.py, journal.py, autoseed.py, recruit.py, scidex/exchange/token_ledger.py. Branch: orchestra/task/1b83a79e-quadratic-funding-for-squad-pools-extens.
- _conn() tried to set conn.row_factory = sqlite3.Row on PGShimConnection (has __slots__, rejects arbitrary attributes); ensure_schema() called conn.executescript(DDL), which is SQLite-only and fails on psycopg connections.
- Removed the row_factory assignment (PGShimConnection already uses _pg_row_factory); replaced executescript with a per-statement conn.execute() loop in schema.py.
- python3 -m economics_drivers.squads.cli bubble → {"findings_bubbled": 0, "members_credited_total": 0} (0 reviewed findings in queue — correct no-op). Files: economics_drivers/squads/bubble_up.py, economics_drivers/squads/schema.py.
- qf_contribute --dry-run --limit 5 planned member contributions; qf --dry-run --limit 10 processed 10 squads and skipped them correctly for no new contributions.
- qf_contribute --limit 1 failed in token_ledger.transfer() because it still executed SQLite-only BEGIN IMMEDIATE, which PostgreSQL rejects with syntax error at or near "IMMEDIATE".
- Replaced that locking with SELECT ... FOR UPDATE, converted Decimal balances to floats for arithmetic, and changed token-ledger timestamps from SQLite datetime('now') to CURRENT_TIMESTAMP.
- journal.contribute_to_squad() now casts pool_balance to float when returning new_pool_balance.
- python3 -m py_compile scidex/exchange/token_ledger.py economics_drivers/squads/journal.py economics_drivers/squads/qf_contribute.py economics_drivers/squads/qf.py passed.
- qf_contribute --limit 1 completed 3 contributions totaling 150 tokens; qf --limit 10 distributed 4 matches totaling 200 tokens from the matching pool; subsequent dry-runs remained healthy.
- qf_contribute --limit 1: 1 agent processed, 3 squad contributions, 150 tokens moved from wallet to squad pools.
- qf --limit 10: 10 squads processed, 3 matches distributed, 150 tokens moved from qf_matching_pool to squad pools.
- Updated economics_drivers.squads.__init__ to lazily load submodules so python3 -m economics_drivers.squads.qf* no longer emits runpy eager-import warnings.
- py_compile passed for the token-ledger and squad QF modules; dry-run and bounded live QF cycles succeeded on PostgreSQL.