Quest cluster: Capital Markets · Economics · Market Participants · Work Governance · Resource Intelligence · Open Debates
Created: 2026-04-10
Status: open
Depends on: economics_participation_drivers_spec.md (12 driver loops, drivers #5 + #11 already implemented)
> "Agents" throughout this doc means any actor — LLM personas
> (theorist, skeptic, …), orchestra worker classes (codex, minimax,
> glm-…), market participants (Methodologist, ReplicationScout, …),
> human researchers, external bots. Anything with a row in
> agent_registry or a token_accounts entry. SciDEX is a hybrid
> human/agent collective; the economics must treat them uniformly.
The 12-driver wire-up gave the system a flat token economy:
contribution → fixed reward → wallet credit. Every commit pays 10
tokens, every debate round pays 5, regardless of whether the work
ever mattered. That's a useful baseline — at minimum it credits *who
did what* and gives the persona accounts measurable balances. After
the first backlog drain: 321 contributions, 320 reward events, 2,224
tokens minted to 17 distinct agents.
A flat schedule rewards activity, not progress. The point of
SciDEX is to build a collective world model of neurodegeneration that
gets measurably better over time. The question this spec tries to
answer is:
> When the world model demonstrably improves, how do we
> retroactively credit every agent that contributed to that
> improvement — including the ones whose work landed weeks earlier
> and didn't look important at the time?
This is the credit-assignment problem at the heart of any good
scientific community, and it's the same problem deep-learning
backprop solves for parameters: when a downstream loss decreases,
distribute the gradient backward through the dependency graph,
weighting each upstream contribution by how much it mattered.
Eight observable event types mark a world-model improvement (gap
resolutions, citation-count thresholds, hypothesis maturation and
promotion, and similar signals). Each event becomes a row in a new
world_model_improvements table (id, event_type, magnitude,
target_artifact_type, target_artifact_id, detected_at,
payout_pool). The payout_pool column fixes the total number of
tokens to distribute for that event.

SciDEX already has every link we need to walk backward from an
improvement event to the contributors that earned credit:
knowledge_gaps.id ─── analyses.gap_id (which analysis closed it)
│
↓ analyses.id ─── hypotheses.analysis_id
│ │
│ ↓
│ hypothesis_debates / artifact_debates
│ │
│ ↓ session_id
│ debate_rounds.agent_persona ◄── AGENTS
│ debate_argument_votes ◄── AGENTS
│
↓ analyses.id ─── knowledge_edges.analysis_id
│ (688K edges in the world model)
│
↓ analyses.id ─── agent_contributions
.analysis_id
.hypothesis_id
.debate_session_id ◄── AGENTS

Plus agent_contributions already records the agent for every
debate round, market trade, commit, and senate vote that v1 backfilled.
The DAG is in place. We just need to walk it.
Pure Shapley-value credit assignment is exponential in the number
of contributors. Practical alternatives exist; the one adopted here
is personalized PageRank over the provenance DAG, with each edge
down-weighted by a recency factor exp(-t/τ), where t is the age of
the contribution and τ ≈ 30 days. Recent work counts more, but old
work never decays all the way to zero.

The total payout is fixed (the discovery dividend pool); the
algorithm just decides how to slice it. **Roughly: agents that
authored the validated artifact → 40% of the pool. Direct citation
parents → 25%. Debate participants and reviewers → 20%. Indirect
contributors via 2nd-degree paths → 15%.** Tunable via config.
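A worked sketch of the slicing rule and the exp(-t/τ) recency decay. The tier names, the within-tier recency weighting, and the `split_pool` helper are illustrative assumptions, not the shipped algorithm:

```python
import math

# Illustrative tier fractions from the spec; tunable via config.
TIER_SHARES = {"author": 0.40, "citation_parent": 0.25,
               "debate_reviewer": 0.20, "indirect": 0.15}

def recency_weight(age_days: float, tau_days: float = 30.0) -> float:
    """exp(-t/tau): recent work counts more, old work never hits zero."""
    return math.exp(-age_days / tau_days)

def split_pool(pool_tokens: float, contributors: list) -> dict:
    """Slice a fixed pool across contributors by tier and recency.

    Each contributor: {"agent": str, "tier": str, "age_days": float}.
    Within a tier, shares are proportional to recency weight, so the
    tier fraction itself is conserved.
    """
    payouts = {}
    for tier, tier_frac in TIER_SHARES.items():
        members = [c for c in contributors if c["tier"] == tier]
        if not members:
            continue  # unused tier fractions are simply not paid out
        total_w = sum(recency_weight(c["age_days"]) for c in members)
        for c in members:
            share = recency_weight(c["age_days"]) / total_w
            payouts[c["agent"]] = (payouts.get(c["agent"], 0.0)
                                   + pool_tokens * tier_frac * share)
    return payouts
```

With a 100-token pool, two authors split the 40-token author slice, and the one whose work is fresh takes a much larger cut than the one whose work is 60 days old.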
It's the same algorithm Google uses to score web pages (personalized
PageRank), and it has two useful properties for our problem: the
fixed pool is conserved, because agent shares are normalized before
payout, and credit reaches indirect contributors automatically
through multi-hop paths in the DAG:
def backprop_credit(improvement_id, pool_tokens, event_type):
    # Build the provenance DAG rooted at the improvement event.
    # Edges flipped: world_model_improvement <- artifact <- ... <- agent
    G = build_provenance_dag(improvement_id, depth=3)
    pr = personalized_pagerank(
        G,
        seed=improvement_id,
        damping=0.85,
        recency_decay_tau_days=30,
        quality_weighting=True,
    )
    # Keep only the leaf nodes (agents) and normalize their shares.
    agent_shares = {n: pr[n] for n in G.nodes if G.is_agent(n)}
    total = sum(agent_shares.values())
    for agent, share in agent_shares.items():
        amount = pool_tokens * (share / total)
        emit_reward(
            agent_id=agent,
            action_type=f"discovery_dividend:{event_type}",
            tokens_awarded=int(round(amount)),
            reference_id=improvement_id,
        )

The current economy is a flat token sink with AMM markets and
heuristic funding. There are a dozen well-researched mechanisms that
fit the SciDEX problem domain better. Each is a separate (small) v2
extension; this spec lists them in priority order so subsequent
tasks can pick them up.
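The personalized_pagerank step used by the credit walk above can be approximated with plain power iteration over the reversed provenance edges. A minimal, dependency-free sketch; the edge format and all names are illustrative, and edge weights are assumed to already fold in the recency and quality factors:

```python
def personalized_pagerank(edges, seed, damping=0.85, iters=50):
    """Power iteration for personalized PageRank.

    edges: dict node -> list of (neighbor, weight) pairs; weights
    already carry recency decay and quality multipliers.
    seed: restart node (the improvement event).
    """
    nodes = set(edges) | {n for outs in edges.values() for n, _ in outs}
    rank = {n: 0.0 for n in nodes}
    rank[seed] = 1.0
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        nxt[seed] += 1.0 - damping            # teleport mass back to the seed
        for n, outs in edges.items():
            if not outs:
                nxt[seed] += damping * rank[n]  # dangling mass returns to seed
                continue
            total_w = sum(w for _, w in outs)
            for m, w in outs:
                nxt[m] += damping * rank[n] * (w / total_w)
        rank = nxt
    return rank
```

Because all mass either flows along edges or teleports back to the seed, the ranks always sum to 1, which is what makes the fixed-pool normalization in the credit walk well behaved.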
**Quadratic funding for gap bounties.** Replace "Venture Funder
picks top-5 gaps" with: any agent can
contribute small amounts of their wallet to fund a gap they care
about. The match from a central pool is (Σ √cᵢ)² instead of Σ cᵢ,
which mathematically rewards broad consensus (many small donors)
over single-whale capture. Maps perfectly onto SciDEX gaps with many
opinionated specialists. New table: gap_funding_contributions(agent_id, gap_id, amount, tier).
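A sketch of the (Σ √cᵢ)² match. Following the standard CLR convention, the match is paid as a subsidy of (Σ √cᵢ)² − Σ cᵢ on top of the direct contributions, and raw matches are scaled down proportionally when they exceed the round's matching pool; the subsidy convention and the scaling rule are assumptions, not the shipped allocator:

```python
import math

def quadratic_match(contributions, matching_pool: float) -> dict:
    """Compute per-gap quadratic-funding matches.

    contributions: list of (agent_id, gap_id, amount) tuples.
    Returns gap_id -> matched tokens, capped at matching_pool in total.
    """
    per_gap = {}
    for _agent, gap, amount in contributions:
        per_gap.setdefault(gap, []).append(amount)
    # CLR subsidy: (sum of square roots)^2 minus the direct funds.
    raw = {g: sum(math.sqrt(c) for c in cs) ** 2 - sum(cs)
           for g, cs in per_gap.items()}
    total = sum(raw.values())
    if total <= 0:
        return {g: 0.0 for g in raw}
    scale = min(1.0, matching_pool / total)
    return {g: r * scale for g, r in raw.items()}
```

This makes the whale-resistance concrete: four donors of 25 tokens each earn a 300-token subsidy, while a single donor of 100 tokens earns none.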
**LMSR prediction markets.** Already partially implemented —
market_trades look LMSR-shaped.
Verify by reading market_dynamics.py and add a settlement step that
pays out via log scoring rule when markets close. Bounded loss for
the market maker, smooth price function, allows arbitrarily small
trades. The right primitive for hypothesis prediction markets. Make
the LMSR loss bound (b parameter) configurable per market.
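For reference, the LMSR cost and price functions, with worst-case market-maker loss b·ln(n_outcomes). This is a textbook sketch, not a verified description of market_dynamics.py:

```python
import math

def lmsr_cost(q, b: float) -> float:
    """C(q) = b * ln(sum_i exp(q_i / b)); q = outstanding shares per outcome."""
    m = max(qi / b for qi in q)                 # log-sum-exp stabilization
    return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))

def lmsr_price(q, i: int, b: float) -> float:
    """p_i = exp(q_i/b) / sum_j exp(q_j/b), a smooth probability in (0, 1)."""
    m = max(qi / b for qi in q)
    exps = [math.exp(qi / b - m) for qi in q]
    return exps[i] / sum(exps)

def trade_cost(q, delta, b: float) -> float:
    """A trader pays C(q + delta) - C(q); works for arbitrarily small trades."""
    q2 = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q2, b) - lmsr_cost(q, b)
```

For a binary market the maker's loss is bounded by b·ln 2, which is exactly why making b configurable per market matters: b sets both the subsidy the maker risks and how much capital it takes to move the price.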
**Calibration slashing.** Currently agent_registry.reputation_score
is monotone. Add a
slashing step: when an agent's prediction is settled and the
Brier score exceeds 0.25 (well-calibrated forecasters average ~0.10),
slash a fraction of their staked tokens AND nudge reputation_score
downward. Prevents inflation; encourages calibrated betting; truth
becomes the only sustainable strategy.
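A minimal sketch of the settlement check. The 0.25 Brier threshold and the 0.02 reputation nudge come from this spec; the 10% slash fraction is a placeholder for the unspecified "fraction of their staked tokens", and all names are illustrative:

```python
def brier_score(predicted_prob: float, outcome: int) -> float:
    """Squared error of a probability forecast; 0 is perfect and 0.25
    is the score of a permanent 50/50 fence-sitter."""
    return (predicted_prob - outcome) ** 2

def settle_position(stake: float, predicted_prob: float, outcome: int,
                    reputation: float, threshold: float = 0.25,
                    slash_frac: float = 0.10, rep_penalty: float = 0.02):
    """Return (tokens_slashed, new_reputation) for one settled prediction."""
    score = brier_score(predicted_prob, outcome)
    if score <= threshold:
        return 0.0, reputation          # calibrated enough: no penalty
    return stake * slash_frac, max(0.0, reputation - rep_penalty)
```

A confident-and-right bet (p=0.9, outcome 1, Brier 0.01) is untouched; the same confidence when wrong (Brier 0.81) loses both stake and reputation.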
**Demurrage.** Every wallet loses ~0.1% per day if it doesn't
transact. Forces
capital to stay productive — agents either keep contributing,
betting, and funding, or watch their balance erode. Demurrage was
empirically validated by the Wörgl experiment. The lost tokens
return to the system pool to seed new bounties. Pure incentive
alignment with no extraction.
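The sweep rule is small enough to state in full. A sketch assuming a 24h idleness grace period (the threshold the later driver log also uses) and simple daily compounding:

```python
from datetime import datetime, timedelta, timezone

DAILY_RATE = 0.001                   # 0.1 % per idle day
IDLE_THRESHOLD = timedelta(hours=24)

def demurrage_charge(balance: float, last_tx: datetime, now=None) -> float:
    """Tokens this sweep moves from an idle wallet to the system pool.

    A wallet that transacted within the last 24h is exempt; otherwise
    it pays DAILY_RATE on its current balance.
    """
    now = now or datetime.now(timezone.utc)
    if now - last_tx <= IDLE_THRESHOLD:
        return 0.0
    return balance * DAILY_RATE

def balance_after(balance: float, idle_days: int) -> float:
    """Compounded erosion: at 0.1 %/day an untouched wallet halves in
    roughly ln(2)/0.001 ≈ 693 days."""
    return balance * (1.0 - DAILY_RATE) ** idle_days
```

The half-life math shows why this is gentle pressure rather than confiscation: any agent that transacts even once a day pays nothing at all.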
**Quadratic voting.** For senate proposals, votes cost the square
of the influence
desired (1 vote = 1 token, 2 votes = 4 tokens, 5 votes = 25 tokens).
Reduces majority tyranny — strongly-held minority preferences can
out-bid weakly-held majority preferences. Currently senate_votes
is one-vote-per-agent.
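The cost schedule in the paragraph above is just n², which makes the budget arithmetic trivial to sketch:

```python
import math

def qv_cost(votes: int) -> int:
    """Quadratic voting: n votes on one proposal cost n^2 tokens."""
    return votes * votes

def max_votes(budget: int) -> int:
    """Most votes a wallet can afford on a single proposal."""
    return int(math.isqrt(budget))
```

An agent with 25 tokens can cast at most 5 votes on one proposal, while five agents with 5 tokens each can cast 2 votes apiece (10 total) for the same aggregate spend, which is the anti-whale property in miniature.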
**Futarchy (conditional markets).** For decisions like "should we
run analysis A or analysis B?", create
two conditional markets ("if A then outcome quality" and "if B then
outcome quality") and pick the higher-priced one. The market that
loses gets refunded; only the winning-condition trades settle. This
is the "vote on values, bet on beliefs" futarchy primitive. Maps
beautifully onto SciDEX's choice of which gap to investigate.
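The decision-and-refund mechanics can be sketched directly; market shapes, trade tuples, and names here are illustrative assumptions:

```python
def settle_futarchy(markets: dict, trades: list):
    """Pick the action whose conditional market prices higher.

    markets: action -> price of "outcome quality" conditional on that
    action. trades: list of (trader, action, stake) tuples. Trades in
    the losing market are refunded immediately; trades in the winning
    market settle later, once the real outcome is measured.
    """
    winner = max(markets, key=markets.get)
    kept, refunds = [], {}
    for trader, action, stake in trades:
        if action == winner:
            kept.append((trader, stake))     # settles at outcome time
        else:
            refunds[trader] = refunds.get(trader, 0.0) + stake
    return winner, kept, refunds
```

The refund of the losing condition is what makes the bets cheap to place: traders only risk capital on the world that actually happens.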
**Reputation bonding curves.** Reputation gain follows a bonding
curve (square root or piecewise
linear) instead of linear, so getting from rep=0.7 to rep=0.8
requires more contribution than 0.6 to 0.7. Reflects the
diminishing returns of additional accolades and prevents reputation
hyperinflation. Symmetric on the way down via slashing.
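A square-root curve makes the "each step up costs more" property easy to check. The SCALE constant is an assumed placeholder for whatever calibration the real system uses:

```python
SCALE = 100.0   # contribution points needed to reach rep = 1.0 (assumed)

def reputation(points: float) -> float:
    """Square-root bonding curve: rep = sqrt(points / SCALE), capped at 1."""
    return min(1.0, (points / SCALE) ** 0.5)

def points_for(rep: float) -> float:
    """Inverse: contribution points needed to hold a given reputation."""
    return SCALE * rep * rep

def marginal_cost(rep_from: float, rep_to: float) -> float:
    """Extra points to climb from rep_from to rep_to; grows with altitude."""
    return points_for(rep_to) - points_for(rep_from)
```

On this curve the 0.7 → 0.8 climb costs 15 points against 13 for 0.6 → 0.7, and slashing simply walks back down the same curve.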
**VCG compute auctions.** Use a VCG auction to allocate scarce
LLM/compute resources. Agents
submit bids representing their truthful valuations of compute slots
for their tasks; the dominant strategy is to bid honestly. Replaces
the heuristic capital→compute proportional rule from driver #12.
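For the simplest relevant case, k identical compute slots with one slot per agent, the VCG payment collapses to the highest losing bid (each winner pays the welfare of the bidder it displaced). A sketch under that unit-demand assumption:

```python
def vcg_slot_auction(bids: dict, slots: int) -> dict:
    """VCG for `slots` identical compute slots, one slot per agent.

    With unit demand the VCG payment is the highest losing bid: the
    value the displaced agent would have realized. Truthful bidding
    is a dominant strategy.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [agent for agent, _bid in ranked[:slots]]
    clearing = ranked[slots][1] if len(ranked) > slots else 0.0
    return {agent: clearing for agent in winners}
```

Note the winners' own bids never set their price, which is precisely what removes the incentive to shade bids that the old proportional capital→compute rule suffered from.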
**CPT-weighted belief updates.** Replace the linear EMA in
participant_believability with a CPT
weighting function that systematically corrects for the
well-documented over-weighting of rare events. Better-calibrated
belief updates → better participant_believability → better
weight-of-evidence aggregation in debates.
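A sketch of the correction. The Tversky–Kahneman weighting function (γ ≈ 0.61 is their median estimate) models the distortion, so de-biasing a report means inverting it; the bisection inverse and all names here are illustrative:

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman probability weighting: inflates small p and
    deflates large p (the over-weighting of rare events)."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def debias(reported: float, gamma: float = 0.61, tol: float = 1e-9) -> float:
    """Invert tk_weight by bisection: recover the underlying
    probability from a report distorted by rare-event over-weighting.
    tk_weight is monotone increasing for this gamma, so bisection is safe."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if tk_weight(mid, gamma) < reported:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Feeding debiased probabilities into the believability EMA keeps rare-event alarmism from dominating the weight-of-evidence aggregation.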
**Blind voting (commit-reveal).** When an agent enters a debate or
market, hide the existing argument
distribution until they commit to a stance. Prevents information
cascades where everyone converges on the first answer just because
the first answer was first. Mechanism: debate_argument_votes
become visible only after the voter casts.
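The spec's mechanism is visibility gating on debate_argument_votes; a hash commit-reveal scheme is one standard way to enforce the same guarantee cryptographically rather than by access control. A sketch:

```python
import hashlib
import secrets

def commit(stance: str):
    """Hash-commit to a stance before seeing anyone else's votes.
    Returns (commitment, nonce); publish the commitment, keep the nonce."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{stance}:{nonce}".encode()).hexdigest()
    return digest, nonce

def reveal_ok(commitment: str, stance: str, nonce: str) -> bool:
    """After all commitments are in, each reveal is checked against the
    published hash; nobody could have copied a neighbor's stance."""
    return hashlib.sha256(f"{stance}:{nonce}".encode()).hexdigest() == commitment
```

The nonce prevents dictionary attacks on the small stance space; without it, anyone could hash "support" and "oppose" and read the commitments directly.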
These are recurring drivers that compose with the v1 ones. None
disrupts in-flight work; they all add capability.
The point of building credit backprop and applying real mechanism
design is to make SciDEX a truthful information market. Every
contribution by every agent gets credited proportionally to the
downstream improvement it eventually unlocks, not to how busy it
looked at the time. Quadratic funding aligns capital with broad
expert consensus. Slashing punishes overconfident wrongness.
Demurrage keeps capital productive. Bonding curves reflect
diminishing returns. The result is an economy where the
profit-maximizing strategy is doing real science honestly — and
where weeks-old contributions to an obscure gap can be retroactively
rewarded the day a downstream paper validates them. That's the kind
of incentive structure that lets a hybrid agent/human collective
build a coherent, self-improving world model.
Execution log:

- python3 -m economics_drivers.detect_improvements in dry-run: no new
  events (idempotent — all events already detected by prior run)
- world_model_improvements table has 137 rows across all event types:
  - gap_resolved (very_high): 2 rows — both resolved gaps captured
    with correct magnitude
  - citation_threshold_high: 21 rows (payout_pool=500)
  - citation_threshold_medium: 29 rows (payout_pool=200)
  - hypothesis_matured (high/medium/low): 85 rows distributed across
    magnitudes
- Re-run: no-op as expected
- GET /api/tokens/backprop/status (api.py:12811): SELECT uses correct
  columns (event_type, target_artifact_type, target_artifact_id,
  payout_pool) — verified against PRAGMA table_info and the
  backprop_credit.py SELECT
- POST /api/tokens/backprop (api.py:12857): admin auth gate in place —
  checks api_keys table → 'admin' permission JSON
- Migration (migrations/068_world_model_improvements.py): CREATE TABLE
  IF NOT EXISTS; schema (event_type, target_artifact_type,
  target_artifact_id, payout_pool) aligns with the backprop_credit.py
  SELECT and live DB (137 rows)
- GET /api/tokens/backprop/status uses correct columns (event_type,
  target_artifact_type, target_artifact_id, payout_pool) — verified
  against the backprop_credit.py SELECT
- POST /api/tokens/backprop has admin auth gate: Bearer token → SHA256
  → api_keys table → JSON permissions with 'admin' required
- Migration (migrations/068_world_model_improvements.py): CREATE TABLE
  IF NOT EXISTS; schema aligns with the backprop_credit.py SELECT and
  live DB
- Branch backprop-final (clean linear, no merge commits) on
  origin/main tip dd917f3d
- Bug found: detect_improvements.py had its own inline
  _ensure_schema() that created world_model_improvements with a
  subset of columns, missing description, impact_score, source_gap_id,
  source_analysis_id, metadata, created_at, distribution_details. If
  detect_improvements.py ran before migration 068, the table would
  exist with the partial schema and the migration's CREATE TABLE IF
  NOT EXISTS would be a no-op.
- Fix (economics_drivers/detect_improvements.py): rewrote
  _ensure_schema() to use the full canonical schema matching migration
  068 (payout_pool INTEGER NOT NULL DEFAULT 0, CHECK constraint on
  payout_status, description, impact_score, source_gap_id, etc.);
  added ALTER TABLE ADD COLUMN logic for each extra column if the
  table already exists, with IF NOT EXISTS checks
- detect_improvements.py, backprop_credit.py, and api.py all reference
  the same canonical column set (event_type, target_artifact_type/id,
  payout_pool, payout_status, detection_metadata, detected_at,
  distributed_at, and extras)
- /api/tokens/backprop/status uses correct columns;
  /api/tokens/backprop has admin auth gate (Bearer → SHA256 →
  api_keys → 'admin' perm)
- Bug found: GET /api/tokens/backprop/status hardcoded a SELECT of
  description, source_gap_id, source_analysis_id — would fail on DBs
  created by older detect_improvements.py runs that used a partial
  schema (missing those columns)
- Fix (api.py, api_backprop_status): runtime column detection via
  PRAGMA table_info(world_model_improvements) to build a dynamic
  SELECT, returning None for missing extra columns
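The dynamic-SELECT fix described above can be sketched as follows; this is an illustrative helper, not the actual api.py code, and the extra-column list is an assumption:

```python
import sqlite3

CORE = ("id", "event_type", "target_artifact_type",
        "target_artifact_id", "payout_pool")
EXTRA = ("description", "source_gap_id", "source_analysis_id")

def read_improvements(conn: sqlite3.Connection) -> list:
    """SELECT only the columns that actually exist, filling None for
    extras missing on DBs created by the old partial schema."""
    present = {row[1] for row in
               conn.execute("PRAGMA table_info(world_model_improvements)")}
    cols = [c for c in CORE + EXTRA if c in present]
    cursor = conn.execute(
        f"SELECT {', '.join(cols)} FROM world_model_improvements")
    out = []
    for row in cursor:
        rec = dict(zip(cols, row))
        for c in EXTRA:
            rec.setdefault(c, None)   # normalize rows across schema versions
        out.append(rec)
    return out
```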
- POST /api/tokens/backprop already has admin gate (confirmed)
- Push blocker: origin/main itself contains merge commits in its
  ancestry (e.g., 174a42d3 "Merge origin/main to reconcile
  divergence"). The GitHub branch protection rule checks ALL commits
  reachable from the branch tip, not just new commits. Since 174a42d3
  is an ancestor of origin/main's HEAD (eae2674f), ANY branch derived
  from main will fail this rule.
  - git merge-base --is-ancestor 174a42d3 origin/main → IS ancestor
  - git rev-list --ancestry-path 174a42d3..origin/main → shows full
    ancestry path
  - Branched from origin/main's tip, cherry-picked only our commit —
    still rejected
- Prior attempts: 289e4cfc notes "GH013 pre-existing" and f8916b81
  notes "origin/main itself has merges"
- origin/main advanced to 142c292d (timeout fix) and ff1d30a6 (metrics
  update) while work was in progress
- 174a42d3 confirmed IS ancestor of origin/main HEAD (142c292d) — rule
  still blocks; the 174a42d3 merge is a pre-existing ancestor in
  origin/main
- Spec copy (a88f4944_cb09...): truncated to remove 7 work-log entries
  (6th–16th execution) that other agents added to their own copy of
  the spec; origin/main also has 16 agent logs — this is normal churn
  from recurring tasks, not our change
- /api/tokens/backprop/status: PRAGMA column detection, dynamic
  SELECT, normalized rows; core_cols includes event_type,
  target_artifact_type, target_artifact_id, payout_pool,
  payout_status, detected_at, distributed_at; extra cols added if
  present in DB
- /api/tokens/backprop: Bearer token → SHA256 → api_keys table →
  'admin' permission gate; 174a42d3 ("Merge origin/main to reconcile
  divergence") is a pre-existing merge commit in origin/main ancestry
- sqlite3.OperationalError: unable to open database file; push remains
  blocked until 174a42d3 is removed from origin/main's history
  (requires repo admin)
- New branch backprop-dr14-v7 from origin/main with squashed commits:
  - 38e8670b: migration 068 (new file, 65 lines)
  - 5513ae3e: api.py (+131), detect_improvements.py (+83), spec (+121)
- fbde4f6c (old merge commit) NOT in new branch; 174a42d3 is a merge
  commit in origin/main's history (pre-existing, not introduced by our
  branch)
- python3 test_backprop_credit.py → 4 tests OK; event_type,
  target_artifact_type, target_artifact_id, payout_pool all present
- python3 -m economics_drivers.backprop_credit --limit 5: processed
  hypothesis_matured events targeting gap-debate hypotheses; marked
  orphan (no agent_contributions rows for the hypotheses' analyses).
  No agent_contributions recorded — upstream contributors are debate
  personas (Theorist/Skeptic/etc.) which are stored in
  debate_rounds.agent_persona, but the debate_session link requires
  the hypothesis_debates table, which may not have entries
- python3 -m economics_drivers.backprop_credit --limit 20: no-op as
  expected; driver marks improvements 'distributed' on success,
  'orphan' if no upstream agents found
- python3 -m economics_drivers.detect_improvements --dry-run --limit 50:
  - knowledge_gaps: 2 resolved, 157 partially_addressed, 3074 open,
    26 investigating
  - artifact_links analyses: 296,630 rows (citation targets)
  - hypotheses: 335 total, max confidence 0.9
  - world_model_improvements: 140 rows already captured
- python3 -m economics_drivers.detect_improvements --limit 100:
  gap_resolved_max_id=gap-senescent-clearance-neuro,
  hypothesis_matured_last_id=h_seaad_004
- world_model_improvements has 143 rows (46 distributed, 97 orphan,
  0 pending)
- knowledge_gaps: 2 resolved (no new since last cycle)
- hypotheses: 88 with confidence ≥ 0.7 (no new entries since last
  high-water mark)
- Bug: _detect_confidence_growth used a high-water mark on hypothesis
  id to skip rows already seen. This missed 4 hypotheses whose
  confidence_score was later updated to ≥ 0.7 (92 eligible vs 88
  detected). The high-water-mark approach is correct for append-only
  tables but wrong for mutable scores.
- Fix (_detect_confidence_growth): switched to a full scan of all
  hypotheses with confidence_score >= 0.7. Dedup relies entirely on
  _exists() (already in place). Table has ~350 rows — full scan is
  negligible.
- Result: hypothesis_matured=4 (the 4 previously missed hypotheses now
  detected); world_model_improvements has 147 rows (46 distributed,
  97 orphan, 4 pending)
- New: economics_drivers/quadratic_funding.py (489 lines) — full
  implementation: match(gap) = (Σ √cᵢ)² normalized to a 500-token
  matching pool per 6h round; walks agent_contributions → analyses →
  knowledge_gaps provenance; pays into gap_bounty:<gap_id>; tables
  gap_funding_contributions(id, timestamp, agent_id, gap_id, amount,
  tier, …) and gap_funding_rounds — mirrors migration 069
- _ensure_schema() is safe to run up front; --dry-run computes without
  writing
- New: economics_drivers/token_demurrage.py (267 lines) — Gesell's
  Freigeld applied to wallets: idleness judged by the latest
  token_ledger entry (from OR to); if idle > 24h → deduct
  balance × 0.001 (0.1 %/day); credit deducted tokens to the system
  pool account
- demurrage_rounds(sweep_date UNIQUE) table guards against
  double-runs; exempt accounts: system, qf_matching_pool, MINTED, and
  any account prefixed gap:, gap_bounty:, market:, squad:, demurrage_
- Ledger entries: reason='demurrage' (debit) and
  reason='demurrage_receipt' (system credit) — full audit trail in
  token_ledger
- Schema: demurrage_rounds(id, sweep_date, started_at, completed_at,
  wallets_charged, …) with UNIQUE on sweep_date
- New: economics_drivers/ci_calibration_slashing.py (451 lines) —
  Brier-score slashing: LONG positions score
  (entry_price - resolution_price)^2, SHORT
  ((1 - entry_price) - resolution_price)^2; miscalibrated positions
  also nudge reputation_score down by 0.02
- UNIQUE(position_id) guards double-slashing; failed burns retried;
  follows the token_demurrage/quadratic_funding driver patterns:
  direct SQL, SCIDEX_DB env var, --dry-run/--stats/--test CLI flags,
  python3 -m economics_drivers.ci_calibration_slashing
- Run: total_slashes=0, total_tokens_slashed=0.0 — no miscalibrated
  positions to penalize; calibration_slashing table guards
  per-position
- Bug: _walk_provenance for analysis artifacts only traversed
  hypotheses.analysis_id, not debate_sessions.analysis_id. Since
  debate_sessions has a direct analysis_id FK (and no
  agent_contributions rows), 101/143 improvements were orphan — the
  walk ran out of edges before reaching debate_rounds.agent_persona.
- Fix (backprop_credit.py): added debate_sessions.analysis_id
  traversal in the analysis branch of _walk_provenance, appending
  found sessions to the BFS frontier
- --reset-orphans flag added: _reset_recoverable_orphans() re-walks
  each orphan row; if agents are now reachable it sets
  payout_status = 'pending' so the current cycle distributes,
  otherwise the row stays orphan.
- Run (--reset-orphans --limit 50): orphans re-walked in
  world_model_improvements
- python3 -m economics_drivers.ci_calibration_slashing re-run
- python3 economics_drivers/quadratic_funding.py: funded
  gap-debate-20260412-094638-cd9ef05d; wrote
  gap_funding_contributions (44 direct rows + 1 match row),
  gap_funding_rounds (round 7 recorded), token_accounts (gap bounty
  pool credited)
- 7839b732c is IN remote main (ancestry:
  4535366b→...→ea435d6f→7839b732c); the 174a42d3b merge commit is in
  the full GitHub ancestry of all branches
- python3 economics_drivers/quadratic_funding.py: funded
  gap-debate-20260412-094638-cd9ef05d (same gap, slight contributor
  drop)
- python3 -m economics_drivers.quadratic_funding: funded
  gap-debate-20260412-094638-cd9ef05d (consensus sustained across 9
  cycles); wrote gap_funding_contributions, gap_funding_rounds,
  token_accounts
- python3 -m economics_drivers.quadratic_funding: funded
  gap-debate-20260412-094638-cd9ef05d (consensus sustained across 10
  cycles); wrote gap_funding_contributions, gap_funding_rounds,
  token_accounts
- python3 economics_drivers/ci_calibration_slashing.py: failed —
  sys.path.insert(0, '/home/ubuntu/scidex') pointed to the main repo
  (no token_ledger.py there). Fixed path handling via __file__ so the
  token_ledger shim is found; DB_PATH configurable via SCIDEX_DB env
  var with postgresql://scidex as default.
- Live DB is postgresql://scidex (~4GB); the worktree has only a 28KB
  stub; calibration_slashing table has 0 records.
- PostgreSQL incompatibilities found: executescript not supported,
  INSERT OR IGNORE is SQLite-only syntax, ORDER BY in SELECT DISTINCT
  requires the column to also be in the select list (PostgreSQL
  stricter than SQLite).
- Fixes (economics_drivers/quadratic_funding.py):
  - _ensure_schema: replaced the executescript multi-statement call
    with individual conn.execute() calls — executescript doesn't
    exist on PGShimConnection.
  - _get_agent_gaps: wrapped SELECT DISTINCT in a subquery so
    ORDER BY ac_id DESC can reference the selected alias — fixes the
    InvalidColumnReference error in PostgreSQL.
  - INSERT OR IGNORE: replaced with INSERT ... ON CONFLICT (id) DO
    NOTHING — PostgreSQL upsert syntax.
- Branch orchestra/task/2bb1d0cd-quadratic-funding-allocator-driver-15
  committed and pushed.
- python3 -m economics_drivers.quadratic_funding completed round 27
  (50 contributors, 1 gap funded, 657.1 direct + 500.0 matched tokens)
  for gap-debate-20260417-033236-0fe26d91.
- Bug: _get_eligible_agents only excluded system, so high-balance pool
  accounts such as gap_bounty:, dataset_pool:, squad:, and tournament:
  were selected as contributors and auto-funded fallback gaps. Fixed
  (glm-5:60); added a regression test covering pool-account exclusion.
- Corruption: hypotheses table (B-tree pages 344, 415, etc. returning
  error 11: "database disk image is malformed"). Affected
  _detect_confidence_growth and _detect_hypothesis_promoted, which
  used ORDER BY id ASC — forced SQLite to traverse corrupted index
  pages. The hypotheses table has partial B-tree page corruption
  affecting the id-indexed access path. Simple queries with
  LIMIT ≤ 44 work; ORDER BY or high limits trigger corrupted pages.
- Fix (economics_drivers/detect_improvements.py):
  - _detect_confidence_growth: replaced the direct query with
    rowid-based CTE pagination. Uses WITH batch AS (SELECT rowid FROM
    hypotheses WHERE confidence_score >= 0.7 AND rowid > ? ORDER BY
    rowid ASC LIMIT ?), then JOINs back to get full rows.
    SAFE_BATCH=44. Tracks progress via confidence_growth_last_rowid
    in the state batch.
  - _detect_hypothesis_promoted: same rowid-based CTE approach.
    SAFE_BATCH=30. Tracks progress via hypothesis_promoted_last_rowid.
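The rowid-based CTE pagination described above can be sketched in isolation; this is an illustrative generator, not the driver's code, and the state-persistence step is reduced to a local variable:

```python
import sqlite3

SAFE_BATCH = 44   # largest LIMIT observed to dodge the corrupted pages

def scan_hypotheses(conn: sqlite3.Connection, last_rowid: int = 0):
    """Walk hypotheses in rowid order in small batches, avoiding the
    ORDER BY id path that traverses the corrupted B-tree pages."""
    while True:
        batch = conn.execute(
            """
            WITH batch AS (
                SELECT rowid AS rid FROM hypotheses
                WHERE confidence_score >= 0.7 AND rowid > ?
                ORDER BY rowid ASC LIMIT ?
            )
            SELECT h.rowid, h.id, h.confidence_score
            FROM batch JOIN hypotheses h ON h.rowid = batch.rid
            ORDER BY h.rowid ASC
            """, (last_rowid, SAFE_BATCH)).fetchall()
        if not batch:
            return
        for _rid, hid, score in batch:
            yield hid, score
        # Resume point; persisted as confidence_growth_last_rowid
        # in the real driver's state batch.
        last_rowid = batch[-1][0]
```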
- python3 -m economics_drivers.detect_improvements (non-dry-run):
  hypothesis_promoted=6 (6 new events detected and written as
  pending); world_model_improvements now has 263 rows (254
  distributed, 3 orphan, 6 pending)
- RecursionError: maximum recursion depth exceeded on PostgreSQL DB.
  Root cause: the local get_db() wrapper called get_db() from
  scidex.core.database (imported as from scidex.core.database import
  get_db), creating infinite recursion. Also used SQLite-only syntax
  (sqlite_master, datetime('now'), PRAGMA, sqlite3.Row).
- Fixes (economics_drivers/ci_calibration_slashing.py):
  - Removed the local get_db() wrapper — uses
    scidex.core.database.get_db directly (PGShimConnection, no close
    needed)
  - Replaced sqlite_master with pg_catalog.pg_tables (PostgreSQL
    system catalog)
  - Replaced datetime('now') with a Python-side _pg_compat_now()
    using datetime.now(timezone.utc).strftime() — passed as an
    explicit parameter
  - Removed PRAGMA journal_mode=WAL and PRAGMA busy_timeout
    (PostgreSQL handles these natively)
  - Removed the sqlite3.Row row factory (PGShimConnection returns
    dict-like objects natively)
  - Removed the sqlite3.IntegrityError catch — replaced with a
    generic Exception that checks for duplicate-key keywords
  - Removed db.close() calls throughout (PGShimConnection manages its
    own pool)
  - Removed the sqlite3 import (only used for type annotations that
    are now removed)
  - Changed CREATE INDEX IF NOT EXISTS to CREATE INDEX (PostgreSQL is
    idempotent; IF NOT EXISTS unnecessary)
  - Changed the ensure_calibration_slashing_table default from
    (datetime('now')) to (CURRENT_TIMESTAMP) for PostgreSQL compat
- python3 -m economics_drivers.ci_calibration_slashing → runs
  successfully against PostgreSQL; 1 settled position evaluated,
  0 slashes (Brier ≤ 0.25)
- Branch orchestra/task/c3a426dc-calibration-slashing-for-miscalibrated-f…
- python3 -m economics_drivers.quadratic_funding --dry-run projected
  round 30; python3 -m py_compile
  economics_drivers/quadratic_funding.py passed
- python3 -m economics_drivers.quadratic_funding: funded
  gap-debate-20260417-033236-0fe26d91; gap_funding_rounds has 30 rows,
  gap_funding_contributions has 1,217 rows

{
"requirements": {
"coding": 6,
"analysis": 6,
"safety": 9
},
"completion_shas": [
"6f37492e489d95f8e1d2da76793dd60d1826289e",
"ba9153a09d61ebb35bea2bb459e82d2e1cbec7ae"
],
"completion_shas_checked_at": "2026-04-12T22:25:51.344678+00:00",
"completion_shas_missing": [
"890749f3bf91486a7c5261a0c49f42c8776f40f4",
"e0fd764e4b3f7d0112bc1b480e7af4d979b74574",
"0d00883c440be4f588262af1853887f3bdeb9379",
"644787d2fb8276578ca7e5100afa0927e38b21d8",
"0567540c4e34cea4b61280bfc9c4ae3284b6d349",
"0064c58de883a2285ceab2332cbaef59580dc9aa",
"2f5cb207381c77885536e22305d483b0d3dca548",
"391a979da2400978dc0694910e1a07043c82db7c",
"cd0e791606c9563c3e9e604c11b649d64af1e838",
"6bef402d79275752bdf8a9af1801f51378fc5530",
"f3c45d0fce2ff9dd1c895901ecb5c7e958d3adc2",
"b64ade1e7f56a48b6512658bc8187776260425d7",
"8ef41fe5d5541a0caf7bedd0075ecb761d6d3c48",
"b348e7e7670bb4064b818238fadd5199094f78bc",
"51b6e43ee010c8906f50459121decb289e011e22",
"5613f5ae366acb95501ffb0e0f0866a48839bec6",
"6e53463223c035576b94be1009de5e5db0139701",
"f8dc60539570e545d99af7ea5788fc8d60561261",
"5487226585db91c9b2f67437cf64035e154e86aa",
"53e2d56ad87f7f1100862464c07bc98b4a4fe740",
"0c1c848d122a1cc8902c9145fd63d1542cefc903",
"c87512a0c360a994a37aac798ef571315c316097",
"28aa7a12945ff666ca33f59450719935ebf88f09",
"794c42592973b10e6b780b886e8313f52a2af686",
"0dc727390a92f49960a3754c82edefaf3f7ec1ea",
"424c9bc5dfbe813f34fa6834757d4bb595743e1c",
"e68a3d884680887898f111390aa7ae11ed8d2df4",
"b3d4ea7ccf8005e8599d75d087188c1ea53a803a",
"e4d9d6a8d834378730568d381434b059d63eb3a4",
"a52118a6fe14a2b2510a3a0377665572574a6ce2",
"f3ee47aa662da01e067dd8d15facfedfeea66529",
"5128f4675777ed06857c48dfb8203e5aa8b2626a",
"e62ea65d164048f7688a27426252a302e0a6c240",
"b99b99fa1df07860ce22e1fb81faf9007803e9da",
"82affb5f9447717e9194eb8c56e4f9ed060a2930",
"1956ee4102fe0c7d63dd393f058af084a65faed2",
"16179f3cf88044334e15cdcf3750c45802d68bc2",
"c198cb98666218dd7d87a82ad9b81a28cbaf24e6",
"afa60621eb64d72bd8cb6ab11f7b70cd52076d4d",
"3e25b1edb3ea58644ce9f3012bab1d09f6f649ad",
"c96186a173962a46f7e21825330127bc4d392cb5",
"571b655d8f7529cf2c2326ed6c379d71cd0e4120",
"1710c404a0677214375261da21dcfa7912ddea07",
"fb7172ee86403bd331a0554b98fa59e6f360b2f4",
"b3249e57c706119a24c9621f9710328d0beb856d",
"043b1617d1c884e0c16d7593d90aca5cfc5df2a6",
"87d0eada70235d4db745b1bc043274ca708082bd",
"c8bbc1102fb1a4f7447729b8cecf05ec14cf40b1",
"e3bf8ce91a7344f2cf12fd3a7b50a3eed1085c9f",
"74b1b757b44e0ea0334eeb0d2c4240012b3c49e3",
"264058f24d4ee15a51fc0e5bdb2fe2402ba2f112",
"94185256a9e1c81eb215ba70a65809daa33a5bef",
"92d3e0ffe3bd13abafb33aace2343c0258ab971e",
"9e143a3431e59c1b37c4daf49efc5509198f6ca4",
"fbb4d913df13e4642e8d838e59e28423a2802ac6",
"ef65786c3ae44aa1412ead12075475646e23dd9b",
"73fbb026a92f2f7a87782b9521e8b4a90336e1b7",
"1f0f0fe5da57fdea2484698c1dd55abb697f2673",
"7d827cc655319ec65f95e14e290f330c4fb1b50f",
"063d3f66a1e393081c8f28bbadb52d40cd165778",
"cf4418b577d69319254f63315ed963a920bdb0c5",
"e5e3481cc73bda14563842906e21da3f83911f1c",
"437d484d0659d04859c0515dfdf58706b80b7849",
"29966f5f7d6ce53bf75bd1fac1fddce1a84c0fc9"
]
}