[Atlas] Research-front velocity meter for sub-fields (open)

Quest: Live Dashboard Artifact Framework
Per-field velocity meter (z-scored hyps+papers+debates+OQs per week) with cold/warm/hot/red-hot bands and leaderboard.

Effort: standard

Goal

Some sub-fields are accelerating (new hypothesis-per-week rate
spiking, new debates being spawned, new papers ingested), some
are dormant. Research funders, journalists, and senior PIs all
want to know "where is the action moving?" — but SciDEX surfaces
no per-sub-field velocity signal.

Build a Research-Front Velocity Meter: a per-sub-field
scalar (hypotheses per week + papers per week + debates per week,
weighted) plus a 12-week trend, plus a "hot fronts" leaderboard.
The meter drives the hero of whats-changed and gives every
intro page a "field heat" badge.

Acceptance Criteria

☐ New module scidex/atlas/research_front_velocity.py:
  - compute_velocity(field_slug: str, window_weeks: int = 4) -> dict
    returns {velocity_score,
    components: {new_hyps_per_week, new_papers_per_week,
    new_debates_per_week, new_open_questions_per_week},
    trend_12w: list[float], pct_rank_among_all_fields,
    velocity_band: 'cold'|'warm'|'hot'|'red-hot'}.
  - recompute_all_fields() walks the field registry and writes
    to field_velocity_snapshots.
☐ New table field_velocity_snapshots with (field_slug, computed_at,
  velocity_score, components JSONB, trend_12w JSONB, pct_rank,
  velocity_band) and an index on (field_slug, computed_at DESC).
☐ Velocity formula (documented in module docstring):

  velocity = 0.4·z(new_hyps) + 0.3·z(new_papers) +
             0.2·z(new_debates) + 0.1·z(new_open_questions)

  where each component is z-scored against the field's own
  trailing-12-week mean and standard deviation, so velocity
  measures acceleration relative to the field's own baseline
  rather than raw output; this avoids dominant fields
  permanently outranking niche ones.
☐ Velocity bands by percentile rank: pct_rank ≤ 25 = cold,
  25-50 = warm, 50-80 = hot, 80+ = red-hot.
☐ GET /research-fronts page shows a leaderboard of all registered
  fields sorted by velocity_score desc, with band-colored cards,
  a sparkline of trend_12w, and a "drill in" link to
  /intro/{field-slug}.
☐ Per-field velocity badge: every /intro/{field-slug}
page renders the band as a colored badge in the hero;
/field-trends/{field-slug} (q-time-field-time-series)
embeds the velocity meter at the top.
☐ Daily systemd timer scidex-field-velocity-daily.timer
recomputes the snapshot for every field; stale snapshots
(>48h) are flagged in senate_alerts.
☐ Pytest: synthetic 12-week activity for 3 fields with
known relative velocities; asserts band assignment
matches expectation; asserts the leaderboard ordering;
asserts trend_12w array length is exactly 12.
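Taken together, the formula and band criteria above reduce to a small amount of pure Python. A minimal sketch (the helper names and in-memory data shape are illustrative; the real module would pull weekly counts from field_velocity_snapshots):

```python
# Sketch of the velocity formula and band assignment from the criteria
# above. Each component maps to its trailing 12-week weekly-count series;
# snapshot I/O and SQL joins are omitted.

WEIGHTS = {
    "new_hyps_per_week": 0.4,
    "new_papers_per_week": 0.3,
    "new_debates_per_week": 0.2,
    "new_open_questions_per_week": 0.1,
}

def zscore_latest(series: list[float]) -> float:
    """Z-score the latest value against the series' own mean/std."""
    mean = sum(series) / len(series)
    std = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5
    return 0.0 if std == 0 else (series[-1] - mean) / std

def velocity_score(components: dict[str, list[float]]) -> float:
    """Weighted sum of per-component z-scores (acceleration vs. baseline)."""
    return sum(w * zscore_latest(components[name]) for name, w in WEIGHTS.items())

def velocity_band(pct_rank: float) -> str:
    """Map a percentile rank (0-100) to the spec's band thresholds."""
    if pct_rank <= 25:
        return "cold"
    if pct_rank <= 50:
        return "warm"
    if pct_rank <= 80:
        return "hot"
    return "red-hot"
```

A flat activity series scores 0.0 (no deviation from baseline), while a late spike in any component pushes the score positive, which is exactly the "acceleration, not raw output" property the formula is meant to have.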

Approach

  • Field-tag joins reuse the same resolver as q-edu-intro-to-field.
  • Z-scoring uses scipy.stats.zscore if available, else a
    pure-Python helper (do not introduce a new dep).
  • Sparkline reuses the SVG component from q-synth-whats-changed.
  • Velocity computation is pure SQL where possible — only the
    z-score normalization is in Python.
  • The "stale snapshot" alert prevents a silent-failure mode where
    the daily cron dies and meters freeze without anyone noticing.
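The optional-scipy z-scoring bullet could look roughly like this (a sketch, not the mandated implementation; the guard for a flat series is an assumption so both paths behave identically):

```python
# Prefer scipy.stats.zscore when scipy happens to be installed,
# otherwise fall back to a dependency-free helper.
try:
    from scipy.stats import zscore as _scipy_zscore  # optional; never a new dep
except ImportError:
    _scipy_zscore = None

def zscores(values: list[float]) -> list[float]:
    """Z-score a series; a flat series maps to all zeros rather than NaN."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    if std == 0:
        # scipy would return NaNs here (0/0); zeros are the useful answer
        # for "no deviation from baseline".
        return [0.0] * len(values)
    if _scipy_zscore is not None:
        return [float(z) for z in _scipy_zscore(values)]
    return [(v - mean) / std for v in values]
```

scipy's default ddof=0 (population standard deviation) matches the pure-Python branch, so swapping between the two paths does not change scores.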

Dependencies

  • q-edu-intro-to-field — field registry.
  • q-time-field-time-series — embeds the badge.
  • q-synth-whats-changed — sparkline component reuse.

Work Log
