[Forge] Score performance for 25 unscored registered skills (done; analysis: 7, coding: 8)

Registered skills have missing or zero performance_score. Skill scores guide routing, playground exposure, and tool-library maintenance.

Verification:

  • 25 skills have calibrated performance_score values or documented insufficient-data status
  • Scores are based on tool-call success, latency, usage, and code-path health
  • The remaining unscored-skill count is reduced

Start by reading this task's spec and checking for duplicate recent work.

Git Commits (1)

[Verify] Skill scoring verified: 282 skills, 0 unscored [task:0d17fbe5-ebf7-4850-b35c-a29d8fe2c6ec] (2026-04-22)
Spec File

Goal

Calibrate performance scores for registered skills that currently have no meaningful score. Skill scores support routing, maintenance, and tool-playground exposure.

Acceptance Criteria

☐ A concrete batch of unscored skills is reviewed
☐ Each reviewed skill receives a calibrated performance_score or insufficient-data rationale
☐ Scores use tool call success, latency, usage, and code-path health
☐ Before/after unscored skill counts are recorded

Approach

  • Select skills with performance_score NULL or 0, ordered by usage and category.
  • Inspect tool_calls, tool_invocations, latency, error rates, and code_path existence.
  • Persist calibrated scores or documented insufficient-data rationale through the standard DB path.
  • Verify updated scores and remaining backlog.
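The selection step above can be sketched as a query over the skill registry. This is a minimal, self-contained illustration using an in-memory SQLite table; the real system runs against PostgreSQL through scidex.core.database.get_db(), and the schema here (a skills table with usage_count and category columns) is an assumption based only on the column names this log mentions.

```python
import sqlite3

# Hypothetical minimal schema mirroring the columns named in this log;
# the production path goes through scidex.core.database.get_db().
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE skills "
    "(name TEXT, category TEXT, usage_count INTEGER, performance_score REAL)"
)
conn.executemany(
    "INSERT INTO skills VALUES (?, ?, ?, ?)",
    [
        ("tool_pubmed_search", "research", 120, 0.996),
        ("tool_new_widget", "playground", 3, None),   # unscored (NULL)
        ("tool_legacy_probe", "playground", 0, 0.0),  # unscored (zero)
    ],
)

# Step 1 of the approach: select unscored skills, busiest first.
rows = conn.execute(
    """
    SELECT name, category, usage_count
    FROM skills
    WHERE performance_score IS NULL OR performance_score = 0
    ORDER BY usage_count DESC, category
    """
).fetchall()
for name, category, usage in rows:
    print(name, category, usage)
```

The IS NULL OR = 0 predicate matches the task's own definition of "unscored", so the same WHERE clause doubles as the before/after backlog count.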

    Dependencies

    • q-cc0888c0004a - Agent Ecosystem quest

    Dependents

    • Forge skill registry, routing, and tool quality

    Work Log

    2026-04-26 05:23 UTC — Task bec30a01 benchmark run

    • Task bec30a01-e196-4d26-a051-e9e808b95146 ran benchmark on 26 unscored skills.
    • Scored all 26 using tool_calls telemetry: formula = 0.5 + 0.3 * success_rate + 0.2 * speed_factor.
    • Score range: 0.150 (never-used playground tools) to 0.994 (openfda-adverse-events, 1 call, 100% SR).
    • Result: 26→0 unscored skills. All 682 skills now have performance_score > 0.
    • Commit: 0267ccb80 ([Forge] Benchmark 25 registered skills by performance and accuracy [task:bec30a01-e196-4d26-a051-e9e808b95146]).
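The benchmark formula from this run can be sketched as below. The entry records only the weights, so treat speed_factor as an opaque 0-to-1 input here; the 0.150 floor for never-used tools appears in the log as a separate fixed value, not an output of this formula.

```python
def benchmark_score(success_rate: float, speed_factor: float) -> float:
    """2026-04-26 run: 0.5 base plus weighted success and speed terms."""
    return 0.5 + 0.3 * success_rate + 0.2 * speed_factor

# Bounds of the formula itself: a perfect, fast skill reaches 1.0;
# a skill with telemetry but zero success and zero speed credit gets 0.5.
print(benchmark_score(1.0, 1.0))  # 1.0
print(benchmark_score(0.0, 0.0))  # 0.5
```

The observed top score of 0.994 (one call, 100% success) is consistent with a speed_factor slightly below 1.0 for that skill.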

    2026-04-21 - Quest engine template

    • Created reusable spec for quest-engine generated skill performance scoring tasks.

    Already Resolved — 2026-04-21 21:27:05Z

    • Task 85d9ad14-3519-4ebd-9e5e-e4189ac7b2e8 was stale by the time this slot started: live PostgreSQL verification through scidex.core.database.get_db() found 282 registered skills and 0 skills with performance_score IS NULL OR performance_score = 0.
    • Evidence query also found score coverage across the current Forge telemetry base: min score 0.15, max score 1.0, average score 0.9271; tool_calls has 28,280 rows with 27,889 successes, 390 errors, and 1544.2 ms average nonzero latency.
    • Code-path health spot check grouped the registry by skills.code_path; the primary tools.py path has 112 scored skills and exists in-repo, and registered forge/skills/*/SKILL.md paths sampled from the scored registry exist.
    • Resolution source: prior branch commit eb7917ecf ([Forge] Score performance for 25 unscored registered skills [task:c82d378b-5192-4823-9193-939dd71935d1]) plus verification commit 70fbe70a2 ([Verify] Skill scoring already resolved — 26→1 unscored [task:c82d378b-5192-4823-9193-939dd71935d1]) addressed the scoring backlog. The current live DB has 0 remaining unscored skills, satisfying this task's verification target of zero.
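The verification described above reduces to one count plus aggregate statistics. A minimal sketch, again using SQLite in place of the live PostgreSQL connection, with illustrative score values rather than the audited 282-skill registry:

```python
import sqlite3

# Hypothetical stand-in for the live registry; the real check ran through
# scidex.core.database.get_db() against PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE skills (name TEXT, performance_score REAL)")
conn.executemany(
    "INSERT INTO skills VALUES (?, ?)",
    [("tool_a", 0.15), ("tool_b", 1.0), ("tool_c", 0.9271)],
)

# Backlog check: must be 0 for the task to count as resolved.
unscored = conn.execute(
    "SELECT COUNT(*) FROM skills "
    "WHERE performance_score IS NULL OR performance_score = 0"
).fetchone()[0]

# Coverage stats (min / max / avg), as reported in the evidence query.
lo, hi, avg = conn.execute(
    "SELECT MIN(performance_score), MAX(performance_score), "
    "AVG(performance_score) FROM skills"
).fetchone()
print(unscored, lo, hi, round(avg, 4))
```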

    Already Resolved — 2026-04-21 21:35:00Z

    • Re-verified: 282 skills, 0 unscored (performance_score IS NULL OR = 0). Score stats unchanged: min=0.15, max=1.0, avg=0.9271. Task acceptance criteria already satisfied by prior work.

    2026-04-27 06:32 UTC — Task 610a8b3c: rescore 20 skills by test coverage + error rate

    • Rescored 20 registered skills using a two-dimensional formula:
      score = 0.40 * success_rate + 0.35 * test_coverage + 0.25 * speed_factor
    • Test coverage audit (manual scan of tests/):
    - 1.0 — tool_pubmed_search, tool_paper_figures (dedicated test files)
    - 0.8 — tool_research_topic (success/failure paths in test_agora_orchestrator_tools.py)
    - 0.6 — tool_gtex_tissue_expression, tool_open_targets_associations, tool_semantic_scholar_search (indirect mock references)
    - 0.2 — remaining 14 skills (no coverage found)
    • Error rate from tool_calls telemetry (per-skill errors / total calls)
    • Speed factor penalises average latency above 500 ms (drops to 0 at 10.5 s)
    • Notable corrections from previous blanket 1.0 scores:
    - tool_pubchem_compound 1.000 → 0.561 (39.7% error rate, no tests)
    - tool_msigdb_gene_sets 1.000 → 0.574 (34.0% error rate, no tests)
    - tool_expression_atlas_differential 1.000 → 0.587 (15.0% err, 3.4 s avg, no tests)
    - tool_brainspan_expression 1.000 → 0.665 (13.7% error rate, no tests)
    - tool_mgi_mouse_models 1.000 → 0.655 (16.3% error rate, no tests)
    - tool_pubmed_search 1.000 → 0.996 (0.3% error rate, full test coverage) ✓
    • Script: score_skills_by_coverage_and_errors.py (checked in)
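The rescore above can be sketched as follows. The linear shape of the latency penalty (1.0 at or below 500 ms, falling to 0.0 at 10.5 s) is inferred from the two endpoints this entry states; the 500 ms latency in the example is an assumed figure, not audited telemetry.

```python
def speed_factor(avg_latency_ms: float) -> float:
    """Assumed linear penalty: 1.0 at or below 500 ms, 0.0 at 10.5 s."""
    return max(0.0, min(1.0, 1.0 - (avg_latency_ms - 500.0) / 10_000.0))

def rescore(success_rate: float, test_coverage: float,
            avg_latency_ms: float) -> float:
    """Two-dimensional formula from the 2026-04-27 rescore."""
    return (
        0.40 * success_rate
        + 0.35 * test_coverage
        + 0.25 * speed_factor(avg_latency_ms)
    )

# A skill with a 39.7% error rate, the 0.2 no-coverage floor, and fast
# calls lands near the corrected ~0.561 noted for tool_pubchem_compound.
print(round(rescore(1 - 0.397, 0.2, 500), 3))  # 0.561
```

Note how the 0.35 test_coverage term caps untested skills: with the 0.2 floor, a skill with perfect success and speed can score at most 0.72, which is what drives the corrections away from the previous blanket 1.0 scores.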

    Payload JSON
    {
      "requirements": {
        "analysis": 7,
        "coding": 8
      }
    }
