Spegling

A personal authority system for the era of cognitive surrender and intent drift. Not an assistant. A mirror.

Spegling holds your declared intent as a persistent object, not a vanishing prompt. It records evidence as structured audit, not transcript. When the world moves under the intent you signed off on, it re-asks before continuing. Three primitives, one runtime.

Spegling is the productized response to a diagnosis that arrived from two disciplines weeks apart. Storey's Triple Debt Model (UVic, Feb 2026) named the team-level mechanism: AI accelerates code production faster than teams can build the mental model to maintain it. Wharton's Shaw and Nave measured the individual case: 80% of users follow incorrect AI; 73% surrender System 2 thinking entirely. The architecture that takes that diagnosis seriously is described at varjosoft.com/bridge.

How it works

  • Declared Intent. Every meaningful run is anchored to a stable intent record with scope and constraints. Decisions link back to the intent, not to a transcript that decays as new tokens arrive.
  • Living Authority. Authority is granted at a moment. The system detects when the world has moved under that authority and re-asks for consent before continuing. Approval where it earns its cost, not approval theatre.
  • Evidence Audit. What the agent consulted is recorded as structured records, indexable by intent and queryable across runs. A regulator, a future maintainer, or you in three months can answer "what did this stand on?" without reading transcripts.
  • No speculation in memory. Anything written to Spegling's durable Evidence ledger traces to a verified tool-call result. Reasoning chains, untested plans, and model speculation live in working memory and never graduate to durable evidence without grounding. This discipline prevents model output from crossing from candidate to memorized fact without verification, the architectural analog of cognitive surrender.
  • Plus: a corpus of architectural patterns kept current by daily research, persistent memory across sessions with provenance and confidence tracking, and directed research campaigns that search internal knowledge first and fill gaps from the web
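Spegling's internals are not public, so the following is only a minimal sketch of how the three primitives above might compose; every class, field, and method name here is hypothetical. It shows the essential invariants: an authority grant is pinned to the state of the world it was approved against, and only tool-grounded records graduate to the durable evidence ledger.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three primitives; not Spegling's actual API.

@dataclass(frozen=True)
class Intent:
    """A declared intent: a stable record, not a vanishing prompt."""
    intent_id: str
    scope: str
    constraints: tuple[str, ...]

@dataclass
class AuthorityGrant:
    """Authority granted at a moment, tied to a snapshot of the world."""
    intent_id: str
    world_fingerprint: str  # hash of the state the user approved

    def is_stale(self, current_fingerprint: str) -> bool:
        # The world moved under the approval: re-ask before continuing.
        return current_fingerprint != self.world_fingerprint

@dataclass(frozen=True)
class EvidenceRecord:
    """Structured audit entry, indexable by intent."""
    intent_id: str
    source: str
    claim: str
    tool_call_id: Optional[str]  # None means ungrounded speculation

class EvidenceLedger:
    """Only records grounded in a tool call reach durable storage."""
    def __init__(self) -> None:
        self.durable: list[EvidenceRecord] = []
        self.working: list[EvidenceRecord] = []

    def record(self, rec: EvidenceRecord) -> None:
        # Speculation stays in working memory; grounded evidence persists.
        (self.durable if rec.tool_call_id else self.working).append(rec)

    def by_intent(self, intent_id: str) -> list[EvidenceRecord]:
        # Answers "what did this stand on?" without reading transcripts.
        return [r for r in self.durable if r.intent_id == intent_id]
```

Under this sketch, a run would check `grant.is_stale(...)` before each consequential action and pause for re-consent when it returns true, while `by_intent` gives the queryable audit view.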

In practice

Spegling was used to develop turboquant-vllm, an open-source LLM compression library: it conducted the research, identified approaches, and produced the benchmarks. Total GPU cost: $18. Full write-up at varjosoft.com/writing.
