Spec

Paper candidate

Longitudinal self-specification through repeated AI operator correction

A live record of instruction revisions, preference formation, paper-template constraints, cognitive-load constraints, and AI answer-quality standards.

Abstract

Spec treats Alan's personality-instruction revisions as data. Each revision stores the full instruction text, date, change summary, PDF export, and analysis state. The central finding is that repeated AI friction can be converted into a durable operating contract: source discipline, visual standards, paper continuity, paper-template preservation, cognitive-load control, and explicit done-reporting become measurable interface requirements instead of repeated prompting. The current record includes recovered local Codex session history, not only the latest hand-entered contract.
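The per-revision record described above can be sketched as a small data structure. This is a minimal illustration, not Spec's actual schema: the class name, field names, and state values are assumptions inferred from the prose.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InstructionRevision:
    """One stored personality-instruction revision (hypothetical schema)."""
    text: str              # full instruction text at this revision
    revised_on: date       # date the revision was recorded
    change_summary: str    # short description of what changed
    pdf_export_path: str   # location of the PDF export
    analysis_state: str    # e.g. "pending" or "analyzed" (assumed values)

# Example record, with placeholder content
rev = InstructionRevision(
    text="Always cite sources; never invent references.",
    revised_on=date(2024, 1, 1),
    change_summary="Added source discipline.",
    pdf_export_path="revisions/r1.pdf",
    analysis_state="pending",
)
```

Keeping the full text rather than a diff is what makes longitudinal analysis possible: any two revisions can be compared without replaying intermediate edits.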

Current Evidence

The stored artifacts include 41 instruction revisions from local Codex history and the current local AGENTS.md contract. The current revision includes source-of-truth operation, continuous paper discipline, a mandatory canonical LaTeX paper-template rule, a website quality bar, look-and-feel inference, visible-language constraints, neurodivergent cognitive-load rules, and automatic recording of future personality-instruction revisions in Spec.

Novelty

The paper-worthy object is not a better prompt. It is a longitudinal preference-to-contract loop. Corrections become acceptance tests. Frustration, overwhelm, and paper-format drift become typed signals. The recovered history shows repeated tightening rather than a single preference statement: broad operating instructions narrowed into outcome contracts, and finally into a personality rule that Alan himself should not be the quality-control loop.
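The "corrections become acceptance tests" idea can be made concrete with a small sketch: each typed friction signal maps to a checkable requirement. The enum members and rule strings below are illustrative assumptions, not Spec's actual taxonomy.

```python
from enum import Enum, auto

class CorrectionSignal(Enum):
    """Typed friction signals recovered from the revision history (illustrative)."""
    FRUSTRATION = auto()    # repeated re-prompting for the same fix
    OVERWHELM = auto()      # output exceeded cognitive-load limits
    FORMAT_DRIFT = auto()   # paper diverged from the canonical template

def to_acceptance_test(signal: CorrectionSignal) -> str:
    """Map a typed signal to the contract requirement it produced (assumed mapping)."""
    rules = {
        CorrectionSignal.FRUSTRATION: "answers must report done/not-done explicitly",
        CorrectionSignal.OVERWHELM: "output must stay within cognitive-load limits",
        CorrectionSignal.FORMAT_DRIFT: "papers must match the canonical LaTeX template",
    }
    return rules[signal]
```

The point of the mapping is that once a friction episode is typed, its fix is a rule that can be checked mechanically, so Alan no longer has to re-detect the same failure by hand.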

Boundary

Archived ChatGPT history is not ingested yet. The current app supports stored revisions and GPT analysis, but deeper archived-chat analysis requires exported or connected source data.