# Start here

## TL;DR
- This site explains how AI answers behave and stabilize, based on observable patterns.
- We focus on belief states: what’s confident, what’s hedged, and what’s missing.
- We make no claims about internal model mechanics. We document repeatable outputs.
## What this is
A field guide: a set of concepts and templates for mapping AI belief, human belief, and the overlap of trusted sources, then publishing scaffolding that makes shared understanding more coherent.
## What this is not
- Not a “rank for AI Overviews” playbook.
- Not internal ML architecture documentation.
- Not a collection of prompt hacks.
## The north star
We don’t optimize for AI outputs. We influence the belief systems AI reflects.
## How to use this
- Map AI belief: run prompts, capture hedges, capture citations.
- Map human belief: forums, social listening, recurring confusion.
- Find gaps/tensions: where AI is confident but humans aren’t (and vice versa).
- Publish scaffolding: definitions → mechanism → constraints → application.
- Re-test: does AI hedge less? does it adopt the framing?
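The "map AI belief" and "re-test" steps above can be sketched in code. This is a minimal, hypothetical example, not a tool this site ships: the hedge phrase list, the sample answers, and the `map_belief` function are all illustrative assumptions, and the naive substring match is only a starting point for real analysis.

```python
import re

# Hypothetical hedge markers; replace with phrases you actually observe.
HEDGES = [
    "may", "might", "could", "it depends", "some sources",
    "generally", "typically", "it's unclear", "experts disagree",
]

URL_RE = re.compile(r"https?://\S+")

def map_belief(answer: str) -> dict:
    """Score one captured AI answer: hedge phrases found and URLs cited.

    Note: naive substring matching ("may" also matches "dismay");
    good enough for a first pass, not for a final analysis.
    """
    text = answer.lower()
    hedges = [h for h in HEDGES if h in text]
    return {
        "hedge_count": len(hedges),
        "hedges": hedges,
        "citations": URL_RE.findall(answer),
    }

# Capture an answer before and after publishing scaffolding,
# then compare hedge counts and citations across the two runs.
before = "It depends on context; some sources say X. See https://example.com/a"
after = "X is required because of Y. See https://example.com/a"

print(map_belief(before))
print(map_belief(after))  # fewer hedges suggests the framing was adopted
```

Running the same prompt set on a schedule and diffing these scores over time is one way to make "does AI hedge less?" a measurable question rather than an impression.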
Last updated: 2026-02-04