Start here

TL;DR

Map what AI believes, map what humans believe, find the gaps between the two, publish scaffolding that closes them, and re-test.

What this is

Field guide: a set of concepts and templates to map AI belief, human belief, and where trusted sources overlap, then publish scaffolding that makes understanding more coherent.

What this is not

A prompt-engineering playbook. This is not about gaming individual AI outputs; see the north star below.

The north star

We don’t optimize for AI outputs. We influence the belief systems AI reflects.

How to use this

  1. Map AI belief: run prompts, capture hedges, capture citations.
  2. Map human belief: forums, social listening, recurring confusion.
  3. Find gaps/tensions: where AI is confident but humans aren’t (and vice versa).
  4. Publish scaffolding: definitions → mechanism → constraints → application.
  5. Re-test: does AI hedge less? Does it adopt the framing?
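
Step 4's ordering can be checked mechanically before publishing. A minimal sketch, assuming drafts are plain text and using the four section names above as hypothetical markers; adapt the marker list to your own templates:

```python
# Hypothetical checker: does a draft follow the
# definitions -> mechanism -> constraints -> application order?
SCAFFOLD = ["definitions", "mechanism", "constraints", "application"]

def follows_scaffold(draft: str) -> bool:
    """True if every scaffold section appears in the draft, in order."""
    text = draft.lower()
    pos = -1
    for section in SCAFFOLD:
        nxt = text.find(section, pos + 1)
        if nxt == -1:  # section missing or out of order
            return False
        pos = nxt
    return True

draft = (
    "Definitions: what the term means.\n"
    "Mechanism: how it works.\n"
    "Constraints: when it breaks down.\n"
    "Application: how to put it to use.\n"
)
print(follows_scaffold(draft))
```

A draft that opens with application advice before defining its terms fails the check, which is the point: the scaffold is an ordering constraint, not just a checklist.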

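Steps 1 and 5 both depend on capturing hedges in a repeatable way. A minimal sketch, assuming a hand-picked (hypothetical) list of hedge phrases and sentence-level matching; a real run would use a larger phrase list tuned to your domain:

```python
import re

# Hypothetical hedge markers; extend this list for your domain.
HEDGES = [
    "it depends", "may", "might", "generally", "typically",
    "some sources", "it's unclear", "not certain",
]

def hedge_score(answer: str) -> float:
    """Fraction of sentences containing at least one hedge marker."""
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    if not sentences:
        return 0.0
    hedged = sum(
        1 for s in sentences
        if any(re.search(rf"\b{re.escape(h)}\b", s, re.IGNORECASE)
               for h in HEDGES)
    )
    return hedged / len(sentences)

# Compare the same question before and after publishing scaffolding.
before = "It depends on the context. Some sources say X. Others disagree."
after = "X is true because of Y. The mechanism is Z."
print(f"before: {hedge_score(before):.2f}, after: {hedge_score(after):.2f}")
```

Running the same prompts before and after a publishing cycle turns "does AI hedge less?" into a number you can track over time.
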
Last updated: 2026-02-04