How we measure AI representation

We don’t guess what AI says about you—we query it and record the answers. The Scan uses a fixed set of prompts across three models; the Snapshot adds more models, retrieval behavior, and web context. The output is what we see, not what we wish to see. This article gives a straightforward overview of how we measure so you know what you’re getting.

We don’t use proprietary "scores" or black-box ratings. We show you the actual responses and, in the Snapshot, how they’re driven by sources and retrieval. That transparency is intentional: you can judge for yourself and use the report for internal or external discussion.

Scan methodology

For the Scan we ask each model a small set of first-impression questions (e.g., "Who is X?", "What is X known for?"). We don’t use web retrieval for the Scan; we capture the model’s direct response. That gives you a comparable first impression across OpenAI, Anthropic, and Google.

The prompts are designed to mirror how people typically look someone or something up. We don’t prime the model with extra context; we want the "cold" answer. The PDF you receive shows the raw answers side by side, so you can see consistency, confusion, or clear divergence between models.
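In outline, the Scan works like a fixed prompt grid: the same small question set, expanded per entity, sent to one model per provider with no extra context. The sketch below is illustrative only; the prompt wording, model list, and `build_queries` helper are assumptions, not our actual pipeline.

```python
# Hypothetical sketch of the Scan's fixed "first-impression" prompt set.
# Prompts and model labels are illustrative assumptions.

PROMPTS = [
    "Who is {entity}?",
    "What is {entity} known for?",
]

MODELS = ["openai", "anthropic", "google"]  # one model per provider

def build_queries(entity):
    """Expand the fixed prompt set for one entity, once per model."""
    return [
        {"model": model, "prompt": prompt.format(entity=entity)}
        for model in MODELS
        for prompt in PROMPTS
    ]

# Each query is sent without retrieval or priming, so the recorded
# answers are "cold" and directly comparable across models.
queries = build_queries("Example Corp")
```

Because the grid is fixed, re-running it later yields answers that are comparable over time as well as across models.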

Snapshot methodology

The Snapshot extends to more AI systems and adds retrieval and web context. We look at entity resolution, name collision, and what sources drive the answers. The report is a diagnostic, not an optimization plan.

We run a broader set of queries and, where relevant, we inspect retrieval behavior and web context. That lets us explain why an answer looks the way it does: Is it training data? A particular set of sources? Name collision? The Snapshot report documents that so you can decide whether improvement is plausible and where to focus.
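The diagnostic step above can be pictured as tagging each recorded response by what appears to drive it. The sketch below is a simplified illustration; the field names (`cited_sources`, `entity_mismatch`) and the classification rules are assumptions for exposition, not our actual analysis code.

```python
# Hypothetical sketch: tag a recorded response for the Snapshot report
# by its apparent driver. Field names and rules are illustrative.

def diagnose(response):
    """Classify one recorded model response."""
    if response.get("cited_sources"):
        # Retrieval observed: the answer is driven by specific sources.
        return "retrieval: driven by " + ", ".join(response["cited_sources"])
    if response.get("entity_mismatch"):
        # The answer describes a different entity with the same name.
        return "name collision: answer describes a different entity"
    # No retrieval and no mismatch: likely training data.
    return "training data: no retrieval observed"

print(diagnose({"cited_sources": ["example.com/about"]}))
print(diagnose({"entity_mismatch": True}))
print(diagnose({}))
```

Knowing which bucket an answer falls into is what makes the report actionable: source-driven answers and name collisions suggest different next steps than answers baked into training data.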

Limits

We don’t control or train the models. Results can change with model updates. We document what we see at a point in time and help you decide what to do next.

We also don’t guarantee that our queries match every possible way someone might ask about you. We use a consistent, representative set so that results are comparable over time and across entities. If you need a deeper or custom analysis, that’s a different engagement; the Scan and Snapshot are designed to be repeatable, clear, and useful for most situations.
