AI Visibility

Understanding Your AI Visibility Report

The Shift

AI platforms — ChatGPT, Gemini, Perplexity, Claude, Google AI Overviews — are now the first place people look when researching a person, brand, or organisation. These systems synthesise answers from a small set of high-authority sources. If your narrative isn't in those sources with consistent, structured entity data, you don't control what AI says about you.

Aleksandra King Agency helps individuals and organisations take control of their AI visibility — ensuring that when someone asks an AI platform about you, the answer is accurate, current, and reflects your authority.

What Is an AI Visibility Report?

An AI Visibility Report is a structured baseline assessment of how major AI platforms currently describe you and your work. It captures what AI says today, identifies specific gaps and challenges, and establishes the measurable benchmarks against which a visibility campaign is evaluated.

Every report is bespoke. We select six queries relevant to your field, run them across four major AI platforms, and assess the results against four proprietary metrics. The findings are presented alongside actionable campaign objectives designed to close the gaps we identify.

How We Score: The AKA Visibility Metrics

Visibility Score (0–10)

The AKA Visibility Score measures how prominently and accurately you appear in AI-generated answers.

Each query is run across ChatGPT, Perplexity, Gemini, and Claude — twice per platform, producing eight responses per query. Each response is scored against the following rubric, and the scores are averaged.

  9–10: You dominate the answer. Named first or as the definitive authority. Accurately described with current information.
  7–8: Prominently featured. Named in the top three. Described accurately, though sourcing may not reflect your most recent work.
  4–6: Present but not dominant. Listed as one option among several. Description may be generic or outdated.
  1–3: Barely mentioned. Appears inconsistently across platforms. May be inaccurately described.
  0: Absent from all responses tested.

The Visibility Score is the primary metric we track before and after every campaign. A meaningful improvement — for example, moving from 4 to 7 on a key query — represents a measurable shift in how AI systems describe and recommend you.
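
For readers who want to see the arithmetic behind the metric, here is a minimal sketch of how a per-query Visibility Score could be averaged from the eight responses (four platforms, two runs each). The platform names and the two-runs-per-platform rule come from the report; the individual rubric scores below are hypothetical placeholders, and in practice the scoring of each response against the rubric is done by a reviewer, not by code.

```python
# Minimal sketch: averaging a Visibility Score for one query.
# Assumes a reviewer has already scored each of the eight responses
# (four platforms x two runs) against the 0-10 rubric above.

PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Claude"]
RUNS_PER_PLATFORM = 2

# Hypothetical rubric scores assigned by a reviewer for one query.
# Keys are (platform, run number); values are rubric scores from 0 to 10.
scores = {
    ("ChatGPT", 1): 5, ("ChatGPT", 2): 4,
    ("Perplexity", 1): 6, ("Perplexity", 2): 6,
    ("Gemini", 1): 3, ("Gemini", 2): 4,
    ("Claude", 1): 5, ("Claude", 2): 5,
}

def visibility_score(scores: dict) -> float:
    """Average the rubric scores across all eight responses for a query."""
    expected = len(PLATFORMS) * RUNS_PER_PLATFORM
    assert len(scores) == expected, f"expected {expected} scored responses"
    return round(sum(scores.values()) / expected, 1)

print(visibility_score(scores))  # 4.8 with the placeholder scores above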

Entity Presence

Entity Presence is a qualitative assessment of how well-established your identity is across AI platforms. It answers the question: does AI know who you are?

  Strong: Consistently and correctly identified across all platforms tested.
  Adequate: Present on most platforms, but description may be generic or incomplete.
  Partial: Appears inconsistently. May be confused with other individuals or entities sharing your name.
  Weak: Barely present or frequently inaccurate.
  Absent: Not found on any platform tested.

Entity Presence is particularly important where name disambiguation is a factor — for example, where another individual in a different field shares your name and appears in recent press coverage.

Source Count

Source Count is the number of distinct high-authority institutional domains hosting relevant content about you.

We count sources that AI models are known to draw from when constructing answers: institutional databases (LexisNexis, Dow Jones Factiva, Thomson Reuters), news platforms (AP News, Bloomberg, Reuters), academic profiles (university pages, Google Scholar), professional registries (Wikidata, Crunchbase), and quality media outlets.

We do not count social media profiles, user-generated content, forum posts, or low-authority websites. We do not count duplicate or syndicated copies of the same content hosted on different domains.

A higher Source Count means your information exists in more of the places AI systems trust. A campaign typically increases Source Count significantly — from single figures to 70 or more named institutional sources.
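
As a rough illustration of the counting rule, the sketch below reduces a list of URLs to distinct domains and checks them against an allowlist of institutional sources. The allowlist entries and URLs are hypothetical examples only; in practice the assessment is a manual review, and syndicated copies of the same content are identified by a person rather than by a script.

```python
# Minimal sketch of the Source Count rule: count distinct high-authority
# domains, ignoring social media, forums, and duplicate copies of the same content.
from urllib.parse import urlparse

# Hypothetical allowlist of institutional domains AI systems are known to draw from.
HIGH_AUTHORITY_DOMAINS = {
    "apnews.com", "reuters.com", "bloomberg.com",
    "scholar.google.com", "wikidata.org", "crunchbase.com",
}

def source_count(urls: list[str]) -> int:
    """Count distinct allowlisted domains among the supplied URLs."""
    domains = set()
    for url in urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        # Anything not on the allowlist (social media, forums, low-authority
        # sites) is simply ignored; the set keeps each domain only once,
        # so duplicate or syndicated copies do not inflate the count.
        if host in HIGH_AUTHORITY_DOMAINS:
            domains.add(host)
    return len(domains)

# Example: three URLs, two distinct institutional domains, Source Count of 2.
print(source_count([
    "https://apnews.com/article/example",
    "https://www.apnews.com/article/example-syndicated",
    "https://www.wikidata.org/wiki/Q000000",
]))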

Source Recency

Source Recency assesses how current the material is that AI systems are drawing from when they describe you.

  Current: AI cites material published within the last 12 months.
  Partial: A mix of recent and older sources.
  Dated: AI predominantly cites material older than two years.

Source Recency matters because AI systems that rely on older material will describe your work as it was, not as it is. If your thinking has evolved, if you have published new research, or if your professional circumstances have changed, a Dated rating means AI is giving people an outdated picture.
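
The rating boundaries can be read directly off the scale above. As a hedged sketch, the function below maps the publication dates of cited material to Current, Partial, or Dated. The 12-month and two-year thresholds come from the report; the date handling and the exact aggregation rule (all recent, mostly old, or a mix) are illustrative assumptions.

```python
# Minimal sketch: classify Source Recency from the publication dates of cited material.
# Thresholds follow the scale above; the aggregation rule is an illustrative assumption.
from datetime import date, timedelta

def source_recency(publication_dates: list[date], today: date | None = None) -> str:
    """Return 'Current', 'Partial', or 'Dated' for a set of cited publication dates."""
    today = today or date.today()
    recent = [d for d in publication_dates if d >= today - timedelta(days=365)]
    old = [d for d in publication_dates if d < today - timedelta(days=730)]

    if len(recent) == len(publication_dates):
        return "Current"   # everything cited was published within the last 12 months
    if len(old) > len(publication_dates) / 2:
        return "Dated"     # predominantly material older than two years
    return "Partial"       # a mix of recent and older sources

# Example with hypothetical dates: one recent citation, one three-year-old citation.
print(source_recency([date(2025, 3, 1), date(2022, 6, 15)], today=date(2025, 6, 1)))  # Partial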

How We Select Queries

Every AI Visibility Report includes six queries chosen specifically for your field and circumstances. We follow a structured framework:

  1. Identity — "Who is [you]?" The baseline check on whether AI knows who you are and describes you correctly.

  2. Core Expertise — A query about your primary field or subject area, to assess how well AI associates you with your domain.

  3. Contextual — Your expertise connected to a current topic or trend, to test whether AI reflects your most recent thinking.

  4. Discovery — How someone would find you — through podcast appearances, publications, speaking engagements, or similar.

  5. Applied — A practical question in your domain where your expertise should appear in the answer.

  6. Competitive — A query about leading figures or organisations in your field, to assess your positioning relative to peers.

The specific queries are tailored to each individual. The framework ensures consistency across reports, making results comparable over time and across campaigns.

From Report to Programme

The AI Visibility Report identifies the gaps. The AI Visibility Programme closes them.

Every report concludes with three Key Challenges and corresponding Campaign Objectives. These become the specific, measurable goals of the visibility campaign. After the campaign, the same queries are re-run and the same metrics are re-scored — producing a direct before-and-after comparison.

The programme includes structured content placement across institutional databases, entity foundation work (Wikidata, knowledge graphs, schema markup, Internet Archive), paid media amplification, and targeted GEO and SEO interventions — all managed by our team with no disruption to your day-to-day work.
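
As one concrete example of the entity foundation work mentioned above, the snippet below generates schema.org Person markup (JSON-LD) of the kind that helps AI systems and knowledge graphs resolve a name to the right entity. Every field value and identifier here is a hypothetical placeholder, not a prescription for any particular client, and the programme itself covers Wikidata, knowledge graphs, and archival work well beyond this single artefact.

```python
# Minimal sketch: schema.org Person markup (JSON-LD) as used in entity foundation work.
# All values below are hypothetical placeholders.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Researcher",
    "affiliation": {"@type": "Organization", "name": "Example Institute"},
    # "sameAs" links tie this page to the same entity elsewhere,
    # which is what helps disambiguate people who share a name.
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",
        "https://scholar.google.com/citations?user=EXAMPLE",
    ],
}

# Embed in a page head as a JSON-LD script block.
print(f'<script type="application/ld+json">{json.dumps(person, indent=2)}</script>')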

Frequently Asked Questions

How long does the assessment take? The baseline assessment is typically completed within 48 hours of engagement. The full campaign runs over approximately six weeks, with results delivered at the end of that period.

What do I need to provide? Very little. We need your approval on the content we create (a press release and a companion article), and we ask for a short quote. Beyond that, the programme is managed entirely by our team.

How do I know it worked? Every campaign produces a documented before-and-after comparison using the same queries, the same platforms, and the same scoring methodology. The numbers either move or they don't. We also provide the Profound AI Citation Report, which documents AI bot engagement with the distributed content.

Can I run the queries myself? Yes. Every query in your report can be run by anyone on any of the platforms we test. The results are reproducible. We encourage you to verify them.

What does it cost? Pricing depends on the scope and is discussed during the initial consultation. We offer the baseline AI Visibility Report as a starting point — if the findings are compelling, we discuss the programme.

Book a Call

To receive your AI Visibility Report or to discuss the findings of an existing report, book a 30-minute consultation: calendly.com/dylan-aleksandraking/30min


LET’S CONNECT

Customer satisfaction is paramount at Aleksandra King Agency. We strive to exceed client expectations by delivering projects on time, within budget, and to the highest standards.