SEO has clear rankings. AI search doesn’t. Ask ChatGPT the same question twice, even in the same session, and you’ll likely get two different answers — that’s a feature of how large language models work, not a bug. For B2B marketers used to deterministic Google rankings, this shift breaks most of the tracking assumptions they’ve operated under for years. In episode 135 of FiredUp!, Morgan and Alastair sit down with Kevin White, Head of Marketing at AI visibility monitoring company Scrunch, to talk about how to actually measure and improve brand visibility across ChatGPT, Perplexity, Claude, and Google AI Overviews.

Kevin White is the head of marketing at Scrunch AI, where he’s building visibility infrastructure for the post-LLM web — analytics and optimization that show brands how they’re being represented inside ChatGPT, Perplexity, Claude, and Google AI Overviews. Before Scrunch, Kevin spent a decade marketing for the companies that defined the modern operator stack — Common Room, Retool, and Segment — and has advised teams at Ashby and Deepnote.

Key Takeaways

  • AI search is probabilistic, not deterministic. Even a strong brand will only show up 60-70% of the time, so visibility has to be measured over time, not at a single point.
  • Citations matter more than raw mentions. Citation trends — what’s coming online, what’s dropping off — are often more actionable than the total count.
  • Reddit is overhyped as a citation source. At most, it accounts for around 5% of citations, and the longer tail of niche, vertical sources often matters more — especially for higher-intent prompts.
  • Track which LLM agents are crawling your site. The top three are usually a good proxy for where to focus your optimization.
  • Citation half-life is short. Kevin cites a Scrunch study showing roughly 30-40 days for most sources, longer for major publishers. That demands a steady cadence of new, optimized content.
  • The best learning loop is hands-on experimentation. Listen to experts, but go test what works on your own site and content.

How is AI search different from traditional SEO?

The single biggest difference is that AI platforms are probabilistic. Ask the same question twice — even in the same session, even as the same user — and you’ll often get a different answer. That’s by design, not a glitch.

Because of that, the metrics change. With Google, a strong page can rank in the same spot day after day. With AI search, you’re never going to show up 100% of the time. As Kevin put it, a really strong brand might surface in roughly 60 to 70% of relevant prompt responses. The job becomes tracking presence over a rolling window — daily snapshots, week-over-week trends, share-of-voice movement against competitors — rather than checking a single ranking.
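To make that concrete, here is a minimal sketch of what measuring presence over a rolling window can look like, assuming you log how many times each tracked prompt ran per day and how often the brand appeared. The data and function names are illustrative, not Scrunch’s API:

```python
from collections import deque

# Each record: (date, runs, appearances), i.e. how many times the tracked
# prompt was executed that day and how often the brand showed up.
# The numbers below are illustrative; plug in your own sampling logs.
daily_samples = [
    ("2025-01-01", 10, 7),
    ("2025-01-02", 10, 5),
    ("2025-01-03", 10, 8),
    ("2025-01-04", 10, 6),
]

def rolling_visibility(samples, window=7):
    """Yield (date, visibility_rate) over a trailing window of days."""
    recent = deque(maxlen=window)
    for date, runs, hits in samples:
        recent.append((runs, hits))
        total_runs = sum(r for r, _ in recent)
        total_hits = sum(h for _, h in recent)
        yield date, total_hits / total_runs

for date, rate in rolling_visibility(daily_samples, window=3):
    print(f"{date}: visible in {rate:.0%} of sampled responses")
```

The same per-prompt booleans, aggregated against competitor runs, give you share-of-voice movement rather than a single-point ranking.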

That also reframes the question marketing leaders get from their CEO: “Why didn’t we show up for this search I just did?” The honest answer is that the platforms are probabilistic. You can’t promise constant presence. What you can promise is measured, growing presence over time.

How does Scrunch monitor visibility when the AI platforms don’t share data?

AI platforms don’t expose the kind of search-volume or click data that Google does, so visibility monitoring has to be approximated. Scrunch does this by simulating personas — for example, a marketing buyer versus a family buyer — and layering in regional IP address mapping to mirror the kinds of searches a real user would do.

It also uses panel data, which is user-permissioned data that shows real search behavior across platforms like Google and ChatGPT. That panel signal is then extrapolated globally to build a picture of what people are actually asking and how brands are showing up in those responses.

Kevin’s framing: it’s never going to be a perfect footprint of everything. But the alternative — burying your head in the sand because the measurement isn’t perfect — isn’t an acceptable answer when the CEO and board are asking how the brand shows up in these platforms. Imperfect signal is still signal.

Download the Multiplier Marketing Megapack today. Exclusive offer for all our listeners — get all our Startup Guides in one go! Over 50 pages of advanced tips and advice that dive deep into content marketing, search advertising, and marketing attribution.

What are fan-out queries and why do they matter?

When a user types or speaks a prompt — sometimes 50 words long if they’re dictating into something like Whisper — the LLM doesn’t run a single search against that exact string. It breaks the prompt down into what are called fan-out queries (Microsoft Bing Webmaster Tools calls them grounding queries): five to a dozen smaller searches across the different components of intent in the original prompt. The platform then gathers information across those fan-out queries — partly from training data, partly from real-time search — and synthesizes the response.

The practical implication is that trying to track exact-match prompts is mostly a lost cause. Nobody phrases queries the same way twice. The better approach is to cluster prompts by topic. If you sell enterprise CRM software, your cluster might include questions about SOC 2 compliance, scaling to millions of users, integration depth, and so on. Track presence across the cluster, not the literal wording.
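A sketch of what cluster-level tracking might look like in practice, using hypothetical prompts for the enterprise CRM example above; the cluster names and wording are placeholders:

```python
# Hypothetical prompt clusters for an enterprise CRM vendor. The point is to
# score presence per topic cluster, not per literal prompt string.
clusters = {
    "security": [
        "best SOC 2 compliant CRM for enterprise",
        "which CRM vendors are SOC 2 certified",
    ],
    "scale": [
        "CRM that can handle millions of contacts",
        "most scalable enterprise CRM",
    ],
    "integrations": [
        "enterprise CRM with the deepest integrations",
    ],
}

def cluster_presence(results):
    """results maps each prompt to whether the brand appeared in the answer.
    Returns the share of prompts per cluster where the brand was present."""
    return {
        topic: sum(results.get(p, False) for p in prompts) / len(prompts)
        for topic, prompts in clusters.items()
    }

# Example run results, normally collected by re-running prompts over time.
observed = {
    "best SOC 2 compliant CRM for enterprise": True,
    "which CRM vendors are SOC 2 certified": False,
    "CRM that can handle millions of contacts": True,
    "most scalable enterprise CRM": True,
    "enterprise CRM with the deepest integrations": False,
}
print(cluster_presence(observed))
```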

Scrunch also has a product that flags prompts you’re tracking redundantly — different wording, same citation pattern — so you can prune the list down. That’s counterintuitive for a tool that charges by consumption, but it makes the product more useful for the marketer.

Why are citations more valuable than mentions?

Mentions are good. Citations — where an LLM points to a specific source URL behind its answer — are typically more actionable. Citation trends reveal where competitors are winning, which third-party sources to target, and which exact URLs are driving authority for the topics you care about.

Looking at trend data rather than point-in-time snapshots is where the real value lives. Scrunch’s API exposes which citations are declining, disappearing, or newly emerging, which lets teams spot when a competitor has dropped off a source (an overtake opportunity) or when your own URL has fallen off (something to investigate).
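A minimal sketch of that kind of trend analysis, assuming you can export citation counts per URL as periodic snapshots; the URLs, counts, and decline threshold are all illustrative, and Scrunch’s actual API schema may differ:

```python
# Monthly snapshots mapping cited URL -> times cited across a tracked
# prompt set.
last_month = {
    "https://example-review-site.com/best-crms": 14,
    "https://nichepub.example/crm-roundup": 9,
    "https://bigpublisher.example/software-guide": 22,
}
this_month = {
    "https://example-review-site.com/best-crms": 6,
    "https://bigpublisher.example/software-guide": 25,
    "https://newblog.example/crm-comparison": 11,
}

def diff_citations(before, after, drop_ratio=0.6):
    emerged = sorted(set(after) - set(before))    # new sources to double down on
    dropped = sorted(set(before) - set(after))    # investigate: yours or a competitor's?
    declining = sorted(url for url in before.keys() & after.keys()
                       if after[url] < before[url] * drop_ratio)
    return emerged, dropped, declining

print(diff_citations(last_month, this_month))
```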

One myth worth puncturing: Reddit and similar high-traffic platforms are not the be-all and end-all of citation sources. When Kevin’s team looks at what’s actually being cited for the prompts brands care about, Reddit usually maxes out around 5% of the citation mix. The longer tail of niche, vertical, or publisher sources is often more important — especially at the bottom of the funnel, where higher-intent prompts pull from more specialized sources.

As Alastair pointed out during the conversation, citation data is also shaping how Firebrand’s content and PR teams work together. Cited URLs map to topics and personas, which surfaces specific industry publications worth pursuing for earned or paid placements — turning citation data into a hit list for the PR side.

Should brands optimize for a specific LLM?

Yes — but figure out which one first. Scrunch’s Agent Traffic product identifies which bots are crawling a site, at what frequency, and which pages they’re consuming. The top three crawlers on your site are usually a good proxy for the models worth focusing on.
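For teams without a dedicated tool, a rough version of the same signal can be pulled from server logs. A minimal sketch, using a few commonly published LLM crawler user-agent substrings (verify these against each vendor’s current documentation, since new agents ship frequently):

```python
from collections import Counter

# Commonly published LLM crawler user-agent substrings; check vendor docs
# for the current list.
AI_AGENTS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot", "PerplexityBot"]

def top_agents(access_log_path, n=3):
    """Count hits per known AI agent in a standard web server access log."""
    counts = Counter()
    with open(access_log_path) as log:
        for line in log:
            for agent in AI_AGENTS:
                if agent in line:
                    counts[agent] += 1
    return counts.most_common(n)

# Example (illustrative output): top_agents("/var/log/nginx/access.log")
# -> [("GPTBot", 4210), ("PerplexityBot", 1873), ("ClaudeBot", 912)]
```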

Kevin’s general read on the market: ChatGPT dominates consumer, Claude is coming up fast (especially in B2B SaaS, where Claude Code is driving heavy adoption), and Perplexity gets meaningful B2B use. But the mix changes frequently, so the better discipline is to inspect your own traffic and recheck periodically rather than assume.

There’s also a relevance argument. A B2B SaaS buyer’s research patterns look different from a consumer e-commerce buyer’s. The right LLM to optimize for is the one your buyers are actually using, which you can only know by looking at your own data.

How do you cope with LLMs changing without notice?

Google and Bing publish algorithm update notes. LLMs largely don’t. They release new models and features constantly, and the underlying ranking logic isn’t disclosed in a way SEO teams are used to. There’s no “MozCast of AI” yet.

Kevin’s recommended response is a combination of patience and experimentation. Watch trends over time to spot inflection points. Then go test things yourself. Refresh content. Add structured data. Add FAQs. Make content chunkable. Build authorship signals. These are commonly recommended tactics, but Kevin’s blunt advice is to not take any of them on faith — including from him. Run them as experiments against a control. See what the models in your space actually reward.
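As one example of a testable tactic from that list, here is what FAQ structured data looks like as schema.org FAQPage JSON-LD, generated from Python for illustration; the question and answer are placeholders:

```python
import json

# schema.org FAQPage markup, emitted as a JSON-LD <script> tag. The Q&A
# content is a placeholder; use the questions your buyers actually ask.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Is the platform SOC 2 compliant?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes. SOC 2 Type II, audited annually.",
        },
    }],
}
print(f'<script type="application/ld+json">{json.dumps(faq, indent=2)}</script>')
```

Run it as an experiment against a control set of pages, per Kevin’s advice, rather than assuming it moves the needle.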

Alastair added that good GEO starts with good SEO. If you’re doing SEO well, you’re probably showing up reasonably in AI search already. The differences worth chasing are the new tactics — structured data, fan-out query coverage, citation acquisition — that move the needle specifically in this new space.

What citation source trends are emerging?

Reddit has been the headline source, but the longer tail matters more for most B2B brands. Kevin mentioned a forthcoming Scrunch study on LinkedIn as a rising source for citations — both what gets cited and what generates reactions and distribution.

The more useful way to look at citations isn’t domain by domain — it’s by grouping them into categories: social publishers, news publishers, niche publishers, review sites, competitors. That lets you see which categories are driving the most authority for your topic clusters, which in turn informs whether your investment should go into earned PR, content syndication, niche partnerships, or your own owned content.
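A sketch of that category roll-up, assuming you maintain a domain-to-category map; the domains and category labels here are illustrative:

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative domain-to-category map; extend with the sources that actually
# appear in your citation data.
CATEGORIES = {
    "reddit.com": "social",
    "linkedin.com": "social",
    "g2.com": "review site",
    "techcrunch.com": "news publisher",
    "nichepub.example": "niche publisher",
}

def category_mix(cited_urls):
    """Roll individual citations up into source categories."""
    mix = Counter()
    for url in cited_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        mix[CATEGORIES.get(domain, "other")] += 1
    return mix

print(category_mix([
    "https://www.g2.com/categories/crm",
    "https://nichepub.example/crm-roundup",
    "https://www.reddit.com/r/sales/comments/abc123/",
]))
```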

Alastair tied this back to broader strategy: good GEO is multi-channel. AI models look for patterns across the web, not authority on a single domain. That means presence across earned, owned, and paid sources — with consistent brand voice and topic focus — is what builds the kind of cited trust that surfaces in AI answers.

Why does citation performance plateau even when you’re doing the right things?

In traditional SEO, doing the right things tends to produce a fairly predictable upward curve. In AI search, performance ebbs and flows — strong one day, weaker the next, leveling rather than stacking. Kevin’s explanation: citation sources have a short half-life. A Scrunch study puts most citations at around a 30-40 day half-life, with larger publishers maybe double that.
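The half-life framing implies standard exponential decay, weight(t) = 0.5^(t / half_life). A quick illustration using the midpoint of Kevin’s 30-40 day range; the decay model is an assumption layered on the cited figure, not Scrunch’s published methodology:

```python
# Standard exponential decay: weight(t) = 0.5 ** (t / half_life).
half_life_days = 35  # midpoint of the 30-40 day range Kevin cites
for t in (0, 35, 70, 105):
    weight = 0.5 ** (t / half_life_days)
    print(f"day {t:>3}: ~{weight:.0%} of the citation's influence remains")
# day 0: 100%, day 35: 50%, day 70: 25%, day 105: 12%. Within ten weeks,
# roughly three-quarters of today's citation influence has decayed.
```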

Practically, that means the model keeps cycling through new sources. The citations driving your visibility this month are likely to be partially replaced next month. Staying visible requires constantly publishing fresh content and acquiring new citations, which is exhausting — but it’s also a competitive advantage if you can sustain the cadence.

Alastair noted an upside: the recency bias also means smaller brands can move faster than they could in traditional search. New, well-optimized content can be cited within days of publishing, which is meaningfully faster than the timelines SEO teams are used to with Google.

What is Scrunch’s Agent Experience Platform?

Zero-click search means human visits to your website decline as AI agents visit on behalf of users. That changes how you think about traffic. Bot traffic isn’t background noise to dismiss — there’s increasingly a human with intent behind every agent visit.

Scrunch’s Agent Experience Platform (AXP) identifies that bot traffic at the CDN edge layer (Akamai, Cloudflare, Vercel, and similar) and serves an optimized version of the page to the agent. Most websites are built for humans, with heavy JavaScript and media that LLM agents struggle to consume efficiently. AXP delivers a mirrored HTML version stripped of the superfluous code, which can reduce token consumption by roughly two orders of magnitude — from hundreds of thousands of tokens to thousands.

It can also enrich the page with additional context — FAQs, internal knowledge, structured data — without altering the underlying intent of the page (which would risk penalty). The human experience stays the same when a human visits. The AI gets a cleaner, denser version when an agent visits.
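As a toy illustration of the concept (not Scrunch’s implementation, which runs at the CDN edge rather than at the origin), here is a server that returns a lean variant when a known agent user-agent appears; the pages and agent list are placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot")

# Same underlying content, two renditions: the human page carries the full
# JS-heavy experience; the agent page is lean HTML with the same intent.
FULL_PAGE = "<html><head><script>/* heavy JS bundle */</script></head><body><h1>Pricing</h1></body></html>"
LEAN_PAGE = "<html><body><h1>Pricing</h1><p>Plan A: $49/mo. Plan B: $199/mo.</p></body></html>"

class VariantHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        body = LEAN_PAGE if any(agent in ua for agent in AI_AGENTS) else FULL_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

# HTTPServer(("", 8000), VariantHandler).serve_forever()
```

The key constraint, as Kevin notes, is that the lean variant must preserve the page’s intent; changing what the page says for agents is the kind of cloaking that risks penalty.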

What’s coming next for AI search?

Two trends Kevin highlighted. First, paid solutions are emerging. Google and OpenAI are rolling out advertising products inside AI search experiences, and Kevin’s view is that advertisers won’t accept a black box for long — they’ll demand search volume data, click rates, and the same kind of transparency they get from existing platforms. That pressure should force more visibility into the underlying algorithms over time.

Second, agentic traffic is going to keep growing. Tools like Claude Code and other agentic platforms can crawl sites, fill out forms, and take action on a user’s behalf. That means optimizing not just for crawl and citation, but for what an agent actually needs to complete a task on your site — adhering to standards and protocols that let agents do agentic things, not just read content.

Where should brands serious about AI search start?

Kevin’s blunt answer: go do it. Run the experiments yourself. Listen to experts, but treat their advice as hypotheses to test rather than gospel. You can use a tool like Scrunch, or you can start with a spreadsheet and some manual prompting. The point is to learn by doing, because the field is moving fast enough that hands-on intuition beats any received wisdom.

A concrete starting point from the show notes: take your top five bottom-of-funnel keywords and run them through Perplexity and ChatGPT today. If your brand isn’t in the top three citations, that’s a signal to start experimenting with your content strategy.
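If you want to start with the spreadsheet route Kevin mentions, a few lines of Python can scaffold the tracking sheet; the keywords here are placeholders for your own bottom-of-funnel terms:

```python
import csv
from datetime import date

# Placeholder keywords; swap in your own bottom-of-funnel terms.
keywords = [
    "enterprise crm pricing",
    "best soc 2 compliant crm",
    "crm for millions of contacts",
    "salesforce alternatives for enterprise",
    "crm implementation cost",
]

with open("ai_visibility_baseline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "keyword", "platform", "brand_in_top_3_citations", "notes"])
    for kw in keywords:
        for platform in ("ChatGPT", "Perplexity"):
            writer.writerow([date.today().isoformat(), kw, platform, "", ""])
```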

Thank you for listening! Tune in to all the episodes for practical tips on crushing your startup marketing goals. Don’t forget to follow, rate, and review the podcast, and tell us your key takeaways!

FAQs: AI Search Monitoring

How is AI search different from SEO?

AI search is probabilistic — the same prompt can return different answers each time, even within the same session. That means visibility has to be measured over time, not as a single ranking. Even strong brands surface in only 60-70% of relevant prompts.

What are fan-out queries?

Fan-out queries (also called grounding queries) are the five to twelve smaller searches an LLM runs after breaking down a user’s original prompt into its component parts. The platform synthesizes a response from across those sub-searches. The implication: track topic clusters, not exact-match prompts.

Is Reddit really the most important citation source for AI search?

No. Across the prompts most B2B brands care about, Reddit usually accounts for around 5% of citations at most. The longer tail of niche and vertical sources matters more, especially for higher-intent, bottom-of-funnel prompts.

Which LLM should brands optimize for?

Inspect your own agent traffic and follow your buyers. ChatGPT dominates consumer use, Claude is growing fast in B2B SaaS, and Perplexity has meaningful B2B share. The mix changes frequently, so check periodically rather than assuming.

How long do citations last in AI search?

A Scrunch study puts the half-life of most citations at roughly 30-40 days, with major publishers running closer to double that. That’s much shorter than SEO, which is why a steady cadence of new content and citation acquisition is critical.

What's the single best way to learn AI search optimization?

Run experiments yourself. Test the commonly recommended tactics — structured data, FAQs, content refresh frequency, authorship signals — against your own data and see what actually moves visibility for your brand. Take your top five bottom-of-funnel keywords and run them through Perplexity and ChatGPT today as a starting baseline.

Connect with Kevin White

Find Us

Apple Podcasts · Spotify · Buzzsprout · Amazon Music

Connect with Firebrand

Firebrand is a startup marketing agency. We help tech startups secure outsized marketing outcomes on their path to growth.

Like what you hear?

Get the hottest startup marketing insights, delivered to your inbox