Why prompt volume is controversial
As AI assistants become a primary discovery channel, many tools have started offering “prompt volume” metrics. These numbers are meant to estimate how often users ask specific questions inside tools like ChatGPT, Perplexity, or Google’s AI search experiences.
A recent critique highlighted a real concern: much of the prompt volume data available in the market is noisy, heavily extrapolated, and often misunderstood.
This article explains:
Why prompt volume is hard to measure accurately
Where common estimates break down
How Scrunch approaches AI search data differently
How you should use prompt volume responsibly, if at all
How most prompt volume data is created
Most third-party tools do not have first-party access to AI platform usage data. Instead, they rely on panel data, typically collected from browser extensions or clickstream providers.
In practice, this means:
A small subset of users install extensions that capture AI interactions
Those interactions are aggregated into a dataset
Prompts that look “commercial” are extracted
Volumes are extrapolated to represent the broader population
This creates several problems.
1. AI usage is not the same as search usage
People use AI assistants for many things beyond search:
Writing emails
Summarizing documents
Brainstorming ideas
Planning trips
Coding and debugging
Only a fraction of prompts resemble classic “search queries.” Filtering the full prompt stream down to search-like queries introduces significant noise.
2. Coverage is incomplete
Most extension-based panels:
Miss mobile usage
Miss native apps
Miss Safari traffic
Miss privacy-focused browsers
As a result, raw data often represents a very small slice of total AI usage.
3. Extrapolation compounds error
To compensate for limited coverage, tools scale the data upward. If a panel captures roughly 1 percent of total usage, observed counts may be multiplied 100x or more.
At that point, the number looks precise, but the underlying signal is weak.
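The arithmetic behind this can be sketched directly. Here is a minimal illustration, using the standard error of a binomial count, of how extrapolating a panel estimate also scales its sampling noise; all of the numbers are hypothetical, not real platform figures:

```python
import math

def rel_sampling_error(true_prompts: int, panel_rate: float) -> float:
    """Relative standard error of a panel count after extrapolation to the full population."""
    scale = 1 / panel_rate
    # Each prompt is observed by the panel independently with probability panel_rate,
    # so the observed count is binomial with std dev sqrt(n * p * (1 - p)).
    std_observed = math.sqrt(true_prompts * panel_rate * (1 - panel_rate))
    return std_observed * scale / true_prompts

# A 1 percent panel: tolerable noise for a head topic, severe noise in the long tail.
head = rel_sampling_error(50_000, 0.01)   # roughly 4% relative error
niche = rel_sampling_error(500, 0.01)     # roughly 45% relative error
```

In other words, the 100x multiplier does not add error on its own, but it turns a handful of noisy panel observations into a confident-looking headline number, and the problem is worst exactly where prompt volume is smallest.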
Why comparing prompt volume to Google search volume is tricky
It is tempting to compare prompt volume to Google Search Console or Ahrefs data. However, these systems measure different behaviors.
Google Search Console reflects first-party data for classic search
SEO tools blend multiple panel and trend-based sources
AI prompts reflect conversational behavior, not keyword targeting
The shape of AI questions is often very different from traditional keywords. A mismatch does not automatically mean the AI data is wrong. However, extreme discrepancies should raise caution.
Scrunch recommends using traditional search data as an anchor, not as a direct replacement or validation.
Scrunch’s position on prompt volume
Scrunch does not treat prompt volume as a precise demand metric.
Instead:
Prompt volume is directional
Trends matter more than absolute numbers
Topic-level movement is more reliable than individual prompts
Internally, Scrunch is conservative about how volume is presented and interpreted. We care far more about:
What questions AI platforms are answering
Which brands appear
Which sources are cited
How visibility changes over time
This aligns with how AI assistants actually influence the customer journey, which is increasingly about recommendations and answers, not query counts.
How Scrunch approaches AI search data differently
Scrunch combines multiple methodologies to reduce bias and over-reliance on any single data source:
Automated, clean-slate AI response collection
High-quality, consented consumer panels
Cross-platform monitoring across ChatGPT, Google AI Overviews, Perplexity, Claude, Meta AI, and Gemini
Validation through citations, agent traffic, and downstream AI referrals
No AI platform provides a native, first-party prompt feed today. All AI visibility data should be treated as probabilistic and directional, not absolute. Scrunch is explicit about this in how metrics are designed and communicated.
You can read more about how Scrunch approaches AI visibility and optimization in our guide on using Scrunch data to show up in AI responses.
Best practices for customers
If you are using any prompt volume data, including Scrunch trends, we recommend:
1. Do not optimize for volume alone
High estimated volume does not mean high business impact.
2. Validate against multiple signals
Compare AI trends with:
Google Search Console
SEO tools
AI referrals
Agent traffic
Citations
3. Focus on visibility, not demand math
The real question is not “how many people asked,” but:
Did AI mention you?
Was it accurate?
Were you recommended or compared favorably?
Did it drive traffic or influence perception?
4. Use prompt volume directionally
Rising topics and emerging question clusters are far more actionable than exact counts.
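As an illustration of directional use, a trend flag like the sketch below surfaces rising clusters without ever quoting an absolute count. The threshold, topic names, and counts are all hypothetical:

```python
def is_rising(series, factor=1.5):
    """Flag a topic when its recent average clearly exceeds its earlier average."""
    mid = len(series) // 2
    earlier, recent = series[:mid], series[mid:]
    return sum(recent) / len(recent) > factor * (sum(earlier) / len(earlier))

# Hypothetical weekly prompt counts per topic cluster.
topics = {
    "ai seo tools": [40, 45, 44, 90, 120, 150],
    "legacy keyword tips": [200, 195, 205, 198, 202, 199],
}
rising = [topic for topic, series in topics.items() if is_rising(series)]
# rising -> ["ai seo tools"]
```

Because the flag compares a series only against itself, it stays meaningful even when the underlying volume estimates are biased, as long as the bias is roughly stable over time.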
The bottom line
Prompt volume is not fake, but it is fragile.
Used incorrectly, it can mislead teams into chasing inflated numbers. Used correctly, it can help identify where AI attention is shifting.
Scrunch is built around the belief that visibility, accuracy, and control inside AI answers matter more than raw volume estimates. As AI-driven discovery continues to evolve, that distinction will only become more important.
