Understanding Your Dashboard

Everything you need to know about the AI monitoring metrics you see in your brand's Scrunch dashboard

The Scrunch dashboard is your home base for monitoring how your brand—and your competitors—appear across AI assistants. This guide explains what each key metric means and how to interpret the data.

Prompts

The number of prompts within the applied dashboard filters.

Note: By default, Scrunch filters the Dashboard by Non-Branded Only prompts, so the count here may be lower than what you see in your Prompts tab.


Responses

The number of AI-generated responses collected for the prompts within the applied filters.


Presence

What it measures

Presence shows how often your brand is explicitly mentioned in the written AI response for the prompts and filters you have selected.

Presence reflects what a real user actually sees when reading an AI-generated answer across platforms like ChatGPT, Perplexity, Gemini, Meta AI, and others.

How it’s calculated

A response is counted as “present” only if the AI model directly mentions your brand by name in its answer.

Presence is calculated as an average over your selected time range. You can choose:

  • Last 12 weeks (default)

  • Last 4 weeks (month)

  • Last 7 days (week)

  • Custom date range

The trend chart below the metric shows how Presence changes over time so you can track whether your visibility inside AI answers is improving or declining.

What Presence does not include

Presence does not increase if your brand appears only inside a cited URL, page title, or source snippet but is never mentioned in the AI’s written response.

Citations are still fully available in the Citations tab, but they do not influence Presence unless the AI explicitly refers to your brand in its answer.
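
If it helps to see the logic spelled out, here is a minimal sketch of how a Presence score could be derived under these rules. The response fields, the brand name, and the simple text match are illustrative assumptions, not Scrunch's actual implementation, which detects brand mentions far more robustly.

```python
# Minimal sketch: a Presence score under the rules above.
# The `responses` structure and field names are assumptions for illustration,
# and the naive substring check stands in for real brand-mention detection.

def presence(responses, brand_name):
    """Share of responses whose written answer explicitly mentions the brand.

    Citation-only appearances (brand appearing only in a cited URL, page title,
    or source snippet) deliberately do not count.
    """
    if not responses:
        return 0.0
    mentioned = sum(
        1 for r in responses
        if brand_name.lower() in r["answer_text"].lower()  # written answer only
    )
    return mentioned / len(responses)

# Example: 2 of 3 responses mention the brand in the answer text itself.
sample = [
    {"answer_text": "Acme and two competitors offer this feature."},
    {"answer_text": "Acme is often recommended for mid-market teams."},
    {"answer_text": "See the cited sources for vendor comparisons."},  # citation only
]
print(f"Presence: {presence(sample, 'Acme'):.0%}")  # -> Presence: 67%
```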


Competitive Presence

What it measures

Competitive Presence is a side-by-side comparison of how often your brand and your competitors are explicitly mentioned in AI responses, averaged over the selected time period.

This metric shows your share of visibility relative to competitors for the same prompts, topics, personas, and platforms.

How to read it

  • Each line represents the average Presence for a brand (your brand and each competitor) during the selected time range.

  • A higher Presence means that brand is mentioned more frequently in AI-generated answers.

Because Competitive Presence uses the same Presence logic, it reflects only what users actually see in AI answers, not what appears inside cited sources.
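
As a rough illustration, the sketch below applies the same assumed Presence rule to several brands over one shared set of responses. The brand names, data, and matching logic are hypothetical and are only meant to show how the shares compare.

```python
# Sketch: Competitive Presence as the same Presence rule applied to each brand
# over the same filtered responses. Brands, fields, and data are hypothetical.

def competitive_presence(responses, brands):
    """Return {brand: share of responses whose written answer mentions that brand}."""
    n = len(responses)
    if n == 0:
        return {b: 0.0 for b in brands}
    return {
        b: sum(1 for r in responses if b.lower() in r["answer_text"].lower()) / n
        for b in brands
    }

responses = [
    {"answer_text": "Acme and Globex both support this workflow."},
    {"answer_text": "Acme is usually the first recommendation."},
    {"answer_text": "Initech is a niche option for smaller teams."},
]
for brand, share in competitive_presence(responses, ["Acme", "Globex", "Initech"]).items():
    print(f"{brand}: {share:.0%}")  # Acme: 67%, Globex: 33%, Initech: 33%
```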

Why this definition matters

Scrunch measures Presence to match the real AI search experience. If an answer never mentions a brand directly, users do not perceive that brand as present, even if a cited page contains it.

This definition keeps Presence aligned with how customers interpret AI answers and ensures consistency across Presence, Position, Sentiment, and Competitive Presence throughout the platform.


Sentiment

What it measures:
The tone of responses that mention your brand, categorized as:

  • Positive

  • Mixed

  • Negative

Important to note:

  • Sentiment is only recorded when a tone is detected.

  • Factual or neutral statements about your brand may not register as sentiment, even if your brand is mentioned.
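
For a concrete picture, here is a small hypothetical tally that follows these notes: mentions with no detectable tone are left out of the breakdown. The labels and data are illustrative only.

```python
# Sketch: sentiment breakdown over brand mentions. Mentions whose tone could not
# be detected (neutral or purely factual statements) are excluded from the tally.
from collections import Counter

mentions = [
    {"sentiment": "positive"},
    {"sentiment": "positive"},
    {"sentiment": "mixed"},
    {"sentiment": None},          # brand mentioned, but no tone detected
    {"sentiment": "negative"},
]

tally = Counter(m["sentiment"] for m in mentions if m["sentiment"] is not None)
print(dict(tally))  # {'positive': 2, 'mixed': 1, 'negative': 1}
```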


Citations

What it measures:
How often a URL within your configured domain is cited in an AI assistant’s response.

Why it matters:
Citations show how often your owned sources—not just your brand name—are influencing responses, compared to competitors or third-party sites.

Additional details:

  • You’ll see time-series charts under each metric to visualize changes over time.

  • This helps you identify whether content updates or optimizations are increasing your citation frequency.
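
As an illustration of the counting rule, the sketch below tallies cited URLs that fall under a configured domain, including its subdomains. The domain, field names, and subdomain handling are assumptions, not a description of Scrunch's internal logic.

```python
# Sketch: counting citations whose URL belongs to your configured domain.
# The domain, field names, and data are assumptions for illustration.
from urllib.parse import urlparse

def count_owned_citations(citations, owned_domain):
    """Count cited URLs whose host is the configured domain or one of its subdomains."""
    count = 0
    for c in citations:
        host = urlparse(c["url"]).hostname or ""
        if host == owned_domain or host.endswith("." + owned_domain):
            count += 1
    return count

cited = [
    {"url": "https://www.example.com/pricing"},
    {"url": "https://blog.example.com/ai-search-guide"},
    {"url": "https://thirdpartyreview.com/best-tools"},
]
print(count_owned_citations(cited, "example.com"))  # -> 2
```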


Filters You Can Apply

Filters help you focus on specific data segments. Examples of filters include:

  • Branded: Whether your brand is mentioned in the prompt itself

  • Prompt Topic: Prompts grouped by your configured key topics

  • Citation Topic: Citations/sources grouped by your configured key topics, based on the content of the cited page

  • Persona: Prompts grouped by the personas they are tied to

  • Country: Prompts tied to personas assigned to a specific country

  • Platform: ChatGPT, Claude, Google AI Overviews, Perplexity, Meta, etc.

Applying filters updates all dashboard metrics so you can analyze performance in context.
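
To make the idea concrete, the sketch below narrows a hypothetical set of responses with a few of the filters listed above, the way metrics would be recomputed on a filtered subset. The field names and values are assumptions for illustration.

```python
# Sketch: applying dashboard-style filters before recomputing metrics.
# Field names and values are assumptions for illustration only.

def apply_filters(responses, platform=None, persona=None, branded=None):
    """Return only the responses matching every supplied filter."""
    out = responses
    if platform is not None:
        out = [r for r in out if r["platform"] == platform]
    if persona is not None:
        out = [r for r in out if r["persona"] == persona]
    if branded is not None:
        out = [r for r in out if r["branded_prompt"] == branded]
    return out

responses = [
    {"platform": "ChatGPT", "persona": "IT Buyer", "branded_prompt": False},
    {"platform": "Perplexity", "persona": "IT Buyer", "branded_prompt": False},
    {"platform": "ChatGPT", "persona": "Marketer", "branded_prompt": True},
]
filtered = apply_filters(responses, platform="ChatGPT", branded=False)
print(len(filtered))  # -> 1; metrics like Presence would then be recomputed on this subset
```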
