Summary
- Start with 20-30 high-value queries that represent your core business and check where your brand appears across ChatGPT, Claude, Perplexity, and Gemini
- Track three baseline metrics: citation rate (how often you're mentioned), citation position (where you appear in responses), and competitor gap (who's beating you)
- Use free manual testing for the first audit, then graduate to tracking tools like Promptwatch for ongoing monitoring
- Focus on quick wins: pages already ranking in traditional search but missing from AI responses, and competitor mentions where you should logically appear
- Document everything in a simple spreadsheet so you can measure progress month-over-month

AI search is not a future concern anymore. ChatGPT processes over 3 billion messages daily from 700 million weekly users. Google's AI Overviews reach 2 billion people monthly. Your customers are already asking AI engines about your industry, your competitors, and the problems you solve. The question is whether your brand shows up in those answers.
Most companies have no idea where they stand. They're optimizing for Google while AI engines cite their competitors dozens of times per day. An AI visibility audit fixes that blind spot. It shows you exactly where your brand appears (or doesn't) across the platforms that increasingly control discovery.
This guide walks through a 30-minute baseline assessment you can run today with zero budget. You'll finish with a clear picture of your current AI visibility, a prioritized list of gaps to fix, and a simple tracking system to measure progress.
Why traditional SEO metrics don't tell the AI visibility story
Organic traffic and keyword rankings measure one thing: how often people click through from Google's blue links to your website. AI search breaks that model completely.
When someone asks ChatGPT or Perplexity a question, the AI synthesizes an answer from multiple sources and presents it directly in the interface. Users don't always click through. They get what they need right there. Your brand can be cited, recommended, or completely ignored without ever showing up in Google Search Console.
The data backs this up. Research shows that when AI Overviews appear in Google results, organic click-through rates drop from 1.76% to 0.61%, a roughly 65% decline. Meanwhile, websites are seeing AI referral traffic increase 527% year-over-year. Some sites report over 1% of total sessions now coming from ChatGPT, Perplexity, and similar platforms.
Traditional metrics like impressions, clicks, and rankings still matter for Google. But they're silent on the question that increasingly determines whether customers discover you: does AI mention your brand when users ask questions in your category?
That's what an AI visibility audit measures.
What you're actually measuring in an AI visibility audit
An AI visibility audit answers three core questions:
- Where does your brand appear? Which AI platforms cite you, for which types of queries, and in what context?
- Where are the gaps? Which high-value queries return competitor mentions but not yours? Which topics do AI engines associate with your industry but not your brand?
- What's blocking visibility? Are technical issues preventing AI crawlers from accessing your content? Is your content structured in ways AI engines can't parse?
You're not measuring clicks or conversions yet. This is a baseline assessment. You're establishing the current state so you can track improvement over time.
The specific metrics you'll collect:
- Citation rate: Percentage of test queries where your brand is mentioned
- Citation position: Where you appear in AI responses (first mention, middle, footnote)
- Competitor presence: How often competitors appear in the same query set
- Platform coverage: Which AI engines cite you (ChatGPT, Claude, Perplexity, Gemini)
- Content gaps: Topics where competitors dominate but you're absent
These metrics form your baseline. In 30 days, you'll run the same audit and measure change.
The 30-minute audit: Step-by-step walkthrough
Step 1: Build your test query set (10 minutes)
Start by listing 20-30 queries that represent how your target customers actually search. These should span three categories:
Category 1: Direct brand queries (5-7 queries)
These mention your brand name explicitly. Examples:
- "What is [YourBrand]?"
- "[YourBrand] vs [Competitor]"
- "Is [YourBrand] worth it?"
These queries measure baseline brand recognition. If AI engines don't cite you for your own brand name, you have a serious visibility problem.
Category 2: Problem/solution queries (10-15 queries)
These describe the problem your product solves without mentioning any brand. Examples:
- "How to [solve specific problem]"
- "Best way to [achieve outcome]"
- "What's the difference between [approach A] and [approach B]?"
These queries reveal whether AI engines associate your brand with the problems you solve. This is where most companies discover their biggest gaps.
Category 3: Competitor comparison queries (5-7 queries)
These mention competitors directly. Examples:
- "Alternatives to [Competitor]"
- "[Competitor] vs [Other Competitor]"
- "Best [category] tools"
These queries show whether you're part of the consideration set when users research your category.
Write these queries in a spreadsheet with columns for Query, Category, Expected Result (should you appear?), and Notes.
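If you prefer to generate the spreadsheet programmatically, a minimal sketch looks like this. The brand and competitor names are placeholders, and the list is trimmed far below the recommended 20-30 queries for readability:

```python
import csv
import io

# Placeholder names; substitute your own brand and competitors.
BRAND, COMPETITOR = "YourBrand", "CompetitorX"

# (query, category, expected to appear?); a real set should follow the
# 5-7 brand / 10-15 problem / 5-7 competitor split described above.
queries = [
    (f"What is {BRAND}?",                          "brand",      True),
    (f"{BRAND} vs {COMPETITOR}",                   "brand",      True),
    ("How to manage remote teams",                 "problem",    True),
    ("Best project management tools for agencies", "problem",    True),
    (f"Alternatives to {COMPETITOR}",              "competitor", True),
]

# Sanity-check the category mix before you start testing.
counts = {}
for _, category, _ in queries:
    counts[category] = counts.get(category, 0) + 1
print(counts)

# Export with the spreadsheet columns described above.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Query", "Category", "Expected Result", "Notes"])
for query, category, expected in queries:
    writer.writerow([query, category, "yes" if expected else "no", ""])
print(buf.getvalue())
```

Save the CSV output as your baseline file; the same script regenerates an identical query set for next month's re-run.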
Step 2: Run manual tests across four platforms (15 minutes)
Open four browser tabs:
- ChatGPT (chatgpt.com)
- Claude (claude.ai)
- Perplexity (perplexity.ai)
- Gemini (gemini.google.com)
For each query in your list:
- Paste the query into each platform
- Read the response carefully
- Record whether your brand is mentioned (yes/no)
- If mentioned, note the position (first, middle, end, footnote)
- Record which competitors are mentioned
- Copy the full response text into your spreadsheet for reference
This is tedious but essential. You're building a baseline dataset you can compare against in future audits.
Pay attention to:
- Context of mentions: Is your brand recommended positively, mentioned neutrally, or criticized?
- Citation sources: Does the AI cite your website, a third-party review, or general knowledge?
- Consistency: Do all four platforms give similar answers, or does visibility vary wildly?
Step 3: Calculate your baseline metrics (5 minutes)
Now that you have data for 20-30 queries across four platforms, calculate:
Overall citation rate: (Number of queries where you were mentioned) / (Total queries tested) × 100
Example: If you were mentioned in 8 out of 25 queries, your citation rate is 32%.
Platform-specific citation rates: Calculate the same metric for each AI engine individually. You might find ChatGPT cites you 40% of the time while Claude only cites you 15%.
Competitor gap score: For each competitor, count how many times they were mentioned vs. you. If Competitor A appeared in 18 queries and you appeared in 8, they have a +10 advantage.
Category performance: Break down citation rates by query category (brand, problem/solution, competitor). Most companies find they perform well on brand queries but poorly on problem/solution queries.
Document these numbers. They're your baseline.
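The Step 3 arithmetic is simple enough to script once your spreadsheet is filled in. This sketch pools each query-platform pair into one log; all platform, query, and competitor names are illustrative placeholders:

```python
from collections import defaultdict

# Hypothetical audit log: (platform, query, brand_mentioned, competitors_mentioned).
records = [
    ("chatgpt", "best project management tools", True,  ["CompetitorA"]),
    ("chatgpt", "what is YourBrand",             True,  []),
    ("claude",  "best project management tools", False, ["CompetitorA", "CompetitorB"]),
    ("claude",  "what is YourBrand",             True,  []),
]

def citation_rate(rows):
    """Mentioned rows / total rows x 100, per the Step 3 formula."""
    return 100 * sum(1 for r in rows if r[2]) / len(rows)

overall = citation_rate(records)
per_platform = {
    platform: citation_rate([r for r in records if r[0] == platform])
    for platform in {r[0] for r in records}
}

# Competitor gap: competitor mention count minus your mention count
# (positive means the competitor is ahead, as in the "+10 advantage" example).
competitor_counts = defaultdict(int)
for _, _, _, competitors in records:
    for name in competitors:
        competitor_counts[name] += 1
your_mentions = sum(1 for r in records if r[2])
gaps = {name: count - your_mentions for name, count in competitor_counts.items()}

print(f"overall: {overall:.0f}%")  # 3 of 4 rows mention the brand
print(per_platform)
print(gaps)
```

Note that this pools query-platform pairs for the overall rate; if you prefer the article's per-query definition (mentioned on at least one platform), group rows by query first.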

Interpreting your results: What the numbers actually mean
Here's how to read your baseline metrics:
Citation rate below 20%: You have a serious AI visibility problem. AI engines either don't know about your brand or don't consider you relevant to your category. This is common for newer brands or companies that haven't optimized content for AI consumption.
Citation rate 20-40%: You're visible but inconsistent. AI engines mention you sometimes but not reliably. You're probably missing from key problem/solution queries where competitors dominate.
Citation rate 40-60%: Solid baseline visibility. You're in the consideration set for many queries, but there's room to improve positioning and consistency across platforms.
Citation rate above 60%: Strong AI visibility. You're likely a category leader or have invested heavily in content optimization. Focus on maintaining position and expanding to adjacent topics.
Platform variance: If one platform cites you significantly more than others, investigate why. It might be that platform's crawler accessed your site more recently, or that platform weights certain types of sources differently.
Competitor gaps: Any competitor mentioned 2x more often than you represents a strategic threat. Users asking AI engines about your category are being steered toward that competitor by default.
Quick wins: What to fix first
You now have a baseline and a list of gaps. Don't try to fix everything at once. Prioritize based on effort vs. impact.
Quick win #1: Fix brand query visibility
If AI engines aren't citing you for your own brand name, fix that immediately. The problem is usually:
- Your website lacks a clear, AI-readable "About" or "What is [Brand]?" page
- Your homepage is heavy on marketing copy but light on factual information
- You have no structured data (schema markup) telling AI engines what your company does
Solution: Create or update a dedicated page that answers "What is [YourBrand]?" in clear, factual language. Include:
- One-sentence description of what you do
- Year founded, location, team size
- Core products/services with brief descriptions
- Key differentiators vs. competitors
- Customer count, notable clients, or other credibility markers
Add Organization schema markup to this page. AI crawlers use structured data heavily.
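A minimal Organization schema sketch is below; every value is a placeholder to replace with your real facts, and the markup goes inside a `<script type="application/ld+json">` tag on the page:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "YourBrand",
  "url": "https://www.yourbrand.example",
  "description": "One-sentence factual description of what YourBrand does.",
  "foundingDate": "2019",
  "numberOfEmployees": {
    "@type": "QuantitativeValue",
    "value": 45
  },
  "sameAs": [
    "https://www.linkedin.com/company/yourbrand",
    "https://x.com/yourbrand"
  ]
}
```

The `sameAs` links to your official profiles help engines disambiguate your brand from similarly named companies.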
Quick win #2: Target competitor comparison queries
If competitors are mentioned in "alternatives to X" or "X vs Y" queries but you're not, you're missing easy visibility.
Solution: Create comparison content. Write articles titled:
- "[YourBrand] vs [Competitor]: Which is better for [use case]?"
- "Top 5 [Competitor] alternatives in 2026"
- "[YourBrand] vs [Competitor]: Feature comparison"
Be honest and specific. AI engines favor content that directly compares features, pricing, and use cases over vague marketing claims.
Quick win #3: Answer high-volume problem queries
If you're missing from problem/solution queries, you need more educational content that directly addresses user questions.
Solution: For each problem query where competitors appear but you don't, create a dedicated guide or tutorial. Structure it as:
- Clear H1 that matches the query ("How to [solve problem]")
- Step-by-step instructions or framework
- Concrete examples
- Tools or resources needed (mention your product naturally if relevant)
AI engines prioritize content that directly answers questions over content that pitches products.
Quick win #4: Fix technical AI crawler access
If your citation rate is low across all platforms, AI crawlers might be blocked from accessing your site.
Check:
- Your robots.txt file: Are you blocking common AI user agents (GPTBot, ClaudeBot, PerplexityBot)?
- Your site speed: Do pages load in under 3 seconds?
- Your navigation: Can a crawler reach your best content within 3 clicks from the homepage?
Solution: Update robots.txt to allow AI crawlers. Add a clear sitemap. Fix any broken internal links.
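You can sanity-check a robots.txt policy before deploying it with Python's standard-library parser. The robots.txt body and URL below are placeholders, and the user-agent strings are the commonly published crawler names:

```python
import urllib.robotparser

# Illustrative policy: allow the common AI crawlers, block one bad actor.
robots_txt = """\
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: BadBot
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# The URL is a placeholder; check the pages you most want cited.
for agent in ("GPTBot", "PerplexityBot", "BadBot"):
    allowed = parser.can_fetch(agent, "https://www.yourbrand.example/guides/")
    print(agent, "allowed" if allowed else "blocked")
```

Run this against your live robots.txt (fetch it first) whenever you change crawl rules, so a well-meaning edit doesn't silently block an AI engine.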
For detailed guidance on managing AI crawler access, see tools like DarkVisitors that track which bots are hitting your site.

Moving from manual testing to automated tracking
The 30-minute manual audit gives you a baseline, but you can't run it every week. Once you've identified gaps and started fixing them, graduate to automated tracking.
Tools that automate AI visibility monitoring:
Promptwatch: Tracks your brand mentions across 10 AI models (ChatGPT, Claude, Perplexity, Gemini, etc.). Shows citation rates, competitor gaps, and content recommendations. Monitors AI crawler logs so you see exactly when and how AI engines access your site. Includes an AI writing agent that generates content optimized for AI citations based on real prompt data.

Profound: Tracks brand visibility across AI search engines with persona customization. Good for brands that need to monitor how different user types (e.g., enterprise buyers vs. SMB buyers) see their brand in AI responses.
AthenaHQ: Monitors 8+ AI search engines. Focuses on tracking citation frequency and position over time. Simpler interface than Promptwatch but fewer optimization features.
Otterly.AI: Budget-friendly option for basic AI visibility monitoring. Tracks mentions across major platforms but lacks advanced features like crawler logs or content gap analysis.

Most of these tools offer free trials. Start with manual testing to understand what you're measuring, then pick a tool that fits your budget and use case.
Setting up ongoing measurement
Once you've run your baseline audit and started fixing gaps, establish a monthly tracking cadence.
Monthly audit checklist:
- Re-run the same 20-30 test queries you used in your baseline
- Record citation rates, positions, and competitor mentions
- Calculate month-over-month change
- Identify new gaps (queries where competitors gained ground)
- Review which content updates drove visibility improvements
- Adjust your content roadmap based on what's working
Track this data in a simple spreadsheet with columns for:
- Month
- Overall citation rate
- Citation rate by platform
- Citation rate by query category
- Top 3 competitor gaps
- Content published that month
- Notes on what changed
This gives you a clear trendline. You'll see whether your efforts are moving the needle or whether you need to adjust strategy.
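The month-over-month change in that spreadsheet can be computed directly; the months and rates below are made-up illustration values:

```python
# Hypothetical tracking history, mirroring the spreadsheet columns above.
history = [
    {"month": "2026-01", "citation_rate": 18.0},
    {"month": "2026-02", "citation_rate": 27.0},
    {"month": "2026-03", "citation_rate": 42.0},
]

# Month-over-month change in percentage points.
deltas = []
for prev, cur in zip(history, history[1:]):
    change = cur["citation_rate"] - prev["citation_rate"]
    deltas.append(change)
    print(f"{cur['month']}: {cur['citation_rate']:.0f}% ({change:+.1f} pts MoM)")
```

Extending `history` with per-platform and per-category rates gives you the same trendline at each level of the audit.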
Comparison: Manual testing vs. automated tools
| Approach | Cost | Time per audit | Best for | Limitations |
|---|---|---|---|---|
| Manual testing | Free | 30-60 min | Initial baseline, small query sets | Not scalable, no historical data, labor-intensive |
| Promptwatch | $99-579/mo | 5 min | Ongoing tracking, content optimization, crawler monitoring | Requires budget, learning curve |
| Profound | Custom pricing | 10 min | Persona-based tracking, enterprise needs | Higher cost, overkill for small brands |
| AthenaHQ | $49-299/mo | 5 min | Mid-market brands, straightforward tracking | Fewer optimization features |
| Otterly.AI | $29-99/mo | 5 min | Budget-conscious teams, basic monitoring | Limited platforms, no crawler logs |
Start with manual testing for your first audit. Once you've identified gaps and committed to fixing them, invest in a tracking tool to automate monthly measurement.
Common mistakes in AI visibility audits
Mistake #1: Testing only brand queries
Most companies test "What is [OurBrand]?" and call it done. That's not an audit -- it's a vanity check. The real visibility battle happens in problem/solution queries where users don't know your brand yet.
Mistake #2: Ignoring competitor context
Knowing you're cited 30% of the time means nothing without context. If your main competitor is cited 70% of the time, you're losing. Always benchmark against competitors.
Mistake #3: Testing once and never following up
AI visibility changes constantly as models retrain and competitors publish new content. A one-time audit tells you where you are today, not whether you're improving. Set up monthly tracking.
Mistake #4: Focusing only on ChatGPT
ChatGPT has the most users, but Perplexity, Claude, and Gemini are growing fast. Users often try multiple platforms. Test across at least four engines to get a complete picture.
Mistake #5: Expecting instant results
AI models don't re-crawl your site daily. It can take 2-4 weeks for new content to influence citations. Track trends over months, not days.
Real example: What a baseline audit reveals

A B2B SaaS company in the project management space ran their first AI visibility audit in January 2026. Here's what they found:
Baseline metrics:
- Overall citation rate: 20%
- ChatGPT citation rate: 25%
- Claude citation rate: 12%
- Perplexity citation rate: 20%
- Gemini citation rate: 15%
Competitor gaps:
- Competitor A appeared in 22 of 25 queries (88% citation rate)
- Competitor B appeared in 19 of 25 queries (76% citation rate)
- They appeared in only 5 of 25 queries (20% citation rate)
Category breakdown:
- Brand queries: 80% citation rate (4 of 5 queries)
- Problem/solution queries: 10% citation rate (1 of 10 queries)
- Competitor comparison queries: 0% citation rate (0 of 10 queries)
Key insight: They had strong brand recognition (AI engines knew who they were) but zero association with the problems they solve. When users asked "How to manage remote teams?" or "Best project management tools for agencies," competitors dominated while they were invisible.
Action taken: They created 12 new guides over 8 weeks, each targeting a specific problem query where competitors appeared. They used Promptwatch to identify which content gaps to prioritize based on prompt volume data.
Results after 2 months:
- Overall citation rate increased to 42%
- Problem/solution citation rate jumped to 45%
- Competitor gap narrowed (Competitor A still at 88%, but they climbed from 20% to 42%)
The audit gave them a clear roadmap. They stopped guessing and started fixing specific, measurable gaps.
Next steps: From audit to optimization
You've run your baseline audit. You have numbers, gaps, and a priority list. Now what?
Week 1-2: Fix technical blockers
- Update robots.txt to allow AI crawlers
- Add structured data to key pages
- Fix any broken links or slow-loading pages
Week 3-4: Create comparison content
- Write 2-3 competitor comparison articles
- Update your homepage with a clear "What we do" section
- Add an FAQ page answering common brand queries
Week 5-8: Build educational content
- Publish 4-6 guides targeting high-value problem queries
- Focus on queries where competitors appear but you don't
- Use clear headings, step-by-step instructions, and concrete examples
Week 9+: Track and iterate
- Re-run your audit monthly
- Double down on content types that improve citation rates
- Expand to adjacent topics as you gain visibility in core areas
AI visibility is not a one-time project. It's an ongoing optimization loop: measure, create, track, adjust. The 30-minute baseline audit is just the starting point.
Run it today. You'll finish with a clear picture of where you stand and exactly what to fix first.

