Summary
- AI search engines (ChatGPT, Perplexity, Google AI Overviews) don't automatically cite brands that rank well in traditional search -- you need a separate audit to understand your AI visibility
- You can run a meaningful AI presence audit in 3-4 hours using free tools: Google Search Console, ChatGPT/Perplexity/Claude free tiers, a spreadsheet, and basic prompt testing
- The audit covers five core areas: baseline citation testing, content gap analysis, technical blockers (robots.txt, schema), competitor benchmarking, and traffic attribution
- Most brands discover they're invisible for 60-80% of the prompts their customers are actually asking AI engines -- the audit reveals exactly where you're missing and why
- After the audit, you'll have a one-page baseline report showing your current AI visibility score, top gaps, and a prioritized action plan to start earning citations
Why your Google rankings don't matter in AI search
I've seen brands that rank #1 in Google for major keywords remain completely absent from ChatGPT and Perplexity results for the same queries. The correlation between traditional search rankings and AI citations is weaker than most marketers assume.
AI engines don't just scrape the top 10 Google results. They synthesize answers from a much wider pool of sources -- Reddit threads, YouTube transcripts, niche forums, academic papers, and yes, sometimes your website. But being on page one of Google doesn't guarantee a citation. AI models prioritize clarity, structure, authority signals, and recency in ways that differ from traditional ranking algorithms.
Half of consumers now use AI-powered search according to McKinsey's October 2025 research. By 2028, this behavioral shift will influence $750 billion in revenue. If your brand isn't showing up in AI answers, you're already invisible to millions of potential customers.
The good news: you don't need enterprise software or a six-figure budget to understand where you stand. You can run a solid AI visibility audit in one afternoon using free tools and this checklist.
What you'll need (all free or cheap)
- Google Search Console -- already connected to your site
- ChatGPT free tier (or Plus if you have it)
- Perplexity free tier
- Claude free tier (Anthropic)
- Google Gemini (free via Google account)
- A spreadsheet (Google Sheets or Excel)
- Your existing FAQ pages, blog posts, product pages -- you'll be testing how AI engines cite them
- 3-4 hours of focused time
Optional but helpful: access to Promptwatch if you want to scale this process beyond the manual audit. Promptwatch tracks your citations across 10+ AI models automatically and shows you exactly which prompts competitors rank for but you don't.

Step 1: Build your prompt test set (30 minutes)
Start by listing 20-30 prompts your customers would actually ask an AI engine about your category, product, or service.
Don't guess. Pull real data:
- Open Google Search Console
- Filter for queries with impressions in the last 90 days
- Export the top 100 queries driving traffic to your site
- Rewrite 20-30 of them as conversational questions someone would ask ChatGPT
For example, if you sell project management software:
- Google query: "best project management tool for remote teams"
- AI prompt: "What's the best project management tool for a fully remote team of 15 people?"
AI prompts are longer, more specific, and conversational. They include context (team size, use case, constraints) that traditional keyword searches omit.
Also add:
- Direct competitor prompts: "Should I use [Competitor A] or [Competitor B]?"
- Category education prompts: "How do I choose a project management tool?"
- Feature-specific prompts: "What project management tools have built-in time tracking?"
You want a mix of brand-aware prompts (where users know your category) and discovery prompts (where they're still learning).
Save these in a spreadsheet with columns:
| Prompt | ChatGPT result | Perplexity result | Claude result | Gemini result | Your brand cited? | Competitors cited |
|---|---|---|---|---|---|---|
You'll fill this in during the next step.
Step 2: Run the prompts and log citations (60 minutes)
Now the manual work begins. Open ChatGPT, Perplexity, Claude, and Gemini in separate browser tabs.
For each prompt in your spreadsheet:
- Paste it into ChatGPT. Read the response. Note whether your brand is mentioned, cited, or recommended. Copy the exact text if you are cited.
- Repeat in Perplexity. Perplexity shows inline citations -- check if your domain appears in the source list.
- Repeat in Claude.
- Repeat in Gemini (or Google AI Mode if you have access).
Log the results in your spreadsheet. Mark "Yes" if your brand was cited, "No" if it wasn't. If competitors were cited instead, note which ones.
This is tedious. It takes about 2-3 minutes per prompt across four engines, so budget 60-90 minutes for 20-30 prompts.
What you're looking for:
- Citation rate: What percentage of prompts mention your brand?
- Position: When you are cited, are you listed first, third, or buried in a paragraph?
- Context: Are you recommended positively, mentioned neutrally, or compared unfavorably to competitors?
- Competitor dominance: Which competitors appear most often? Are they cited more prominently than you?
Most brands discover they're cited in 10-30% of relevant prompts. If you're above 40%, you're doing well. Below 10% means you have serious visibility gaps.
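Once the spreadsheet is filled in, the summary numbers are quick to compute. Here's a minimal Python sketch, assuming you export the sheet as CSV with the column headers from Step 1 ("Your brand cited?" as Yes/No and "Competitors cited" as a comma-separated list) -- adjust the column names to match your own sheet:

```python
import csv
from collections import Counter

def summarize_audit(path):
    """Summarize a prompt-audit CSV: citation rate plus top competitors.

    Assumes columns named 'Your brand cited?' (Yes/No) and
    'Competitors cited' (comma-separated names).
    """
    total, cited = 0, 0
    competitor_counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["Your brand cited?"].strip().lower() == "yes":
                cited += 1
            for name in row["Competitors cited"].split(","):
                if name.strip():
                    competitor_counts[name.strip()] += 1
    rate = 100 * cited / total if total else 0.0
    return rate, competitor_counts.most_common(5)
```

Run it once per engine (or add an engine column) to see whether, say, Perplexity cites you far more often than ChatGPT.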
Step 3: Identify content gaps (45 minutes)
Now compare the prompts where you weren't cited to the content on your website.
For each "No" in your spreadsheet, ask:
- Do I have a page that directly answers this prompt?
- If yes, why didn't the AI cite it? (We'll diagnose technical blockers in Step 4.)
- If no, what content am I missing?
Let's say the prompt is "What project management tools have built-in time tracking?" and you weren't cited. Check:
- Do you have a features page that lists time tracking?
- Do you have a blog post comparing time tracking features across tools?
- Do you have a help doc explaining how your time tracking works?
If the answer is no to all three, that's a content gap. The AI engine had nothing to cite because you never published content answering that question.
Content gaps are the #1 reason brands are invisible in AI search. You can't be cited for content you didn't write.
Log these gaps in a separate tab of your spreadsheet:
| Missing content | Prompt(s) it would address | Priority (High/Med/Low) |
|---|---|---|
Prioritize based on:
- How many prompts would this content address?
- How often do customers ask this question? (Check support tickets, sales calls, FAQ page traffic.)
- Are competitors being cited because they have this content?
This is where tools like Promptwatch become valuable at scale. Promptwatch's Answer Gap Analysis shows you exactly which prompts competitors are visible for but you're not, then highlights the specific content topics your site is missing. Instead of manually checking 20 prompts, you can analyze hundreds and get a prioritized list of content gaps backed by real citation data.

Step 4: Check for technical blockers (30 minutes)
Sometimes you have great content but AI engines can't access it. Common blockers:
Robots.txt restrictions
AI crawlers (GPTBot, Claude-Web, PerplexityBot, Google-Extended) respect robots.txt. If you're blocking them, they can't index your content.
Check your robots.txt file:
- Go to `yoursite.com/robots.txt`
- Look for lines like:

```
User-agent: GPTBot
Disallow: /

User-agent: Claude-Web
Disallow: /
```

If you see `Disallow: /` for any AI crawler, you're blocking that engine from reading your site. Remove those lines unless you have a specific reason to block AI indexing.
Some AI crawlers to allow:
- GPTBot (OpenAI/ChatGPT)
- Claude-Web (Anthropic/Claude)
- PerplexityBot (Perplexity)
- Google-Extended (Google Gemini, Bard)
- Applebot-Extended (Apple Intelligence)
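To check this programmatically, here's a minimal Python sketch using the standard library's robots.txt parser. It takes the file's contents as a string (paste in what you see at `yoursite.com/robots.txt`; fetching it over HTTP is left to you), and the `example.com` URL is just a placeholder:

```python
from urllib.robotparser import RobotFileParser

# AI crawlers worth allowing (from the list above).
AI_CRAWLERS = [
    "GPTBot",            # OpenAI / ChatGPT
    "Claude-Web",        # Anthropic / Claude
    "PerplexityBot",     # Perplexity
    "Google-Extended",   # Google Gemini
    "Applebot-Extended", # Apple Intelligence
]

def blocked_ai_crawlers(robots_txt, url="https://example.com/"):
    """Return the AI crawlers that this robots.txt blocks from the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, url)]
```

Anything the function returns is an engine that cannot read the page you tested, no matter how good the content is.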
Missing or weak schema markup
AI engines rely heavily on structured data to understand your content. If your pages lack schema markup, they're harder to parse and less likely to be cited.
Check your schema:
- Go to Google's Rich Results Test
- Enter the URL of a key page (product page, blog post, FAQ page)
- See what schema is detected
At minimum, you should have:
- Organization schema on your homepage (name, logo, social profiles)
- Article schema on blog posts (headline, author, datePublished)
- Product schema on product pages (name, description, price, reviews)
- FAQPage schema on FAQ pages (questions and answers)
- HowTo schema on tutorial/guide pages (steps)
If schema is missing, add it. Most CMS platforms (WordPress, Shopify, Webflow) have plugins or built-in tools to generate schema automatically.
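As one illustration, FAQPage JSON-LD is simple enough to generate yourself. A minimal Python sketch (the question/answer pairs are placeholders) that produces markup you can embed in a `<script type="application/ld+json">` tag:

```python
import json

def faq_schema(pairs):
    """Build FAQPage JSON-LD from a list of (question, answer) tuples."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)
```

Paste the output into Google's Rich Results Test to confirm it validates before deploying.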
Thin or duplicate content
AI engines prefer detailed, original content. If your pages are short (under 300 words), generic, or duplicated across multiple URLs, they're less likely to be cited.
Audit your top 10 pages for each content gap you identified in Step 3. Are they:
- At least 800-1500 words?
- Structured with clear headings (H2, H3)?
- Answering the question directly in the first 200 words?
- Including examples, data, or specific details?
If not, those pages need to be rewritten or expanded.
Paywalls and login walls
If your best content is behind a login or paywall, AI crawlers can't access it. They see the same "Sign in to continue" page that a logged-out user sees.
Solution: create public-facing versions of key content (blog posts, guides, case studies) that AI engines can index. Keep premium features or data behind the paywall, but make the core educational content accessible.
Tools like Promptwatch include AI Crawler Logs that show you in real-time which pages AI engines are hitting, how often they return, and any errors they encounter (403s, 404s, timeouts). This is the fastest way to diagnose indexing issues without manually checking robots.txt and server logs.

Step 5: Benchmark against competitors (30 minutes)
Go back to your spreadsheet. For each prompt where a competitor was cited but you weren't, visit their website and reverse-engineer why.
Open the competitor's site and search for the topic (use their site search or Google with `site:competitor.com [topic]`).
Ask:
- Do they have a dedicated page for this topic?
- How long is the page? (Word count, depth of detail.)
- What format is it? (Blog post, comparison page, FAQ, video transcript.)
- Do they use schema markup? (Check with Google's Rich Results Test.)
- Are they cited on Reddit, YouTube, or other third-party platforms for this topic?
Log your findings:
| Competitor | Prompt they ranked for | Content type | Why they were cited (hypothesis) |
|---|---|---|---|
Common patterns:
- Competitors have long-form comparison posts ("X vs Y vs Z") that directly answer "Which tool should I use?" prompts
- Competitors publish on third-party platforms (Medium, Reddit, YouTube) in addition to their own site, giving AI engines multiple sources to cite
- Competitors structure content as Q&A (FAQPage schema) making it easy for AI to extract answers
- Competitors update content frequently (recent dateModified timestamps signal freshness)
You're not copying competitors. You're identifying the content formats and topics that AI engines prefer to cite, then creating your own version.
Step 6: Check traffic attribution (15 minutes)
Finally, see if AI search is already driving traffic to your site -- you just didn't know it.
In Google Search Console:
- Go to Performance > Search Results
- Filter by "Search appearance" > "AI-powered overview"
- Check impressions and clicks from Google AI Overviews
This only shows Google AI traffic. For ChatGPT, Perplexity, and Claude referrals, check:
- Google Analytics: Look for referral traffic from `chat.openai.com`, `perplexity.ai`, and `claude.ai`
- Server logs: If you have access, grep for user agents like `ChatGPT-User`, `PerplexityBot`, and `Claude-Web`
Most brands see 0-2% of total traffic from AI search today. But that number is growing fast. If you're already seeing AI referrals, it means your content is being cited -- you just need to scale it.
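If you do have server log access, a few lines of Python can tally AI crawler hits. A rough sketch that matches user-agent substrings in raw access-log lines (the agent list is an assumption -- extend it as new crawlers appear):

```python
from collections import Counter

# User-agent substrings to look for; adjust to the crawlers you care about.
AI_AGENTS = ["GPTBot", "ChatGPT-User", "Claude-Web",
             "PerplexityBot", "Google-Extended"]

def count_ai_hits(log_lines):
    """Count hits per AI user agent across raw access-log lines."""
    hits = Counter()
    for line in log_lines:
        for agent in AI_AGENTS:
            if agent in line:
                hits[agent] += 1
    return hits
```

Feed it the lines of your access log (`count_ai_hits(open("access.log"))`) to see which engines are already reading your site and how often.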
Tools like Promptwatch close the loop by connecting AI visibility to actual traffic. You can install a code snippet (similar to Google Analytics) or integrate with Google Search Console to see which AI citations are driving clicks, conversions, and revenue. This is the difference between tracking vanity metrics ("We were cited 50 times!") and understanding business impact ("AI search drove $12K in revenue last month").

Step 7: Build your one-page baseline report (10 minutes)
You've spent 3-4 hours gathering data. Now distill it into a single slide or one-page doc you can share with your team.
Include:
Your AI visibility score
- "We were cited in X out of 25 prompts tested (X% citation rate)"
- "ChatGPT cited us Y times, Perplexity Z times, Claude A times, Gemini B times"
Top 3 content gaps
- "We have no content addressing [topic 1], which appeared in 8 prompts"
- "Competitors are being cited for [topic 2] because they have [content type]"
- "We need to create [specific content piece] to address [prompt cluster]"
Technical blockers found
- "Our robots.txt is blocking GPTBot and Claude-Web"
- "12 of our top pages are missing FAQPage schema"
- "Our best content is behind a login wall"
Competitor insights
- "[Competitor A] was cited 3x more often than us"
- "[Competitor B] dominates comparison prompts because they have a dedicated 'X vs Y' hub"
Next steps (prioritized)
- Fix robots.txt to allow AI crawlers (1 hour, high impact)
- Add FAQPage schema to top 10 FAQ pages (2 hours, high impact)
- Write 3 new blog posts addressing top content gaps (2 weeks, high impact)
- Create comparison page for "X vs Y" prompts (1 week, medium impact)
This one-pager is your baseline. Run the same audit again in 60-90 days to measure progress.
Common mistakes to avoid
Testing only branded prompts
Don't just test "What is [Your Brand]?" prompts. Most of your potential customers don't know your brand exists yet. They're asking category questions ("What's the best X for Y?") and discovery questions ("How do I solve Z?"). If you only test branded prompts, you'll miss 80% of the opportunity.
Ignoring Reddit and YouTube
AI engines cite Reddit threads and YouTube transcripts more often than most brands realize. If there's a popular Reddit discussion about your category and you're not mentioned, that's a gap. Same for YouTube reviews and tutorials. You can't control these platforms directly, but you can engage with them (answer questions on Reddit, collaborate with YouTubers, create your own video content).
Assuming citations equal traffic
Being cited is great. But if the AI engine answers the user's question completely in the response, they may never click through to your site. This is why you need to track referral traffic and conversions, not just citation counts. Tools like Promptwatch help you connect visibility to revenue.

Optimizing for one AI engine only
ChatGPT, Perplexity, Claude, and Gemini all have different content preferences and ranking signals. A page that performs well in Perplexity might be invisible in ChatGPT. Test across multiple engines and optimize for the ones your customers actually use.
Waiting for perfect data
You don't need to test 500 prompts or audit every page on your site. Start with 20-30 high-value prompts and your top 10 pages. Run the audit, fix the biggest gaps, then iterate. Perfect is the enemy of done.
What to do after the audit
You've identified your gaps. Now fix them.
Fix technical blockers first (highest ROI)
- Update robots.txt to allow AI crawlers
- Add schema markup to key pages
- Remove login walls from educational content
These changes take hours, not weeks, and immediately make your existing content more discoverable.
Create content for your top 3 gaps
Don't try to fill every gap at once. Pick the 3 prompts with the highest search volume (or the most competitor citations) and write content specifically to answer them.
Format matters:
- FAQ pages with clear Q&A structure and FAQPage schema
- Comparison posts ("X vs Y vs Z") for decision-stage prompts
- How-to guides with step-by-step instructions and HowTo schema
- Listicles ("10 best X for Y") for discovery-stage prompts
AI engines prefer content that directly answers questions, uses structured markup, and includes specific examples or data.
If you want to scale content creation, tools like Promptwatch include an AI writing agent that generates articles, listicles, and comparisons grounded in real citation data. It's not generic SEO filler -- it's content engineered to get cited by ChatGPT, Claude, and Perplexity based on what those models actually cite today.

Publish on third-party platforms
AI engines don't just cite your website. They cite Reddit, YouTube, Medium, Quora, and niche forums. If you're not active on these platforms, you're missing citations.
Strategies:
- Answer questions on Reddit in your category's subreddit (genuinely helpful answers, not spam)
- Publish long-form guides on Medium or LinkedIn
- Create YouTube videos or collaborate with creators who review tools in your space
- Contribute to Quora threads where your expertise is relevant
When AI engines see your brand mentioned across multiple sources, they're more likely to cite you in their responses.
Track and iterate
Run this audit again in 60-90 days. Measure:
- Did your citation rate improve?
- Are you now cited for prompts where you were previously invisible?
- Did fixing technical blockers increase AI referral traffic?
- Which content pieces are being cited most often?
AI search is still evolving. What works today might not work in six months. Regular audits help you stay visible as ranking signals change.
Tools to scale beyond the manual audit
The manual audit works. But it doesn't scale. If you want to track hundreds of prompts, monitor 10+ AI engines, and get real-time alerts when competitors overtake you, you need automation.
Here's a quick comparison of tools that can help:
| Tool | What it does | Best for | Price |
|---|---|---|---|
| Promptwatch | Tracks citations across 10 AI models, shows content gaps, generates optimized content, monitors crawler logs, connects visibility to traffic | Brands and agencies that want to optimize, not just monitor | $99-579/mo |
| Semrush AI Visibility | Monitors ChatGPT, Google AI, Perplexity with fixed prompt sets | Traditional SEO teams adding AI tracking | $200/mo (bundle) |
| Ahrefs Brand Radar | Tracks brand mentions in AI search results | Brands focused on reputation monitoring | Add-on to Ahrefs |
| Otterly.AI | Basic monitoring dashboard for AI citations | Small teams on a budget | $49-199/mo |
| Peec.ai | Multi-language AI visibility tracking | International brands | Custom pricing |

Most of these are monitoring-only tools -- they show you data but leave you stuck. Promptwatch is built around taking action: it shows you what's missing, then helps you fix it with content gap analysis, AI content generation, and optimization tools.
Final thoughts
You can run a meaningful AI visibility audit in one afternoon. You don't need enterprise software or a six-figure budget. You need a spreadsheet, free AI tools, and 3-4 hours of focused work.
Most brands discover they're invisible for 60-80% of the prompts their customers are actually asking. The audit reveals exactly where you're missing and why. Then you fix it -- technical blockers first, content gaps second, third-party platforms third.
AI search is the biggest shift in digital discovery since Google launched. The brands that audit their presence now and optimize for AI citations will dominate their categories in 2026 and beyond. The brands that wait will be invisible.
Start with the checklist above. Run the audit. Fix the gaps. Track the results. Then iterate.
If you want to scale beyond the manual process, Promptwatch is the only platform that helps you find gaps, create content, and track results in one place. Most competitors stop at step one. Promptwatch closes the loop.

