Key takeaways
- Organic search traffic across major publisher portfolios is down 42% compared to pre-AI Overview baselines, with evergreen content hit hardest
- Breaking news is the one exception -- up 103% across Google surfaces -- because AI systems struggle to synthesize real-time, event-driven reporting
- LLM citation patterns are unstable: only 30% of cited sources appear again in the next response to the same query
- Publishers who built loyal, direct audiences before the AI shift are weathering the storm better than those who depended on search traffic
- Generative Engine Optimization (GEO) is now a real discipline for newsrooms -- structured data, authority signals, freshness, and citation-worthy content all matter
- Tracking which pages AI models actually cite (and which they ignore) is the first step to any meaningful recovery strategy
The numbers are hard to look at if you run a media brand. According to Define Media Group's analysis of 15 national and local news publishers, organic search traffic is down 42% compared to the pre-AI Overview average. A No. 1 organic result can lose 79% of its potential traffic when an AI Overview sits above it, according to research cited by The Guardian. Digital Trends lost 97% of its search traffic. ZDNet lost 90%. The Verge lost 85%.
And yet the story is more complicated than "AI is killing publishing." Because some publishers are doing fine. And one content category -- breaking news -- is actually up 103% across Google surfaces since November 2024.
So what's actually happening? Why are some brands getting cited constantly by ChatGPT, Perplexity, and Claude while others have basically disappeared from AI-generated answers? And what can a newsroom realistically do about it?
This guide works through all of it.

What's actually happening to publisher traffic
The mechanism is straightforward even if the scale is surprising. Google AI Overviews launched in May 2024. They answer questions directly in the search results page. Users get what they need without clicking. Publishers lose the visit.
Define Media Group tracked this across their portfolio of publishing clients, which averaged 1.7 billion organic search clicks per quarter before the AI Overview launch. The traffic never recovered after May 2024. By Q4 2025, it was down 42% against that baseline. That's not a penalty or a seasonal dip. It's a structural change in how information gets delivered.
The same dynamic plays out in conversational AI. ChatGPT processes around a billion queries a day. When someone asks it about a topic your publication covers, it synthesizes an answer from its training data and cited sources -- and the cited sources may or may not include you. There's no guarantee of a click even when you are cited. Search Engine Land's analysis of 20,000+ queries found that AI Overview citations perform roughly like a position 6 organic result in terms of click-through rate. High visibility, low traffic.
The deeper problem is citation instability. According to the 2026 State of AI Search report from AirOps and Kevin Indig, only 30% of brands cited in an AI-generated answer appear again in the next response to the same query. Run the same query five times, and only 20% of brands show up consistently across all five responses.

That's the core challenge for publishers. Traditional SEO gave you a relatively stable position. AI citations are a rotating cast. If your content isn't continuously pulling AI models back in, you're not just losing a ranking -- you're losing the conversation.
Why breaking news is the exception
Here's the counterintuitive finding: breaking news traffic is up 103% across Google surfaces since November 2024, per Define Media Group's data. Every other content category is declining. Breaking news is the only one accelerating.
The reason makes sense once you think about it. AI systems are trained on data with a cutoff. They can't synthesize real-time information from scratch. When something happens today -- a court ruling, a natural disaster, a product launch, an election result -- AI models have to pull from live sources. They can't fake freshness. Publishers who are fast, accurate, and authoritative on breaking events become the sources AI systems reach for, because there's no alternative.
This has real implications for editorial strategy. Newsrooms that deprioritized breaking news in favor of evergreen SEO content over the last decade may need to reconsider. The content that AI can most easily replace (explainers, listicles, "what is X" articles) is exactly what's getting absorbed into AI answers without a click. The content AI can't easily replace (original reporting, live updates, event-specific analysis) is what's driving traffic growth.
The caveat: breaking news traffic spikes are short-lived. The sustainable version of this strategy isn't just chasing events -- it's building the authority and infrastructure to be the go-to source when events happen.
The authority moat: why some publishers are surviving
The Verge lost 85% of its search traffic by one measure. But The Verge also introduced a paywall in late 2024 as part of a four-point strategy that included building direct audience relationships. Traffic decline, as Media Copilot's Pete Pachal observed, doesn't have to mean business decline.
What separates publishers who are weathering this from those who aren't isn't primarily a technical SEO question. It's an audience question. Publishers who built loyal readers -- people who come directly, subscribe to newsletters, follow on social -- have a traffic base that doesn't depend on search algorithms or AI citation patterns. The Verge has been synonymous with tech news since 2011. Tech brands choose to break news there. That kind of brand authority is hard to replicate and hard for AI to displace.
This doesn't mean GEO tactics don't matter. They do. But they work best on top of a genuine editorial brand, not as a substitute for one.
What GEO actually means for newsrooms
Generative Engine Optimization is the practice of making your content more likely to be cited by AI models. For publishers, it breaks down into a few concrete areas.
Freshness and update frequency
Pages that haven't been updated in 90 days are 3x more likely to lose their AI citations, according to analysis cited in the 2026 State of AI Search report. For news publishers, this is actually an advantage -- your content is inherently time-stamped and frequently updated. The problem is evergreen content that was written once and left alone. If you have high-value explainer pages or topic hubs that haven't been touched in a year, they're likely losing ground in AI responses.
A quarterly audit of your highest-traffic evergreen pages, with meaningful updates rather than cosmetic date changes, is a minimum baseline.
Structured data and entity clarity
AI models pull from structured signals. Schema markup for articles (NewsArticle, Article, FAQPage), author entities, and organization markup all help models understand what your content is, who wrote it, and why it's authoritative. Coalition Technologies' analysis of why sites fail to appear in AI answers lists weak authority signals -- missing third-party citations, incomplete trust signals, poorly structured content -- as primary factors.
For news publishers specifically: author pages with genuine credentials, clear bylines, and links to external profiles (LinkedIn, Twitter/X, author bio pages) all contribute to the entity signals that AI models use to evaluate source credibility.
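To make the markup concrete, here is a minimal sketch of the kind of NewsArticle JSON-LD described above, built as a Python dict and serialized for embedding in a page. Every name and URL is a placeholder, not a real publication; the fields shown (author `sameAs` links, publisher logo, `dateModified`) are the entity signals discussed in this section.

```python
import json

# Minimal NewsArticle JSON-LD sketch. All names and URLs are
# placeholders; adapt the fields to your own publication.
article_schema = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline for a breaking story",
    "datePublished": "2025-11-03T09:00:00Z",
    "dateModified": "2026-01-15T14:30:00Z",
    "author": {
        "@type": "Person",
        "name": "Jane Reporter",
        "url": "https://example-news.com/authors/jane-reporter",
        # External profiles feed the author-entity signals described above.
        "sameAs": [
            "https://www.linkedin.com/in/jane-reporter",
            "https://x.com/janereporter",
        ],
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example News",
        "logo": {"@type": "ImageObject", "url": "https://example-news.com/logo.png"},
    },
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(article_schema, indent=2)
print(jsonld)
```

The `dateModified` field is worth keeping honest: it is one of the machine-readable freshness signals discussed later in this guide, and it should change only when the update is substantive.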
Citation-worthy content formats
AI models cite content that directly answers questions. This sounds obvious but has real editorial implications. Long-form narrative journalism is harder for AI to cite than content that includes clear, quotable claims, data points, and direct answers to specific questions.
This doesn't mean abandoning narrative journalism. It means thinking about whether your articles include the kinds of structured, citable elements that AI models can pull from: specific statistics, clear definitions, expert quotes with named attribution, and direct answers to the questions your readers are actually asking.
Third-party authority signals
AI models don't just look at your content in isolation. They evaluate whether other credible sources reference you. For publishers, this means: Are your stories being cited by other publications? Are your journalists quoted as experts? Do you have Wikipedia entries, Wikidata presence, and mentions in authoritative reference sources?
These off-site signals matter more for AI citation eligibility than most publishers realize. A newsroom that produces great content but has weak external authority signals will lose ground to a smaller publication that's more widely referenced.
The gating question
Nieman Lab's 2026 predictions included a provocative one: AI will probably force publishers to gate their content. The logic is straightforward. If AI models train on your open content and then answer questions without sending traffic, the economic incentive to keep content open erodes.
This is a real tension without a clean answer. Gating content protects it from being absorbed into AI training data and zero-click answers, but it also removes it from AI citation pools entirely. A paywalled article can't be cited by ChatGPT if ChatGPT can't access it.
The middle path some publishers are exploring: gate the deep archive and premium analysis, keep breaking news and topic-entry content open for AI indexing, and use AI citation visibility as a top-of-funnel channel that drives subscription conversions. Whether that model works at scale is still being tested.
Practical steps for publishers right now
Here's what a newsroom can actually do, ordered roughly by impact and feasibility.
1. Audit your AI citation footprint
Before you can improve, you need to know where you stand. Which of your pages are being cited by ChatGPT, Perplexity, Claude, and Google AI Overviews? Which topics are competitors getting cited for that you're not? This is the starting point for any GEO strategy.
Promptwatch is built specifically for this kind of analysis -- it tracks citations across 10 AI models, shows you which pages are being cited and how often, and surfaces the gaps where competitors are visible but you're not.

For publishers who want to start with a simpler tool, options like Otterly.AI or Peec AI offer basic monitoring at lower price points.

2. Map your content against AI query patterns
AI models respond to prompts, not keywords. The queries people ask ChatGPT about topics you cover may be quite different from the keywords you've been optimizing for. Understanding prompt volume and difficulty -- which questions are asked most often, which are hardest to win -- helps you prioritize where to focus.
3. Fix your structured data
Run a structured data audit. Check that your article pages have proper NewsArticle or Article schema, that author entities are marked up, and that your organization schema is complete. Google's Rich Results Test is a free starting point. For larger publisher sites, a crawler like Screaming Frog can surface structured data gaps at scale.

4. Build your author entity signals
Create or update author pages with genuine credentials, external links, and clear expertise signals. Make sure your journalists have LinkedIn profiles and Twitter/X accounts that link back to your publication. If your publication has a Wikipedia page, keep it accurate and well-referenced.
5. Prioritize freshness on high-value evergreen content
Identify your top 20-30 evergreen pages by historical traffic. Build a quarterly update schedule. Updates should be substantive -- new data, updated statistics, added sections -- not just date changes.
6. Track AI crawler activity on your site
AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.) visit your site to index content for their models. Knowing which pages they're reading, how often, and whether they're encountering errors gives you a direct signal about what's being considered for citation. Most publishers have no visibility into this.
Promptwatch's crawler log feature shows exactly which AI bots are hitting your site and what they're finding. This is one of the more underused signals in publisher GEO strategy.
7. Monitor Reddit and YouTube for your topics
AI models frequently cite Reddit discussions and YouTube content alongside or instead of traditional publisher content. Understanding which Reddit threads and YouTube videos are influencing AI answers in your coverage areas tells you where the competition is actually coming from -- and sometimes where you should be publishing supplementary content.
Content strategy matrix for publishers
| Content type | AI citation potential | Traffic potential | Recommended action |
|---|---|---|---|
| Breaking news | High (real-time advantage) | High | Prioritize speed and accuracy; structured live blog markup |
| Evergreen explainers | Medium (if fresh) | Medium | Quarterly updates; add FAQ schema; include citable data points |
| Long-form narrative | Low (hard to excerpt) | Low from AI | Add structured summaries; include quotable claims and stats |
| Data journalism / original research | Very high | High | Publish data in citable formats; create dedicated data pages |
| Opinion / commentary | Low | Low from AI | Build direct audience; newsletter and social distribution |
| Topic hubs / pillar pages | High (if authoritative) | High | Strong entity markup; internal linking; regular updates |
The honest picture
Publishers are in a genuinely difficult position. The economic model that funded digital journalism -- search traffic driving ad impressions -- is being dismantled by the same AI systems that are now being asked to cite those publishers as authoritative sources. That's a strange loop.
The publishers who are navigating it best aren't doing so primarily through technical optimization. They built real editorial brands with loyal audiences before the shift happened. GEO tactics help at the margins, and those margins matter. But a newsroom that was entirely dependent on search traffic for its audience doesn't become resilient by adding schema markup.
What GEO can do for publishers is this: make sure that when AI models are looking for sources on topics you cover, your content is findable, structured, and authoritative enough to be cited. That's not a complete business strategy. But it's a real and measurable improvement over being invisible.
The first step is knowing where you actually stand. Most publishers don't. Running a proper AI citation audit -- across ChatGPT, Perplexity, Claude, Google AI Overviews, and the other major models -- takes maybe a few hours with the right tools, and it will tell you more about your current exposure than months of watching traffic dashboards.
Start there.

