Key Takeaways:

Most marketing teams follow a similar pattern when a new channel becomes popular. First, they see industry folks talk about it on places like LinkedIn. Maybe they even notice the channel driving a few initial sales.

Then, a discussion happens, and someone almost always asks the question:

“Do we need to hire someone to manage this?”

There’s a similar pattern taking root inside a lot of marketing teams thanks to the rather bombastic arrival of GEO (Generative Engine Optimization) on the scene.

But what if we told you your existing team has almost everything you need to succeed with GEO?

Those in charge of SEO, content, and PR already have the skills they need to handle GEO. It’s just a matter of setting up a proper workflow, a sturdy tech stack, and a measurement process.

And, of course, spending some time learning how AI platforms attribute and cite information.

So, before you go ahead and post a job description for a GEO specialist, ask yourself this:

“Do we lack the capability to do GEO? Or do we just lack structure?”

We’re willing to bet it’ll be the latter.

GEO Is Additive SEO, Not a Reinvention of It

There’s a popular narrative (often spread by self-proclaimed AI search gurus) that generative engine optimization is a completely new discipline that requires a specialized team. It’s not.

GEO is an extension of SEO. Your team is already improving content structure, strengthening authority signals, updating content regularly. Those same practices drive AI citations. The content just gets surfaced in AI answers instead of search results.

A few things are different, though. AI platforms have a different way of citing information; they rely more on structured data and third-party sources, and they require content to be highly specific.

But the instances where these differences justify a new role are few and far between.

What You Already Have

If you care to look closely, most of the GEO skill set already lives inside your team. 

Creating Effective Content

Your content team or specialist already knows how to make information clear, factual, and structured.

They’re writing authoritative and scannable content, optimizing for featured snippets, and using clear headings and logical structure.

These are the same qualities large language models (LLMs) look for when deciding which pages to surface. They reward clear, factual answers and rely heavily on authority signals and content structure. (If you’ve been paying attention to featured snippet optimization over the past few years, congratulations—you’ve been doing proto-GEO without knowing it.)

So if your team knows how to write a good “What is X” section for an SEO article, they already know how to write citable definitions. The adjustment is minor: a bit more specificity, a bit more concision, a bit less fluff. Less “our industry-leading solution transforms workflows” and more “this tool does X, Y, and Z.”

Technical SEO

Your technical foundation is pulling double duty as you read this.

Site architecture, internal linking, and schema markup (aka the essential elements of technical SEO) are what influence how LLMs process and attribute information. They use schema to extract structured data, and they rely on logical site hierarchy and clean HTML to discover and parse content.
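To make the schema piece concrete, here’s a minimal sketch of the kind of structured data involved. This is a hedged illustration, not a prescription; the question, answer text, and values are hypothetical placeholders:

```python
import json

# Minimal FAQPage JSON-LD sketch. The question and answer below are
# hypothetical placeholders, not copy recommendations.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is generative engine optimization (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "GEO is the practice of structuring content so AI "
                    "platforms can cite it in generated answers."
                ),
            },
        }
    ],
}

# Emit the markup you would embed in a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

If your team already maintains schema for featured snippets, this is the same muscle pointed at a new consumer.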

The good news is that most of this work compounds. If you’ve invested in technical SEO over the past few years, you’re not starting from zero with GEO. You’re starting from a foundation that AI platforms already know how to read. The teams that neglected technical SEO in favor of content volume are the ones going in circles now.

Competitive Intelligence and Visibility Tracking

You know that thing where you’re staring at a SERP, trying to figure out why that one competitor keeps outranking you despite having content that’s objectively worse? The “how is this page ranking” spiral that every SEO person has fallen into at least once?

GEO has its own version of that. Except instead of rankings, you’re looking at citations. Instead of “why do they rank,” you’re asking “why did ChatGPT cite them and not us.” The existential frustration is the same; it’s just the surface that’s different.

Your team is already tracking keyword rankings, analyzing SERPs, identifying content gaps. GEO requires all of that, but pointed at AI platforms instead of search engines. Which prompts are you showing up for? Which ones are you invisible on? Why does your competitor keep getting cited in answers about your category when your product page is objectively more helpful?

The analytical muscle is identical. The tooling is still catching up, honestly. We’ve had keyword tracking infrastructure for decades and AI visibility tracking is maybe two years old. But if your in-house SEO ever obsessed over why a competitor ranks, they already know how to obsess over why they get cited.

Keyword Research

Speaking of your SEO specialist, they have a process for keyword research. They know how to pull data, prioritize by intent, group related queries, and build a tracking system. That entire workflow transfers to GEO with one adjustment: the inputs.

In SEO, keyword research usually starts with tools. In GEO, prompt research often starts with your support tickets, sales calls, customer interviews. The questions people ask AI platforms aren’t always the same ones they type into Google. They’re often longer, more conversational, more specific. “What’s the best CRM for a 20-person sales team that already uses HubSpot for marketing” isn’t a keyword anyone’s bidding on, but it’s absolutely a prompt someone’s typing into ChatGPT.

The strategic logic is similar: prioritize commercial intent, group related prompts, track visibility. Your team already knows how to do this. They just need access to different source material.

Third-Party Visibility and Brand Management

This is the one that catches people off guard.

Brand management has always mattered in SEO. Backlinks, mentions, reviews…none of this is new per se. But in SEO, you had a fallback. Even if your third-party presence was weak, you could still rank. You control your site, optimize your pages, fix your technical issues, and sometimes manage to brute-force your way up the SERP through sheer on-page effort.

GEO doesn’t give you that fallback.

AI platforms pull from everywhere, and we do mean everywhere. Your site, yes, but also review sites, Reddit threads, comparison articles, news coverage, Quora answers, analyst reports. They’re triangulating across sources to build an answer. And you can’t control most of those sources. Your owned channels can be perfect—structured data in place, content optimized, everything technically sound—and you’ll still get passed over if the third-party ecosystem doesn’t back you up.

It’s not that brand management suddenly became important, it’s that you can no longer compensate for weak third-party presence by being really good at the stuff you control.

Your PR team already knows how to work this ecosystem. They’re tracking mentions, building journalist relationships, monitoring review sites. The work is the same, but the margin for error just got smaller.

What’s Missing

So your team has the skills. What they probably don’t have is the infrastructure.

Let’s break it down.

The Tech Stack

Your SEO team lives in Google Search Console and GA4, tools that have been refined over the course of fifteen-plus years. They know precisely where to look to understand what’s working and what isn’t.

GEO is maybe two years old, tops. The tooling is still being built, and frankly, a lot of what’s out there is half-baked dashboards slapped together to ride the hype cycle.

Yes, you need visibility tracking across AI platforms, i.e., where you’re showing up, for which prompts, how often, and who’s being surfaced instead of you. Citation patterns, competitive benchmarking. Everyone building in this space offers some version of that.

What most tools don’t give you is the “now what.” A dashboard tells you you’re invisible, but it doesn’t tell you why, what to fix first and how. You end up with a PDF full of data and no actionable next step.

This is what we built Writesonic to solve. Visibility tracking, yes, but also an Action Center that tells you what’s blocking citations and prioritizes fixes by impact. Dashboard plus execution layer. We’re biased, but do your own comparison.

An Effective Workflow

GEO is new enough that best practices are still being written. Which means your team is going to be experimenting. A lot. And experiments without documentation are more vibes than science. You need: 

None of this is complicated. You probably have similar processes for other channels already. The work is applying them to something new before the lack of structure turns into six months of unrepeatable, unscalable effort.

Time and Permission

Here we hit the uncomfortable point.

Your team can probably handle GEO. The question is whether they have the bandwidth to do it well, or whether it becomes another thing they squeeze in between everything else and half-heartedly do for six months before someone asks why it’s not working.

Marketing teams are already stretched as is. Adding GEO to the pile without removing something else means it’ll get the scraps—an hour here, a task there, no sustained focus. And GEO in its current state rewards sustained focus, teams who’ve carved out dedicated time to experiment, track, and iterate.

Permission to reprioritize is the actual blocker. Not capability. So you need to sit down and decide what you’re willing to deprioritize that isn’t driving business value.

When Hiring Actually Makes Sense

There are a few edge cases where hiring for a GEO-focused role might make sense:

You’re Enterprise-Scale

Enterprise-scale businesses have multiple product lines, overlapping audiences, complex product documentation, and thousands of relevant AI prompts that need to be managed.

At this scale, doing GEO properly is a volume problem, driven by the sheer number of prompts and content pieces involved.

Even if you build a strong workflow, managing GEO at this scale will likely be too much for your existing SEO or content team.

In this case, hiring a dedicated person to own GEO can help you maintain consistency, prioritize work, and ensure you’re not missing out on any opportunities for improved AI visibility.

Your Team is Already at Capacity

Some teams don’t have the bandwidth to dedicate time to GEO without impacting existing channels that do drive value.

If your team is already at full capacity and there’s nothing you’re willing to deprioritize, then you’ll struggle to gain traction with GEO.

In this case, you’ll need to make a dedicated hire simply to increase your team’s capacity and be able to execute an effective GEO strategy consistently.

You Have the Budget to Experiment Aggressively

Some companies are in the fortunate position of having money to throw at emerging channels. If that’s you, the advantage isn’t necessarily a dedicated GEO hire. It’s speed.

Budget means you can run more experiments simultaneously and invest in better tooling earlier. It means your existing team can spend more hours on GEO without sacrificing other priorities, whether that’s through backfilling their current workload or bringing in freelance support for the grunt work.

That said, money doesn’t guarantee you’ll figure it out first. Plenty of scrappy teams with tighter budgets are running smarter experiments and learning faster than enterprise teams drowning in process. 

Budget is an accelerant but it won’t replace good thinking.

You Probably Don’t Need a GEO Team

The instinct to hire when something new shows up is understandable. It feels like action, like taking the channel seriously.

But GEO isn’t a completely new capability. Your team already has most of the necessary skills; they just need to be honed for this new surface.

What’s actually missing is the infrastructure. The visibility layer that shows you where you’re showing up and where you’re not, the prioritization that tells your team what to fix first. The connective tissue that turns “we’re not getting cited” into specific tasks for specific people.

Your team can do this. They just need the setup to make it happen.

If you want help figuring out what that looks like, we’ve built a lot of this into Writesonic: visibility tracking, gap analysis, and prioritized actions for content, SEO, and PR. We’re happy to show you around. Get in touch if you’d like to learn more.

Key Takeaways

So far in my foray into LLM data, I’ve focused on content types and query frames. What formats get cited, how platforms respond to different intents, whether branded prompts change citation patterns.

I wanted even more specificity: which domains dominate these citations? Who’s publishing the listicles that LLMs so love to surface? Who owns the reviews that these platforms trust?

I ranked 2.4 million domains by how often they get cited across eight AI platforms. Here’s what the top of the list looks like:

You can quickly spot the commonalities. These are user-generated content platforms, aggregators. Community spaces where millions of people create millions of pages.

That pattern tells us something important about how LLMs source information—and where the leverage points are for AI visibility.

How I categorized platform strategies

I classified the 2.4 million domains based on how many platforms cited them during the study period (May 2025 to October 2025):

The distribution:

Platform presence distribution (domains x LLMs)

Two-thirds of all cited domains appear on exactly one platform. Just 6.5% achieve universal presence.
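To make the bucketing concrete, here’s a minimal sketch of how a classification like this could be computed from a citation log. The threshold choices are assumptions for illustration (in particular, treating “universal” as presence on at least seven of the eight tracked platforms), and the sample domains are hypothetical:

```python
from collections import defaultdict

def bucket_domains(citations):
    """citations: iterable of (domain, platform) pairs from a citation log.

    Buckets domains by how many distinct platforms cite them. Thresholds
    are illustrative assumptions: 'universal' here means seven or more
    of the eight tracked platforms.
    """
    platforms = defaultdict(set)
    for domain, platform in citations:
        platforms[domain].add(platform)

    buckets = {"single": [], "multi": [], "universal": []}
    for domain, seen in platforms.items():
        if len(seen) == 1:
            buckets["single"].append(domain)
        elif len(seen) >= 7:
            buckets["universal"].append(domain)
        else:
            buckets["multi"].append(domain)
    return buckets

# Hypothetical sample: one domain cited on all 8 platforms,
# one on a single platform, one on two.
sample = [("reddit.com", p) for p in range(8)] + [
    ("nicheblog.example", 0),
    ("toolsite.example", 0), ("toolsite.example", 3),
]
result = bucket_domains(sample)
```

The real analysis operates on millions of rows, but the grouping logic is no more exotic than this.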

Finding #1: Universal domains are UGC aggregators (and you can’t compete with that directly)

Here’s the top 10 list of universal domains by total citations:

  1. reddit.com – 7,328,267 citations across 7 platforms
  2. wikipedia.org – 4,289,547 citations across 8 platforms
  3. youtube.com – 2,661,056 citations across 7 platforms
  4. google.com – 1,652,610 citations across 8 platforms
  5. linkedin.com – 1,424,134 citations across 8 platforms
  6. g2.com – 1,219,726 citations across 8 platforms
  7. medium.com – 1,157,881 citations across 8 platforms
  8. forbes.com – 1,155,981 citations across 7 platforms
  9. nih.gov – 974,124 citations across 8 platforms
  10. zapier.com – 956,337 citations across 8 platforms

There’s no sidestepping the pattern. Seven of the top 10 are platforms where users create content, not publishers creating their own.

Reddit aggregates community discussions, Wikipedia aggregates crowd-sourced knowledge, YouTube aggregates user videos, G2 aggregates reviews, and so on.

Even the exceptions lean on aggregation. Forbes has contributor networks, while Zapier publishes integration guides and user-submitted workflows. The NIH hosts research papers from many different researchers.

Universal domains in LLM citations

The domains achieving universal AI presence are structured to aggregate millions of contributions from millions of users across millions of topics.

You can’t build the next Reddit. Neither can I. That ship sailed 15 years ago (and required venture funding and a tolerance for chaos that most businesses don’t have).

But—and this is the important part—you can optimize your presence on Reddit. And Wikipedia. And LinkedIn. 

Finding #2: The citation gap is massive (and it tells us what LLMs trust)

The average citations by platform strategy:

Universal domains get cited 26 times more than multi-platform domains and 182 times more than single-platform domains.

Citation Volume by Domain Strategy

This is yet another data point showing us that LLMs heavily favor user-generated content and community wisdom when answering queries, especially decision-oriented ones. These sites are structured to provide the exact format of information LLMs trust: community-vetted, multi-perspective, experiential content.

This aligns with what we already knew from the school of SEO: third-party signals are important. In the AI era, “off-page” just takes on renewed importance. You need to have a consistent, ironclad presence on the third-party platforms AI systems already perceive as aggregators of truth.

Finding #3: URL diversification is a structural outcome of UGC

One of the clearest patterns separating universal domains from everyone else is that they have tens of thousands—sometimes hundreds of thousands—of unique URLs getting cited.

Compare that to domains with over-concentrated citations (where 70%+ of citations go to a single URL):

URL citation across LLMs scatterplot

Not surprising.

Reddit has 678,255 URLs because it has millions of users creating posts and comments every day across tens of thousands of subreddits. That diversity emerges from the inherent structure of the platform.

Wikipedia has 111,823 URLs because it documents everything and relies on global contributors. YouTube has 366,197 because millions of creators upload videos.

These platforms win on diversification because they’re designed to aggregate. Every new user, post and video is a new potential citation target.
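The “over-concentrated” threshold from earlier (70%+ of citations landing on one URL) is easy to express as a ratio. A minimal sketch, with hypothetical domain data:

```python
def citation_concentration(url_counts):
    """Share of a domain's citations that go to its single most-cited URL.

    url_counts: mapping of URL -> citation count for one domain.
    A value above 0.7 matches the over-concentration threshold
    described above.
    """
    total = sum(url_counts.values())
    return max(url_counts.values()) / total if total else 0.0

# Hypothetical domains for illustration:
diversified = {"/guide": 120, "/review": 100, "/compare": 95, "/faq": 85}
concentrated = {"/landing": 900, "/about": 50, "/blog": 50}

assert citation_concentration(concentrated) > 0.7   # over-concentrated
assert citation_concentration(diversified) < 0.7    # diversified
```

Tracking this number per domain over time is one way to tell whether new content is actually spreading your citation surface or just stacking onto one page.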

You’re not going to publish 678,000 pages (and if you tried, most of them would be low-quality filler). But you can create strategic content on these platforms:

Your focus isn’t to match UGC platforms on volume, but rather to place high-value content where LLMs are already looking.

Finding #4: Query frames have distinct winners (and some are more monopolized than others)

Along with platform presence and URL diversity, there’s another dimension worth examining: which domains take over specific query types.

We tracked seven primary query frames based on how people ask questions:

For each frame, we looked at which domains get cited most often and whether those citations cluster around specific players or distribute more evenly.

The pattern is stark. Some query frames are monopolized, while others are wide open:

Reddit’s dominance in alternatives queries is particularly interesting, though not surprising. People asking for alternatives want real user experiences and Reddit delivers—as far as LLMs are concerned, better even than G2. 

Reddit monopolizes alternatives query citations in LLM search

This pattern repeats across frames:

The strategic implications for GEO

It’s not news that you need to care about more than just your owned channels to succeed in AI search. But the sheer magnitude probably is.

Universal domains get cited 182 times more than single-platform domains. And those universal domains are almost exclusively UGC aggregators: Reddit, Wikipedia, YouTube, LinkedIn, Medium.

This isn’t a sign to abandon owned content. It’s telling you that third-party presence is where the bulk of citation volume lives and treating it as a nice-to-have instead of a strategic imperative means you’re ignoring where a lot of the game is being played.

You need both a really good foundation of owned content and really good third-party hygiene.

Your owned content establishes what you do and how you do it from your POV. Product pages, documentation, blog posts, and case studies are the structure. But LLMs don’t just pull from your site when someone asks about your category. They pull from Reddit threads comparing tools in your space, G2 ratings, third-party listicles, and on and on it goes.

Even when those third-party mentions don’t get directly cited, models are pulling information from them to form their understanding of your brand. When they recommend you in certain contexts or position you against competitors, they’re drawing on everything that’s out there about you, not just what you publish.

You can’t control every mention, but you can influence the narrative through judicious presence on UGC platforms, engagement with review sites, partnerships with publishers and monitoring what’s being said in spaces where your audience is active.

That’s where the citations are. That’s where the broader information ecosystem that forms brand understanding lives. 


Methodology

Data collection period: May 2025 to October 2025

Platforms tracked: ChatGPT Plus, ChatGPT Pro, Claude 4 with search, Perplexity (free tier), Gemini, Gemini with search, Google AI Mode, Grok, and Copilot.

Citation logging: We tracked every domain cited in AI responses, including URL-level granularity, query metadata, and timestamp data. Any domain appearing at least once during the study period was included in the dataset (n=2,384,921).

Due to dataset size, we analyzed summary statistics across all 2.4M domains and pulled detailed examples from the top 1,000 performers in each category. For consistency and URL diversity analyses, we filtered to domains with at least 100 citations to focus on meaningful patterns rather than outliers. For frame analysis, we examined the top 50 domains per frame type.

Key Takeaways:

Our concept of what a website is hasn’t fundamentally changed from the first, static Dreamweaver-built monstrosities of yesteryear.

It’s a business card. You want yours to be classy, like a pale nimbus background with raised lettering. Maybe a Silian Rail font.

And every CMO feels understandably protective of that business card. Content agencies have built an entire industry on supporting this reflex. For the last couple of decades, the core toolkit has barely changed: you go in and clean up the technical SEO, deliver a content strategy, fire up the blog machine.

There have certainly been upheavals along the way. The walled gardens of social media corralled many eyeballs into just a few isolated pastures within the internet’s great expanse. You could no longer completely control the message, but it was close enough that you could still say you “owned” your marketing assets and keep a straight face.

Your website, your blog, and your social channels. That is the holy trinity of traditional agency optimization.

For a long time, the rules were stable. Google rewarded the signals you could actually shape. Things like your site quality and your content depth and your link authority. So, agencies built entire workflows around those signals. You knew what mattered, and you knew how to move the needle.

But LLMs … don’t really care about any of that. They scour the rest of the internet. And that’s scary because neither you, nor your clients, control all those other parcels of land.

LLMs might key in on some Reddit thread from 2019, or a years-old Capterra review, or a Substack article that’s part product comparison and part anecdote.

These are the things that end up feeding the model’s understanding of your client. And if these third-party sources talk about your client loudly, or incorrectly, or (worse!) not at all, that’s what the AI parrots. It’s scraping consensus signals, and those signals no longer live on the landing page.

All the stuff you’ve been doing still works, of course. This is not an “SEO is dead” moment (just like the last time wasn’t. Nor the time before that). After all, 40% of AI citations are pulled from the top 10 SERP results.

But, it does mean 60% of citations aren’t covered in your traditional playbook. AI is like an expansion pack to the game content marketers are used to playing.

In case you missed it: our GEO playbook for agencies.

Why Third-Party Placement Is a Service Line

If you’ve been watching how LLMs answer questions, you’ve probably noticed they love certain corners of the internet. They keep circling back to places like Reddit, Quora, Medium, Hacker News.

In fact, nearly 22% of AI citations come from user-generated content like this. Our most recent analysis of 2.4 million domains across eight AI platforms found that seven of the top 10 most-cited domains are UGC platforms. Reddit alone had 7.3 million citations. Wikipedia had 4.3 million. YouTube, LinkedIn and Medium all follow the same pattern.

LLMs love these places not because they’re polished or authoritative in the way you’d expect from traditional E-E-A-T content, but because they’re crowded with people talking in detail about problems and the solutions to those problems.

Sometimes, those conversations are inaccurate or outdated. But the model has no reason to correct them when it feeds you those conversations as answers. That, as we’ll explore below, is your job.

Once you understand where the model likes to pull from, though, you can put your marketing strategist hat back on, open up the playbook, and start updating the oldies but goodies based on this new generative engine optimization (GEO) battleground.

It requires relationship capital

This is where agencies earn their keep, because AI doesn’t do diplomacy (yet).

Publishers aren’t always going to want to hear your pitch. If you want them to update a listicle, it needs to be helpful, first and foremost, for them and their readers.

It’s the difference between:

“Hey, I noticed this line is outdated. Here’s the correct information, with a source to verify it. Hope this was helpful!”

and

“Here’s why my tool, EIE.io, is better for enterprise ag producers than the Old Mac app.”

This is time-consuming, and it doesn’t scale. Fortunately, gaining just a few extra brand mentions can drive 1,000 new citations from LLMs, so we’re not talking about days and days of work here. You can afford to take your time and personalize your messages.

User-generated content platforms work differently, but the principle is the same: be authentic or bounce.

Hacker News readers have a sixth sense for brand plants. Reddit mods will gleefully vaporize anything that smells like a PR initiative.

Your presence in these communities has to be slow, steady, and, above all, helpful. That means adding value by answering questions and hardly, if ever, mentioning your client unless it is really, truly the best answer.

This is high-touch work that requires deep expertise. It isn’t commodity content work clients can get from Fiverr. Fortunately, that higher barrier to entry equates to better margins for your agency.

It’s recurring revenue

Generating third-party mentions requires ongoing management.

There are always going to be new listicles popping up while older ones fall out of date. Subreddits are always rehashing arguments, and those new threads get picked up by the LLMs, so you need to make sure your voice is in the mix there, too.

You can’t fix things once and expect your client’s narrative to stay put. Maintaining their reputation is a constant battle, which means it fits best within a retainer model.

In addition to monitoring your client monthly, there will be intermittent bursts of outreach work when something important changes. That makes it a stable, profitable service line.

It’s a hedge against SEO commoditization

Before SEO became a profession, nobody was thinking in terms of keywords or topic clusters. People wrote recipes, built fan pages, and posted angsty LiveJournal entries, but it all just sat out in the ether in a disorganized mess.

Half the magic of the old internet was how chaotic it all was. You’d search for something and stumble onto a blog and it felt like you’d discovered a secret world in the back of your wardrobe.

The downside, of course, was that finding anything specific was difficult. That’s what Google fixed. Backlinks as a proxy for authority was a brilliant idea that made the internet far more usable. But, it also kicked off the longest-running cat-and-mouse game in marketing: Google trying to surface genuine expertise, and everyone else trying to look like genuine experts.

Over time, the “right” way to write so you appeared atop the SERPs became fairly codified. You got your hub-and-spokes and ultimate guides. FAQs and key takeaways appeared in every article because they became part of a checklist.

Along the way, a lot of content ended up sounding exactly like its neighbors. Then AI showed up, trained on all that sameness, and turned the dial to max. If SEOs were already drifting toward a shared voice, AI took that voice and blended it into an even smoother puree, then made it cheap enough to crank that SEO smoothie out for pennies a pound.

So, now you’ve got two forces flattening content at once. Writers are adapting to Google’s preferences, and AI is learning from those writers. This templated stuff is now a commodity that can only compete on price. And competing on price is a race with only one destination: the bottom.

Third-party placement, on the other hand, is stubbornly human work. 

You have to actually understand which sites matter in your client’s space, because they’re not always the ones with the highest domain rating. Then, you have to figure out who maintains that content and write outreach that captures their attention. That requires a level of category fluency that lets you position a client as the right answer for the page without overselling it.

This is how the best agencies will start to move up the food chain from mass production to strategic visibility work.

The operational framework

None of the work that goes into third-party visibility is mystical or hand-wavy. What surprises most people when they first dig in is how familiar the tasks actually end up feeling. 

The terrain outside your familiar website SEO work may be a bit different. You’re not auditing title tags or rewriting H2s, but the instincts you honed doing that work will serve you well here.

Phase 1: The third-party visibility audit

In phase 1 for each of your clients, you will map:

Citation source categories

To understand your client’s citation footprint, you need to know which types of sources tend to shape the LLM’s understanding of their space.

This usually falls into recognizable buckets. There will be review aggregators like G2 and Capterra, listicles that rank the “best X tool for Y users”, trade pubs, Reddit communities, Wikipedia pages, and experts writing on Medium and Substack.

Different industries will have different gravitational centers. Some will have unusually active Reddit communities, while others have a well-respected industry newsletter.

Current visibility

Once you’ve mapped the categories, the next step is to see where your client actually shows up within the industry’s LLM ecosystem.

You want to find where the client is mentioned, and then figure out whether that source talks about your client accurately, whether it’s a recent citation, and whether the mention is positive or negative. Also look for citations where your competitors are mentioned, but your client isn’t.

Now, research those sources a little more deeply. Figure out what their domain authority is and how often they are cited by LLMs.

This snapshot becomes the raw data set from which you’ll work going forward. It’s the baseline from which you’ll show your client how you’ve improved their visibility.

Gap prioritization

Once you’ve gathered information about the current state of play, your next step is to sort out what deserves your attention first. Not all gaps are created equal.

This step is like prioritizing keywords. Only, instead of search volume, difficulty, and intent, you’ll use authority and influence to sort your priorities.

Every category is different, but in general you can sort into the following tiers:

At the end of this phase, you’ll have a comprehensive third-party visibility audit alongside a prioritized opportunity list to deliver to your client.

Phase 2: Strategic Outreach

With a visibility audit and an opportunity list in hand, it’s time to execute.

This is the part that doesn’t scale neatly. That’s OK. It’s why your clients are willing to pay top dollar for your expertise, skills, and influence in the industry.

For listicle and review site inclusion

There are likely to be dozens and dozens of listicles and review sites that you could chase on behalf of your clients. We encourage you not to get sidetracked pitching every client to every list.

Instead, work to build relationships with high-authority sources within your agency’s verticals. As you prove yourself again and again to be a useful contact, you’ll build trust as a resource they can turn to when it comes time to update their content.

Here’s what the process looks like:

  1. Identify the decision-maker. This is likely someone like a publication editor, a list curator, or the site’s owner. If there isn’t a byline, look for a contact in the masthead or in an “about” section. You can also try to find someone on LinkedIn associated with the site who has “editor” or something similar in their title. One last option is a quick WHOIS lookup to reveal the domain registrant.
  2. Research update frequency. If there isn’t a date attached to the post, you can sometimes get clues from the screenshots used within it, the features highlighted about each tool, or by following the outbound links to see how old those pages are.
  3. Understand inclusion criteria. Look at who’s already on the list and how they’re described. Patterns in the entries usually reveal what the curator values, whether it’s pricing transparency, UI/UX, the availability of a free tier, or integrations. Whatever shows up again and again through the list is probably what they’re optimizing for.
  4. Craft a non-spammy pitch. The key here is to be specific and to add value. Point to the exact line or section you’re hoping to update. Then, explain what’s changed and why this information helps their readers. Give them a clean, ready-to-paste version along with a verified source for the information.
  5. Provide easy-to-use assets. Along with the copy and a source link, you might include screenshots, comparison data, and any other information that will help inform their readers.
  6. Follow up strategically. Give your contact time to respond and make the changes. They’re busy, too, after all. After a week or two, it’s OK to give them a polite nudge in the form of an email that references your original note. If they don’t respond after this nudge, though, let it go. You don’t want to get a reputation as pushy or spammy.

For Reddit/Quora presence

It is so, so, so tempting to go full marketer mode here. Resist the urge at all costs. If you’ve chosen well, these communities are full of your client’s exact audience, so missteps will have outsized consequences for your client’s reputation.

We recommend identifying 5-10 high-value threads in a client’s category. Don’t go reviving dead threads. These should be live, active conversations. It’s better to be patient and wait for the right thread to emerge than to rampage through the subreddit like a bull in a china shop.

When you do engage, make sure it is to provide genuinely helpful answers, not pitches. Only mention your client if doing so actually answers the question in a useful way. Placement should be the cherry on top of your reputational sundae.

For Wikipedia

Wikipedia wants to know if your client has been covered in independent, reputable sources like mainstream publications, industry press, scientific research, or books. If all you’ve got are blog posts and press releases, Wikipedia doesn’t consider you notable, and that’s OK.

If your client does meet the bar for inclusion, then be sure to follow Wikipedia’s editing protocols strictly. What you write must be backed by a reliable, third-party source. You should summarize your client neutrally, without spin. Wikipedia will absolutely remove any promotional content.

For content syndication

First, let’s clarify what we’re talking about here. There’s “syndication” in the PR-network sense where you pay a few hundred bucks and end up splattered across 50 sites. That’s not what we’re talking about here.

The kind of syndication you want is the editorial kind. You want to find a partner where your client’s perspective will make the partner’s publication better for their readers. That’s because you’re going to be repurposing your client’s best-performing content (with their permission, naturally) for syndication.

You’ll know it’s the right kind of publication because it will include disclaimers that say “originally published in …”, usually at the top of the article. What these publishers are looking for is usually primary research and analysis or perspectives that test the establishment view on a topic.

Look for editors who want a regular cadence of articles from your client. Done well, this kind of work raises the publisher’s prestige while also framing your client as an authoritative thought leader in the industry.

Make sure to give the editor the canonical link, an author bio, and the exact copy they should use to credit your client so everything is attributed appropriately.

Writesonic does this for you

If you’ve made it this far, then you probably agree the opportunity for agencies here is pretty huge.

You may also have identified a problem.

Almost any agency could offer third-party placement. Agencies are pros at running audits, building relationships, and pitching editors. The work isn’t the issue.

But systematically identifying where your dozens of clients should be mentioned across hundreds of potential sources is a bottleneck that could kill this service before it ever launches.

What you’d need to do manually

Let’s take a look again at the work involved in building a client’s third-party visibility.

First, you’d have to check hundreds of high-authority listicles in each client’s category to see if they’re mentioned. You’ll need to cross-reference the information in those listicles against your client’s latest feature updates to identify errors that need to be corrected or updated. That’s laborious, but you could get away with doing this just monthly. So, this alone is probably manageable.

Unfortunately, monitoring communities like Reddit, Quora, Medium, and Substack has to be done almost daily or you risk missing relevant discussions as they happen in real-time.
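To make the daily monitoring step concrete, here’s a minimal sketch of the core logic: scanning fetched thread titles for client and competitor brand names. The thread data and brand names are hypothetical; in practice you’d pull threads from each platform’s feed or API and track every brand in the category.

```python
# Minimal sketch of daily community monitoring: flag threads that mention
# tracked brands. "AcmeApp" and "RivalTool" are made-up names; real use
# would fetch live threads from each platform rather than a sample list.

KEYWORDS = {"client": ["acmeapp"], "competitor": ["rivaltool"]}

def flag_threads(threads, keywords=KEYWORDS):
    """Return threads mentioning any tracked brand, tagged by who is mentioned."""
    flagged = []
    for t in threads:
        title = t["title"].lower()
        for label, terms in keywords.items():
            if any(term in title for term in terms):
                flagged.append({"title": t["title"], "mentions": label})
    return flagged

sample = [
    {"title": "Anyone tried AcmeApp for reporting?"},
    {"title": "RivalTool pricing just changed"},
    {"title": "Best lunch spots downtown"},
]
print(flag_threads(sample))
```

Even this toy version shows why the work compounds: each new client or competitor multiplies the keyword list, and the thread feed never stops.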

Now, remember you have to do all this work not just for client mentions, but for all their competitors as well.

For one client, this is 10-15 hours of research. For a roster of 10+ clients, it quickly becomes untenable.

What Writesonic’s Action Center Does

Instead of spending a week shining a flashlight into every nook and cranny of the internet hoping to spotlight opportunities, Writesonic’s Action Center gives you a dashboard.

On this dashboard, you’ll see where the gaps are. It shows:

Writesonic also gives you the data you need to sell the service:

So, instead of 15 hours of manual research per client, you’ll automatically receive a comprehensive, ready-to-present audit report and a prioritized outreach list.

In addition to those high-value client deliverables, you can use the Action Center to automatically monitor mentions of both your client and their competitors. That way, those deliverables remain living, useful documents instead of dusty PDFs rotting in some forgotten folder on your shared Google Drive.

You’ll be able to provide monthly reporting to clients, and have an ongoing list of outreach tasks to pursue on their behalf. All of this slots neatly into your recurring monthly revenue model.

Without this tool, you’re looking at manual tracking that produces inconsistent data because it depends on who on your team does the research. There’s no systematic way to prioritize your outreach because it comes down to the researcher’s gut feel on what matters. And you’ll struggle to prove whether placements are actually improving AI citation rates.

The Action Center offers infrastructure that automates discovery across citation sources, provides a consistent methodology you can use for all your clients, delivers high-value data for your clients and ROI tracking that shows the value you bring to the table.

If you want to see what this looks like for your clients’ categories – what placement opportunities exist, where competitors are mentioned, what sources you should go after – we’ll give you a walkthrough.

Book a demo and we’ll pull a real audit for your space so you can see how big the opportunity really is.

Key Takeaways

  • Listicles win regardless of query type. 20-32% citation share across every frame tested. Troubleshooting queries show the lowest listicle performance (19.74%), and that’s still nearly 1 in 5 citations.
  • Intent matching works, but it’s not surprising. Pricing queries pull pricing pages 5.88x more than baseline. Comparison queries pull comparison content 3.34x more. Alternatives queries surface competitor pages 5.37x more. Platforms respond to explicit intent signals the way you’d expect.
  • Reviews explode in troubleshooting contexts. 8.9x lift—the most extreme multiplier in the dataset. Reviews jump from <1% to 7.87% of citations when users search for fixes and bugs. Likely explanation: reviews mention problems users encountered, and platforms match those snippets to troubleshooting queries even though reviews don’t actually solve anything.
  • Product pages beat pricing pages in pricing queries. Pricing pages get a 5.88x lift, but product pages still capture more total citations (8.64%). Platforms prefer comprehensive context over isolated pricing information.

Welcome back to our AI search lab. Last time, I analyzed LLM citation patterns in branded vs. non-branded prompts. This week, I wanted to find out whether divergence in query framing—how to do X, Product A vs Product B, what is Y, best tools for Z—produces meaningful changes in what gets cited.

The data landed somewhere between “mostly predictable” and “why is that happening?”

The assumption going in was that platforms would heavily adjust citation patterns based on intent. If someone’s asking how to do something, they’d prioritize tutorials. If someone’s comparing products, they’d surface comparison content. Basic intent matching.

The reality is more subtle than that. Grab some coffee while I break down the best insights and what they mean for your GEO strategy.

Finding #1: Listicles stay dominant everywhere

Listicles account for 20-32% of citations across all query types. That’s a 1.6x range, which is basically nothing compared to most content types.

Listicles stay dominant across AI citations regardless of frame type

Even in troubleshooting contexts, where listicles perform worst, they still capture nearly 20% of citations. This matches what we saw in the industry analysis and the branded query study: listicles work everywhere. Query framing changes a lot of things, but it doesn’t dethrone listicles as the format platforms default to.

This is good news if you’re already publishing them. It’s also confirmation that you can’t ignore them just because your vertical feels “different.”

Finding #2: The obvious matches are mostly what you’d expect

Platforms are reasonably good at matching content to explicit intent.

LLMs respond to user query intent

These aren’t shocking revelations, but they’re worth confirming. When users explicitly signal their intent, platforms respond accordingly. If someone searches “Writesonic pricing,” they’re getting pricing pages. If they search “Writesonic alternatives,” they’re getting competitor comparison content.

The lifts are consistent across platforms too.

Finding #3: Reviews explode 8.9x in troubleshooting queries

Reviews account for less than 1% of citations in most contexts (0.88% baseline). In troubleshooting queries, they jump to 7.87%. 

That’s an 8.9x lift and the most extreme multiplier in the entire dataset.

Review content explosion in troubleshooting queries

When users search “why isn’t Slack loading my messages” or “Zoom freezing during calls,” platforms prioritize review content over nearly everything else. Reviews jump from less than 1% of citations to almost 8%.

This doesn’t make obvious sense. Reviews aren’t troubleshooting guides, they’re product evaluations. Why would they be relevant when someone’s trying to fix a problem?

A possible explanation (and my best guess) is that reviews often mention bugs, issues and problems users encountered. If someone leaves a review saying “great product but crashes on mobile” or “love it except for the sync issues,” that content might match troubleshooting queries. Platforms could be pulling review snippets where users describe similar problems, even if those reviews don’t provide solutions.

But that’s just a hypothesis. 

Other troubleshooting lifts:

The FAQ lift makes sense, as FAQs address common issues. Press releases might surface because companies announce patches and fixes. But as for why case studies lift 2x in troubleshooting contexts, that’s another interesting conundrum.

What’s undeniable is the review lift. Whether that’s good content matching or platforms struggling to find actual troubleshooting guides is an open question.

Finding #4: How-to queries favor instructional formats 

How-to queries show the expected preferences for educational content.

Nothing wild here. Platforms distinguish between “teach me” and “help me decide” intent. How-to queries suppress comparison pages (0.20x), competitor pages (0.14x) and reviews (0.30x).

Finding #5: Pricing queries surface product pages over pricing pages

Pricing pages get a 5.88x lift in pricing queries (0.61% → 3.57%), which makes sense. But product pages get cited at 8.64% in pricing contexts, significantly outperforming dedicated pricing pages.

We’re seeing a pattern similar to the one in the branded vs. non-branded study. In contexts where you’d assume pricing pages would be the go-to choice (branded prompts and pricing queries), LLMs prefer comprehensive product pages with context, feature explanations, and pricing together rather than pricing in isolation.

Meanwhile, competitor pages don’t move in comparison queries (0.17% baseline → 0.17%). You’d think “Slack vs Teams” would prioritize dedicated competitor comparison pages, but platforms prefer broader comparison pages (4.85%) that analyze multiple options rather than binary matchups.

Platform biases are there but they don’t dominate

Most platforms follow similar patterns, but a few show distinct preferences.

Platform-specific citation biases

Claude over-indexes on competitor pages

Competitor pages get 4.08x over-representation in what-is queries and 3.87x in list queries on Claude. When users ask “what is Writesonic” or “best project management tools,” Claude disproportionately pulls competitor comparison content.

ChatGPT prefers case studies

Case studies get 1.66-1.80x over-representation across multiple frame types on ChatGPT. No other platform shows this preference. If you’re publishing case study content, ChatGPT is your best distribution channel within AI search.

Grok favors aggregator roundups

Grok cites aggregator roundups 1.57-2.00x more than average across nearly all query types. 

Suppressions are bigger than lifts

Some content types get suppressed in specific contexts:

Platform-specific suppressions in query types

These suppressions are often larger than the lifts. Comparison pages drop to 0.29% in how-to contexts from 1.45% baseline. That’s a 0.20x multiplier and far more dramatic than most positive lifts.

Intent-based optimization works in AI search the same way it works in SEO. Users signal what they want, platforms attempt to match that intent and specific content formats perform better in specific contexts.


Methodology: Analysis based on 282,828,738 citations across 7 frame types (what is, how-to, comparison, pricing, alternatives, troubleshooting, list/best) and 16 content types. Lift calculated as (frame % / baseline %) where baseline represents average citation rate across all frames. Platform biases calculated as (platform % / average %) for each frame-content combination.
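The lift formula from the methodology is easy to reproduce. The figures below are the review percentages quoted earlier in the article (0.88% baseline, 7.87% in troubleshooting queries).

```python
# Lift = frame citation share / baseline citation share, per the methodology.
# Percentages are the review figures quoted in the article.

def lift(frame_pct, baseline_pct):
    """Multiplier showing how much more often a content type is cited in a frame."""
    return frame_pct / baseline_pct

reviews_baseline = 0.88          # % of citations across all frames
reviews_troubleshooting = 7.87   # % of citations in troubleshooting queries

print(round(lift(reviews_troubleshooting, reviews_baseline), 1))  # ~8.9x
```

The same two-line calculation yields every multiplier in the post, so you can sanity-check any lift claim against its raw percentages.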
