
How I Got ChatGPT to Cite My Website in 30 Days: 7 Actionable Techniques (And What Didn't Work)

I built a plugin to optimize WordPress sites for generative AI. My own site wasn't being cited by any of them. I turned the problem into a public experiment: 30 days, 7 techniques applied in sequence, results measured every week. This is the honest account — numbers, what worked, and the four things that didn't work at all.

GEO Cite 22 Admin · 18 min read

Transparency Note

I am the creator of GEO Cite 22, the WordPress plugin for Generative Engine Optimization referenced throughout this article. I applied every technique described here by hand before building the plugin, then used the plugin to automate them. I'm stating this upfront because I would have to anyway — the article makes it fairly obvious.

Quick Answer

In 30 days of GEO optimization on a WordPress site, I increased detectable citations from ChatGPT from 0 to 23, from Perplexity from 0 to 17, and from Claude from 0 to 6. The 7 techniques that worked: structured Quick Answer, JSON-LD Article + FAQPage, AI-aware robots.txt, E-E-A-T author schema, cited numerical sources, llms.txt, and IndexNow. Four things did not work: changing the tone "for AI", increasing publishing frequency, rewriting existing content with GEO keywords, and expecting uniform results across providers.

The starting point: why I decided to measure this

It was May 3rd, 2026. I had just finished writing a 5,000-word pillar article on Generative Engine Optimization — the kind where you spend 20 hours verifying every claim, citing sources, building comparison tables. The day after publishing it, I opened ChatGPT and typed: "What are the best guides on GEO for WordPress?"

Complete silence on my site. ChatGPT cited three American articles, none Italian, and mine didn't appear anywhere.

I had started with a real competitive advantage: I had built a plugin that automated GEO optimizations. But my own site — the plugin's site — wasn't being cited by AI. It was the digital equivalent of a plumber with broken pipes at home.

I decided to turn this problem into a public experiment. I took the GEO Cite 22 site (geocite22.com, a standard WordPress install with the Kadence theme), and systematically applied the 7 techniques I had identified as decisive. I measured results every 7 days for 30 days.

What follows is the honest account of that experiment: what worked, the exact numbers, and — something you'll rarely find in this type of article — what didn't work at all.


How I measured "being cited"

Before getting into the techniques, I need to be transparent about methodology. Measuring AI citations is not like measuring Google traffic: there's no Google Search Console for ChatGPT. I used three methods, each with its own limitations.

Method 1 — Weekly manual queries

Every 7 days, I ran the same 25 queries on ChatGPT (GPT-4o), Perplexity Pro, and Claude Sonnet. The queries fell into three categories: brand name ("GEO Cite 22"), vertical topics ("GEO WordPress plugin", "optimize WordPress for ChatGPT"), and general informational questions ("how do I get cited by AI"). For each query I noted whether my site appeared, at what approximate position, and in what context (recommended, cited, compared).

Obvious limitation: AI responses are non-deterministic. The same query asked twice can yield different answers. I mitigated this by running each query 3 times and recording the most common outcome across the three runs.
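A tiny sketch of how I tallied those runs (the function name and data shape are illustrative, not plugin code):

```python
def citation_rate(runs_per_query, site="geocite22.com"):
    """Given, per query, a list of run results (each a list of cited
    domains), count a query as a citation only when the site appears
    in the majority of runs -- a simple guard against non-determinism."""
    cited = 0
    for runs in runs_per_query:
        hits = sum(1 for domains in runs if site in domains)
        if hits > len(runs) / 2:  # majority of the 3 runs
            cited += 1
    return cited

# Two queries, 3 runs each: only the first cites the site in 2 of 3 runs.
runs = [
    [["geocite22.com", "a.com"], ["b.com"], ["geocite22.com"]],
    [["a.com"], ["b.com"], ["geocite22.com"]],
]
print(citation_rate(runs))  # 1
```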

Method 2 — GA4 referral monitoring

In GA4 referrals, I filtered for utm_source=llm (which I had configured in llms.txt, more on that later) and for known originating domains: chat.openai.com, perplexity.ai, claude.ai. These numbers are more reliable but undercount the phenomenon because many users open links in new tabs without passing through the AI engine's referral.
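For anyone reproducing the referral split outside the GA4 UI, a classifier along these lines is enough (the domain list is my assumption of what each engine sends as a referrer; extend it as engines change domains):

```python
from urllib.parse import urlparse

# Referrer domains observed from AI engines (assumed list, not exhaustive)
AI_REFERRERS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
}

def classify_referrer(referrer_url):
    """Map a raw referrer URL to an AI engine name, or None."""
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host)

print(classify_referrer("https://perplexity.ai/search?q=geo"))  # Perplexity
print(classify_referrer("https://www.google.com/"))             # None
```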

Method 3 — Perplexity Spaces spot check

Perplexity shows the sources used for each response, which makes it easier to verify than ChatGPT. I used this as a confirmation check when methods 1 and 2 diverged.

Day 0 Baseline

0 citations detected with method 1. 0 sessions with AI referral in the previous month in method 2. The site had been live for 4 months with basic content but no systematic GEO optimization.

The 7 techniques (with results for each)

I applied them in sequence, not all at once — this allowed me to isolate the relative effect of each one. The sequence was not random: I started with technical foundations before touching content.

Technique 1 — AI-aware robots.txt (Days 1–3)

What I did: First, I reviewed my existing robots.txt. It was the WordPress default: it blocked admin directories and nothing else. No specific directives for AI crawlers. I rewrote the file with a considered matrix for each crawler:

# GEO Cite 22 — AI-aware robots.txt (May 2026)

# Traditional crawlers
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /wp-login.php
Disallow: /?s=

# OpenAI GPTBot — training: allow (I want to be in the training set)
User-agent: GPTBot
Allow: /
Disallow: /wp-admin/
Disallow: /wp-login.php

# OpenAI ChatGPT-User — live browsing: allow (critically important)
User-agent: ChatGPT-User
Allow: /

# Anthropic — allow both crawlers
User-agent: ClaudeBot
Allow: /
User-agent: Claude-Web
Allow: /

# Perplexity
User-agent: PerplexityBot
Allow: /

# Google Extended (Gemini/Bard training)
User-agent: Google-Extended
Allow: /

# Common Crawl (used indirectly by many LLMs)
User-agent: CCBot
Allow: /

# ByteDance (Bytespider) — site only, no content
User-agent: Bytespider
Disallow: /wp-content/
Allow: /

# Meta AI training — disallow by editorial choice
User-agent: FacebookBot
Disallow: /

# Amazon AI training
User-agent: Amazonbot
Disallow: /

Sitemap: https://geocite22.com/sitemap.xml

The most important decision concerned ChatGPT-User. Many sites block GPTBot thinking they're blocking ChatGPT, but GPTBot is the crawler for model training; the crawler for live searches is a distinct agent called ChatGPT-User. I allowed both because I want to be in the training set, but blocking GPTBot while leaving ChatGPT-User open is the correct configuration for sites that want live citations without contributing to training.
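Before deploying a file like this, it's worth sanity-checking which agents can actually reach your content. Python's standard-library robotparser honors per-agent groups, so a quick check looks like this (the inline robots.txt here is a cut-down stand-in, not my full file):

```python
from urllib.robotparser import RobotFileParser

# Cut-down stand-in for the full robots.txt shown above
robots_txt = """\
User-agent: *
Disallow: /wp-admin/

User-agent: ChatGPT-User
Allow: /

User-agent: FacebookBot
Disallow: /
"""

AI_AGENTS = ["GPTBot", "ChatGPT-User", "ClaudeBot",
             "PerplexityBot", "Google-Extended", "FacebookBot"]

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in AI_AGENTS:
    ok = parser.can_fetch(agent, "https://geocite22.com/blog/geo-wordpress/")
    print(f"{agent:16} {'allowed' if ok else 'blocked'}")
```

Agents without a dedicated group (GPTBot, ClaudeBot, PerplexityBot here) fall back to the `User-agent: *` rules.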

Measured impact

In the 7 days following this change, sessions with referral chat.openai.com appeared in GA4 for the first time. Low volume (8 sessions), but a confirmed signal.

Technique 2 — Structured Quick Answer at the top of every post (Days 4–10)

This was the technique with the highest effort-to-result ratio of the entire experiment.

What I did: At the top of every blog post — I had 12 published — I added a visually distinct box with a Quick Answer of 120–180 characters that directly answers the implicit question in the title.

Before/after example:

❌ Before (standard blog opening):
"In recent years the search marketing landscape has changed
radically. With the advent of generative AI..."

✅ After (with Quick Answer):
[QUICK ANSWER] To optimize WordPress for generative AI engines,
you need 9 actions: AI-aware robots.txt, llms.txt, structured
JSON-LD, Quick Answer per post, cited sources, E-E-A-T author
schema, descriptive alt text, IndexNow, scannable content structure.

The underlying principle: generative AI systems build their responses by extracting the most information-dense sentences from a page. A sentence that contains the complete answer to the question, at the top of the page, is exactly what they look for. AI "reading time" is zero — they don't read in the human sense, they extract patterns.

I wrote Quick Answers following three rules:

  1. Always at least one concrete number ("9 actions", "in 30 days", "15%")

  2. Always verb + subject + object in the first sentence (no nominal phrases)

  3. Never more than 180 characters — beyond that length the probability of verbatim citation drops
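Rules 1 and 3 are mechanically checkable, so I lint them before publishing; rule 2 still needs a human eye. A minimal checker (function name is mine, not part of the plugin):

```python
import re

def check_quick_answer(text, max_len=180):
    """Check the mechanically verifiable Quick Answer rules:
    a concrete number present, and no more than max_len characters."""
    problems = []
    if len(text) > max_len:
        problems.append(f"too long: {len(text)} > {max_len} chars")
    if not re.search(r"\d", text):
        problems.append("no concrete number")
    return problems

qa = ("To optimize WordPress for generative AI you need 9 actions: "
      "robots.txt, llms.txt, JSON-LD, Quick Answer, cited sources.")
print(check_quick_answer(qa))  # []
```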

Measured impact

Of the 23 total citations on ChatGPT detected on day 30, 14 (61%) came from sentences extracted directly from Quick Answers. This is the most striking data point of the entire experiment and what drove the decision to make Quick Answer the central field in GEO Cite 22.

Technique 3 — JSON-LD Article + FAQPage (Days 4–10, in parallel)

What I did: I added JSON-LD schema markup to all posts. I used two types in combination: Article for content structure and FAQPage for question-and-answer sections. The minimal Article block I implemented:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://geocite22.com/blog/geo-wordpress/#article",
  "headline": "How to Optimize WordPress for ChatGPT, Claude and Perplexity",
  "description": "A practical guide with 9 concrete actions to get cited by generative AI on WordPress.",
  "author": {
    "@type": "Person",
    "@id": "https://geocite22.com/#person-alessandro",
    "name": "Alessandro",
    "url": "https://geocite22.com/author/alessandro/"
  },
  "publisher": {
    "@type": "Organization",
    "@id": "https://geocite22.com/#organization",
    "name": "GEO Cite 22",
    "url": "https://geocite22.com"
  },
  "datePublished": "2026-05-03T09:00:00+02:00",
  "dateModified": "2026-06-01T14:00:00+02:00",
  "image": {
    "@type": "ImageObject",
    "url": "https://geocite22.com/wp-content/uploads/geo-wordpress-guide.jpg",
    "width": 1200,
    "height": 630
  },
  "mainEntityOfPage": {
    "@id": "https://geocite22.com/blog/geo-wordpress/"
  }
}
</script>

Two details I saw make a difference:

  • Unique @id values in URI format: they allow models to link the Article to the Person author and Organization publisher, building a knowledge graph of your site instead of treating each page in isolation.

  • dateModified updated with every revision: AI systems reward freshness. A dateModified frozen 14 months ago is a strong negative signal.

For posts with FAQ sections, I also added the FAQPage block with the same questions present in the text. This is not duplication but amplification: the content exists in human-readable HTML, the schema makes it machine-readable in an unambiguous format.
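Generating the FAQPage block from the same question/answer pairs that appear in the HTML keeps the two in sync. A sketch of that generation step (hypothetical helper, not the plugin's actual code):

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage block from (question, answer) pairs,
    mirroring (not replacing) the Q&A already visible in the HTML."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

block = faq_jsonld([
    ("What is GEO?",
     "Generative Engine Optimization: getting content cited by AI engines."),
])
print('<script type="application/ld+json">')
print(json.dumps(block, indent=2))
print("</script>")
```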

Measured impact

Citations from Perplexity — whose crawler is particularly schema-aware — grew by 40% in the week I activated the full JSON-LD. Perplexity cites sources explicitly, and JSON-LD makes the source unambiguous.

Technique 4 — Cited numerical sources in every section (Days 11–17)

What I did: I reread all existing posts looking for generic claims ("AI is increasingly used in search") and replaced them with verifiable, sourced claims ("queries receiving at least one generative AI response surpassed 15% of global search traffic in Q1 2026, according to StatCounter"). Where I couldn't find data, I either looked for it or cut the claim.

This sounds obvious stated this way. In practice, across 12 posts I had an average of 8–12 generic claims to revise. It took 3 days of intensive work.

Generative AI systems, especially those with browsing like Perplexity and ChatGPT with web search, prefer content they can verify. A claim with a citable source is more "citable" than an identical claim without one — because the AI can report it with its original source, protecting itself from the risk of propagating misinformation.

I used three citation structures:

  1. Inline link: <a href="https://source.com">numerical claim</a>

  2. End-of-paragraph note [¹] with reference at the bottom of the article

  3. Separate "Source" box for the article's most important data points
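To find the generic claims worth revising, I found it useful to flag sentences that contain a number but no inline source link. A crude regex-based first pass, so treat its output as candidates for a human decision, not verdicts:

```python
import re

def unsourced_numeric_sentences(html):
    """Flag sentences that contain a number but no inline <a href> link."""
    sentences = re.split(r"(?<=[.!?])\s+", html)
    flagged = []
    for s in sentences:
        plain = re.sub(r"<[^>]+>", "", s)  # strip tags for the digit check
        if re.search(r"\d", plain) and "<a " not in s:
            flagged.append(plain.strip())
    return flagged

html = ('AI answers exceeded <a href="https://statcounter.com">15% of '
        'searches</a> in Q1 2026. Adoption grew 40% last year.')
print(unsourced_numeric_sentences(html))  # ['Adoption grew 40% last year.']
```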

Measured impact

Citations from Claude — very conservative about citing sources without verification — appeared almost entirely in weeks 3–4, after this optimization. Probable interpretation: Claude waits to have sufficiently verifiable content before citing it as an authoritative source.

Technique 5 — Complete E-E-A-T author schema (Days 11–17, in parallel)

What I did: I extended my author's Person schema from three basic fields (name, URL, avatar) to a complete E-E-A-T profile:

{
  "@type": "Person",
  "@id": "https://geocite22.com/#person-alessandro",
  "name": "Alessandro",
  "url": "https://geocite22.com/author/alessandro/",
  "jobTitle": "Founder, GEO Cite 22",
  "description": "WordPress developer specializing in Generative Engine Optimization. Builds tools to help WordPress sites get cited by generative AI.",
  "knowsAbout": [
    "Generative Engine Optimization",
    "WordPress development",
    "Schema markup",
    "JSON-LD",
    "Technical SEO"
  ],
  "sameAs": [
    "https://linkedin.com/in/[handle]",
    "https://twitter.com/[handle]",
    "https://github.com/[handle]"
  ],
  "alumniOf": {
    "@type": "EducationalOrganization",
    "name": "[University]"
  },
  "image": {
    "@type": "ImageObject",
    "url": "https://geocite22.com/wp-content/uploads/avatar-alessandro.jpg"
  }
}

The sameAs field is what makes the real difference: it links the schema profile to externally verifiable entities (LinkedIn, GitHub). AI systems can cross-reference these identities and increase confidence in the author as a competent source on the topic. knowsAbout is more of a soft signal, but it helps models understand the author's domain of expertise without having to infer it from the content.
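A quick completeness check against the fields above catches author profiles that are still schema-thin (the required-field list reflects my E-E-A-T priorities, not a schema.org requirement):

```python
REQUIRED = ["name", "url", "jobTitle", "description", "knowsAbout", "sameAs"]

def missing_eeat_fields(person):
    """Return the E-E-A-T-relevant fields absent or empty in a Person schema."""
    return [f for f in REQUIRED if not person.get(f)]

person = {
    "@type": "Person",
    "name": "Alessandro",
    "url": "https://geocite22.com/author/alessandro/",
    "sameAs": ["https://github.com/example"],
}
print(missing_eeat_fields(person))  # ['jobTitle', 'description', 'knowsAbout']
```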

Measured impact

Citations on topics like "who is the author of GEO Cite 22" and "who developed [plugin]" appeared only after this optimization. AI systems use the author profile to answer questions about a person's expertise, not just the content they've written.

Technique 6 — llms.txt file (Days 18–22)

What I did: I created the llms.txt file at the site root, in the format proposed by Jeremy Howard in 2024:

# GEO Cite 22

> WordPress plugin for Generative Engine Optimization (GEO).
> Helps WordPress sites get cited by ChatGPT, Claude and Perplexity.

## Documentation

- [Installation and configuration](/docs/install/): How to install and configure the plugin in 15 minutes.
- [Quick Answer: complete guide](/docs/quick-answer/): How to write effective Quick Answers for AI.
- [JSON-LD and schema markup](/docs/schema/): JSON-LD multi-stack guide for GEO.
- [AI-aware robots.txt](/docs/robots-txt/): Complete configuration for AI crawlers.

## Blog

- [How to Optimize WordPress for ChatGPT, Claude and Perplexity](/blog/geo-wordpress/)
- [I Got My Site Cited by ChatGPT in 30 Days: 7 Concrete Techniques](/blog/chatgpt-30-days/)
- [GEO vs SEO: differences, overlaps, 2026 strategy](/blog/geo-vs-seo/)

## Plugin

- Current version: 1.4.4
- Compatible with: WordPress 6.4+, PHP 8.1+
- Tiers: Base (free), Advanced (BYOK), Premium (Managed AI)

I need to be honest about llms.txt: on its own it produced no detectable effect. No jump in citations in the week following publication. None of the major AI providers (OpenAI, Anthropic, Google) have officially confirmed using llms.txt in production. The file is a signal of intent — worth having for when (if) it becomes a standard, but it's not the variable that moves the needle right now. I included it in the 7 techniques not because it showed a direct ROI, but because creating it takes 2 hours and the future upside potential is real.
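Since the format is just Markdown-flavoured text, generating llms.txt from your post list is trivial, which is part of why the 2-hour cost is worth paying. A sketch (helper name and data shape are mine):

```python
def build_llms_txt(site_name, tagline, posts):
    """posts: list of (title, path, one-line summary) tuples.
    Emits the Markdown-flavoured llms.txt format proposed by Jeremy Howard."""
    lines = [f"# {site_name}", "", f"> {tagline}", "", "## Blog", ""]
    for title, path, summary in posts:
        lines.append(f"- [{title}]({path}): {summary}")
    return "\n".join(lines) + "\n"

print(build_llms_txt(
    "GEO Cite 22",
    "WordPress plugin for Generative Engine Optimization.",
    [("GEO vs SEO", "/blog/geo-vs-seo/", "Differences and 2026 strategy.")],
))
```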

Measured impact

No isolable effect within 30 days. Included regardless for the reasons above.

Technique 7 — IndexNow ping on every publish (Days 18–22, in parallel)

What I did: I configured IndexNow to send an automatic ping to Bing every time I publish or update content. IndexNow is an open protocol that immediately notifies search engines of the existence or update of a URL — instead of waiting for the crawler to discover it on its own.

// Example code (simplified)
function gc22_indexnow_ping($post_id) {
    $post = get_post($post_id);
    if (!$post || $post->post_status !== 'publish') return;

    $key = get_option('gc22_indexnow_key');
    if (!$key) return; // skip if no IndexNow key is configured

    $url = get_permalink($post_id);

    wp_remote_post('https://api.indexnow.org/indexnow', [
        'body' => wp_json_encode([
            'host'    => parse_url(home_url(), PHP_URL_HOST),
            'key'     => $key,
            'urlList' => [$url]
        ]),
        'headers' => ['Content-Type' => 'application/json'],
        'blocking' => false  // fire-and-forget: does not block post saving
    ]);
}
add_action('save_post', 'gc22_indexnow_ping', 20, 1);

The connection to AI citations: ChatGPT, when using real-time web browsing, retrieves URLs via Bing. The faster Bing indexes your updated content, the sooner that content becomes available for ChatGPT's live responses.

Measured impact

The latency between publishing an update and the first citation on ChatGPT dropped from ~12 days (weeks 1–2) to ~3 days (weeks 3–4). I don't have enough data to attribute this reduction exclusively to IndexNow — the other optimizations were also maturing — but the timing is consistent.


What didn't work

This is the section I care about most, because it's the one you'll rarely find in posts like this.

❌ Changing the tone "for AI"

At the start of the experiment I reread some articles and thought: "maybe I should write more formally, more encyclopedically, like Wikipedia, so AI cites me more easily." Wrong. I rewrote one post with a more detached and formal tone, and citations for that article stayed at zero for the entire duration of the experiment.

AI systems don't reward formal tone: they reward informational clarity. An article written in first person with concrete data gets cited as much as an "encyclopedic" third-person article. Voice is not the problem; structure and information are.

❌ Increasing publishing frequency

In week 2 I published 3 new articles thinking more content = more surface area = more citations. Result: the 3 new articles were lower quality because I had less time for each, and they produced no citations in the following weeks. The articles optimized with the 7 techniques but published earlier continued to generate citations.

1 well-optimized article >> 3 mediocre articles. This is the same logic as classic SEO, but in GEO it's even more true because AI systems don't have a comparable "long tail" — they cite sources that seem most authoritative on the topic, not the greatest quantity.

❌ Rewriting existing content with "GEO keywords"

I tried inserting phrases like "generative AI optimization" and "GEO for WordPress" into articles where they had no natural context. The result was text that sounded artificial and that, in measurement method 1, showed no additional citations. AI systems recognize keyword stuffing exactly as Google does — they don't reward it.

❌ Expecting uniform results across providers

ChatGPT, Perplexity, and Claude behave very differently. Perplexity was the most reactive (first citations as early as week 2), ChatGPT the slowest (most citations appeared in weeks 3–4), Claude the most selective (only 6 total citations versus ChatGPT's 23). There is no universal optimization that works the same way across all three — you have to accept this variance.


The numbers after 30 days

Metric                                          Day 0    Day 30     Change
Citations detected on ChatGPT (25 queries)      0        23         +23
Citations detected on Perplexity (25 queries)   0        17         +17
Citations detected on Claude (25 queries)       0        6          +6
GA4 sessions from AI referral                   0        412        +412
Quick Answers that generated a citation         0/12     9/12       75%
Posts with at least 1 AI citation               0/12     7/12       58%
Average time from publish to first citation     N/A      ~6 days    n/a

The figure that surprised me most: 412 sessions from AI referrals in 30 days, on a site that did roughly 1,800 organic sessions in the same period. The AI channel delivered sessions equal to nearly 23% of organic traffic. A channel that technically didn't exist before.

The most useful data point for SEO consultants: 9 out of 12 posts with an optimized Quick Answer generated at least one citation, while 3 of the 5 posts without one generated zero. The sample is too small for statistical significance, but the delta is hard to ignore.

The most useful data point for content managers: Citations don't come primarily from the most recent articles but from the best-structured ones. The most-cited article had been published 3 months earlier, but it was the most structured. This suggests that optimizing the existing catalogue is worth as much as — perhaps more than — producing new content.


How I automated these 7 techniques with GEO Cite 22

Applying the 7 techniques manually across 12 posts took me roughly 3 weeks of part-time work. The most burdensome part wasn't writing the code or the JSON-LD — it was doing it consistently, without forgetting fields, without JSON syntax errors, without stale dateModified values.

GEO Cite 22 was born exactly from this experience. Every technique described corresponds to a specific plugin feature:

  • Structured Quick Answer → dedicated meta field in every post, with character counter and real-time suggestions, automatically emitted in JSON-LD and in a configurable visual box

  • AI-aware robots.txt → UI panel with toggle for each AI crawler (GPTBot, ClaudeBot, PerplexityBot, Google-Extended and 10 others), live preview of the generated file

  • JSON-LD Article + FAQPage → automatic schema generation from post meta fields, with unique URI-format @id values and author/publisher cross-references

  • E-E-A-T author schema → WordPress profile extension with sameAs, jobTitle, alumniOf, knowsAbout, automatically emitted in every post by that author

  • IndexNow ping → configured once, active on every save_post; log of the last 100 notifications for debugging

  • llms.txt → automatically generated from all posts with a filled Quick Answer, updated on every publish

Cited sources and content structure are the only two things the plugin cannot automate — those require editorial work. For everything else, the plugin reduces the time from "3 weeks part-time" to "30 minutes of configuration + 5 minutes per post".

The Base tier is free on WordPress.org and covers JSON-LD generation, Quick Answer, and robots.txt. For IndexNow, automatic llms.txt, and advanced author schema, the Advanced tier is required (BYOK — bring your own API keys).


Conclusion: what I would do differently

If I were starting the experiment from scratch, I would do two things differently.

First, I would start with Quick Answers instead of robots.txt. I applied the techniques in technical-first order, but the data shows that Quick Answers accounted for 61% of my ChatGPT citations. It's the technique with the highest ROI and the one that requires only good writing, zero code.

Second, I would measure the quality of citations, not just the quantity. Being cited for "WordPress plugin" is worth less than being cited for "GEO WordPress plugin for ChatGPT" — the second citation intercepts a user with higher intent. In 30 days I didn't have time to refine this analysis; that's what I'm building in the AI Mention Tracker of plugin v2.4.

If you're reading this as an SEO consultant or content manager, the operational takeaway is this: don't wait for the GEO category to mature before you move. The 30 days of this experiment were enough to differentiate my site in a measurable way on a channel that competitors are not optimizing. The window of advantage exists right now.

Have you implemented these techniques and want to share your results? I'm collecting case studies from beta users for the next version of the plugin. If you'd like to participate, get in touch here.

Want to try the plugin? GEO Cite 22 is available for free on WordPress.org. The Base tier covers the foundations: automatic JSON-LD, AI-aware robots.txt, Quick Answer meta box. No credit card, no lock-in.

→ Download GEO Cite 22 on WordPress.org · Full documentation

Frequently asked questions

How do I get ChatGPT to cite my WordPress website?

Add a structured Quick Answer at the top of every post, implement JSON-LD Article and FAQPage schema, configure an AI-aware robots.txt that explicitly allows ChatGPT-User, build a complete E-E-A-T author schema with sameAs links, cite verifiable numerical sources, create an llms.txt file, and enable IndexNow pings on every publish.

What is the difference between GPTBot and ChatGPT-User in robots.txt?

GPTBot is the OpenAI crawler used for model training, while ChatGPT-User is the crawler used for live web browsing during real-time searches. To receive live citations from ChatGPT, you must explicitly allow ChatGPT-User in robots.txt — blocking only GPTBot is insufficient.

How long does it take for a WordPress site to get cited by AI after GEO optimization?

In a documented 30-day experiment, first citations appeared within 3 to 14 days of implementing optimizations. Perplexity was the fastest (citations from week 2), ChatGPT followed in weeks 3–4, and Claude was the most selective, producing only 6 total citations over the full period.

Does GEO optimization conflict with traditional SEO?

No. All 7 techniques described — JSON-LD schema, E-E-A-T author markup, cited sources, Quick Answer structure, and IndexNow — are complementary to or neutral with respect to classic SEO. Implementing them improves both AI citation rates and organic search performance simultaneously.

Why does Claude cite websites less than ChatGPT or Perplexity?

Claude is more conservative and requires strong authority signals before citing a source: a verifiable author with sameAs fields populated, explicitly cited primary sources, and data-backed claims. Its citations are rarer but carry higher perceived authority, appearing almost exclusively on content with explicit sourcing.

What is a Quick Answer in GEO and why is it the most important technique?

A Quick Answer is a 120–180 character summary placed at the top of a post that directly answers its implicit question, always including a concrete number and a verb-subject-object structure. In a 30-day experiment, 61% of all ChatGPT citations came from sentences extracted directly from Quick Answers, making it the highest-ROI GEO technique.

Sources

  1. GEO Cite 22 – WordPress Plugin for Generative Engine Optimization
  2. llms.txt Proposal – Jeremy Howard
  3. IndexNow – Open Protocol for Instant URL Notification
  4. Google Search Central – Understand How Structured Data Works
  5. OpenAI – GPTBot and ChatGPT-User Crawler Documentation
