Tutorial: Reverse-Engineer ChatGPT’s Fanout Queries

ChatGPT generates internal sub-queries before selecting citations — and the pages it cites score higher on those sub-queries than on the original prompt. This tutorial shows you how to intercept those fanout queries using Chrome DevTools, restructure your content around them, and use Ahrefs Brand Radar to measure and close the semantic gaps keeping you out of AI responses.



Reverse-Engineering ChatGPT’s Fanout Queries to Get Your Content Cited

A 1.4-million-prompt study by Ahrefs found that ChatGPT retrieves dozens of URLs per response yet cites roughly half of them — and selection isn’t random. By the end of this tutorial, you’ll know how to intercept ChatGPT’s internal search queries using browser DevTools, use those queries to restructure your content, and run cosine-similarity gap analysis in Ahrefs Brand Radar to identify exactly what your pages are missing before ChatGPT decides to ignore them.

ChatGPT retrieves dozens of URLs per query but cites only ~50% — the Ahrefs study that kicked off this investigation.
  1. Understand the ref_type citation hierarchy. ChatGPT tags every URL it retrieves with an internal ref_type field — one of five channels: search, news, Reddit, YouTube, or academia. Citation rates differ dramatically between them. Search accounts for 88.46% of all citations. Reddit, despite representing 67.8% of non-cited URLs in the dataset, is cited at just 1.93%. ChatGPT uses Reddit extensively to build context, then credits an institution instead. If your content doesn’t rank in organic web search, it won’t enter the citation pool regardless of how well-optimized it is for other channels.
Search-sourced URLs account for 88.46% of all ChatGPT citations — if your content doesn’t rank in web search, it almost certainly won’t get cited.
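
The channel math above is easy to reproduce on your own retrieval logs. A minimal sketch, assuming a hypothetical record shape with `ref_type` and `cited` fields — the Ahrefs dataset’s actual schema isn’t public:

```python
from collections import Counter

def citation_rates(records):
    """Citation rate per ref_type channel.

    `records` uses an assumed shape ({"ref_type": ..., "cited": bool});
    swap in whatever fields your own logging produces.
    """
    retrieved = Counter(r["ref_type"] for r in records)
    cited = Counter(r["ref_type"] for r in records if r["cited"])
    return {ch: cited[ch] / retrieved[ch] for ch in retrieved}

sample = [
    {"ref_type": "search", "cited": True},
    {"ref_type": "search", "cited": True},
    {"ref_type": "reddit", "cited": True},
    {"ref_type": "reddit", "cited": False},
    {"ref_type": "reddit", "cited": False},
    {"ref_type": "reddit", "cited": False},
]
print(citation_rates(sample))  # {'search': 1.0, 'reddit': 0.25}
```

The same pattern — heavy Reddit retrieval, sparse Reddit citation — is what the 1.4-million-prompt study observed at scale.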
  2. Recognize that fanout query relevance — not prompt relevance — drives citation selection. When a user submits a prompt, ChatGPT generates a set of internal sub-queries called fanout queries to hunt for specific facts. Cited pages score significantly higher on cosine similarity to those fanout queries than to the original user prompt. Optimizing for the sub-questions is the higher-leverage move.
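
To see why fanout alignment beats prompt alignment, here is a rough bag-of-words cosine similarity — a stand-in for the embedding-based similarity the Ahrefs study actually measured; the page title, prompt, and fanout strings below are invented for illustration:

```python
import math
import re
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two strings.

    Real pipelines use dense embeddings; word counts are enough
    to show the direction of the effect.
    """
    va = Counter(re.findall(r"[a-z]+", a.lower()))
    vb = Counter(re.findall(r"[a-z]+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

page = "Best winter coat fabrics: wool, down, and technical shells compared"
prompt = "fashion trends for this winter"
fanout = "warmest winter coat materials wool vs down"

print(round(cosine(page, prompt), 2))  # 0.14 — weak match to the raw prompt
print(round(cosine(page, fanout), 2))  # 0.48 — strong match to the sub-query
```

A page can look only loosely related to the user’s prompt yet score highly against one specific fanout query — and per the study, it is the latter score that predicts citation.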

  3. Open ChatGPT in Chrome and enter a test prompt. Navigate to chatgpt.com, type a representative query such as “Fashion trends for this winter”, and submit it.

  4. Extract the fanout queries from Chrome DevTools. Right-click the page and select Inspect, then switch to the Network tab. Find the request whose URL contains a forward slash followed by C, copy the string that follows that slash, paste it into the Network filter bar, and refresh the page. Click the result that displays orange brackets, open its Response tab, and search the payload for the word queries. The search_model_queries array that appears holds the exact sub-queries ChatGPT sent to the web on the user’s behalf.

Warning: this step may differ from current official documentation — see the verified version below.

Inside browser DevTools, ChatGPT’s search_model_queries field exposes the exact fanout sub-queries it runs — this is what you’re optimizing titles and URLs against.
DevTools confirms ChatGPT’s internal fanout queries in real time — use these strings to audit whether your title and URL slug semantically align before publishing.
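
Once you have saved the response body from the Network panel, a short script can pull the arrays out for you. A sketch that regex-scans for the `search_model_queries` field rather than parsing the whole body — the payload is an undocumented internal format, often a stream of JSON fragments, so a full parse tends to fail:

```python
import json
import re

def extract_fanout_queries(raw: str) -> list[str]:
    """Collect every search_model_queries array found in a saved response body.

    Field technique against an undocumented payload, not an official API --
    the field name or shape may change without notice.
    """
    queries = []
    for m in re.finditer(r'"search_model_queries"\s*:\s*(\[[^\]]*\])', raw):
        try:
            queries.extend(json.loads(m.group(1)))
        except json.JSONDecodeError:
            continue  # fragment was truncated mid-stream; skip it
    return queries

# Illustrative body; a real capture contains much more surrounding data.
saved = '{"search_model_queries": ["winter fashion trends 2025", "warmest coat materials"]}'
print(extract_fanout_queries(saved))
```

Run it over each captured conversation and you have a deduplicable list of sub-queries to feed into the restructuring step below.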
  5. Use the fanout strings to restructure your content. Place the extracted queries as H2 section headers within existing pages, or build standalone content pieces targeting each sub-question individually. Title and URL slug alignment matters most here — pages with natural-language URL slugs were cited at 89.78%, versus 81.11% for opaque ones, in the Ahrefs dataset.
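
Before publishing, you can sanity-check slug alignment with a quick word-overlap heuristic. The scoring below is an assumption inspired by the 89.78%-vs-81.11% finding, not part of the study itself:

```python
import re

def slug_overlap(slug: str, fanout_query: str) -> float:
    """Fraction of fanout-query words that appear in the URL slug.

    A rough proxy for "natural-language slug"; the metric itself
    is an assumption, not Ahrefs methodology.
    """
    slug_words = set(re.split(r"[-_/]+", slug.lower().strip("/")))
    query_words = re.findall(r"[a-z0-9]+", fanout_query.lower())
    if not query_words:
        return 0.0
    return sum(w in slug_words for w in query_words) / len(query_words)

print(slug_overlap("/winter-fashion-trends-2025", "winter fashion trends 2025"))  # 1.0
print(slug_overlap("/p/88213", "winter fashion trends 2025"))                     # 0.0
```

An opaque slug like `/p/88213` scores zero against every fanout query, which is exactly the pattern the dataset penalized.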

  6. Audit semantic gaps using Ahrefs Brand Radar. In Ahrefs, open Brand Radar → AI Responses Report, select a target prompt, and review the fanout queries alongside the URLs ChatGPT cited. Activate the AI Content Helper to measure cosine similarity between your content and those query topics. Color-coded highlights identify which sub-questions your page still needs to address.

Ahrefs AI Content Helper scores your draft against fanout query topics in real time — colored highlights show exactly which sub-questions your content still needs to address.
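
If you want a rough local approximation of that gap report before reaching for paid tooling, score each fanout query against your page sections and flag the ones that fall below a threshold. Bag-of-words cosine and the 0.3 cutoff here are assumptions, not Ahrefs’ actual scoring methodology:

```python
import math
import re
from collections import Counter

def _vec(text):
    # Lowercase word-count vector for a string.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def _cos(a, b):
    # Cosine similarity between two count vectors.
    dot = sum(a[w] * b[w] for w in a)
    n = (math.sqrt(sum(v * v for v in a.values()))
         * math.sqrt(sum(v * v for v in b.values())))
    return dot / n if n else 0.0

def gap_report(sections, fanout_queries, threshold=0.3):
    """For each fanout query, report its best-matching section and coverage.

    The threshold is an illustrative assumption; tune it on your own pages.
    """
    report = []
    for q in fanout_queries:
        best_score = max(_cos(_vec(q), _vec(s)) for s in sections)
        verdict = "covered" if best_score >= threshold else "GAP"
        report.append((q, round(best_score, 2), verdict))
    return report

sections = [
    "Wool and down remain the warmest coat materials for deep winter.",
    "Layering technical shells over knits defines this season's silhouette.",
]
for row in gap_report(sections, ["warmest winter coat materials", "winter boot trends"]):
    print(row)  # the boot-trends query is flagged as a GAP
```

Each `GAP` row is a sub-question to add as a new section or standalone piece — the manual equivalent of the tool’s color-coded highlights.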
  7. Prioritize product pages and landing pages as your primary citation targets. Both content formats fall squarely within the search ref_type. According to Promptwatch research, SEO product pages and landing pages are among the most cited content types across AI platforms — and their keyword-targeting structure naturally produces the high fanout-query relevance the Ahrefs data rewards.
Cited pages in the search ref_type have a median age of 500 days — evergreen, well-established content outperforms freshness alone for ChatGPT citation.

How does this compare to the official docs?

The DevTools extraction method is a practitioner workaround built on undocumented ChatGPT internals, and the Ahrefs Brand Radar workflow layers directly on top of it — the next section examines what OpenAI and Ahrefs actually publish, to map where the video’s approach holds up and where verified guidance tells a different story.

Here’s What the Official Docs Show

The tutorial’s practitioner workflow is built on real, documented tools — official sources confirm the access paths and core capabilities the video relies on. Where the docs go quiet, those gaps are called out step by step below so you can calibrate your confidence at each stage.

1. The ref_type citation hierarchy and the 88% search citation figure

No official documentation was found for this step — proceed using the video’s approach and verify independently.

📄 ChatGPT homepage at chatgpt.com showing the standard prompt interface in a logged-out state

2. Fanout query relevance versus prompt relevance

No official documentation was found for this step — proceed using the video’s approach and verify independently.

3. Open ChatGPT in Chrome and enter a test prompt

The video’s approach here matches the current docs exactly. One practical addition: the Network panel interception in the next step requires an active, logged-in session — the public homepage won’t generate the API traffic you need to inspect.

📄 ChatGPT homepage — logged-out state, confirming the browser interface described in step 3

4–5. Open DevTools and navigate to the Network panel

The video’s approach here matches the current docs exactly. Chrome DevTools is browser-native, requires no installation, and the Network panel is officially documented to “analyze and overwrite network requests and responses on the fly” — the confirmed capability the entire interception technique depends on.

📄 Chrome DevTools official documentation homepage confirming DevTools is browser-native and includes a Panels section covering the Network panel
📄 Chrome DevTools docs confirming the Network panel supports live inspection and analysis of network requests and response payloads

6–9. Isolating the fanout payload: the ‘/C’ string, filter bar, orange brackets, and ‘queries’ search

No official documentation was found for these steps — proceed using the video’s approach and verify independently.

The Network panel’s general capability to inspect live API response payloads is well documented; the specific sub-steps applied to ChatGPT’s internal endpoint are a practitioner field technique, not covered by any official Chrome or OpenAI source.

📄 Chrome DevTools docs showing the AI innovations and DevTools MCP server integration sections — separate from the passive network inspection technique used here

10. Restructure content using the extracted fanout strings

No official documentation was found for this step — proceed using the video’s approach and verify independently.

11–12. Ahrefs Brand Radar and the AI Responses Report view

Brand Radar is confirmed as a named, navigable Ahrefs product — the video’s approach here matches the current docs exactly on that point. As of May 2, 2026, the sub-feature label “AI Responses Report” does not appear as a tab, heading, or navigation label in any available screenshot; the confirmed UI label is simply “Brand Radar.” The specific fanout queries + cited-URL side-by-side layout described in step 12 is also not visible in documentation — Brand Radar displays full AI overview paragraphs per keyword, not a discrete sub-query list.

📄 Ahrefs Brand Radar interface showing keyword volume alongside AI overview response text per keyword

13. Cosine similarity scoring in the AI Content Helper

The Ahrefs content tool is confirmed and shows numeric topic scores with color-coded coverage indicators. As of May 2, 2026, the label “cosine similarity” does not appear in any available UI screenshot — the scoring methodology is unlabeled in official sources. Treat the topic scores as a practical proxy for semantic alignment regardless of what’s powering them.

📄 Ahrefs content optimization tool showing color-coded topic scores and coverage gaps in a demo document
📄 Ahrefs homepage confirming Brand Radar as a named feature and showing a content optimization tool with a numeric topic score

14–15. Prioritize product pages and landing pages

No official documentation was found for these steps — proceed using the video’s approach and verify independently.

One tool the video doesn’t cover: Promptwatch

Promptwatch automates the equivalent of steps 3–9. It tracks AI citations across ChatGPT, Gemini, Perplexity, and Copilot; surfaces prompt-level fanout data; and produces a quantified Citations Analysis view — 40% Mentioned / 60% Missing in their demo. Its analytics also show /pricing and /features/content-agents among the highest AI-cited URL paths, which independently supports the video’s product-page recommendation in step 15. Free tier covers 10 ChatGPT prompts.

📄 Promptwatch homepage — a third-party AI search visibility and citation tracking platform not referenced in the tutorial
📄 Promptwatch analytics dashboard showing AI citation traffic by platform and top cited URL paths for a demo brand
📄 Promptwatch feature overview showing Citations Analysis (40% Mentioned / 60% Missing), Prompt Tracking, and Agent Analytics
Tools Referenced

  1. ChatGPT — Browser interface for submitting test prompts and generating the network traffic inspected in steps 4–9.
  2. Ahrefs — AI Marketing Platform — Platform providing Brand Radar for AI overview monitoring and the content optimization tool with topic-level scoring used in steps 11–13.
  3. Chrome DevTools — Official documentation for Chrome’s browser-native developer tools, including the Network panel used to inspect API response payloads.
  4. Promptwatch — Third-party AI citation tracking platform offering automated fanout prompt monitoring and citation gap analysis as an alternative to the manual DevTools workflow.
