CHABOT.DEV — A FIELD JOURNAL — VOLUME I, NO. 4

DEVREL IN THE AI ERA   ✣

LLM-Mediated Discovery.

For most of the 2010s, the developer’s first encounter with a new product was a Google search followed by ten browser tabs. The 2020s, especially 2024 onward, broke that pattern. Developer research now starts with an AI assistant — ChatGPT, Claude, Perplexity, Gemini — and only sometimes lands on your website.

This file is about that shift: what changed, what the evidence shows, and what DevRel teams must do about it.

What changed

Three forces compounded between late 2022 and mid-2026:

  1. ChatGPT’s launch (November 2022) and the rapid maturation of Claude, Perplexity, and Gemini into competent technical-research assistants.
  2. Stack Overflow’s collapse. Question volume fell roughly 95% from its 2014–2017 peak by late 2025. Developers stopped consulting Stack Overflow first and increasingly consulted AI instead. See ../09-platforms/stack-overflow.md.
  3. The integration of LLMs into IDEs. GitHub Copilot, Cursor, Windsurf, Claude Code, JetBrains AI Assistant, and Replit AI Agent made the AI assistant the developer’s first surface, often before they ever opened a browser.

By mid-2026, a developer trying to choose between (say) two API products often does the following in order:

  1. Ask ChatGPT or Claude: “What’s the difference between X and Y for use case Z?”
  2. Optionally ask the AI to draft a quickstart with one of them.
  3. Open the actual product’s website only after AI has given a recommendation.

This pattern has measurable consequences for DevRel.

The evidence

Several sources triangulate the shift:

  • Stack Overflow’s own declines. The 2023 SO Developer Survey marked the first measurable adoption of AI tools among professional developers; subsequent surveys show further consolidation. By the 2025 survey, the majority of professional developers report using AI coding tools regularly.
  • Faros AI and DX research (2024–2026) notes that developer sentiment about AI productivity is consistently positive while telemetry shows mixed results. Crucially, the sentiment-vs-telemetry gap doesn’t change the discovery pattern: developers are asking AI first regardless of whether the AI’s answer is correct.
  • AngelHack’s Developer Relations in 2026 essay describes the LLM as “the new first-touch user” of developer products. Their argument: “Gaps in your docs don’t just frustrate developers anymore. They create gaps in AI-assisted adoption before a human ever enters the picture.”
  • Anecdotal evidence from DevRel professionals across companies in 2025–2026: the most consistent observation is that “developers arrive at our signup having already been told what to do by an AI assistant.” The integration steps they follow are often generated by the AI from your public docs, not read by the human from your docs directly.

The trend will continue: none of the three forces above shows any sign of reversing, and each reinforces the others.

Mechanisms — how AI assistants form views about your product

A DevRel team needs to understand the mechanics. AI assistants form their views from several inputs:

1. Training data

What was on the public web when the model was trained. For most major models, this is a snapshot from a particular point in time, refreshed periodically. Training-data influence is durable but slow to change.

What helps:

  • Long-form authoritative content — blog posts, tutorials, documentation that’s been on the web for years.
  • Citations and inbound links from authoritative sources.
  • Wikipedia and similar reference sites.
  • Books, conference papers, podcast transcripts.

What hurts:

  • Stale or contradictory content.
  • Content that fails to use canonical terms.
  • Absence from authoritative reference sources.

2. Real-time retrieval

What AI assistants pull during a query. ChatGPT and Claude both have web-browsing modes; Perplexity is built around live retrieval; Gemini integrates Google’s retrieval infrastructure.

What helps:

  • Fresh, dated content.
  • Clear semantic structure (headings, schema markup).
  • Canonical URLs that survive site reorganisations.
  • Authority signals (citations, mentions in trusted sources).
  • High-quality answers in places AI assistants prefer (Reddit, Stack Overflow archive, official docs, GitHub READMEs).

What hurts:

  • Blocked AI crawlers (deliberate or accidental robots.txt settings).
  • Heavily client-side-rendered content (some AI crawlers can’t execute JS).
  • Login-walled or paywalled docs.
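
On the first failure mode: a blanket Disallow aimed at scrapers often blocks AI crawlers by accident. A robots.txt sketch that explicitly admits the major AI user agents (these tokens are the vendors' published ones at the time of writing; verify current values before deploying):

```
# robots.txt — explicitly allow the major AI crawlers
User-agent: GPTBot           # OpenAI
Allow: /

User-agent: ClaudeBot        # Anthropic
Allow: /

User-agent: PerplexityBot    # Perplexity
Allow: /

User-agent: Google-Extended  # Gemini training signal
Allow: /

# Default policy for everyone else
User-agent: *
Allow: /
```

Audit your existing robots.txt before assuming you are visible: a years-old wildcard Disallow written to deter scrapers will silently remove you from real-time retrieval.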

3. User feedback loops

What users have asked recently. Some assistants factor recent collective feedback into their responses, particularly for newer products.

4. Live data sources

Increasingly, the underlying search indexes: Bing (which underpins ChatGPT’s search), Google (for Gemini), and standalone retrieval indexes for Claude and Perplexity. Whether your domain is well-indexed in these underlying systems matters more than ever — see ./geo-aeo-for-devrel.md.

What this means for DevRel work

If you are a DevRel team, you should:

Audit how AI assistants currently describe your product

Spend an hour with ChatGPT, Claude, Perplexity, and Gemini. Ask each:

  • “What is [your product]?”
  • “How does [your product] compare to [competitor]?”
  • “Write a quickstart for [your product].”
  • “What are the gotchas of [your product]?”
  • “Why would I choose [your product] over [alternatives]?”

Note errors, omissions, and outdated claims. Note whether the AI cites your product at all when asked about your category. Note the sentiment of the answer.

This audit, repeated quarterly, is the single most actionable AI-DevRel exercise. It tells you what work to prioritise.
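
The audit lends itself to a simple harness. A minimal sketch in Python that expands the five questions into a per-assistant checklist (the product and competitor names are placeholders; answers are pasted in by hand or fetched via each vendor's API):

```python
# Hypothetical example values — substitute your own product and rivals.
PRODUCT = "AcmeDB"
COMPETITORS = ["RivalDB"]
ASSISTANTS = ["ChatGPT", "Claude", "Perplexity", "Gemini"]

# The five audit questions above, as templates.
QUESTION_TEMPLATES = [
    "What is {p}?",
    "How does {p} compare to {c}?",
    "Write a quickstart for {p}.",
    "What are the gotchas of {p}?",
    "Why would I choose {p} over {c}?",
]

def build_audit_matrix(product, competitors, assistants):
    """Expand the templates into one row per (assistant, question),
    ready to be filled in with the answer and the three notes."""
    rows = []
    for assistant in assistants:
        for template in QUESTION_TEMPLATES:
            # Competitor-relative questions get one row per competitor.
            for competitor in (competitors if "{c}" in template else [None]):
                rows.append({
                    "assistant": assistant,
                    "question": template.format(p=product, c=competitor),
                    "answer": None,    # paste the assistant's answer here
                    "errors": [],      # factual errors / outdated claims
                    "cited_us": None,  # did it mention the product at all?
                    "sentiment": None, # positive / neutral / negative
                })
    return rows

matrix = build_audit_matrix(PRODUCT, COMPETITORS, ASSISTANTS)
print(len(matrix))  # → 20: 4 assistants × 5 questions
```

Re-running the same matrix each quarter gives you a like-for-like comparison of errors, omissions, and sentiment over time.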

Treat your authoritative content as the source of AI assistants’ beliefs

Your official documentation, blog posts, and OSS READMEs are the substrate from which AI assistants synthesise answers. The corollary: ambiguity in your docs becomes confusion in AI assistants’ answers, and confusion in AI assistants’ answers becomes confusion in your customer’s mind.

Concrete tactics:

  • Use canonical, consistent product names.
  • State key facts (compatibility, pricing, region availability) explicitly and repeatedly.
  • Date your content (dateModified in schema).
  • Maintain a single canonical “What is [your product]?” page that AI assistants are likely to extract from.
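
On dating content: machine-readable dates can be expressed as schema.org JSON-LD in the page head. A minimal sketch (the product name, dates, and organisation are placeholder values):

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "What is AcmeDB?",
  "datePublished": "2024-03-01",
  "dateModified": "2026-05-12",
  "author": { "@type": "Organization", "name": "Acme" }
}
</script>
```

Updating dateModified whenever the page genuinely changes gives retrieval-backed assistants an explicit freshness signal to rank on.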

Optimise inbound mentions on platforms AI assistants weight heavily

In 2026 these include:

  • Reddit — heavily weighted by some assistants for “real developer opinion.”
  • Hacker News — weighted for technical authority.
  • Stack Overflow (the archive, even though traffic is down) — weighted for technical accuracy.
  • GitHub — README files and discussions are retrieved heavily.
  • dev.to and Hashnode — code-aware content that AI assistants treat as developer-authentic.
  • Substack and authoritative newsletters — for industry analysis.
  • YouTube transcripts — captioning matters for AI consumption.

Cultivating mentions in these channels is now part of DevRel’s job, not marketing’s.

Provide explicit AI-readable surfaces

llms.txt, MCP servers, and OpenAPI-driven reference docs. See ./llms-txt-standard.md and ./mcp-as-devrel-surface.md.
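
A minimal llms.txt following the proposed format (an H1 project name, a one-line blockquote summary, then H2 sections of annotated links; all names and URLs below are placeholders) might look like:

```
# AcmeDB

> AcmeDB is a serverless document database with a REST API.

## Docs

- [Quickstart](https://docs.acmedb.example/quickstart.md): first query in five minutes
- [API reference](https://docs.acmedb.example/api.md): full REST endpoint reference

## Optional

- [Changelog](https://docs.acmedb.example/changelog.md)
```

The file sits at the site root and gives crawling assistants a curated map of your docs instead of forcing them to infer one.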

Track AI-mediated signups separately

Many activated developers in 2026 arrive having “been told to try it by Claude” or “having had Cursor write integration code for me.” If you can capture even a coarse signal of this — through onboarding surveys, referrer analysis, or qualitative customer interviews — track it as a separate cohort. Activation funnel design for these developers should be different from search-mediated developers.
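
A coarse classifier over a free-text onboarding answer is often enough to start. A minimal sketch, assuming a survey question like "How did you first hear about us?" (the patterns and the field are illustrative, not a recommendation of any product):

```python
import re

# Hypothetical keyword patterns — tune to what your signups actually write.
AI_PATTERNS = re.compile(
    r"\b(chatgpt|claude|perplexity|gemini|copilot|cursor|windsurf|ai)\b",
    re.IGNORECASE,
)
SEARCH_PATTERNS = re.compile(
    r"\b(googl\w*|search|bing|duckduckgo)\b", re.IGNORECASE
)

def classify_signup(survey_answer: str) -> str:
    """Coarse cohort tag: 'ai-mediated', 'search-mediated', or 'other'.
    AI wins ties, since 'googled it, then asked ChatGPT' is AI-first."""
    if AI_PATTERNS.search(survey_answer):
        return "ai-mediated"
    if SEARCH_PATTERNS.search(survey_answer):
        return "search-mediated"
    return "other"

signups = [
    "Claude told me to try it for our ETL pipeline",
    "Googled 'managed postgres alternatives'",
    "A coworker recommended it at a meetup",
]
print([classify_signup(s) for s in signups])
# → ['ai-mediated', 'search-mediated', 'other']
```

Even a noisy tag like this lets you compare activation and retention between the two cohorts, which is what the differing funnel design depends on.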

What about the developers who don’t use AI assistants?

A small but real population. They tend to skew toward:

  • Senior engineers at conservative enterprises.
  • Some sectors with regulatory friction (finance, healthcare, defence).
  • Developers with strong ideological commitments to non-AI workflows.
  • Developers in regions with limited AI-assistant adoption.

DevRel should not optimise only for AI-mediated discovery. Traditional discovery — Google search, conference attendance, peer recommendation — still serves these developers. The dual audience treatment (./the-dual-audience-thesis.md) covers them by ensuring the human-facing surfaces remain strong even as agent-facing ones are added.

Risks and limits

The visibility paradox. AI assistants summarise instead of linking. A developer who learns about your product through an AI assistant may never visit your site, sign up, or appear in any of your analytics. Your influence on them is real but invisible. This makes attribution harder.

The sentiment-vs-telemetry gap. Developers feel productive with AI even when telemetry shows mixed results. Translated to discovery: developers may feel they made a careful choice when in fact they followed an AI’s casual recommendation. The trust signals in AI-mediated discovery are more important and harder to influence than in traditional search.

The training-data lag. If your product was renamed, rebranded, or changed direction in the last twelve months, AI assistants may still describe the old version. Refresh cycles vary by model. The best mitigation is heavy production of fresh content with the new framing.

The unreliability of “what AI says about us.” AI answers vary between sessions, models, and prompt phrasings. Don’t treat a single bad answer as a crisis; treat patterns of bad answers across multiple models and sessions as a signal.

The strategic conclusion

You no longer control the developer’s first impression of your product. AI assistants do.

What you control is the substrate from which AI assistants form their views. The substrate is the public corpus of authoritative, well-structured, well-dated, human-credible writing about what your product is and how to use it.

Investing in that substrate is now the single most leveraged DevRel activity available.

See also