You can find out right now by running specific test prompts in ChatGPT and Perplexity. Most experts are in one of two situations: AI is already recommending them and they have no idea, or AI isn't recommending them and leads are silently going to someone else. A 15-minute self-test tells you exactly which situation you're in. And what to do about it.
Open ChatGPT or Perplexity right now and type three prompts: "recommend a [your niche] coach," "who are the best [your specialty] consultants," and "who teaches [your method or approach]." Document every result. This is your AI visibility baseline.
AI recommendation is invisible until you look for it. Running manual test queries is the only reliable way to know what AI is saying about you right now. Before a potential client hears it instead of you.
Run the free AI Visibility Scan for a structured assessment of your current position. Then use those findings to decide whether to build or strengthen your AI presence.
The simplest way to find out if AI is recommending you is to ask it. ChatGPT is the most widely used AI chatbot, so it's the most valuable place to start. Here is the exact process:
Step 1: Open a fresh ChatGPT conversation. Use a new conversation, not one where you've discussed your business before, so ChatGPT works from its training data alone rather than from the context of your chat history.
Step 2: Run these three prompt types, customized to your niche: "recommend a [your niche] coach," "who are the best [your specialty] consultants," and "who teaches [your method or approach]."
Step 3: Document every result. Screenshot the full response. Note whether your name appears, where it appears in the list, what it says about you, and whether your website is mentioned correctly.
Step 4: Run variations. Change the phrasing slightly. "best" vs. "top" vs. "who should I hire," or "coach" vs. "consultant" vs. "expert." AI responses can vary significantly based on phrasing, and a more complete picture requires multiple angles. Run at least five distinct prompts across two or three ChatGPT sessions.[2]
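The variations in Step 4 can be generated systematically instead of improvised on the spot. A minimal sketch, assuming placeholder niche and phrasing lists (the `executive leadership` value and the qualifier/role words are illustrative, not a prescribed set):

```python
from itertools import product

# Illustrative values; substitute your own niche terms.
niche = "executive leadership"
qualifiers = ["best", "top", "most recommended"]
roles = ["coach", "consultant", "expert"]

# Base templates drawn from Steps 2 and 4.
templates = [
    "recommend a {niche} {role}",
    "who are the {qualifier} {niche} {role}s",
    "who should I hire as a {niche} {role}",
]

# Deduplicate, since templates without {qualifier} repeat across qualifiers.
prompts = set()
for template in templates:
    for qualifier, role in product(qualifiers, roles):
        prompts.add(template.format(niche=niche, qualifier=qualifier, role=role))

for p in sorted(prompts):
    print(p)
```

Paste each generated prompt into a fresh session and record the result; the set comfortably exceeds the five-prompt minimum suggested above.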
If your name appears consistently and accurately, you have a measurable AI presence to build on. If it doesn't appear at all, you now know the gap. And you know it's structural, not a reflection of your expertise.
ChatGPT is not the only AI tool routing expert queries. Perplexity, Claude, Google's AI Overviews, and Microsoft Copilot each draw from different data sources and produce different results. A thorough audit covers at least three of them.
Perplexity is particularly valuable to test because it cites sources in real time, showing you exactly where it pulled its recommendations from. When Perplexity recommends someone, you can see the specific pages it used to form that recommendation, which gives you a direct window into what kinds of content are generating AI citations in your niche.
To test Perplexity, use the same prompts you used in ChatGPT. Note any differences in who appears. Pay attention to which websites or sources Perplexity cites for each recommendation. That tells you what kind of content AI is reading and trusting in your space.
Google AI Overviews (the AI-generated answers that appear at the top of Google search results) are also worth checking. Search your specialty queries in Google and look for AI-generated summaries at the top. If you appear in one, you're being positioned as an authority at the most visible real estate in search.[3]
Build a simple tracking sheet: one row per query, one column per tool, mark whether your name appears. Run this test monthly. Over time, you'll see whether your position is growing, holding steady, or declining. And you'll have the data to make decisions about where to invest effort.
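If a spreadsheet feels heavy, the same tracking sheet can live as a plain CSV you append to after each monthly run. A minimal sketch, assuming illustrative query and tool names (the file name `ai_visibility.csv` and the example queries are placeholders):

```python
import csv
from datetime import date

# Illustrative queries and tools; replace with your own niche prompts.
queries = [
    "recommend a leadership coach",
    "who are the best leadership consultants",
]
tools = ["ChatGPT", "Perplexity", "Google AI Overviews"]

def record_run(path, results):
    """Append one row per query: date, query, then yes/no per tool.

    `results` maps (query, tool) -> True if your name appeared.
    """
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for q in queries:
            writer.writerow(
                [date.today().isoformat(), q]
                + ["yes" if results.get((q, t)) else "no" for t in tools]
            )

# Example run: one appearance in ChatGPT, misses everywhere else.
record_run("ai_visibility.csv", {("recommend a leadership coach", "ChatGPT"): True})
```

Each monthly run adds a dated batch of rows, so trend questions ("growing, holding, or declining?") become a simple scan down one column.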
Inconsistent recommendation (your name appearing sometimes but not reliably, or appearing in one tool but not another) is a signal that you have partial authority signals but haven't yet built the structured depth AI needs to recommend you with confidence.
Think of it from AI's perspective: it wants to give its users accurate, reliable recommendations. If your online presence is thin, inconsistent, or poorly structured, AI may recognize your name but hesitate to surface it prominently. Because the evidence base isn't strong enough to recommend you confidently. Inconsistency is AI's version of a lukewarm referral.
What creates inconsistent recommendations: thin or scattered content, language that shifts between your site and your off-site bios, incomplete schema markup, and coverage that touches your niche without going deep.
If this is your situation, the good news is: you're already on AI's radar. You're not starting from zero. You're at the stage where systematic infrastructure work (more structured content, stronger schema, clearer language consistency) will move you from occasional mention to consistent recommendation.
If you run thorough tests across multiple AI tools and query variations, and your name never appears, the absence is informative. It means AI does not yet have enough structured signal from your corner of the web to surface you confidently for your niche queries.
This is not a verdict on your expertise. It is a verdict on your digital infrastructure. The experts AI recommends instead of you are not necessarily better at what they do. They simply have a more AI-readable presence. More structured content, more consistent off-site mentions, better schema, and clearer topical depth.[4]
The important reframe here: every lead AI sends to someone else in your niche is a lead you could have received. You'll never see those leads. You'll never know you lost them. That's what makes the absence of AI visibility different from a failed ad campaign. There's no negative feedback loop to alert you. The silence is the signal.
If AI isn't recommending you, the path forward is clear: build the infrastructure. A structured website with topical depth, proper schema on every page, named authorship, and consistent off-site mentions is what closes the gap. This is exactly what the Authority Directory Method is designed to produce.
Most AI-referred leads don't announce themselves. They arrive through your booking form, your contact page, or a direct email. They may mention that they "heard about you online" or "found you through research," without specifying that the research was done with an AI chatbot. This ambiguity creates a monitoring challenge.
There are several ways to detect AI-referred leads even when clients don't volunteer the information:
Add explicit intake options to your discovery call form. Include "AI chatbot (ChatGPT, Perplexity, etc.)" as a named option in your "How did you hear about me?" question. Many people will select it if they see it listed. They just won't think to mention it unprompted.
Ask directly during intake conversations. A simple "I'm curious how you found me" in a discovery call often surfaces AI referrals. When you ask conversationally, people are more likely to say "I actually asked ChatGPT" than they would be in a written form.
Watch for behavioral signals. AI-recommended leads often arrive pre-sold and with unusual specificity about what they want. They may reference something from your website that they clearly read before reaching out. They ask fewer qualifying questions because the AI has already done that work. These behavioral signatures (the unusually warm, prepared, ready-to-commit prospect) are worth tracking even before you confirm the source.
Monitor your "direct" traffic in analytics. Research from SparkToro has documented that a portion of traffic labeled as "direct" in analytics tools is actually AI-referred, because chatbots don't pass referrer data the way traditional websites do. If your direct traffic is growing while your search traffic is flat, AI referrals may be contributing.
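One way to probe that "direct" bucket is to classify raw referrer strings from an analytics export. A minimal sketch; the hostname list is illustrative, and since chatbots frequently strip the referrer entirely, "direct" remains the ambiguous bucket that no script can fully resolve:

```python
from urllib.parse import urlparse

# Example AI chatbot hostnames. Absence of a match does NOT rule out an
# AI referral, because the referrer is often stripped before it reaches you.
AI_HOSTS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "copilot.microsoft.com", "gemini.google.com",
}

def classify_referrer(referrer: str) -> str:
    """Label an analytics referrer string as 'ai', 'direct', or 'other'."""
    if not referrer:
        return "direct"  # no referrer at all: the ambiguous bucket
    host = urlparse(referrer).hostname or ""
    host = host.removeprefix("www.")
    return "ai" if host in AI_HOSTS else "other"

print(classify_referrer("https://chatgpt.com/"))    # ai
print(classify_referrer(""))                        # direct
print(classify_referrer("https://www.google.com/")) # other
```

Run it over a referrer column exported from your analytics tool: the "ai" rows are confirmed referrals, while a growing "direct" bucket alongside flat search traffic is the softer signal the paragraph above describes.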
My first AI-referred lead didn't arrive with a banner or a notification. It arrived as a normal booking. A woman who scheduled a discovery call through my calendar link, showed up on time, and was already decided. Within 20 minutes she'd signed. No pitch. No objection handling. Just a conversation between two people who were clearly a fit.
When I asked how she found me, she said, "ChatGPT recommended you." I smiled, thanked her, and after the call, I sat with that for a moment. I had not done anything deliberately to get recommended by ChatGPT. My site had a certain amount of structured content, my schema markup was solid, and I had a coherent off-site presence. But I had not been running tests to see whether AI was citing me.
After she signed, I ran the test myself. I opened ChatGPT and typed in the kinds of queries someone in my target market would ask. My name came up, not just once but in several different query variations. I'd been recommended, silently, and I had no idea.
That moment sharpened something for me. Most experts are in one of two situations, and they don't know which one they're in. Either AI is already recommending them and they have no feedback loop to confirm it, or AI isn't recommending them and leads are going to someone else. And there's no alert, no failed campaign, no signal at all. Just silence.
The 15-minute test I've described on this page is the fastest way to find out which situation you're in. Run it before you do anything else. The answer will make every subsequent decision clearer.
This is more common than people expect. AI can recommend you (meaning it has enough signal to surface your name) but still misstate your specialty, website, or credentials. If this happens, it's a signal to clarify and strengthen your structured data. Update your schema markup, make sure your on-site language is precise, and ensure your off-site bios match your on-site positioning. Over time, AI will correct the picture as it re-crawls your updated pages.
Dedicated AI mention monitoring is an emerging category. Some tools like BrightEdge and early-stage AI monitoring platforms are beginning to track brand mentions in AI-generated answers. For now, the most reliable method is systematic manual testing: running your niche queries monthly across ChatGPT, Perplexity, Claude, and Google's AI Overviews, and documenting the results.
Not consistently, and this is an important blind spot. When a user clicks a link in a ChatGPT or Perplexity response, the referral data may show as "direct" traffic or with an unusual referral string, not a clear "AI" label. SparkToro research has found that a meaningful percentage of "dark traffic" (traffic with no identifiable source) is likely AI-referred. This makes manual testing and client intake questions critical supplements to your analytics.
Add a simple intake question to your discovery call form or booking confirmation: "How did you hear about me?" with options including "AI chatbot (ChatGPT, Perplexity, etc.)" listed explicitly. Many people won't volunteer this information unless they're prompted. Not because they're hiding it, but because they don't think to mention the tool they used the way they would mention a friend's referral.
There's no fixed timeline, but the pattern is clear: once you have a structured content ecosystem (multiple pages covering your niche from different angles, proper schema markup, named authorship, and a few credible off-site mentions), AI systems can begin citing you within weeks of crawling the updated site. The experts who see the fastest results are those who build topically deep, well-structured content rather than publishing many thin pages.
Take the free AI Visibility Scan to discover your current positioning. Or explore the complete build system.