Key takeaways
- We scanned 129 real brands across 8 industries in March 2026, using all five major AI models (ChatGPT, Claude, Perplexity, Gemini, and Grok) to produce the first data-backed AI visibility benchmark built on actual RankSignal scans.
- SaaS leads with an average Signal Score of 84/100, closely followed by Legal at 83/100. Insurance and Finance sit at 79/100 - a much tighter spread than expected.
- Perplexity dominates almost every industry with the highest average scores, while Claude consistently scores lowest - often 10 to 16 points below Perplexity.
- The gap between top quartile and bottom quartile is 7 to 18 points depending on the industry, with Healthcare showing the widest spread (90 vs 72).
- Every industry has live benchmark pages with per-brand, per-country breakdowns you can explore right now on RankSignal.ai.
How visible is your brand when someone asks an AI model for recommendations in your industry? We scanned 129 brands across eight industries using all five major AI models - and built a real, data-backed benchmark. Unlike theoretical frameworks, every number in this article comes from actual RankSignal scans run on 20 March 2026.
This article presents findings for each industry, cross-industry patterns, and how you can benchmark your own brand. Explore the full data in our live benchmark pages - filterable by industry, country, and individual brand.
RankSignal.ai automates this process by scanning ChatGPT, Claude, Perplexity, Gemini, and Grok and generating a Signal Score from 0 to 100 - so you can track your AI visibility over time and see exactly where you stand.
Methodology: how we built this benchmark
Every score in this benchmark comes from a real RankSignal scan. No surveys, no estimates, no dummy data.
What we scanned
129 brands across eight industries: Legal (22 brands), SaaS (12 brands), Healthcare (16 brands), Finance (15 brands), Real estate (10 brands), Consulting (17 brands), Insurance (18 brands), and Retail (19 brands). Brands span multiple countries including the US, UK, Germany, Australia, New Zealand, and France.
How we scored
Each brand was queried across five AI models: ChatGPT (GPT-4o), Claude (Anthropic), Perplexity, Gemini (Google), and Grok (xAI). The Signal Score (0 to 100) is computed from five dimensions:
- Visibility - does the brand appear in AI responses?
- Sentiment - is the tone positive, neutral, or negative?
- Authority - does the AI treat the brand as an authoritative source?
- Risk - are there negative signals or controversies?
- Consistency - do all five models agree?
Per-model scores show how each AI platform individually perceives the brand. All scans were run on 20 March 2026.
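To make the five-dimension composite concrete, here is a minimal sketch of how such a score could be combined. The equal weighting and the risk inversion are assumptions for illustration - the article does not publish RankSignal's actual formula.

```python
# Hypothetical composite Signal Score (0-100).
# Equal weights and the risk inversion are assumptions, not
# RankSignal's published methodology.

def signal_score(visibility, sentiment, authority, risk, consistency):
    """All inputs on a 0-100 scale; risk is a penalty (higher = worse)."""
    components = [
        visibility,
        sentiment,
        authority,
        100 - risk,   # invert so a low-risk brand contributes a high value
        consistency,
    ]
    return round(sum(components) / len(components))

# Example: strong visibility and authority, moderate risk
print(signal_score(90, 80, 95, 20, 85))  # -> 86
```

In practice a weighted average (e.g. weighting visibility more heavily than consistency) would behave the same way; only the weight vector changes.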
Industry results: 8 industries compared
Here is how each industry performed, ranked by average Signal Score.
| Industry | Brands | Avg score | Top quartile | Bottom quartile | Best model | Weakest model |
|---|---|---|---|---|---|---|
| SaaS | 12 | 84 | 87 | 80 | Perplexity (90) | Claude (80) |
| Legal | 22 | 83 | 88 | 78 | Perplexity (90) | Grok (78) |
| Real estate | 10 | 82 | 86 | 76 | Perplexity (85) | Claude (78) |
| Consulting | 17 | 81 | 85 | 77 | Perplexity (85) | Claude (75) |
| Healthcare | 16 | 80 | 90 | 72 | Perplexity (85) | Claude (74) |
| Retail | 19 | 80 | 85 | 76 | Gemini (84) | Claude (76) |
| Finance | 15 | 79 | 86 | 74 | Perplexity (85) | Claude (69) |
| Insurance | 18 | 79 | 85 | 73 | Perplexity (86) | Claude (70) |
The overall average across all 129 brands is 81/100. The range is tighter than you might expect - just 5 points between the top (SaaS at 84) and the bottom (Finance and Insurance at 79).
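The averages and quartile columns in the table above can be reproduced from raw per-brand scores. The scores below are illustrative, not the actual benchmark data, and the reading of "top/bottom quartile" as the mean of the scores beyond the Q3/Q1 cut points is an assumption about how the table was computed.

```python
from statistics import mean, quantiles

# Illustrative per-brand scores for one industry (not the real benchmark data).
scores = [89, 87, 86, 84, 83, 82, 81, 80, 80, 78]

avg = round(mean(scores))
q1, _, q3 = quantiles(scores, n=4)  # quartile cut points (exclusive method)

# Assumed reading of the table's columns: "top quartile" is the mean of
# scores at or above Q3, "bottom quartile" the mean at or below Q1.
top = round(mean(s for s in scores if s >= q3))
bottom = round(mean(s for s in scores if s <= q1))

print(avg, top, bottom)  # -> 83 88 79
```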
SaaS
Average: 84/100 | Explore the full SaaS benchmark
SaaS leads the pack, and the top performers are recognizable names. Canva tops the list at 89, followed by Atlassian (87) and HubSpot (86). The bottom of the SaaS cohort - Darktrace at 78 and Employment Hero at 80 - still lands close to other industries' averages.
Why SaaS performs well: these companies produce enormous amounts of structured, machine-readable content. Documentation sites, changelog pages, comparison content, and active review profiles on G2 and Capterra give AI models rich data to work with.
Perplexity leads with a 90 average across SaaS brands - its real-time web access lets it pull in the freshest product data. Claude trails at 80, often providing less specific product details.
Legal
Average: 83/100 | Explore the full Legal benchmark
The legal industry's strong showing may come as a surprise. Kirkland & Ellis and Latham & Watkins both score 89, with Skadden Arps and Linklaters close behind at 88. These firms benefit from extensive media coverage, landmark case involvement, and comprehensive directory listings on platforms like Chambers and Am Law.
The bottom quartile (78) is still high in absolute terms - even smaller firms maintain decent AI visibility thanks to the legal profession's culture of public case records and directory participation.
Perplexity dominates at 90, while Grok trails at 78 - likely because its training data under-represents legal industry sources relative to other models.
Real estate
Average: 82/100 | Explore the full Real estate benchmark
CBRE, JLL, and Savills all tie at the top with a score of 86. The real estate industry benefits from large commercial firms with strong brand authority and extensive market research publications.
The spread from top quartile (86) to bottom quartile (76) is moderate. Brands like Domain (75) and Lendlease (74) score lower, often because AI models associate them less strongly with real estate advisory and more with adjacent categories.
Consulting
Average: 81/100 | Explore the full Consulting benchmark
Bain leads at 86, followed by Deloitte and Accenture at 85. An interesting outlier: McKinsey scores just 76 - below the industry average. AI models associate McKinsey strongly with thought leadership, but the brand's recent controversies show up as elevated risk signals that pull down its composite score.
The consulting industry has one of the tightest top-to-bottom spreads (85 to 77), suggesting a mature industry where most brands maintain solid AI visibility.
Healthcare
Average: 80/100 | Explore the full Healthcare benchmark
Healthcare has the widest performance gap of any industry. Mayo Clinic scores an extraordinary 95 - the highest of any brand in the entire benchmark - driven by massive content libraries, strong entity authority, and consistent coverage across all five models. Cleveland Clinic follows at 89.
But the bottom quartile drops to 72, with UnitedHealth Group at 69 and Bayer at 72. Health insurers and pharma companies face higher risk signals from controversy coverage, while providers like Mayo Clinic benefit from patient trust and educational content.
Retail
Average: 80/100 | Explore the full Retail benchmark
Costco leads at 89, followed by Home Depot and Aldi at 84. Retail is the only industry where Gemini outperforms Perplexity (84 vs 81), likely because Google's training data includes extensive shopping and product information.
ASOS brings up the rear at 71 - its fast-fashion model and purely online presence generate less authoritative content than established physical retailers. Claude scores lowest across retail (76 avg), often providing less specific product and pricing information.
Finance
Average: 79/100 | Explore the full Finance benchmark
Allianz leads at 86, followed by Morgan Stanley at 85. Deutsche Bank sits at the bottom with 71, dragged down by extensive controversy coverage around money laundering settlements and regulatory fines.
Claude scores lowest here at just 69 - a full 16 points below Perplexity (85), the biggest model gap in any industry. Claude appears to apply extra caution around financial brands, producing shorter and less specific responses.
Insurance
Average: 79/100 | Explore the full Insurance benchmark
Munich Re leads at 87, with State Farm and Allianz Insurance both at 85. Reinsurers tend to score higher due to strong B2B authority signals, while consumer-facing insurers score more moderately.
Medibank anchors the bottom at 69. Claude is again the weakest model at 70 - its responses about insurance brands tend to be more cautious and less detailed than other models.
Cross-industry insights
Perplexity is the visibility leader
Perplexity scores highest in 7 out of 8 industries (all except Retail, where Gemini leads). Its average lead over the next-best model is 3 to 5 points. Real-time web access gives Perplexity an edge - it pulls current data rather than relying solely on training data.
Claude lags consistently
Claude scores lowest in 7 out of 8 industries (all except Legal, where Grok trails). The gap is often significant - in Finance, Claude averages 69 versus Perplexity's 85. Claude tends to provide more cautious, less specific brand information, especially in regulated industries like insurance and healthcare.
Healthcare has the widest spread
The 18-point gap between Healthcare's top quartile (90) and bottom quartile (72) is the largest of any industry. This suggests that healthcare is an industry where targeted optimization can make a huge difference - the ceiling is high and most brands have not reached it.
Consulting is among the most compressed
Just 8 points separate the top from the bottom quartile in consulting (85 to 77) - only SaaS is tighter, at 7. Differentiation is harder here: consulting firms produce similar types of content and operate through similar channels.
Authority is the strongest dimension
Across all 129 brands, authority scores consistently exceed visibility, sentiment, and consistency scores. AI models have strong opinions about which brands are authoritative in their space - the challenge for most brands is not authority, but ensuring that authority translates into actual mentions in recommendation queries.
How to benchmark your own brand
You do not need to run 129 scans manually. RankSignal.ai handles this automatically:
- Run a free scan at ranksignal.ai - enter your brand or domain and get your Signal Score across all five AI models in minutes.
- Compare against your industry - use the benchmark pages to see where you stand relative to real competitors in your sector.
- Track over time - RankSignal Pro monitors your AI visibility on a regular schedule so you can measure the impact of changes you make.
- Fix what matters - your scan report highlights specific issues: inaccurate pricing, missing structured data, negative sentiment on specific models, and more.
The live benchmark is updated as we add more brands and re-scan existing ones. Bookmark your industry page and check back for the latest data.
FAQ
Where does the data in this benchmark come from?
Every score comes from an actual RankSignal scan run on 20 March 2026. We queried 129 brands across five AI models (ChatGPT, Claude, Perplexity, Gemini, and Grok) and computed Signal Scores from the responses. No estimates or dummy data were used.
Which AI models were included?
The benchmark covers ChatGPT (GPT-4o), Claude (Anthropic), Perplexity, Gemini (Google), and Grok (xAI). These are the five platforms most commonly used for brand research and product discovery.
Why does Perplexity score highest in almost every industry?
Perplexity has real-time web access, letting it pull current information rather than relying solely on training data. This gives it fresher, more detailed brand data - which translates into higher visibility and accuracy scores.
Why does Claude score lowest?
Claude tends to provide more cautious, less specific brand information - especially in regulated industries like healthcare, insurance, and finance. It appears to apply stronger safety guardrails around commercial recommendations, which lowers its visibility and depth scores.
Can I see the data for my specific industry and country?
Yes. Every industry has a live benchmark page at ranksignal.ai/benchmark with per-country and per-brand breakdowns. You can drill down from industry to country to individual brand reports.
How often is the benchmark updated?
We plan to re-scan all benchmark brands monthly. The current data reflects scans from March 2026. Check the live benchmark pages for the latest data as new scans are completed.
What is a good Signal Score for my industry?
The average across all 129 brands is 81/100. Industry averages range from 79 (Finance and Insurance) to 84 (SaaS). A score above your industry average puts you ahead of most competitors. Check the benchmark page for your industry to see the exact distribution.
How does RankSignal.ai help with AI visibility?
RankSignal.ai scans all five major AI models and generates a Signal Score from 0 to 100. It identifies inaccuracies, tracks competitor mentions, and monitors changes over time. Run a free scan at ranksignal.ai to see where you stand.
