The Science of AI Recommendations: What Academic Research Tells Us
Introduction
Beneath the tactical advice that dominates GEO (generative engine optimization) discussions lies a growing body of academic research on LLM behavior, research that offers deeper insight into why AI models recommend what they do.
Theme 1: Factual Recall and Knowledge Boundaries
Model confidence correlates with how frequently and consistently information appears across training sources. Facts repeated in many authoritative sources are recalled accurately; facts that appear in only a few sources are prone to hallucination. The result is a self-reinforcing cycle in which well-known brands receive more accurate citations.
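A rough way to build intuition for this frequency effect is to treat "recall" as a majority vote over training-source snippets. This is a deliberately simplified sketch, not a claim about how transformers actually store facts; the brand names and dates are invented:

```python
# Toy model (assumed data): recall = most frequent value across sources.
# Many consistent sources -> stable recall; sparse, conflicting sources
# -> essentially a coin flip, a stand-in for hallucination risk.
from collections import Counter

def recalled_fact(snippets: list[str]) -> str:
    """Return the value stated most often across training snippets."""
    return Counter(snippets).most_common(1)[0][0]

well_covered = ["founded 2008"] * 9 + ["founded 2009"]  # 9 sources agree
sparse = ["founded 2015", "founded 2017"]               # 1 vs 1, no signal

print(recalled_fact(well_covered))  # consistent sources dominate
print(recalled_fact(sparse))        # tie: outcome is arbitrary
```

The point of the sketch is the asymmetry: adding one inconsistent source barely perturbs the well-covered fact, while the sparse fact has no stable answer at all.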
Theme 2: Recommendation Bias and Popularity Effects
LLMs exhibit popularity bias, disproportionately recommending entities that appear more frequently in training data. They also show position bias in list generation: items generated earlier in a list are recommended with greater confidence.
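The popularity effect can be illustrated with a toy simulation. All numbers and brand names below are invented assumptions; the sketch simply shows that if recommendation probability tracks training-data frequency at all, frequent entities dominate the output even when quality is identical:

```python
# Hypothetical sketch: recommendations sampled proportional to how often
# each brand appears in training data (counts are made up).
import random

random.seed(0)
training_mentions = {"BigBrand": 900, "MidBrand": 90, "NicheBrand": 10}
total = sum(training_mentions.values())

def recommend() -> str:
    """Pick a brand with probability proportional to mention count."""
    r = random.uniform(0, total)
    for brand, count in training_mentions.items():
        r -= count
        if r <= 0:
            return brand
    return brand  # numerical edge case: fall back to last brand

picks = [recommend() for _ in range(1000)]
share = picks.count("BigBrand") / len(picks)
print(f"BigBrand share of recommendations: {share:.0%}")
```

With a 90/9/1 mention split, BigBrand captures roughly 90% of recommendations; the niche brand is nearly invisible despite being, by construction, equally good.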
Theme 3: Citation Attribution Patterns
RAG systems favor sources that:
- Answer queries directly in opening paragraphs
- Present information in structured formats
- Include specific quantitative claims
- Come from high-authority domains
- Appear earlier in retrieved document lists
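The factors above can be sketched as a simple linear scoring heuristic. The feature names and weights are illustrative assumptions, not measurements from any real RAG pipeline:

```python
# Hypothetical scorer mimicking the citation preferences listed above.
# Weights are invented; real systems use learned rankers, not hand rules.

def citation_score(doc: dict) -> float:
    """Score a retrieved document on the five listed citation signals."""
    score = 0.0
    if doc.get("answers_in_opening"):                 # direct answer up front
        score += 3.0
    if doc.get("structured_format"):                  # tables, lists, headings
        score += 2.0
    score += 1.5 * doc.get("quant_claims", 0)         # count of numeric claims
    score += 2.0 * doc.get("domain_authority", 0.0)   # authority in [0, 1]
    score -= 0.5 * doc.get("retrieval_rank", 0)       # earlier in list = better
    return score

docs = [
    {"answers_in_opening": True, "structured_format": True,
     "quant_claims": 2, "domain_authority": 0.8, "retrieval_rank": 0},
    {"answers_in_opening": False, "structured_format": False,
     "quant_claims": 0, "domain_authority": 0.9, "retrieval_rank": 1},
]
best = max(docs, key=citation_score)
```

Note the design implication: in this sketch a well-structured, direct-answer page from a slightly weaker domain outscores a prestigious but unstructured one, which matches the research finding that content format matters alongside authority.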
Theme 4: The Citation Gap
Systematic underrepresentation affects:
- Non-English sources: English-language bias persists even for non-English queries
- Small and mid-sized businesses (SMBs): fewer citations than their quality and relevance warrant
- Newer entrants: invisible in parametric responses for anything after the training cutoff
- Niche specialists: models favor generalists over niche experts
Implications for GEO Strategy
Consistent information across sources reduces hallucination risk. Structured content aligns with how RAG systems select sources for citation. Brands below the popularity threshold must invest disproportionately in visibility to break into the self-reinforcing cycle.
Conclusion
Research-backed GEO builds more sustainable citation advantages than tactical tricks.