Suggestions
Christopher Manning
Christopher Manning is the Director of the Stanford Artificial Intelligence Laboratory and the Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence at Stanford University. With a deep-rooted passion for Natural Language Processing (NLP) and Artificial Intelligence (AI), he is the Founder of stanfordnlp. Christopher Manning holds a Doctor of Philosophy in Linguistics from Stanford University and a Bachelor of Arts in Mathematics, Computer Science, and Linguistics from The Australian National University. At Stanford University he has held the positions of Professor, Associate Professor, and Assistant Professor of Computer Science and Linguistics, and he has previously held positions at the University of Sydney and Carnegie Mellon University.
Highlights
In AMO-Bench we trust (in late 2025)?
At any rate, it’s not maxed out like several of the other math datasets…
And here are all of our @stanfordnlp EMNLP 2025 (@emnlpmeeting) papers: 😮‍💨
• Culture Cartography: Mapping the Landscape of Cultural Knowledge https://t.co/jUzm6q02dh
• Identifying Unlearned Data in LLMs via Membership Inference Attacks https://t.co/lmscxs96b3
• EquiBench: Benchmarking Large Language Models’ Reasoning about Program Semantics via Equivalence Checking https://t.co/PpFUicGp62
• INFINI-GRAM MINI: Exact n-gram Search at the Internet Scale with FM-Index https://t.co/CrsaJDed8B
• Making VLMs More Robot-Friendly: Self-Critical Distillation of Low-Level Procedural Reasoning https://t.co/16HLQ3fzeQ
• Opt-ICL at LeWiDi-2025: Maximizing In-Context Signal from Rater Examples via Meta-Learning https://t.co/LO48jfs2Ly
• CHURRO: Making History Readable with an Open-Weight Large Vision-Language Model for High-Accuracy, Low-Cost Historical Text Recognition https://t.co/B5cVMFxSh7
• Detecting Corpus-Level Knowledge Inconsistencies in Wikipedia with Large Language Models https://t.co/K92dRKFQNN
• s1: Simple test-time scaling https://t.co/eHlP0IFojB
• Causal Interventions Reveal Shared Structure Across English Filler–Gap Constructions https://t.co/bXEVCOG1D0
• Distinguishing fair from unfair compositional generalization tasks https://t.co/mJwQlDhtft
• False Friends Are Not Foes: Investigating Vocabulary Overlap in Multilingual Language Models https://t.co/1cY2ft4ioD
• In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties https://t.co/BqZGNczGtO
• Stronger Baselines for Retrieval-Augmented Generation with Long-Context Language Models https://t.co/So5r8GlEzw
• Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations https://t.co/nAO9J8zymH