Suggestions
Logan Graham
Frontier Red Team @ Anthropic
Logan Graham is a prominent figure in the field of artificial intelligence, currently serving as a Member of Technical Staff at Anthropic, where he leads the Frontier Red Team. This team focuses on exploring advanced and potentially risky capabilities of artificial general intelligence (AGI) to ensure safety and ethical development in this rapidly evolving area of technology.[1][3]
Background and Education
Logan Graham has an extensive academic and professional background. He completed his PhD in machine learning at the University of Oxford, where his research combined artificial intelligence with economics.[2][3] He was also a Rhodes Scholar and has a history of engaging in initiatives aimed at improving societal outcomes.[2]
Previous Roles
Before joining Anthropic in November 2022, Graham served as a Special Adviser to the Prime Minister of the UK from July 2020 to July 2022. In this role, he worked on integrating AI into national priorities and addressing significant national security risks related to technology.[1][3] His experience also includes working at Google X (the moonshot factory) and conducting research at Babylon Health, where he focused on causality in machine learning.[3]
Core Beliefs and Contributions
Logan is passionate about accelerating scientific progress and preparing for the implications of AGI. He emphasizes that young people are capable of extraordinary achievements and advocates for a future where every individual can lead a flourishing life.[1] His work not only involves technical advancements but also addresses broader societal challenges through technology.
In summary, Logan Graham is a leading expert in AI with a strong commitment to ethical considerations in technology development, backed by significant experience in both governmental advisory roles and cutting-edge research.
Highlights
Good thread on our team’s work.
We wanted to be really earnest and transparent about what we thought after seeing the model crush a bunch of our evals.
Autonomy, bio reasoning, and cyber skills are coming fast.
3 vignettes from using Opus 4.5 in the past few weeks:
- It's genuinely funny. Talking to it on Slack, it’s probably a ~85%ile poster in the company.
- It crosses a writing threshold. I have a really high bar for writing. This is the first model I want to use.
- I've stopped thinking about it like a model, and more like an autonomous collaborator.