Anca Dragan
Director, AI Safety and Alignment, Google DeepMind
Associate Professor, UC Berkeley EECS
Anca Dragan is a prominent figure in the field of AI safety and alignment. She currently serves as the Director of AI Safety and Alignment at Google DeepMind, where she leads teams responsible for ensuring the safety of current and future Gemini models.[1][2]
Academic Background and Research
Dragan is an Associate Professor in the Electrical Engineering and Computer Sciences (EECS) Department at UC Berkeley, though she is currently on leave to focus on her role at Google DeepMind.[4] She holds a Ph.D. in Robotics from Carnegie Mellon University and has extensive experience across AI and robotics:
- Founder and director of the InterACT Lab at UC Berkeley, focusing on algorithms for human-AI and human-robot interaction[4]
- Co-PI of the Center for Human-Compatible AI[4]
- Significant contributions to AI alignment, human-AI collaboration, and autonomous vehicle research[4]
Industry Experience
In addition to her academic work, Dragan has valuable industry experience:
- Consulted for Waymo, the driverless car company, for six years, helping shape its roadmap for deploying learning-based safety-critical systems[3][4][5]
Honors and Recognition
Dragan has received numerous accolades for her work in AI and robotics:
- Sloan Fellowship
- MIT TR35 (MIT Technology Review's list of 35 innovators under 35)
- Okawa Foundation Award
- NSF CAREER Award
- Presidential Early Career Award for Scientists and Engineers (PECASE)[4][5]
Current Focus
At Google DeepMind, Dragan's work encompasses:
- Ensuring safety of current Gemini models
- Preparing for increasingly capable Gemini models while maintaining safety
- Aligning AI models with human goals and values
- Avoiding present-day harms and potential catastrophic risks
- Improving model understanding of human preferences
- Enabling informed oversight
- Increasing robustness against adversarial attacks
- Accounting for diverse human values and viewpoints[4]
Dragan is also known for her ability to explain complex AI safety concepts clearly, as evidenced by her podcast and interview appearances on the subject.[1][2][3]