
    Eli Lifland

    ML Research Engineer at Ought

    Eli Lifland is a research engineer known for his work in AI safety and effective altruism. He has a background in computer science and economics, having graduated from the University of Virginia. Lifland previously worked at Ought, where he contributed to the development of AI research tools, including Elicit, and was involved in various AI safety projects.

    Career Highlights

    • Ought: Lifland served as a software and research engineer from August 2020 to January 2022, focusing on AI alignment and forecasting projects. He was part of teams working on existential risk assessments related to AI and contributed to the RAFT Benchmark for text classification.
    • Sage: After leaving Ought, he co-founded Sage, an organization aimed at improving decision-making through judgmental forecasting. His work there included organizing forecasting competitions and developing platforms for AI risk assessment.
    • Long Term Future Fund: Lifland is also a guest fund manager at this organization, which focuses on funding initiatives that aim to ensure the safe development of advanced AI technologies.

    Interests and Contributions

    Lifland is deeply engaged in discussions on AI alignment and the existential risks posed by advanced AI systems. He emphasizes structured forecasting as a means of navigating complex decision-making in this field, and has authored writings on effective altruism and AI risk management that contribute to the broader discourse on these issues.

    Personal Insights

    In interviews, Lifland has said that strategic clarity is essential for progress in longtermist domains like AI safety. He has also shared personal experiences with mental health challenges during his entrepreneurial journey, highlighting the importance of addressing such issues within high-stakes fields like AI research.

    Overall, Eli Lifland is recognized for his contributions to AI safety research and his commitment to fostering effective altruism through innovative forecasting methodologies.

    Highlights

    Eli Lifland, on Navigating the AI Alignment Landscape


    Location

    Washington DC-Baltimore Area