Tristan Harris: Co-Founder of the Center for Humane Technology
Tristan Harris is an American technology ethicist and the co-founder of the Center for Humane Technology (CHT), a nonprofit organization dedicated to realigning technology with humanity's best interests.[1][2][3]
Early Life and Education
Harris was raised in the San Francisco Bay Area. He studied computer science at Stanford University, where he took a class from B.J. Fogg at the Persuasive Technology Lab before dropping out.[2]
Career
In 2007, Harris launched a startup called Apture, which Google acquired in 2011. He then worked on Google Inbox.[2]
In 2013, while at Google, Harris authored a presentation titled "A Call to Minimize Distraction & Respect Users' Attention", which sparked conversations about the company's responsibilities.[2] He left Google in 2015 to co-found Time Well Spent, later renamed the Center for Humane Technology.[2][3]
At CHT, Harris advocates for understanding and minimizing the negative impacts of digital technologies, such as addiction, distraction, isolation, polarization, and fake news.[2] He was featured in the Netflix documentary "The Social Dilemma", which examined how social media's design and business model manipulate people's views, emotions, and behavior.[2][4]
Harris has expanded his focus to close the gap between the accelerating pace of technology and the capacity of culture and institutions to respond adequately.[2] He has testified before the U.S. Congress about the risk of online deception and manipulative tactics employed by social media platforms.[4]
Impact
Harris's work has had a significant impact on the tech industry and public awareness. In 2018, Facebook CEO Mark Zuckerberg described feeling a "responsibility to make sure our services aren't just tools that are used to make people's lives worse".[4] The Center for Humane Technology's online course on building ethical technology has received notable media coverage and had over 10,000 participants as of June 2022.[4]
Highlights
This conversation on #YourUndividedAttention is a must-listen.
This episode breaks down the addictive features of many AI companions on the market and the competitive pressures that fuel unsafe design. These products are free to use and currently being shipped to 20M+ users, many of whom are minors, and have been featured as “Editor’s Choice” in Google’s Play Store.
The attention economy that fuels social media predictably becomes the “race to intimacy” with AI. Now there’s a whole market for AI companions competing to “solve loneliness.” But these AI companions are far more persuasive and intimate than social media, and the technology is being rolled out at warp speed.
The outcome of this case could be a first step toward holding companies accountable for unsafe design. Everyone with kids in their lives should listen to this.
Watch: https://t.co/RYufInmP0O
Listen: https://t.co/DPSVoC5yLP
Jailbreaking AI-controlled robots running LLMs is just as easy as jailbreaking frontier models like ChatGPT or Claude Sonnet.
What could go wrong?