Knar Hovakimyan
Responsible AI at Grammarly
Knar Hovakimyan is the Engineering Manager of Responsible AI at Grammarly. In this role, she leads the responsible AI team, which sets standards for responsible AI development and works to make Grammarly's product offerings as safe and fair as possible.
Key aspects of Hovakimyan's work and responsibilities include:
Leadership in Responsible AI
- She leads efforts to balance Grammarly's AI development with ethical concerns and practices.
- Her team conducts deep research and distills its findings into training and knowledge sharing across the company.
- The team builds automated tools for assessing AI safety and safeguarding against known risks, including an open-source tool that supports a responsible AI culture beyond Grammarly.
Contributions to AI Governance
Hovakimyan emphasizes that responsible AI is not an afterthought but a critical component of AI development. Her team:
- Provides AI developers with clear standards for safe AI use.
- Develops tools that enable those standards to be met.
- Secures buy-in across the company for responsible AI practices.
Expertise and Insights
Hovakimyan has shared several insights on responsible AI development:
- She stresses that everyone working with AI should understand the technology's functionality, risks, and impact.
- She notes that trust in AI products is hard to win and easy to lose, underscoring the need for a consistently responsible approach.
- She observes that, despite the power of AI tools, human decision-making remains crucial to their responsible use.
Knar Hovakimyan's work at Grammarly demonstrates the company's commitment to developing and implementing AI technologies responsibly, with a focus on safety, fairness, and ethical considerations in AI-powered writing assistance.