Grammarly, the popular writing-assistance tool, is facing a class-action lawsuit filed by journalist Julia Angwin over its controversial “Expert Review” feature. The feature reportedly used the names and likenesses of writers, academics, and journalists – including several from The Verge – without their consent to generate AI-powered suggestions, effectively trading on their identities for commercial purposes.
The Core of the Dispute: Privacy and Commercial Exploitation
The lawsuit alleges that Grammarly violated privacy and publicity rights by leveraging individuals’ identities for profit without permission. Angwin discovered her own name being used within the tool after being alerted by The Verge’s Casey Newton, who confirmed that he, too, had been included without authorization. Multiple Verge staff members, including editor-in-chief Nilay Patel, were also found to have been featured without their consent.
This practice raises serious ethical questions about AI development, particularly around the use of personal data. Companies like Grammarly often present their AI tools as neutral, but in reality those tools depend heavily on human input and credibility. Using these experts’ names without authorization suggests a willingness to trade away individual rights in pursuit of greater perceived authority for the AI.
Grammarly’s Response and Feature Suspension
Following the backlash, Grammarly CEO Shishir Mehrotra issued an apology and announced the immediate suspension of the “Expert Review” feature. The company had initially offered an email-based opt-out but ultimately decided to disable the tool entirely.
Mehrotra said the feature was intended to connect experts with their audiences but acknowledged that the execution fell short. The incident highlights the difficulty of balancing innovation with ethical considerations in the rapidly evolving landscape of AI-driven products.
Implications and Future Concerns
The lawsuit against Grammarly underscores a growing trend of legal challenges against AI companies that misuse personal data. The case raises broader questions about transparency, consent, and accountability in the development of AI tools. As these technologies become more sophisticated, the need for robust legal frameworks to protect individual rights will become increasingly critical.
The suit sends a clear warning to AI developers: exploiting human identities for commercial gain without explicit consent is not only unethical but, increasingly, legally actionable.