Grammarly is using our identities without permission
Published Mar 9, 2026, 5:00 am
By Web Desk

Grammarly’s “expert review” feature offers to give users writing advice “inspired by” subject matter experts, including recently deceased professors, as Wired reported on Wednesday. When I tried the feature out myself, I found some experts that came as a surprise for a different reason — one of them was my boss.
The AI-generated feedback included comments that appeared to be from The Verge’s editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the “expert reviews.”
[Image: https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/grammarly-ai-expert-reviews-david-pierce-comment.png?quality=90&strip=all]
The feature, which launched in August, claims to help you “sharpen your message through the lens of industry-relevant perspectives.” When users select the “expert review” button in the Grammarly sidebar, it analyzes their writing and surfaces AI-generated suggestions “inspired by” related experts. Those “industry-relevant perspectives” include the likes of Stephen King, Neil deGrasse Tyson, and Carl Sagan, among many others.
The Verge found numerous other tech journalists named in the feature as well, including former Verge editors Casey Newton and Joanna Stern, former Verge writer Monica Chin, Wired’s Lauren Goode, Bloomberg’s Mark Gurman and Jason Schreier, The New York Times’ Kashmir Hill, The Atlantic’s Kaitlyn Tiffany, PC Gamer’s Wes Fenlon, Gizmodo’s Raymond Wong, Digital Foundry founder Richard Leadbetter, Tom’s Guide editor-in-chief Mark Spoonauer, former Rock Paper Shotgun editor-in-chief Katharine Castle, and former IGN news director Kat Bailey. The descriptions for some experts contain inaccuracies, such as outdated job titles, errors Superhuman could have corrected had it asked those people for permission to reference their work.
[Image: https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/grammarly-ai-expert-reviews-nilay-patel.png?quality=90&strip=all]
[Image: https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/grammarly-ai-expert-reviews-tom-warren.png?quality=90&strip=all]
In a statement to The Verge, Alex Gay, vice president of product and corporate marketing at Grammarly parent company Superhuman, commented: “The Expert Review agent doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply.”
When asked if Superhuman considered notifying the people named in its AI feature, or requesting their permission, Gay said, “The experts in Expert Review appear because their published works are publicly available and widely cited.”
However, the experts’ work proved difficult to “explore more deeply.” The feature crashed frequently, and its “sources” linked to spammy copies of legitimate websites or to archived copies that aren’t the actual source pages.
Some sources even went to completely unrelated links that weren’t written by the person whose work they were supposedly an example of, potentially indicating that the suggestions Grammarly’s AI offers with one person’s name may be based on a different person’s work. This is only apparent if users click “see more” to expand suggestions, then click the “source” button at the end of the suggestion.
Additionally, the way the suggestions are presented could be misleading. In Google Docs, the suggestions look similar to comments from real users, seemingly simulating the experience of receiving edits from whichever expert the AI is imitating. One suggestion from Grammarly’s AI “inspired by” Verge senior editor Sean Hollister was about adding a parenthetical with context that was already included elsewhere. The only problem is that I’ve actually been edited by the real Sean Hollister, who prefers avoiding repetitive or unnecessary explanations while using straightforward wording and organization.
If I’d taken that advice and run it by him, the real Sean probably would have removed the parenthetical Grammarly suggested. An AI might be able to ingest vast amounts of someone’s writing and learn to mimic it, sure, but the same strategy cannot teach an AI how to edit the way that person would, based only on the writing they’ve published, even if you give the bot a check mark logo and call it an “expert.”
