Sloppelgängers
Superhuman’s CEO sat for an interview about the AI feature that used writers’ names without permission. What he revealed was worse than the feature itself.
Three times during a recent interview on The Verge’s Decoder podcast, host Nilay Patel asked Shishir Mehrotra, CEO of Superhuman, which owns Grammarly, the same question in different words: How much should you pay me to use my name?
“If your work is used, should you be attributed? Yes, I think you should,” Mehrotra said. “That would be the nice contract.”
But Patel wasn’t asking about attribution. Grammarly had launched a feature called “Expert Review” that generated AI editing suggestions and attached them to real people’s names. Patel’s name. Kara Swisher’s name. Julia Angwin’s name. Stephen King’s name. Hundreds of others. Nobody was asked. Nobody was paid. Grammarly charged $12 a month for the privilege.
So Patel kept pushing. “How much do you think you should pay me?” he asked again. And again. Mehrotra kept talking about attribution, about linking to sources, about how the feature was “clearly indicated” as being “inspired” by experts’ published work.
“This wasn’t an attribution,” Patel finally said. “You just made something up and put my name on it.”
He’s right. I want to be specific about what “made something up” looks like in practice. The AI-generated edit that Grammarly attributed to Patel suggested that he, in his role as editor-in-chief of The Verge, emphasizes “the importance of crafting compelling headlines that convey urgency” and recommends “adding emotional or stakes-based words.” As Patel put it: “I’ve been an editor for over 15 years. I’ve literally never said anything like that.”
Nobody talks like that. It’s AI slop, the kind of vague, interchangeable advice you’d get from any chatbot on any topic. The only thing specific about it was Patel’s name attached to it.
If you haven’t been following this story: Last August, Grammarly (which rebranded its parent company as “Superhuman” in October 2025) launched Expert Review as part of its paid Pro tier. The feature promised writing feedback from “leading professionals, authors, and subject-matter experts.” What it actually did was use AI to generate editing suggestions and attribute them to real people, none of whom had any involvement. Wired’s Miles Klee broke the story in early March. The Verge’s Stevie Bonifield then discovered that the “experts” included their own boss, Nilay Patel, and several of their colleagues. The backlash was immediate. Kara Swisher, whose name the tool had been using, told Platformer’s Casey Newton: “You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck.” Investigative journalist Julia Angwin filed a class-action lawsuit. The feature was killed on March 11.
The feature is dead. The lawsuit is filed. But the Decoder interview, published today, is the thing from this whole mess that I can’t stop thinking about. Mehrotra sat for more than an hour of pointed questions and still couldn’t see what he’d done wrong. He isn’t evasive in this interview. I don’t think he’s stonewalling. He’s being candid. And what his candor reveals is that the basic concept (that a person’s name, reputation, and expertise belong to that person, and that you need their permission to use those things to sell a product) doesn’t exist in his head. He can’t hear the question Patel is asking because the premise behind it doesn’t register.
That’s worth paying attention to.
A bad feature, not a bad idea
Throughout the Decoder interview, Mehrotra makes a series of moves that all follow the same logic: every time Patel confronts him with a problem of consent, Mehrotra reframes it as a problem of quality. The feature was bad. The team missed. The execution fell short. What he never says, not once in the entire conversation, is that they shouldn’t have used people’s names without asking.
Patel asks how many people at Superhuman worked on Expert Review. Mehrotra’s answer: “It was a small team. It was probably a product manager and a couple engineers.” He says he personally hadn’t spent any time looking at the feature before the backlash hit. This is meant to contain the damage, to make it sound like a minor project that slipped through. But think about what it actually tells you. Superhuman has about 1,500 employees. A team built a feature that used hundreds of real people’s names and likenesses to sell a subscription product. It was live for seven months. And nobody, in a company of 1,500, flagged it. Nobody in the chain between “a couple engineers” and launch day said, “Hey, should we ask these people first?” A rogue team gets fired. This team wasn’t rogue, because by the company’s norms there was nothing to flag. Using someone’s identity without their knowledge was just how things worked.

Then there’s how Mehrotra describes the decision to kill Expert Review. “I came and looked at it and I said, ‘This is off-strategy for us,’” he told Patel. Off-strategy. He killed it because it didn’t fit the company’s product direction. He frames the whole thing as a strategic misfire, a product that didn’t serve users or experts well, a team that “missed.” At one point he says, “the feature was not a good feature. It wasn’t good for experts, it wasn’t good for users.”
You’ll notice what’s absent from that explanation. There’s no “we shouldn’t have used people’s names without their consent.” There’s no “taking someone’s identity to sell a product is wrong.” There’s no acknowledgment that the problem was ethical. In Mehrotra’s telling, Expert Review failed the same way any product fails. Bad execution. Low usage. Didn’t align with the roadmap. Blah, blah, blah. Time to move on.
Patel pushes him on the legal claims. Angwin’s lawsuit argues that Superhuman violated New York and California laws barring the commercial use of people’s names without consent. Mehrotra’s response: “Respectfully, we believe the claims are without merit. The idea that the feature is impersonation is quite a big stretch.” But impersonation isn’t the claim. The claim is unauthorized commercial use of a name. Mehrotra keeps pivoting to how the feature included disclosures saying the suggestions were “inspired by” experts. “It’s far from that test,” he says, still answering a question about impersonation that nobody asked.
There’s one more exchange that I think is the most telling moment in the whole interview. Patel asks where the expert names came from. How did Grammarly decide whose names to use? Mehrotra’s answer: “It came right from the popular LLMs. So it’s exactly the same experience you would have if you came to Claude or Gemini or ChatGPT and said, ‘Can you take this piece of writing, recommend the people who would be most useful to give feedback on it, take their most interesting works and use that to try to give me feedback.’”
He means this as a defense. The names aren’t special. The models already know these people. Anyone could get the same result by prompting a chatbot.
And he’s right. That’s what makes it such an accidental confession. The models do already have all these people inside them. They do already use their work without permission or compensation. Every chatbot will happily generate advice “in the style of” any named writer you ask for. Grammarly just made the mistake of being explicit about it. As Casey Newton wrote in Platformer: “Grammarly just had the bad manners to put my name on it. The bigger problem, though, is the one that’s still invisible: all the ways my work, and the work of every other writer, is being used, right now, by systems that are smart enough not to tell us about it.”
Mehrotra looked at that invisible system, made it visible, put a price tag on it, and then couldn’t seem to understand why anyone was upset.
Write for exposure, 2026 edition
Here’s where the interview gets really interesting. Because Mehrotra doesn’t just fail to see the problem. He thinks he has the solution. And the solution is, somehow, just as bad.
Midway through the conversation, Patel asks Mehrotra what the economics should look like for experts on Superhuman’s platform. Mehrotra explains that the company is building an agent store with a 70/30 revenue split (70% to the creator, 30% to Superhuman). Experts can build their own AI agents, put them on the platform, and get paid when subscribers use them. He compares it to YouTube. He invokes the “1,000 true fans” theory: get 1,000 people to pay you $100 a year, and you’ve got a $100,000 business.
Patel asks the obvious question: “If you already had that system, why build another system that used my name for free?”
Mehrotra: “We didn’t have the system at the time.”
So, to be clear about the sequence here: Superhuman took hundreds of people’s names, used them to sell a subscription product, and is now offering those same people the opportunity to do additional work, on Superhuman’s platform, to earn a share of future revenue. The people whose names were used without consent are being pitched a business plan.
Patel puts it plainly: “You understand that you’re saying I have to do that because all of the work I’ve produced in my career to date has been taken without compensation by AI companies.”
“I didn’t make that statement,” says Mehrotra.
Patel: “You’re saying I need to invent some new business model as an expert and upload an agent of myself to your tool and then advertise it... because my actual body of work has been reduced to zero value. That’s a pretty hard sell.”
If you’re a writer or journalist or creator of any kind and you’ve been around long enough, this pitch has a familiar ring to it. In the early 2010s, outlets like The Huffington Post built enormous businesses on the backs of unpaid contributors. Early in my writing career, I did my share of this kind of writing. The argument was always the same: we can’t pay you, but think of the exposure. Think of the platform. Think of the audience you’ll reach. Writers eventually recognized this for what it was (exploitation with good PR), and the practice became broadly understood as indefensible.
Mehrotra is offering a version of the same deal, except he’s made it worse in two ways. First, the old “write for exposure” pitch at least required you to say yes. You knew what you were getting into. Grammarly’s Expert Review conscripted people. They found out they’d been working for Grammarly when they read about it in the news. Second, the proposed fix asks the people who got ripped off to do more work. Build your agent. Train it to sound like you. Maintain it. Market it to Superhuman’s users. And in return, you get 70% of whatever comes in from the subscribers you attract to a platform you never asked to be on.
There’s an exchange near the end of the interview that captures this perfectly. Patel reads a tagline from Superhuman’s suite at South by Southwest: “AI can’t replace human creativity, empathy, or emotion... taste and judgment are more valuable than ever.” He asks Mehrotra: “Valuable on what metric? Is it dollars?”
Mehrotra talks about how Superhuman’s users are professionals, salespeople, support workers, and how the company helps them become “a better version of you.” He never answers the question about dollars.
Because that’s the trick. The AI industry loves to tell creators that their taste and judgment are “more valuable than ever.” It just means valuable in a way that can’t be deposited. Your expertise has never been worth more, and you’ve never been paid less for it. The value flows up. The work flows down. And if you want to get a cut, you’d better start building.
For those who would like to financially support The Present Age on a non-Substack platform, please be aware that I also publish these pieces to my Patreon.
The best deal available
Mehrotra is unusually candid about all of this. But he’s not unusual. He’s just the one who sat for the interview.
Last October, I interviewed The Atlantic’s CEO Nicholas Thompson for Depth Perception, the Q&A series I co-write for Long Lead. At one point I asked him about AI’s impact on journalism, and he described the AI companies with a clarity you rarely hear from someone at his level:
“It’s being run by companies that took our data in the middle of the night, violating our terms of service, lied about it, hid their activities, covered up their tracks, and built competitive business models without any compensation. So I have mixed feelings.”
The Atlantic has a deal with OpenAI. I asked Thompson about it. His explanation:
“With most of the AI companies, they took all our content, they trained their models on them, and gave us nothing. With OpenAI, they took our content, trained their models on us, and paid us money, and allowed us to have a seat at the table as they design their search product. So we prefer the OpenAI model to the other models.”
The CEO of one of the most respected magazines in American journalism is describing the best available option, and the best available option is: every AI company stole from us, but this one paid us afterward. That’s the good outcome. That’s what winning looks like when the table is set this way.
Every option Thompson describes starts with the same first step: they took our content. The only variable is what happened next. Did they pay? Did they offer a seat at the table? Or did they just take it and walk away? “They didn’t take it” is not on the menu. It was practically a hostage negotiation.
This is the same dynamic Mehrotra is proposing, just viewed from the other side. His 70/30 agent platform is the Grammarly version of what OpenAI offered The Atlantic: we already took your stuff, and now here’s a way to get something back. The difference is that Mehrotra is making this pitch while the lawsuit is still active, which makes the audacity a little harder to miss.
And the logic is the same everywhere you look. OpenAI’s official position on using copyrighted material to train its models is that “training is fair use, but we provide an opt-out because it’s the right thing to do.” If it’s fair use, the opt-out is a courtesy. If the opt-out is “the right thing to do,” then maybe it’s not actually fair use. The sentence contradicts itself, and nobody at OpenAI seems to have noticed. It’s the same incoherence as Mehrotra insisting that generating fake advice under someone’s name is “attribution.” The stories these companies tell about what they’re doing don’t hold together under the slightest pressure. They don’t need to. They just need to hold together long enough.
Sloppelgängers
The advice was terrible.
I mentioned earlier that the AI-generated edit attributed to Patel was generic slop. But that was one of the milder examples. In her New York Times op-ed, Angwin describes what Grammarly’s version of her actually recommended. One suggestion: replace the factual opening sentence of an investigative article with an invented anecdote about a fictional person named Laura, a patient whose medical privacy had been violated. The tool generated the fake anecdote in full and offered a button to paste it straight into the article. Uh oh.
Angwin notes why this is a big problem: “Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist.”
And she’s right. If someone had actually taken that advice, under Angwin’s name, and published a fabricated anecdote in what was supposed to be an investigative news story, the reputational damage could land on her. The person whose name was on the suggestion. The person who had nothing to do with it.
Author Benjamin Dreyer found that Expert Review would generate writing tips attributed to Stephen King even when you fed it lorem ipsum, meaningless placeholder text. The tool didn’t care what you wrote. It didn’t understand what you wrote. It just grabbed names and generated vaguely editorial-sounding sentences and presented the whole package as if a real person was helping you.
Angwin, borrowing a term coined by writer Ingrid Burrington on Bluesky, called these AI imitations “sloppelgängers.” It’s the right word. They weren’t good enough to pass for the real thing. They were bad enough to cause real harm while trading on real names.
Maureen Ryan, the veteran TV critic and author of the best-selling Burn It Down, was one of the people included in Expert Review. She didn’t find out about it until the news broke. In an open letter she published on March 10, she wrote: “You just take that? You just take my identity? And you do that in an environment where making a living from writing is an ever more precarious proposition? And then you make me take time out of my day to tell you that’s wrong?”
Before Superhuman killed the feature, its response to the backlash was to set up an email address where experts could request to be removed. Opt out. Unsubscribe from the service you never subscribed to. The burden was on the people who’d been used, not the company that used them. Ryan, who is busy writing a book, was being asked to take time away from her actual work to undo something that never should have happened in the first place.
And the thing is, as Angwin argued in her Times op-ed, the law that covers what Grammarly did isn’t new. New York’s right of publicity law is a century old. At least 25 states have versions of it. You can’t use someone’s name to sell a product without their consent. The technology is new. The principle is settled.
This is what the blind spot produces. Mehrotra and his company couldn’t see the consent problem, so they couldn’t see the quality problem either. If you don’t think you need someone’s permission to use their name, you’re certainly not going to check whether the advice you’re generating under that name is any good. Why would you? You don’t think you owe them anything. The feature was bad for the same reason it existed at all: because the people who built it genuinely did not consider the humans attached to those names to be stakeholders in the process.
Scared for their jobs
Near the end of the Decoder interview, Patel brings up an NBC News poll on public perceptions of AI. The numbers are grim. AI has a net negative favorability rating of -20. It polls behind ICE.
Patel suggests this might have something to do with how extractive the technology feels. Mehrotra disagrees. “People are scared for their jobs,” he says. He thinks the public’s unease is about employment, not about how AI companies treat creators.
He’s probably right that people are scared for their jobs. And he can’t see that his own interview is a case study in why. An hour of explaining, in detail, how his company used real people’s work and names to build a product, killed it when it became inconvenient, and is now pitching those same people on the opportunity to earn their way back in. All while insisting that nothing wrong happened.
At the very end, Mehrotra tells Patel he’d love to work with him on building an agent. This is how the conversation closes. An hour of “how much should you pay me” and “we believe the claims are without merit” and “I’ve literally never said anything like that,” and Mehrotra’s parting thought is a sales pitch.
I’d love to work with you. We’d love to have you on our platform. The door is open.
They took your name, generated garbage under it, sold it for $12 a month, and now they’d like to know if you’re interested in a partnership. Welcome to the future, I guess.