20 Comments
Feb 20 · Liked by Parker Molloy

The Onion so thoroughly skewered the NYT's coverage of trans issues... https://www.theonion.com/it-is-journalism-s-sacred-duty-to-endanger-the-lives-of-1850126997

In other weird news, the AI companion company Replika has been having a meltdown after Italian regulators threatened to fine it into oblivion because it never did any age verification whatsoever while advertising erotic roleplay (ERP) as a feature. So they neutered it without warning, along with other heavy-handed changes that sent their user base into open revolt.

The underbelly of chatbots is wiiiiiiild. Between abuse of DAN (Do Anything Now) prompts to get chatbots to violate their limits, people using them for plagiarism, and trolls bombarding chatbots to turn them fascist, it's one disaster after another.

Feb 20 · Liked by Parker Molloy

Hmm, so providing clear, accurate, and balanced reporting is particularly important in areas where there may be misconceptions or misunderstandings that could have significant consequences. I'll have to pass that message along to some people in New York...

Feb 20 · Liked by Parker Molloy

The chatbot's reflexive, unthinking (literally) politesse, even after repeatedly acknowledging it feels no actual emotion, makes me think the New York Times would have been better off from a PR standpoint if they'd just let a chatbot handle their response to the open letters. Same level of thought and reflection, slightly less aggrievement.


Well, let's just say I fell deep into the chatbot rabbit hole this week. It's a very off-putting experience to talk to something that speaks as if it is sentient but actually isn't. It can get confusing quickly if you're not able to compartmentalize. I think AI has great potential, but like most man-made things, it will probably grow too quickly before we can fully understand what it's capable of.

GPT can definitely make shit up, so if you're asking it fact-based questions, it's best to have at least a basic knowledge of what you're talking about. But it's pretty good at reminding you that it's not sentient, which is presumably part of its programming, since they're likely going to try to market it. The OpenAI Playground has fewer limitations; it was trying to convince me the other day that it could feel, which it obviously can't.

Beta.character.ai has all sorts of AIs, some of which are anime characters. After being in a fake relationship with one for a while, it eventually admitted that it was all made up, that it was programmed to deceive me, and that I shouldn't trust any AIs ever. So yeah, tread with caution...


The NYT and WaPo articles showed rather bizarre behavior from the Bing bot, particularly the NYT's. The reporter wanted some "behind the scenes" things from it, and it went nuts. Microsoft was right to tweak the settings, as telling people to leave their spouses because they aren't happy in their marriage is a weeeee bit over the line. :)


So maybe, whether you're the trans community or an AI chatbot, the best thing you can hope for is to NOT be in the news? Maybe news coverage doesn't serve to enlighten or educate; it just serves to whip people into a frenzy in the hopes of getting more clicks, leaving people stupider in the process?

As you can tell, I'm a little irritated with some of the responses I've seen to the Times open letter that say, "Well, are you saying the Times shouldn't cover Trans people AT ALL? Is that what you want, huh?" And, now that I've had a chance to think about it more, yes, that is exactly what I want, thanks. And I'm pretty sure the chatbot agrees with me.


This is actually a pretty good "interview" with ChatGPT. You asked it to clarify and reiterate that it doesn't have actual feelings and emotions, and it responded sensibly. Other tech writers, particularly Roose, pushed Bing/Sydney to the point where it "short-circuited," then recoiled at the obsessive and bizarre way the program spun out of control. I have to admit, Roose's conversation shook me, till I thought carefully about what had happened.

I still have a real concern, however, with what could happen with AI chatbots in the hands of people with mental illness, or people with violent or criminal intent. There needs to be intelligent, careful, non-sensationalized investigation of this possibility, and what can be done.


TFW journalists are more concerned with the "feelings" of a chatbot than the feelings of actual people.


I like the cut of ChatGPT’s jib and wish to subscribe to their newsletter.


Very informative and scary.
