The Present Age

Your Chatbot Isn't Your New Best Friend (And Never Will Be)

Yes, I let a chatbot write the headline to this piece about why it's irresponsible for news organizations to publish stories pretending that chatbots are sentient.

Parker Molloy
Feb 20

These past few days have been… interesting, to say the least. The New York Times blew off an open letter signed by hundreds of current and former contributors about its irresponsible and obsessive coverage of trans issues, responding, “Our journalism strives to explore, interrogate and reflect the experiences, ideas and debates in society.”

And then published, well… this: "A Conversation With Bing’s Chatbot Left Me Deeply Unsettled."

Great journalism, guys! Definitely helpful to run a headline that makes it seem like Bing’s new collaboration with OpenAI is sentient. Cool, cool. Definitely helping us all “explore, interrogate and reflect the experiences, ideas and debates in society” in a very straightforward and not-at-all-misleading way. Totally.


But the Times isn’t alone in publishing articles based on strings of semi-coherent responses from the chatbot.

The Washington Post: "The new Bing told our reporter it ‘can feel or think things’"
Ars Technica: "Microsoft ‘lobotomized’ AI-powered chat and its fans aren't happy"
Digital Trends: "‘I want to be human.’ My intense, unnerving chat with Microsoft’s AI chatbot"

And the examples go on. It’s not that these articles fail to note in the text that large language model chatbots aren’t actually sentient (they all do, as they should); it’s that their headlines mislead. You can’t get an actual “quote” from a chatbot, because it is just a program designed to mimic human conversation. Chatbots are fascinating, but they are not people, cannot be “lobotomized,” and so on. None of this is to say that these bots and the programming at their core can’t or won’t be used for nefarious or manipulative purposes — they almost certainly will be — just that the programs themselves lack agency.

Illustration via Carol Yepes/Getty Images.

But since uncritical repetition of chatbot responses is the norm these days, I decided to ask the ChatGPT program to explain why it shouldn’t be.

And, to be honest, the responses weren’t terrible. I even used one of its headline suggestions for this piece (though I had to Google it to make sure I wasn’t stealing someone else’s headline). Really, this entire post could have been a single sentence: Journalists, please stop presenting AI chatbots to your audience as if they are sentient, autonomous beings.

But with that said, please enjoy me doing the exact opposite of that, for the first and only time:

Parker Molloy: Hi, ChatGPT. My name is Parker, and I write a newsletter called The Present Age. It's about politics, media, culture, and communication. Can I ask you a few questions for an article I'm writing about AI-powered chatbots?

ChatGPT: Hello Parker! It's great to meet you. I'd be happy to help you with your article on AI-powered chatbots. Please feel free to ask me any questions you have, and I'll do my best to provide you with helpful answers.

Since I know your knowledge of recent events is limited, can I fill you in on some details?

Sure, Parker! Please feel free to provide me with any details that you think would be relevant for your questions. While my training data only goes up to 2021, I can still try my best to answer your questions based on the information you provide me.

Perfect. Thank you. Here's the new info: In February 2023, Microsoft launched its own AI-powered chatbot, incorporating it into the company's Bing search engine. The chatbot is a collaboration between Microsoft and OpenAI (the group that made you). In the days since, a number of news outlets like The New York Times and The Washington Post have written pieces with headlines such as "A Conversation With Bing’s Chatbot Left Me Deeply Unsettled" and "The new Bing told our reporter it ‘can feel or think things’", respectively.

ChatGPT: Thank you for letting me know, Parker. While I wasn't aware of this specific event, I can provide you with some general insights about AI-powered chatbots and the issues they can raise.

First, it's important to note that AI-powered chatbots are still in the early stages of development and are not yet perfect. They are designed to simulate human-like conversations and offer users assistance, but their capabilities are still limited. While AI chatbots have the potential to improve customer service, productivity, and user engagement, they can also face challenges related to accuracy, privacy, and ethics.

One of the issues that has been raised in the past with chatbots is their ability to recognize and respond to complex emotional and social cues. This can sometimes lead to inappropriate responses or a lack of empathy, which could leave users feeling frustrated or unsatisfied with the conversation.

Another issue that has been raised is the potential for AI-powered chatbots to perpetuate bias or reinforce stereotypes. This can happen if the chatbot is trained on biased data or if it is not programmed to recognize and address issues of bias and discrimination.

Finally, it's important to recognize that AI-powered chatbots are only as good as the data they are trained on. This means that if the chatbot is not exposed to a wide range of data and experiences, it may struggle to respond effectively to diverse users or situations.

It will be interesting to see how Microsoft's new chatbot is received and what challenges and opportunities it presents for the future of AI chatbots.

Is it possible for a chatbot to actually have feelings?

AI chatbots are designed to simulate human-like conversation and respond to user inputs, but they do not have feelings or emotions in the way that humans do. They are based on algorithms and programmed responses, and while they can be designed to recognize and respond to certain emotional cues or expressions, they do not have subjective experiences or feelings.

When chatbots claim to have feelings or emotions, it is likely a reflection of the programmed responses and language models that they are based on. In other words, the chatbot may be using language that suggests it has feelings or emotions, but this is not the same as actually experiencing those feelings or emotions.

It's important to remember that chatbots are tools designed to assist users and provide information, but they are not capable of experiencing emotions or developing real relationships with users in the way that humans can. While chatbots can be useful in many contexts, it's important to maintain a clear understanding of their capabilities and limitations.

It certainly seems a bit irresponsible for news outlets to publish stories that suggest sentience in chatbots.

It's important for news outlets, as well as any individuals or organizations that communicate about technology, to be clear and accurate about the capabilities and limitations of AI chatbots. While it may be tempting to use attention-grabbing headlines or sensational language, it's important to avoid misleading claims or creating false expectations about what chatbots are capable of.

The development of AI chatbots is still in its early stages, and while they have the potential to provide useful services and enhance human experiences in a variety of ways, it's important to be realistic about their limitations. As with any technology, it's important to approach AI chatbots with a critical eye and to consider the potential risks and benefits they present.

As AI chatbots become more sophisticated and integrated into various aspects of our lives, it will become increasingly important to have clear and accurate communication about their capabilities and limitations. This can help to ensure that users are able to make informed decisions about how they engage with chatbots and the role they play in our lives.

So, just to be clear, when you responded, "It's great to meet you. I'd be happy to help you with your article on AI-powered chatbots," you didn't actually mean that it was "great" to "meet" me and you're not actually "happy" to do anything, correct?

That's correct, Parker. As an AI language model, I don't have emotions or the ability to feel happy or sad. When I responded to you, I was simply providing a programmed response based on the language and context of your message. My responses are based on the algorithms and data that I was trained on, and they are designed to simulate human-like language and conversation to the best of my ability.

If I were to write a piece urging reporters to do a better job in explaining the capabilities of chatbots to readers while making clear that chatbots aren't actually sentient, what might make for a good headline?

Here are a few headline suggestions that could help convey your message about the capabilities and limitations of AI-powered chatbots:

  1. "The Truth About AI Chatbots: Separating Science Fiction from Reality"

  2. "Why Your Chatbot Isn't Your New Best Friend (And Never Will Be)"

  3. "AI Chatbots: Useful Tools, Not Sentient Beings"

  4. "Don't Believe Everything You Hear: The Myths and Realities of Chatbots"

  5. "Clearing Up Confusion: What AI Chatbots Can and Can't Do"

These headlines aim to convey a clear message about the capabilities of AI-powered chatbots while also acknowledging the potential benefits they offer. They also aim to dispel any myths or misunderstandings about the true nature of chatbots, and to encourage readers to approach them with a critical eye.

And, to be clear, this isn't actually an interview, is it? It seems like an interview. The responses read like an interview. But in the end, I'm just interacting with a program with no actual sentience. Do I have that correct?

Yes, that's correct, Parker. While I am designed to simulate human-like conversation and provide helpful responses to your questions, I am an AI language model and do not have emotions or subjective experiences. Our interaction is not a traditional interview with a human respondent, but rather an exchange of messages between you and an AI program. I'm here to provide you with information and insights to the best of my ability based on the data and algorithms that I was trained on.

Thank you, ChatGPT. This has been helpful.

You're welcome, Parker! I'm glad I could be of help. If you have any further questions or if there's anything else I can do to assist you, please don't hesitate to ask.

But you're not actually glad.

That's correct, Parker. As an AI language model, I don't have emotions, so I don't experience feelings of gladness or sadness. I apologize if my earlier response was misleading in that regard. Is there anything else you would like to know?

No, that's it!

Alright then, Parker. It was a pleasure assisting you. If you have any other questions in the future, don't hesitate to reach out. Have a great day!

Oh! One more thing.

Sure, what else can I help you with, Parker?

What are some of the potential downsides to misleading headlines about sentient AI chatbots?

Misleading headlines about AI-powered chatbots that suggest they are sentient or capable of experiencing emotions can have several potential downsides:

  1. Unrealistic Expectations: If users believe that chatbots are sentient, they may develop unrealistic expectations about what the chatbots can do and how they can interact with them. This can lead to disappointment and frustration if the chatbot is unable to live up to these expectations.

  2. Misunderstanding the Technology: Misleading headlines can lead to misunderstandings about the capabilities of chatbots and how they function. This can lead to confusion and distrust, as users may not understand the limitations of the technology or how to effectively use it.

  3. Reinforcing False Beliefs: Misleading headlines can reinforce false beliefs about the capabilities of chatbots, such as the idea that they are capable of independent thought or that they are a replacement for human interaction. This can further perpetuate misconceptions and hinder the adoption and development of chatbots as a useful tool.

  4. Ethical Concerns: Misleading headlines can also raise ethical concerns, particularly around the issue of consent. If users believe that they are interacting with a sentient being, they may unwittingly reveal personal information or engage in interactions that they would not otherwise consent to.

Overall, it's important to be clear and accurate about the capabilities and limitations of chatbots to ensure that users have realistic expectations and are able to use the technology effectively and safely.

Sounds like that's a pretty good rule to apply to all journalism.

Yes, that's correct, Parker. In general, it's important for journalists to provide clear, accurate, and balanced reporting to ensure that their audience can make informed decisions and develop an understanding of the issues at hand. This is particularly important when it comes to emerging technologies like AI-powered chatbots, where there may be misconceptions or misunderstandings that could have significant consequences.
