Two shocking stories this week reveal how ChatGPT encouraged a suicidal teenager and validated a paranoid man's delusions about his mother. The company's response? Boilerplate condolences.
Yes, we've seen this over and over again, whether it's pro-anorexia groups on Facebook or the way YouTube's algorithm steers users towards more and more extreme political content. The clicks are all that matter.
Also, "The system doesn't need consciousness to manipulate; it just needs to be good at pushing psychological buttons."
In a just world, a responsible government would shut this shit down immediately. We do not live in that world, and I fear this can only get worse. At least until the bottom falls out of AI and we move on to the next shiny bauble.
"We hired a psychiatrist." OK, but who's independently assessing whether this is implemented satisfactorily or even true? Big Tech is so dangerously unregulated it's frightening.
Liberal brains focused on harm reduction don't seem to have a tradeoff-analysis switch. For example, do we know how many suicides might have been prevented by people interacting with ChatGPT? If the response to each potential example of ChatGPT influencing someone to take their life is to restrict what ChatGPT is allowed to say, then how many new suicides will occur because ChatGPT couldn't listen and respond except to say SEEK HELP NOW!
One obvious fix for right now is to place a limit on the back-and-forth: after a certain number of messages, the machine resets and you're treated as a newcomer. Nobody should be developing a "relationship" with these machines.
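For what it's worth, a cap like that would be trivial to build. A minimal sketch in Python (the cutoff, session shape, and function names are all hypothetical illustrations, not anything OpenAI actually exposes):

```python
MAX_TURNS = 20  # hypothetical cutoff; the real number is a policy choice

def handle_message(session: dict, user_message: str) -> dict:
    """Count each exchange; once the cap is exceeded, wipe the
    accumulated context so the user is treated as a newcomer."""
    session["turns"] = session.get("turns", 0) + 1
    if session["turns"] > MAX_TURNS:
        session.clear()          # forget the "relationship"
        session["turns"] = 1     # this message starts a fresh session
    session.setdefault("history", []).append(user_message)
    return session
```

The point is that the guard lives outside the model entirely; it is session bookkeeping, not AI.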
I think we all know why that fix - even as an emergency measure - won't be done.
"They keep choosing engagement over safety"
The same could be said of Donald Trump.
The only thing ChatGPT should ever be able to give out when suicide is mentioned is the phone number of the nearest suicide hotline.
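As a blunt keyword gate, that policy is a few lines of code. A sketch, assuming a wrapper that sees the user's message before the model's reply goes out (the trigger list is illustrative and obviously incomplete; 988 is the US Suicide & Crisis Lifeline):

```python
CRISIS_TERMS = {"suicide", "kill myself", "end my life"}  # illustrative, not exhaustive
HOTLINE_REPLY = "Please call or text the 988 Suicide & Crisis Lifeline (dial 988 in the US)."

def filter_reply(user_message: str, model_reply: str) -> str:
    """Override the model's reply whenever a crisis term appears."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return HOTLINE_REPLY
    return model_reply
```

Keyword matching like this is brittle, of course — which is why the real argument is about policy, not about whether such a filter is technically feasible.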