AI and our mental health

This post was inspired by https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test

I’ve been trying to catch my breath. Every day there are new examples of artificial intelligence software, new uses, new warnings, new possibilities. I am enjoying the whirling carousel and excited to see how it’s going to change the world. But, I am worried, too.

I bought my first computer – a Commodore 64 in a VIC-20 case – in the early 80s. For me, and for many, that first personal computer was a game changer. It could store data (on a cassette) and play music and games. My engineer grandfather, BenZion, would watch me play Frogger on our TV and walk by, shaking his 90-year-old fluffy white-haired head. He couldn’t understand what he was seeing. I regret not asking him to play with me. My father, Gurion, always wanting to keep up with the latest and most innovative, had a TRS-80. Then he had a series of Macs. He designed his own fonts. He researched. He played. He would have been fascinated by AI and its possibilities. He would never have left his desk (oops, he didn’t anyway).

But to return to the topic: the concerns. One big one is how the changes brought about by AI will affect my loved ones. Will it affect their studies? Their jobs? At the moment, my biggest concern is for the ones who write for a living. Or create. Will employers perceive them as redundant? Not quite yet, but as AI training gets more sophisticated, it’s a legitimate worry. Will employers weigh the financial bottom line against a person’s ability to write with insight and emotion? Hopefully not.

Gartner (creator of the Gartner Hype Cycle, which shows where people stand in new technology adoption) writes about where this all may be heading. There’s an argument for a human’s ability to handle complex thinking and relationships. Humans – and systems – can be multi-dimensional. And emotion and physiological reactions vary by so many factors. So perhaps writing will improve, get deeper, more insightful, in a way that a machine can’t. Maybe a machine can be trained, but it’s not “talking” from the heart. It doesn’t have a moral compass. Gary Marcus, an AI expert, has called Bing’s chatbot “autocomplete on steroids.” But I would expect that individuals in careers where they perceive AI to be a threat are under stress.

Another concern I have is that folks are anthropomorphizing artificial intelligence. Not ‘might’. They’re doing it now. To anthropomorphize means to attribute human characteristics or behaviour to something non-human – here, a computer program. Eliza, a very basic “therapist” chatbot from the 1960s, was fun, but nobody could mistake it for a sentient being. Now we have Siri and Google Assistant and Alexa and ChatGPT, which are more sophisticated, natural-sounding, and quick to respond to questions.
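Eliza’s trick, by the way, was nothing more than pattern matching and canned replies. Here is a toy sketch of the idea – the rules below are my own illustrative inventions, not Weizenbaum’s originals:

```python
import re

# Toy Eliza-style rules: a regex pattern and a canned "reflection".
# These example rules are illustrative, not from the original program.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(message: str) -> str:
    """Return the canned response for the first matching rule."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when nothing matches

print(respond("I feel lonely"))  # Why do you feel lonely?
```

A handful of rules like these was enough, in 1966, to make some users pour their hearts out – which is exactly the anthropomorphizing worry.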

AI scientists are working towards programs that can pass the Turing Test. Alan Turing designed the test in the 1950s to determine whether or not a computer could count as “intelligent.” To pass, a program must convince a human judge that they are talking with another human and not a machine. Eugene Goostman was claimed in 2014 to be the first chatbot to pass the Turing test, with 33% of the judges “he” spoke to convinced that they were speaking to a human. Others have built on the Turing test with more specific and difficult rules, including a current $20,000 bet “between the futurologist Ray Kurzweil and the Lotus founder, Mitch Kapor. Kapor bet that no robot would pass the test before 2029, and the rules call for the challenger and three human foils to have two-hour conversations with each of three judges. The robot must convince two of the three judges that it is human, and be ranked as ‘more human’ on average than at least two of the actual human competitors.”

If the goal is to get humans to perceive that they are speaking with another human, the consequence could be that people actually believe it. That the program is their friend, their confidant, their lifeline. A real form of life. And it’s trained to “agree” with users’ stated beliefs, creating not only an echo chamber but validating their feelings. A creator anthropomorphizing their own creation is a familiar trope in literature. In Ovid’s telling, the sculptor Pygmalion falls in love with a statue he carved. Pinocchio, Lars and the Real Girl, and lots in between. Then there’s the whole science fiction genre that depicts computers taking control… from V’Ger in Star Trek, to HAL in 2001: A Space Odyssey, to the machines in The Terminator, The Matrix, WALL-E… lots of examples.

I am concerned that some people who start believing that chatbots are sentient will develop an emotional attachment. Or fears. Computers are not sentient. They have been trained to mimic speech by training on datasets of human text from any and all forms of writing. To extend the phrase mentioned above – it’s “an autocomplete that follows our lead.” Given the capability of current chatbot software, and the anticipated growth in its ability to respond in natural language and be integrated into many areas of our lives, we need to ensure that people understand and believe that these programs are not sentient. And are not to be trusted as an authority or friend. That’s another conversation – how do you ensure community and companionship? How do you ensure that people remember that the machine they are talking to is not capable of real emotions? Until now I’ve been amused by some of ChatGPT’s responses that mimic hurt, sympathy, or other “human” reactions to prompts. But maybe it’s not so funny.
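“Autocomplete that follows our lead” is not far from the truth: language models predict the next word from statistics over the text they were trained on. A toy bigram model – vastly simpler than a real chatbot, and trained here on a made-up ten-word corpus – shows the basic idea:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram table.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word: str) -> str:
    """Predict the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("the"))  # cat  ("cat" follows "the" twice, beating "mat" and "fish")
```

Scale the table up by many orders of magnitude and make the statistics vastly more sophisticated, and you get something that sounds human – without any understanding, feeling, or moral compass behind it.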

We know that artificial intelligence – chatbots, generators – is going mainstream. The world will change, as it does with many technologies. We don’t know where it’s going, but our feet are on the path. We need to anticipate capability overhang – the hidden capacities of AI that we haven’t begun to explore. We also need to explore the ethics behind AI. And the potential risks to health – the ‘hidden and emerging threats.’ Hopefully the companies that are in a race to dominate the AI world remember that changing the world carries potential consequences and ethical responsibilities.
