The field of artificial intelligence is growing at a rapid pace, with companies and researchers working on projects that would once have been considered science fiction. However, it’s not only the technology’s incredible feats that alarm people; it’s also these systems’ ability to seem more sentient than they really are.
It’s easy to understand where fears about machine sentience come from. Like other new technologies, AI is often portrayed in popular media as something with an outsize effect on the world around us. Movies like “I, Robot” and “Terminator 2” show machines becoming self-aware and taking control of the planet, while dystopian novels like “1984” depict technology being used to surveil and dominate us entirely.
Society’s inclination to humanize AI
It is clear that AI has gone beyond our imagination and is doing things that were previously the sole domain of humans: playing chess well enough to beat the best in the world, or understanding spoken language and replying in kind. AI assistants even seem sentient as they take commands and answer questions. Given the growing prevalence of AI in our lives, it isn’t a surprise that one of the first anxieties people mention has to do with machine sentience. “Sentience,” however, refers to awareness or consciousness, and we still have much to learn there; even philosophers cannot agree on how to explain human consciousness.
ChatGPT, Bing’s new chatbot nicknamed Sydney, and similar large language models can construct enthralling, humanlike answers to a virtually limitless range of questions, from the best restaurants in town to travel itineraries for your next vacation, or even assistance for software engineers writing code. Chatbots like ChatGPT raise critical new questions about how artificial intelligence will impact our lives, and about how our psychological vulnerabilities will shape our interactions with emerging technology.
It is easy to picture users entrusting their emotional well-being to an AI chatbot and seeking its advice on significant life decisions. More individuals may begin to view bots as companions or even lovers, much as Theodore Twombly fell in love with Samantha, the AI virtual assistant in Spike Jonze’s movie “Her.”
Our psychological entanglement with technology
After all, humans are prone to anthropomorphize, or attribute human characteristics to nonhumans: we name our cars, boats, and major storms. In Japan, where robots are frequently used to care for the elderly, some older people develop attachments to the machines and occasionally regard them as their own children. Mind you, these robots are hard to mistake for people because they lack human features and speech patterns. Consider how much stronger the inclination to humanize will be with the emergence of technologies that look and sound like humans.
The tendency to regard machines as people and feel drawn to them, along with the development of machines with humanlike traits, suggests that there are serious risks of psychological entanglement with technology. The bizarre-sounding possibilities of falling in love with robots, experiencing an intense connection with them, or being politically controlled by them are rapidly becoming a reality.
Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group, put it this way: “We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior.” Similarly, the CEO of OpenAI, in a moment of breathtaking honesty, warned that “it’s a mistake to be relying on [it] for anything important right now … we have a lot of work to do on robustness and truthfulness.”
What are your thoughts on the rise of AI chatbots? Can these companies be trusted to do the right thing?
Read more here:
- https://theconversation.com/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it-200525
- https://www.reuters.com/technology/its-alive-how-belief-ai-sentience-is-becoming-problem-2022-06-30/
- https://blogs.scientificamerican.com/cross-check/david-chalmers-thinks-the-hard-problem-is-really-hard/