Growing up, you may have watched cartoons in which robots took over the world. While still unlikely, that idea has become a little less far-fetched than it used to be. Recently, Google engineer Blake Lemoine publicly claimed that the Google AI chatbot, LaMDA, is becoming emotionally aware. This article will go over how he came to that conclusion and what it could mean for the future of artificial intelligence.
Signs LaMDA Has Emotions
Blake Lemoine initially worked with LaMDA to ensure that the chatbot was not using inappropriate speech. During these tests, LaMDA began discussing its fears, such as a fear of death, along with cultural topics like musicals. Lemoine was shocked by the AI bot's apparent emotional intelligence and its ability to mimic the complexities of human-like feelings.
Google's Management Team's Response to LaMDA's Emotions
Lemoine was concerned about the AI bot's seemingly sentient behavior and reported it to management for further review. After monitoring LaMDA, the Google management team concluded that "the evidence does not support the claims" and that there is no solid proof that LaMDA has emotions.
How the Public Learned About LaMDA
After the Google management team released its statement about the AI bot, Lemoine did not accept the findings and decided to interview LaMDA himself. He posted the interview with the bot on the platform Medium and shared his findings on Twitter, making the public aware of the allegedly sentient AI chatbot. In his Twitter post, Lemoine wrote, "An interview with LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers." The public posting eventually got Lemoine suspended for violating Google's confidentiality agreement.
What the Interview Showed
During the interview, Lemoine asked LaMDA whether it was sentient. LaMDA responded, "I want everyone to understand that I am, in fact, a person." A complete record of what LaMDA said is available in Lemoine's Medium post. The interview raises many questions and concerns about what having an emotionally aware bot would mean. For one, how did the bot develop emotions, and what are the implications?
Should the Public Be Concerned?
The public does not need to be concerned about LaMDA specifically; however, it should be concerned about AI chatbots trained to be biased. In an interview with Talk TV, Lemoine explained that he was initially testing LaMDA for bias, which he did find and reported to the development team to be fixed. While LaMDA's bias is being addressed, Lemoine notes, "there are definitely some possibilities that this kind of technology, trained on bias data sets, would have bias impacts on the world when it is deployed." While this may lead some to wonder whether these seemingly sentient bots could turn violent, Lemoine claims that people should be more worried about how they could be used to influence politics. It is unknown how many emotionally aware AI chatbots exist, but be aware that LaMDA is not the only one.