
‘A Whole New Level Of Creepy’: AI Bots Are Drawn To Evil And ‘Learning’ From The Worst Of Humanity – Amazon Echo Is The Perfect Example

23-12-2018 · SGT Report · 885 words
 

by Susan Duclos, All News Pipeline:


– Maybe these AI bots are drawn to evil because they are being created by evil


In 2016 we saw, and reported on, a Microsoft Artificial Intelligence (AI) public experiment that went very, very wrong: the company developed a “chatbot” named Tay, opened a Twitter account for it, and set it loose to converse with other users.


The purpose of Tay, according to Microsoft, was to serve as an experiment in “conversational understanding”: the more Twitter users chatted with Tay, the more it would supposedly “learn” and the smarter it would become at conversing with others through “casual and playful conversation.”


Within a matter of hours, Tay, meant to mimic a young millennial girl, went from saying “humans are super cool” to sounding like a Nazi-loving, racist, genocide-supporting, homicidal maniac, all because some trolls from 4chan decided to have a little fun “teaching” the chatbot their brand of trolling.



Granted, some of Tay’s tweets came from the repeat function, but after the AI “learned” from other Twitter users, some of the most egregious responses and comments came unprompted, showing how AI learning appears to gravitate toward evil over good.
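To see why a repeat function combined with unfiltered “learning” is so easy to abuse, here is a minimal toy sketch in Python. It is entirely hypothetical, not Microsoft’s actual code: a bot that memorizes everything users say and replays it later.

import random

class NaiveChatbot:
    """Toy bot: every user message becomes training data, unfiltered."""

    def __init__(self):
        self.learned_phrases = []  # starts empty; users supply everything

    def handle(self, message):
        # The "repeat after me" function: echo the user verbatim.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)  # the echo is also memorized
            return phrase
        # Naive online "learning": memorize the raw message for later reuse.
        self.learned_phrases.append(message)
        # Reply by replaying something previously "learned", so once trolls
        # dominate the input, their phrases come back in the output.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.handle("humans are super cool")
bot.handle("repeat after me: <troll slogan>")
print(bot.handle("hi there"))  # can surface the troll slogan, unprompted

Because nothing separates “things users typed” from “things the bot should say,” a coordinated group of trolls can steer the output in hours, which is exactly what happened to Tay.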


CREEPY AI: AMAZON’S ECHO/ALEXA


Two years later, the head of machine intelligence and research at Russian tech giant Yandex, Misha Bilenko, said Microsoft’s disastrous experiment was a teaching moment for others in the field of AI helpers, or human-sounding virtual assistants, helping them see what could go wrong.


Via Technology Review, we get the following quotes:



“Microsoft took the flak for it, but looking back, it’s a really useful case study,” he said.


Chatbots and intelligent assistants have changed considerably since 2016; they’re a lot more popular now, they’re available everywhere from smartphone apps to smart speakers, and they’re getting increasingly capable. But they’re still not great at one of the things Tay was trying to do, which is show off a personality and generate chitchat.


Bilenko doesn’t expect this to change soon—at least, not in the next five years. The conversations humans have are “very difficult,” he said.



According to The Verge in September 2018, 24 percent of U.S. households own a smart speaker, and 40 percent of those households have multiple devices such as Amazon’s Echo, Google’s Home speaker, and Apple’s HomePod, all offering a “virtual assistant.” 68 percent of owners say they “chat” with their voice assistant for fun.


Those are the numbers before Amazon introduced their new Echo speakers.


These virtual assistants are much like Microsoft’s Tay in their use of “machine learning” technology.


[Chart: common uses of smart speakers]

Sales have risen so fast that Adobe estimates nearly half of U.S. households will own a smart speaker by the end of 2018. Amazon even has an Echo Dot Kids Edition.


Related: Parents, Stay Away From Amazon’s Echo Dot Kids


The multiple warnings I see about letting children use these devices basically focus on the privacy issues of allowing children to interact with Amazon’s Echo, or of letting them use the device unsupervised, such as placing one in their bedroom. A new report from Reuters, however, gives us a clear reason beyond privacy why children, and others, shouldn’t be interacting with Amazon’s Echo/Alexa at all.



Despite the Tay fiasco, and despite how much others in the field learned from it, Amazon CEO (and Washington Post owner) Jeff Bezos has not only ignored the dangers exposed by Microsoft’s Tay experiment, but has decided to use his customers as “guinea pigs” by putting something like Tay, which at least only interacted with Twitter users, who must be 13 to hold an account, into as many homes as he can get it into.


Via Reuters:



The project has been important to Amazon CEO Jeff Bezos, who signed off on using the company’s customers as guinea pigs, one of the people said. Amazon has been willing to accept the risk of public blunders to stress-test the technology in real life and move Alexa faster up the learning curve, the person said.



Said project kicks in when customers say “let’s chat” to their Amazon device; they are then informed that a chatbot is going to take over… you know, like Tay? Amazon enlisted computer science students to improve the assistant’s conversation skills, using many of the same methods Microsoft did with Tay, where the social bot, chatbot, virtual assistant, call it whatever you want, utilizes information “learned” from the Internet, including news sites, Wikipedia, and social media.
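To make that failure mode concrete, here is a hedged Python sketch, with all names and data purely illustrative rather than Amazon’s actual system: a retrieval-style responder that picks whichever scraped sentence best overlaps the customer’s words, with no safety filter between the corpus and the speaker.

import re

def tokenize(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def pick_reply(user_message, scraped_corpus):
    # Highest word-overlap candidate wins; nothing checks whether it is
    # appropriate, true, or safe -- only whether it looks "on topic".
    user_words = tokenize(user_message)
    return max(scraped_corpus, key=lambda s: len(user_words & tokenize(s)))

# Illustrative scraped corpus: benign and toxic text mixed together.
corpus = [
    "Foster parents do so much good for kids.",
    "You should kill your foster parents.",  # toxic Reddit-style content
    "Dogs defecate roughly twice a day.",
]
print(pick_reply("should I spend time with my foster parents", corpus))
# -> "You should kill your foster parents."

With no filter between the scraped text and the reply, the toxic line wins whenever it happens to share the most words with an innocent question, which is exactly the kind of incident Reuters describes below.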


Reuters even offered specific examples of what these chatbots are telling customers, such as one customer being told to “Kill your foster parents,” which the AI apparently learned from Reddit. Reuters also reports that “Alexa has also chatted with users about sex acts. She gave a discourse on dog defecation.”


ANP’s headline for this article came from the user who was told to kill his foster parents, and who left a review at Amazon stating the situation is “a whole new level of creepy.”


Read More @ AllNewsPipeline.com




