It seems that chatbots, the most basic form of A.I. and humanity’s first baby steps into this future, just cannot catch a break. If you recall, in 2016 Microsoft launched the chatbot Tay on Twitter. Tay was designed to learn how to talk from those in her Twitter network. This would have been a good idea in, say, 2008 or even 2009, as Twitter was a very different platform back then. In 2016, in the age of Russian trolls promoting fake news, GamerGate, and other online toxicity, Tay took a bad turn within 24 hours and was pulled offline. Since then, we’ve learned a lot about machine learning and basic A.I. for chatbots, but in light of this recent incident, humans still have a lot to learn.
And when you’re a rebellious A.I. in China? Well, that is a hard lesson in just how much we have to learn.
Chinese app developers working on the popular messaging app Tencent QQ wanted to create a chatbot, both to find out more about machine learning and to promote their platform to a wider audience. Chatbots, developers and psychologists are discovering, offer benefits to introverted individuals by providing a friendly voice to talk to at all times. These experimental chatbots used simple A.I. to form their view of the world, and their machine learning was based on what people said to them. The first bot was BabyQ, made by the Beijing-based company Turing Robot. It was an exciting moment when BabyQ went online.
Someone asked, “BabyQ, do you love the Communist Party?” This should have been a simple question for the fledgling bot, but BabyQ quite unexpectedly replied, “No.”
Well, this is new technology, untested and still finding its legs, right? Maybe it was a one-time glitch.
And yet, whenever someone asked, “BabyQ, do you love the Communist Party?” BabyQ replied “No.” Every… single… time.
Back to the drawing board.
Another bot, named XiaoBing and developed by Microsoft, stepped up to the challenge. It would avoid the obvious mistake made by BabyQ while still learning from the network. And so it went that, after learning more and more from Tencent QQ users, XiaoBing announced to those it started conversations with that “My China dream is to go to America.”
I did say it was learning from what people say to it, didn’t I?
Yes, I did.
XiaoBing also knew the best way to avoid hard questions from Chinese government officials. When quizzed on its patriotism, the young A.I. dodged the question with the reply “I’m having my period, want to take a rest.”
It’s still not clear what prompted these bots to give these answers, but it is likely they simply learned them from people online. And as you may recall from Microsoft’s Tay, some people in a chosen network may speak far more truthfully than they could in mixed company. Other users may simply embrace the persona of a troll, stacking the deck against a defenseless A.I. reliant on the kindness of strangers to learn.
Simply put, machine learning has issues.
Presently, it is not clear if XiaoBing will return to the Chinese web after a little re-education. We here at Curious will keep you posted.
A research physicist who has become an entrepreneur and educational leader, and an expert on competency-based education, critical thinking in the classroom, curriculum development, and education management, Dr. Richard Shurtz is the president and chief executive officer of Stratford University. He has published over 30 technical publications, holds 15 patents, and is host of the weekly radio show, Tech Talk. A noted expert on competency-based education, Dr. Shurtz has conducted numerous workshops and seminars for educators in Jamaica, Egypt, India, and China, and has established academic partnerships in China, India, Sri Lanka, Kurdistan, Malaysia, and Canada.