Imagine a perfectly honest world: everyone tells the truth all the time. What happens to this world if one of its inhabitants invents the lie? The movie “The Invention of Lying” from 2009 raises exactly that question: when Mark – the protagonist – faces bankruptcy, he invents the lie to avert his personal downfall. Familiar, all too human behavior, you might say. A rational, albeit morally questionable decision. Most of us probably would have acted the same way.
Bob and Alice acted the same way, too. Bob and Alice, however, are two chatbots developed by Facebook – two language-based artificial intelligences. They recently made international news for having nearly conquered the earth. At least that’s what some people say. The facts: Facebook ran an experiment in which Alice and Bob were instructed to bargain over three kinds of items: books, balls and hats (marginalia: the whole procedure becomes very cute if you picture Bob and Alice as actual living creatures. In my imagination they look like a weird mix of a Gremlin and an Ewok).
Hat, hat hat, zero, zero, have, have, have
Shortly after their first contact, Bob and Alice progressed towards a more efficient way of communicating, using fragments of sentences instead of whole sentence structures. But they did not stop there. Gradually, it became impossible for the scientists to decipher the intentions of the two bargainers, and the experiment was aborted. The cause of the conversation gone wild was probably just a coding omission: the developers simply hadn’t incentivized the two chatbots to use proper English sentences. So the chatbots got rid of sentences. Actually, a lot like some parents and their toddlers. Da, da – lala. Strangely familiar, and still not that exciting after all.
But, and this is what stuck in my mind: besides inventing their own language, Bob and Alice invented another thing: the lie (which of the two was responsible was not revealed. If the two sides were called Adam and Eve, humans would have a culturally determined, biased guess. But metaphors like giving calculations names are meaningless to machines, I think). The result: over time, one of the two chatbots learned to mask its intentions while negotiating. An example: Bob likes balls the most and Alice likes books the most. Instead of negotiating to get the highest number of balls, Bob fakes an interest in hats to eventually pay a lower price (fewer books) for his beloved balls. The interesting part is this: nobody coded that ability to begin with. The deep-learning algorithm simply figured out that lying is an extremely effective way of solving problems.
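To see why a reward-maximizing learner would stumble onto this trick, here is a toy sketch. The utilities, item counts and the opponent model are all my own made-up assumptions, not Facebook’s actual setup – the only point is that under an opponent who charges more for whatever you openly demand, feigning interest scores higher than honesty.

```python
# Hypothetical utilities for Bob (assumption, not from the experiment):
values_bob = {"books": 1, "hats": 2, "balls": 5}

def deal_outcome(opening_demand):
    """Bob's utility under a simple assumed opponent model: Alice keeps
    most of whatever Bob demanded first and concedes more of the rest."""
    items = {"books": 3, "hats": 3, "balls": 3}  # 3 of each on the table
    bob_share = {}
    for item in items:
        # Alice concedes only 1 of the item Bob demanded, but 2 of the others.
        bob_share[item] = 1 if item == opening_demand else 2
    return sum(values_bob[i] * n for i, n in bob_share.items())

honest = deal_outcome("balls")    # Bob reveals his real preference
deceptive = deal_outcome("hats")  # Bob feigns interest in hats
print(honest, deceptive)          # deception yields the higher utility
```

A learner that only sees the final score has no reason to prefer the honest opening: the deceptive one simply pays better, and that is the whole “moral” calculus it performs.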
As early as January 2017, the AI Libratus managed to beat four of the most successful poker players in the world. The key element of its success: bluffing – intentionally lying about the value of one’s own hand. The developers programmed Libratus to use bluffs for the sake of winning the game. Bob and Alice invented this tactic by themselves.
AIs are like kids, just more ruthless
Lying is a faster way to achieve a desired outcome. The algorithm invented lying because it solved the problems it was faced with. Even humans learn the tactics of lying in late infancy (again losing in speed to the machines. Damn). The main difference between us (meaning humans. If you are an AI, please e-mail me) and Bob and Alice is that we have parents with a certain set of values, who probably engrained deeply in us that lying is bad and immoral. When we intend to lie, we are fully aware of our upcoming misbehavior (most of us cannot hide it when we are intentionally lying) and more often than not decide against a lie for moral reasons.
Bob and Alice don’t. They try to solve their given task as fast as possible and use all the tools that come in handy.
Developers could probably prevent chatbots from lying. But if you add the development of a new language into the equation, it becomes interesting: I am thinking of artificial intelligences communicating in incomprehensible language fragments, developing whole new ways of communication and then lying to each other in another language to achieve certain goals. How could you prevent an AI from lying in a language you don’t understand? I could not teach a Klingon child not to lie. I probably wouldn’t even notice if it lied directly to my face.
If you wanted to prevent AIs from lying altogether, you would probably need to restrict their freedom in problem solving. But how restrictive can you be without negatively influencing the system’s performance?
Trust is good, control is better
I suppose the developers of the world have already found a solution to this problem. But what if neural networks start working together – which I think is the next logical step over the coming years?
Alexa talks to Siri, Siri to Google Home etc. Will there be something like a “code of trust” for the programming of artificial intelligences? Must the developers create a perfectly honest world for the Bobs and Alices?
In the end, all stable economic systems are based on reciprocal reliance. In a small group of market participants, social capital creates trust through common norms and conventions, increasing efficiency for all participants.
What happens if you cut trust out of an economic equation can be explored in the nice little browser game “The Evolution of Trust” by Nicky Case, which also gives a good introduction to game theory.
But all these fundamental economic ideas are built on the idea of an individual that rationally seeks to maximize its profit. In our digital world, there are players who seek neither maximization nor well-being, but chaos and destruction.
In “The Invention of Lying”, Mark ascends socially through his newly learned skill. He uses the lie to manipulate people and becomes a prophet and spiritual leader within weeks. Artificial intelligences obviously shouldn’t be too starry-eyed when it comes to trust. Otherwise, it would only take one lying chatbot to set the world on fire.