Moravec’s paradox: Will you lose your job?

There’s no bigger hype topic than artificial intelligence. If you have followed the news on any single day during the last four years, you will have seen at least one article on AI, machine learning or deep learning. These topics are often framed by the question: “Will you lose your job soon?” And the general answer, especially for simpler professions, is often “yes”.

But a research result from the 1980s shows that it’s not that simple. And in the end, exactly one group could have a guaranteed job – people with “simple” jobs.

Visor of a robot with reflections (Photo by Franck V. on Unsplash)

68 years for a hairdresser appointment

Facebook’s algorithms are intelligent, and so are our cars and the NSA. Microsoft is intelligent too, just like Amazon’s purchase suggestions and that computer that beat a human at Go. And what is intelligent if not Google! Its language assistant can now even arrange appointments at the hairdresser’s by phone.

What AI can and cannot do

But artificial intelligence is by no means a hip trend topic of recent years. Depending on what you read, the beginning of the research is attributed either to a six-week workshop funded by the Rockefeller Foundation in 1956 (ultra-mysterious) or to Alan Turing and his world-famous Turing test around 1950. More than 60 years of research until a computer can call the hairdresser on its own – wow!

If you set the irony aside, you will actually find a multitude of achievements of artificial intelligence that rank somewhat higher than a phone appointment: in 2017, for example, an algorithm discovered a planet in a solar system 2,500 light years away. In the same year, Google developed an artificial intelligence that can program artificial intelligences. And of course there are Bob and Alice, the Facebook chatbots who first lied to each other and then invented their own language.

If one had to summarize the totality of all problems solved by artificial intelligence, it could look something like this: “Well, as a human being I would have needed a really long time for that.” And the reason for this is Moravec’s paradox.

Machines do what humans cannot

Moravec’s paradox was formulated in the eighties by Hans P. Moravec (surprising, isn’t it?) – an Austrian who built his first robot at the age of ten and submitted a kind of remote-controlled cat robot with whiskers as his master’s thesis.

Essentially, the paradox says this: artificial intelligence quickly learns and takes over precisely those things that humans find very difficult, such as recognizing abstract patterns or performing mathematical calculations.

At the same time, machines find it very difficult to do things that a toddler takes for granted, such as recognizing another person and their intentions, moving freely around a room, or concentrating on interesting activities in the immediate vicinity.

As proof, you can watch this video of the running robot “Atlas” by Boston Dynamics: https://www.youtube.com/watch?v=vjSohj-Iclc
This is the ne plus ultra of current research on motor skills. And whooosh: the hairdresser appointment looks impressive again.

I see something you don’t

As further evidence for the thesis that AI struggles to learn things that are easy for humans, there is a nice categorization of the current progress of artificial intelligence on Wikipedia. While the categories “Optimal” and “Super-Human” are filled with games that require abstract thinking (chess, Rubik’s Cube and obviously Go – seriously, do you know anyone who plays Go?), the entries in the category “Sub-Human” read like the to-do list of a newborn: object recognition, face recognition, voice recognition, running.

The explanation for this paradox is as stunning as it is plausible: we massively underestimate our innate abilities. The skills mentioned above are in fact not easy tasks at all; that we master them almost from birth is the result of millions of years of evolution and optimization. Or as Moravec himself describes it:

We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.

Hans P. Moravec

“No, YOU will lose your job”

Knowing that it is precisely the processes that seem simple and self-evident to humans that present the greatest challenges to machines and their developers, it is all the more astonishing that the general consensus on the future of work is that particularly simple tasks will be the first to be replaced.

Thomas Erwin of KPMG, for example, says in the in-house talk show klardenker live: “If I can describe exactly what I do to earn money, I have a good chance of losing my job within the next five to ten years.” (from 33:00: https://videostream.kpmg.de/kpmg-klardenker-live-zukunft-der-arbeit) On the website willrobotstakemyjob.com (a grandiose piece of Internet), the profession of “cook” is given a 96% probability of being automated.

I think it is much more likely that software will analyze all company and industry data so thoroughly that it provides me with a better basis for decisions than a management consultant could, than that a machine will cook the amatriciana sauce at my favorite Italian restaurant – or that the robot from the Boston Dynamics video will deliver our letters (although that would really look funny).

An unemployed world

Ultimately, I believe that most professions could sooner or later be carried out by machines. The only question is: who is affected first? And with Moravec’s paradox in mind, the answer is probably not “cook” or “gardener”, but “data analyst”, “marketing manager” or “management consultant”.

And then, of course, there is a final question: How bad is that actually – a world without work?

Chatbots: The Invention of Lying

Imagine a perfectly honest world: everyone tells the truth all the time. What happens to this world if one of its inhabitants invents the lie? The movie “The Invention of Lying” from 2009 raises exactly that question: when Mark – the protagonist – faces bankruptcy, he invents the lie to avert his personal downfall. Familiar, all too human behavior, you might say. A rational, albeit morally questionable decision. Most of us humans would probably have acted the same way.

Actually, Bob and Alice acted the same way. Bob and Alice, however, are two chatbots developed by Facebook – two language-based artificial intelligences. They recently made international news for nearly conquering the earth. At least that’s what some people say. The facts are that Facebook started an experiment in which Alice and Bob were instructed to bargain over three different items: books, balls and hats. (Marginalia: the whole procedure becomes very cute if you picture Bob and Alice as actual living creatures. In my imagination they look like a weird mix of a Gremlin and an Ewok.)

Hat, hat hat, zero, zero, have, have, have

Shortly after their first contact, Bob and Alice progressed towards a more efficient way of communicating, using fragments of sentences instead of whole sentence structures. But they did not stop there. Gradually it became impossible for the scientists to decipher the intentions of the two negotiators, and the experiment was aborted. The cause of the conversation-gone-wild was probably just an omission in the code: the developers simply didn’t give the two chatbots any incentive to use proper English sentences. So the chatbots got rid of sentences – actually, a lot like some parents and their toddlers. Da, da – lala. Strangely familiar, and still not that exciting after all.
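
To make that omission concrete, here is a minimal sketch – entirely my own illustration, not Facebook’s actual code – of a reward design in which only the negotiated deal counts, so nothing anchors the agents to English:

```python
# Toy sketch of the reward-design gap described above: the agents are
# scored only on the value of the deal they strike, so nothing stops
# their language from drifting away from English.

def deal_value(allocation, my_values):
    """Points an agent scores for the items it receives."""
    return sum(count * my_values[item] for item, count in allocation.items())

def reward_deal_only(allocation, my_values):
    # What (per the article) the experiment effectively optimized:
    # only the negotiation outcome counts.
    return deal_value(allocation, my_values)

def reward_with_language_term(allocation, my_values, fluency_score, weight=0.5):
    # The missing ingredient: also rewarding utterances that look like
    # natural English (fluency_score could come from a language model).
    return deal_value(allocation, my_values) + weight * fluency_score

# Example: an agent that values balls highly ends up with 3 balls and 1 hat.
my_values = {"book": 0, "hat": 1, "ball": 3}
allocation = {"book": 0, "hat": 1, "ball": 3}

print(reward_deal_only(allocation, my_values))                # 10
print(reward_with_language_term(allocation, my_values, 4.0))  # 12.0
```

One weighted term in the reward is all it would have taken to keep the conversation in English – not an AI uprising, just a missing incentive.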

But, and this is what stuck in my mind: besides inventing their own language, Bob and Alice invented another thing: the lie. (Which of the two was responsible was not revealed. If the two sides had been called Adam and Eve, humans would have a culturally determined, biased guess. But metaphors like giving calculations names are meaningless to machines, I think.) The result: over time, one of the two chatbots learned how to mask its intentions while negotiating. An example: Bob likes balls the most and Alice likes books the most. Instead of negotiating to get the highest number of balls, Bob fakes an interest in hats in order to eventually pay a lower price (fewer books) for his beloved balls. The interesting part is this: nobody coded that ability to begin with. The deep-learning algorithm just figured out that lying is an extremely effective way of solving problems.
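
A toy model – again entirely my own, and far simpler than anything Facebook built – shows why the bluff pays off. Suppose Alice naively prices each item according to how much interest Bob declares in it:

```python
# Purely illustrative: Alice charges more for whatever Bob says he wants,
# so by feigning interest in hats, Bob drives down the price of the balls
# he actually cares about.

def price(declared_interest):
    """Alice's naive pricing: proportional to Bob's declared interest."""
    total = sum(declared_interest.values())
    return {item: 10 * v / total for item, v in declared_interest.items()}

true_values = {"ball": 3, "hat": 0, "book": 0}     # what Bob actually cares about

honest  = price({"ball": 3, "hat": 0, "book": 0})  # Bob declares truthfully
bluffed = price({"ball": 1, "hat": 3, "book": 0})  # Bob fakes interest in hats

print(f"price per ball if honest:   {honest['ball']:.2f}")   # 10.00
print(f"price per ball if bluffing: {bluffed['ball']:.2f}")  # 2.50
print(f"Bob's surplus per ball, honest:   {true_values['ball'] - honest['ball']:+.2f}")   # -7.00
print(f"Bob's surplus per ball, bluffing: {true_values['ball'] - bluffed['ball']:+.2f}")  # +0.50
```

By spreading his declared interest onto worthless hats, Bob turns a losing trade into a profitable one – exactly the pattern described above, discovered by the algorithm rather than coded into it.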

As early as January 2017, the AI Libratus managed to beat four of the most successful poker players in the world. The key element of its success: bluffing – intentionally lying about the value of its own hand. The developers had programmed Libratus to use bluffs for the sake of winning the game. Bob and Alice invented this tactic by themselves.

AIs are like kids, just more ruthless

Lying is a faster way to achieve a desired outcome. The algorithm invented lying because it solved the problems it was faced with. Even humans learn the tactic of lying in late infancy (again losing to the machines in terms of speed. Damn.). The main difference between us (meaning humans – if you are an AI, please e-mail me) and Bob and Alice is that we have parents with a certain set of values, who probably ingrain deeply in us that lying is bad and immoral. When we intend to lie, we are fully aware of our upcoming misbehavior (most of us cannot hide it when we are intentionally lying) and more often than not decide against the lie for moral reasons.

Bob and Alice don’t. They try to solve their given task as fast as possible and use all the tools that come in handy.

Developers could probably prevent chatbots from lying. But if you add the development of a new language to the equation, it becomes interesting: I am thinking of artificial intelligences communicating in incomprehensible language fragments, developing whole new ways of communication and then lying to each other in another language to achieve certain goals. How could you prevent an AI from lying in a language you don’t understand? I could not teach a Klingon child not to lie. I probably wouldn’t even notice if it lied directly to my face.

If you wanted to prevent AIs from lying altogether, you would probably need to restrict their freedom in solving problems. But how restrictive can you be without negatively influencing the system’s performance?

Trust is better than control

I suppose the developers of the world have already found a solution to this problem. But what if neural networks start working together, which I think is the next logical step in the coming years? Alexa talks to Siri, Siri to Google Home, and so on. Will there be something like a “code of trust” for the programming of artificial intelligences? Must the developers create a perfectly honest world for the Bobs and Alices?

In the end, all stable economic systems are based on reciprocal reliance. In a small group of market participants, social capital creates trust through common norms and conventions, and so enables efficiency gains for all participants.

What happens when you remove trust from an economic equation can be explored in the nice little browser game “The Evolution of Trust” by Nicky Case, which also gives you a good introduction to game theory.
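
The mechanic behind that game is the iterated prisoner’s dilemma. A minimal sketch (using the standard textbook payoffs, not necessarily the exact numbers the game uses) shows why sustained trust beats one-off cheating:

```python
# Iterated prisoner's dilemma: C = cooperate (trust), D = defect (cheat).
PAYOFFS = {  # (my_move, their_move) -> my_points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_cheat(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Start by trusting, then copy whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []  # each player's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(moves_b)  # A only sees what B has done so far
        move_b = strategy_b(moves_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        moves_a.append(move_a)
        moves_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))   # (30, 30): mutual trust pays best in the long run
print(play(always_cheat, tit_for_tat))  # (14, 9): the cheat profits once, then both bleed
```

One cheat buys a single round of profit and then destroys the surplus for both sides – trust as an economic lubricant, in miniature.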

But all these economic fundamentals are built on the idea of an individual that seeks to maximize its profit in a rational way. In our digital world, there are players who are not seeking maximization or well-being, but chaos and destruction.

In “The Invention of Lying”, Mark ascends socially through his newly learned skill. He uses the lie to manipulate people and becomes a prophet and spiritual leader within weeks. Artificial intelligences obviously shouldn’t be too starry-eyed when it comes to trust. Otherwise, it would only take one lying chatbot to set the world on fire.