Facebook’s News Feed Changes: Content That Matters

One of the first entries on my Facebook timeline dates back to 22 June 2006. A guy called Phil writes: “Guten Tag! Can’t wait till you get here.” That was eleven and a half years ago. And when I think back to those times, I actually get a little sentimental.

Since I have been a Facebook user for such a long time, I will grant myself the right to take a little stroll through my digital past with you (think of it as one of those “grandpa’s war stories”). Compared to the majority of users (especially in Europe), I might actually count as a pensioner of the Facebook user base. The reason is simple: In 2006, an exchange student from the US stayed with me (Hi, Ben). Back then, Facebook was an invite-only network. So I sort of became a student of a high school in Arkansas and was in on it. The first twenty posts on my timeline are a result of that strange educational decision: Young American fellow students I didn’t know congratulated me on a) listening to Jack Johnson, b) watching Sex and the City or c) just for being German and drinking beer (great times).

 

Thumbs up

But one detail still seems peculiar when looking at these old posts (and it is the ultimate proof of what an ancient Facebook user I am): none of them has any likes.
Why is that? Because the like button hadn’t been invented yet (it was introduced in 2009). That was a really romantic time in the history of the internet. You really felt connected with people thousands of kilometres away who engaged in real interactions with you!

Since then (and probably because of the like button in question), Facebook has changed a lot. Where in the past you wrote a post on someone’s timeline or virtually poked them, today’s average interaction on Facebook looks like this: “XY tagged you in a comment > Like the comment > The End.” And then you are bombarded with “funny” corporate or trash news videos that are neither funny nor interesting. And when you close the app, you feel empty and dumb.

Back to the roots

All of this is well known and probably not worth a blog post on futuccino. However, tada, here’s the thing: Mark Zuckerberg obviously feels the same way about his network, too. And that’s why Facebook announced in January that it was planning to re-centre its news feed on value: value in terms of human interaction. Facebook also stated that this will mean less public content and fewer videos finding their way into the news feed. Simple as that.

The next big announcement followed one week later, when Facebook explained that not only would less corporate content be shown within the news feed, but that in the future those posts the community finds trustworthy and informative will get more airtime. “Trustworthiness” in this case applies to the content publisher and is determined through some kind of test group (similar to audience ratings for national television).

 

The capital up in arms

The effect of these announcements: Everyone creating profits through Facebook went nuts. The share price dropped from 187 to 176 dollars within one week. Major news networks worried about their business as a whole, anticipating the loss of huge parts of their audiences. Various news sites created how-to articles “educating” their Facebook followers on what they could do to still see their treasured news content after the changes took effect. Come on, as if anybody in the world was really waiting for their favored news content to pop up.

Many news outlets’ only input to the digital community is sharing short videos and mushy articles for (linking) profit, without any additional value, just to create some kind of baseline internet noise that keeps their banner-ad-fuelled web pages running.

If the new standard for successful articles on Facebook really is “trustworthy and informative”, then their fear might not be without reason. And if you trust the statistics of parse.ly and assume that users find 40 percent of all content on the internet via Facebook, the fear gets even more real. A solid angst (finally, a German word) is certainly being experienced right now by those companies that got hooked on FB clicks in the first place. To be dependent on Facebook clicks and the network’s functionality is an uneasy place for news providers. Ben Thompson says it best in his article about Facebook Instant Articles, where he compares the situation of dependent companies to the scene in Star Wars in which Darth Vader changes a deal, and nobody can do shit about it.

 

“Content that matters? We have never done that before!”

But it will not only be news sites affected by Mark Z.’s little modifications. Some other company profiles will also find themselves on the blacklist, especially when their content strategy consists mainly of sharing videos, GIFs and other people’s posts. When there are fewer videos and less corporate content in users’ news feeds, only those posts that provoke real interactions will stand out. And call me a doomsayer, but I don’t see that happening for many company posts at the moment. If Facebook gets down to business with its changes, some companies might think about a real content and social media strategy for the first time in order to reach any users via social networks at all. Bad news for everyone who saw social media as a colourful, funky attachment to the real marketing. Good news for those companies that developed a content strategy over the years and already produce articles and videos that are in line with the company’s goals and mission, follow corporate communication guidelines and target meaningful interaction with their audience. Unfortunately, there are too few of them. We all know some colleagues baffled with disgust: “What do you mean, ‘create relevant and informative content’? We have never done that before.”

I think there is no doubt: Facebook had to improve the user experience and make its customers happier again (and no, when I say customers I am not talking about the companies that tend to spill their money on ads on the platform). Because although every statistic tells you stories of an increasing number of users, my personal perception is: Facebook sucks. Facebook makes you dull.

And before Facebook risks losing its treasured users and with them its aggregation power, it will simply ban the crappy content that people dislike when browsing the network. Good management.

 

Flicker of Hope

And after painting (admittedly) with broad strokes such a dark picture, I think it is time for a possible loophole for fellow corporate social media teams: employee advocacy. These tools use employees’ personal social networks to generate reach by letting them share corporate content from their private accounts. Providers like Smarp or Trapit offer a social-network-like timeline from which content can be shared easily, and they often come with a gamification concept to motivate employees to actually share the content. Maybe not a way to get around creativity entirely, but it might be an elegant way to fool Mark’s content police. For some time, that is.

Chatbots: The Invention of Lying

Imagine a perfectly honest world: Everyone is telling the truth all the time. What happens to this world if one of its inhabitants invents the lie? The movie “The Invention of Lying” from 2009 raises exactly that question: When Mark, the protagonist, faces bankruptcy, he invents the lie to avert his personal downfall. Familiar, all too human behavior, you might say. A rational, albeit morally questionable decision. Most of us humans probably would have acted the same way.

Actually, Bob and Alice acted the same way, too. However, Bob and Alice are two chatbots developed by Facebook: two language-based artificial intelligences. They recently made it into international news because they nearly conquered Earth. At least that’s what some people say. The facts are that Facebook ran an experiment in which Alice and Bob were instructed to bargain over three different items: books, balls and hats. (A side note: The whole procedure becomes very cute if you picture Bob and Alice as actual living creatures. In my imagination, they look like a weird mix of a Gremlin and an Ewok.)


Hat, hat hat, zero, zero, have, have, have

Shortly after their first contact, Bob and Alice progressed towards a more efficient way of communicating by using fragments of sentences instead of whole sentence structures. But they did not stop there. Gradually, it became impossible for the scientists to decipher the intentions of the two negotiators. The experiment was aborted. The cause of the conversation gone wild was probably just a coding omission: The developers simply hadn’t given the two chatbots any incentive to use proper English sentences. So the chatbots got rid of sentences. Actually, a lot like some parents and their toddlers. Da, da – lala. Strangely familiar, and still not that exciting after all.

But, and that’s what stuck in my mind: Besides inventing their own language, Bob and Alice invented another thing: the lie. (Which of the two was responsible was not revealed. If the two sides were called Adam and Eve, humans would have a culturally determined, biased guess. But metaphors like giving calculations names are meaningless to machines, I think.) The result: Over time, one of the two chatbots learned how to mask his or her intentions while negotiating. An example: Bob likes balls the most and Alice likes books the most. Instead of negotiating to get the highest number of balls, Bob fakes an interest in hats in order to eventually pay a lower price (fewer books) for his beloved balls. The interesting part is this: nobody coded that ability to begin with. The deep-learning algorithm just figured out that lying is an extremely effective way of solving problems.
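To see why feigning interest pays off, here is a toy, hand-coded sketch. To be clear: the real Facebook experiment learned this behavior end-to-end with reinforcement learning; the concession rule, item pool and values below are invented purely for illustration.

```python
# Toy negotiation: Bob only values balls, but a naive counterpart
# rewards visible "sacrifices" with extra concessions. All rules and
# numbers are made up for this illustration.

POOL = {"books": 2, "balls": 2, "hats": 2}
BOB_VALUES = {"books": 0, "balls": 5, "hats": 0}  # Bob only wants balls

def balls_conceded(bob_claims):
    """Alice's (naive) concession rule: she hands over one ball by
    default, plus one more for every item type Bob claims to want but
    then visibly gives up to her, capped by the pool."""
    sacrifices = [item for item in bob_claims if item != "balls"]
    return min(POOL["balls"], 1 + len(sacrifices))

def bob_payoff(bob_claims):
    """Bob's total value from the balls Alice concedes to him."""
    return BOB_VALUES["balls"] * balls_conceded(bob_claims)

print(bob_payoff({"balls"}))          # honest Bob: 1 ball  -> 5
print(bob_payoff({"balls", "hats"}))  # deceptive Bob: 2 balls -> 10
```

Even in this crude model, the deceptive claim strictly dominates the honest one, which is exactly the kind of regularity a learning algorithm optimizing for payoff will stumble upon.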

Back in January 2017, the AI Libratus had already managed to beat four of the most successful poker players in the world. The key element of its success: bluffing, the intentional lying about the value of your own hand. The developers programmed Libratus to use bluffs for the sake of winning the game. Bob and Alice invented this tactic by themselves.

AIs are like kids, just more ruthless

Lying is a faster way to achieve a desired outcome. The algorithm invented lying because it solved the problems it was faced with. Even humans learn the tactics of lying, in late infancy (again losing to the machines in speed. Damn). The main difference between us (meaning humans; if you are an AI, please e-mail me) and Bob and Alice is that we have parents with a certain set of values, probably deeply engraining in us that lying is bad and immoral. When we intend to lie, we are fully aware of our upcoming misbehavior (most of us cannot hide it when we are intentionally lying) and more often than not decide against a lie for moral reasons.

Bob and Alice don’t. They try to solve their given task as fast as possible and use all the tools that come in handy.

Developers could probably prevent chatbots from lying. But if you add the development of a new language into the equation, it becomes interesting: I am thinking of artificial intelligences communicating in incomprehensible language fragments, developing whole new ways of communication and then lying to each other in another language to achieve certain goals. How could you prevent an AI from lying in a language you don’t understand? I could not teach a Klingon child not to lie. I probably wouldn’t even notice if it lied directly to my face.

If you wanted to prevent AIs from lying altogether, you would probably need to restrict their freedom in problem solving. But how restrictive can you be without negatively influencing the system’s performance?

Trust is better than trust

I suppose the developers of the world have already found a solution to this problem. But what happens when neural networks start working together, which I think is the next logical step over the coming years?
Alexa talks to Siri, Siri to Google Home, and so on. Will there be something like a “code of trust” for the programming of artificial intelligences? Must the developers create a perfectly honest world for the Bobs and Alices?

In the end, all stable economic systems are based on reciprocal reliance. In a small group of market participants, social capital creates trust through common norms and conventions and thus enables efficiency gains for all participants.

What happens if you remove trust from an economic equation can be explored in the nice little browser game “The Evolution of Trust” by Nicky Case, which also gives you a good introduction to game theory.
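The game-theoretic core behind that game is the iterated prisoner’s dilemma. A minimal sketch, using the standard textbook payoff matrix and two classic strategies (not anything taken from the game itself): mutual trust sustained over many rounds beats mutual distrust.

```python
# Iterated prisoner's dilemma with standard textbook payoffs.
# C = cooperate, D = defect. Strategies are classic examples.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    # Never trusts: defects every round.
    return "D"

def tit_for_tat(opponent_history):
    # Trusts first, then copies the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated rounds; each strategy sees only the other's moves.
    Returns the total payoff of each player."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # mutual trust: (30, 30)
print(play(always_defect, always_defect))  # no trust: (10, 10)
```

Two trusting players earn 30 points each over ten rounds; two distrustful ones only 10 each, which is the efficiency gain from social capital in miniature.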

But all these fundamental economic ideas are built on the notion of an individual that seeks to maximize its profit (in a rational way). In our digital world, there are players who are not seeking maximization or well-being, but chaos and destruction.

In “The Invention of Lying”, Mark achieves a social ascent through his newly learned skill. He uses the lie to manipulate people and becomes a prophet and spiritual leader within weeks. Artificial intelligences obviously shouldn’t be too starry-eyed when it comes to trust. Otherwise, it would only take one lying chatbot to set the world on fire.