How to stay well-informed when algorithms are deciding for us?

Recommendation algorithms tailor part of the information we receive so that it pleases us personally. How can we make sure we can still find out about opinions on current events that differ from our own? What responsibility do the platforms that distribute content bear in this area, and how are they working on these questions?

“Over 58 million people voted for Trump. I don’t know one.”

“Over 58 million people voted for Trump. I don’t know one.” On November 8th 2016, the Republican candidate’s victory came as a surprise to journalist Matthew Hughes, as it did to most political analysts. Since internet activist Eli Pariser’s notable 2011 book “The Filter Bubble”, the phenomenon behind Matthew Hughes’s disbelief has been known as the “filter bubble”.

In effect, the content we see online via search engines, in social network newsfeeds, or on video and music on-demand services is selected by algorithms. They base their choices on our tastes and ideas, or on those of our “friends”. For Eli Pariser, this selection bias encloses us in a bubble, because it conceals the plurality of opinions and thus the reality of the world. Worse still: it teams up with confirmation bias, since reading only arguments similar to our own strengthens our opinions – at the risk of damaging our open-mindedness.

Maths and marketing

In 2018, an MIT study used three diagrams to make these filter bubbles visible and to chart the polarisation of opinion.

On the first diagram, circle size represents the size of political communities: unsurprisingly, middle-ground opinions bring together the largest number of internet users. On the second diagram, however, we see that extremist opinions share far more content than the centre, which is rather quiet. The third diagram shows the homogeneity of the communities: the further we move from the centre, the more the communities are made up of individuals who think exactly the same thing. These “echo chambers” fostered by algorithms are a breeding ground for the spread of “fake news” – which, according to the MIT researchers, is 70% more likely to be shared than true information.

The way these filter bubbles work is straightforward maths and marketing: the Newsfeed Ranking Algorithm rolled out by Facebook in 2011, for example, sorts visible information by attributing a score to each piece of content, based on some 100,000 parameters designed to measure its popularity within the network of close friends and its relevance to the user’s profile. In the United States, the social network classifies its users into 98 political categories so as to provide advertisers with finely-tuned targeting. Over one million advertising scenarios can then be generated from the interaction data between consumers and a brand. On the search engine side, users are profiled through the cookies they leave behind as they surf the web. As a result, two users performing the same search will not get quite the same results: depending on their profile information, the links are ordered differently.
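To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of how such a score might combine the two signals mentioned above, popularity within a network of close friends and relevance to the user’s profile. The data structures, weights and function names are assumptions made for the example; they are not Facebook’s actual parameters.

```python
# Illustrative sketch only: not Facebook's real ranking algorithm.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topics: set                                  # topics the post covers
    liked_by: set = field(default_factory=set)   # ids of users who engaged with it

@dataclass
class UserProfile:
    user_id: str
    interests: set        # topics inferred from the user's past behaviour
    close_friends: set    # ids of the user's close network

def score(post: Post, user: UserProfile,
          w_popularity: float = 0.6, w_relevance: float = 0.4) -> float:
    """Toy score mixing friend engagement and topical overlap (weights are arbitrary)."""
    # Popularity: share of the user's close friends who engaged with the post.
    popularity = len(post.liked_by & user.close_friends) / max(len(user.close_friends), 1)
    # Relevance: overlap between the post's topics and the user's interests.
    relevance = len(post.topics & user.interests) / max(len(post.topics), 1)
    return w_popularity * popularity + w_relevance * relevance

def rank_feed(posts: list, user: UserProfile) -> list:
    """Order the feed so the highest-scoring (most 'pleasing') items come first."""
    return sorted(posts, key=lambda p: score(p, user), reverse=True)
```

Even in this toy version, the effect Eli Pariser describes is visible: content resembling what the user and their close friends already like rises to the top, while everything else sinks out of sight.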

Filters are not exclusive to algorithms

What are the consequences of this algorithmic processing for access to information, for the plurality of opinions and for the way they are debated within society? Eli Pariser’s critics believe the impact of these “filter bubbles” is exaggerated, in particular because the internet is not yet people’s main source of information. According to a study carried out by economists from Brown and Stanford universities, Americans still name television as their primary source of information.

The study also finds no greater polarisation of opinion over the period considered (1996-2012). Furthermore, social networks can have a positive impact on democracy, as the Arab Spring illustrated. Another example: in 2017, a call spread on social networks led to 2.6 million people marching through the streets of Washington for respect for women.

More importantly, filters are far from being exclusive to algorithms: above all, the question is how opinion forms in a social setting. From the 1950s to the 1970s, psychologists described homophily (our tendency to conform to the opinions of our social circle, even without knowing whether they are true) and, conversely, the tendency to avoid cognitive dissonance (the internal stress that results from confronting contradictory ideas). Sociologist Dominique Cardon sums up the idea: “The bubble, we create it ourselves, via a typical social reproduction mechanism. The true filter is the choice of our friends, more so than Facebook’s algorithm.”

A democratic issue

To overcome this bias, we need to leave our intellectual “comfort zone” and surround ourselves with friends of differing opinions – but also read and comment on their content on social networks. On search engines, regularly deleting cookies, or simply using private browsing mode, will give radically different results to anyone who wants to be informed in a more neutral way.

And then what? Staying well-informed on the internet means that understanding how algorithms work becomes a democratic issue, one that requires educating citizens. It is a task the media were the first to tackle, creating “fact-checking” units to fight “fake news”.

Facebook has entered into partnerships with these news watchdogs in many countries and is exploring other avenues, such as suggesting alternative related content and corrective information below shared opinion pieces, as well as systems for flagging disinformation and tools for detecting “spam farms” and excluding the accounts involved. On 25th October 2019, the social network launched Facebook News in the United States, a service restricted to “quality” news that promises to burst filter bubbles thanks to a selection of articles from across the political spectrum.

Twitter, for its part, is considering adding messages into users’ feeds from members they don’t know and with whom they do not agree, so as to correct the filter-bubble effect. As for Google, it has introduced a label signalling verified articles in its search engine and in Google News. In addition, Facebook and a coalition of media outlets have launched a joint initiative aimed at combating fake news.

Is this the end of filters and fake news?

Universities and NGOs are also innovating to promote plurality of information. For example, a browser extension warns users when they land on a fake news site. The social media aggregator Gobo lets users “take control” by giving them filters they can configure themselves. The MIT Center for Civic Media, the MIT Media Lab and Comparative Media Studies at MIT are making the prototype available to all.
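As a rough illustration of what such tools do under the hood, here is a hedged Python sketch combining the two ideas just mentioned: checking a page’s domain against a flagged-site list, and letting the user configure which kinds of sources appear in their feed. The domains, fields and function names are invented for the example; this is not the real code of any of these projects.

```python
# Illustrative sketch only: a configurable feed filter, in the spirit of a
# fake-news-warning extension and of Gobo's user-controlled filters.
from urllib.parse import urlparse

# Hypothetical data: a flagged-domain list and a feed annotated by source leaning.
FLAGGED_DOMAINS = {"example-fakenews.test"}
FEED = [
    {"url": "https://example-fakenews.test/story1", "leaning": "fringe"},
    {"url": "https://daily-centrist.test/story2",   "leaning": "centre"},
    {"url": "https://other-view.test/story3",       "leaning": "opposing"},
]

def is_flagged(url: str) -> bool:
    """Return True if the page's domain appears on the flagged-site list."""
    return urlparse(url).netloc in FLAGGED_DOMAINS

def filter_feed(feed, hide_flagged=True, allowed_leanings=None):
    """Apply the user's own settings: hide flagged sites, keep chosen leanings."""
    kept = []
    for item in feed:
        if hide_flagged and is_flagged(item["url"]):
            continue
        if allowed_leanings and item["leaning"] not in allowed_leanings:
            continue
        kept.append(item)
    return kept

# Example: drop flagged sites but deliberately keep opposing viewpoints visible.
print(filter_feed(FEED, allowed_leanings={"centre", "opposing"}))
```

The point of such tools is precisely to hand these settings back to the user, rather than leaving them buried inside a platform’s ranking algorithm.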

Will the combined efforts of the different stakeholders suffice to turn filter bubbles and fake news into a thing of the past? Eli Pariser, an optimist, draws a parallel with eating habits. In an interview with Le Monde in 2018, he declared: “For a very long time, fast-food restaurants developed at incredible speed. But at some point, consumers realised that eating burgers every day created health problems, and they looked for other options. Today we see a whole range of fast-food outlets that offer healthier food.”
