How technology disrupted the truth (On social media, ‘truth’ equals ‘likes’)
Illustration: Sébastien Thibault in The Guardian
Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want, which means that the version of the world we encounter every day in our own personal stream has been invisibly curated to reinforce our pre-existing beliefs. When Eli Pariser, the co-founder of Upworthy, coined the term “filter bubble” in 2011, he was describing how the personalised web (in particular Google’s personalised search, which returns different results to different people) makes us less likely to be exposed to information that challenges us or broadens our worldview, and less likely to encounter facts that disprove false information that others have shared.
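As a rough illustration of the mechanism described above, here is a minimal sketch of an engagement-driven feed ranker. It assumes a toy setup in which each post has a topic and the user’s history is simply a list of topics they have engaged with before; the names and data structures are hypothetical, not Facebook’s actual system, but they show how ranking by past engagement keeps surfacing more of the same.

```python
# Hypothetical sketch of an engagement-driven ranking, not any platform's real algorithm.
# Posts matching topics the user has already engaged with score higher, so the feed
# narrows toward what the user already agrees with -- the "filter bubble" effect.
from collections import Counter

def rank_feed(candidate_posts, engagement_history):
    """Order posts by how often the user has engaged with each post's topic before."""
    topic_affinity = Counter(engagement_history)   # past clicks/likes per topic

    def score(post):
        return topic_affinity[post["topic"]]       # more past engagement -> higher rank

    return sorted(candidate_posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "sport"},
]
history = ["politics_left", "politics_left", "sport"]

for post in rank_feed(posts, history):
    print(post)   # familiar topics surface first; countervailing views sink to the bottom
```

Nothing in this toy ranker ever boosts a post the user has not engaged with, which is exactly the dynamic Pariser warned about.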
Pariser’s plea, at the time, was that those running social media platforms should ensure that “their algorithms prioritise countervailing views and news that’s important, not just the stuff that’s most popular or most self-validating”. But in less than five years, thanks to the incredible power of a few social platforms, the filter bubble that Pariser described has become much more extreme.