It goes without saying that it is impossible for one human to digest the immense amount of information on the internet in one lifetime. YouTube, for example, sees over 82 years' worth of new video uploaded each day, and yet when you access the website via a personal Google account, you only see videos about topics you are interested in. It is accepted that algorithms ensure you see entertaining content that interests you. It is wonderful when YouTube recommends gameplay of the Call of Duty title you just bought, or a TED Talk about a theory you were just reading about.
As time has progressed, the general public has come to accept social media as a key source of information, with our reliance on the medium only increasing as social media becomes more socially compulsory and larger portions of the population become more technologically literate. As such, for many, social media platforms have become the main, if not the only, points of access to the broader community, allowing us not only to keep in touch with extended family and friends but also to see news and opinions from all around the world.
However, the concept of an algorithm controlling what you do and do not see sounds like an Orwellian dystopia when framed in a certain light. As social media becomes a more fundamental source of news and public opinion, it is important that algorithms are not utilised for sinister motives and do not cause unintended consequences.
Being only 18, I have been using social media for almost half my life (I still remember faking my age to make a Facebook account in Year 3), and it has kept me entertained, actively changing the posts I see to keep in line with my interests. Honestly, who can blame people for being so reliant on these platforms? There is only so much time in a day, and many cannot spend hours sifting through news articles or sitting in front of a TV to consume traditional media. However, the change from the traditional 'tried and trusted' media platforms to unfamiliar, unregulated social media platforms has made many apprehensive, while others dismiss these concerns as technophobia or juvenoia.
The study of this problem is a new field without much concrete evidence, so until more research is done on the topic, most arguments are anecdotal. However, as an introduction to the field, we can hypothesise about the effect of algorithms on the news we see and, in turn, how they influence our opinions. To find which content is interesting, and hence worth showing to the user, algorithms prioritise posts with more favourable social analytics. The details vary between platforms, but for simplicity we will use YouTube's model, which recommends videos based on likes, watch time, interaction (comments, shares, etc.) and a metric referred to as CTR (click-through rate). CTR is 'the total number of clicks on a video's title and thumbnail divided by the number of times that title and thumbnail have been shown'.
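To make the idea concrete, here is a minimal sketch in Python of how such a ranking score might be computed. Everything here is an assumption for illustration: the VideoStats fields, the score() weights, and the very idea of a single linear score are invented, since YouTube's real ranking model is proprietary; only the CTR formula follows the quoted definition above.

```python
from dataclasses import dataclass

@dataclass
class VideoStats:
    likes: int
    watch_time_hours: float
    interactions: int   # comments, shares, etc.
    clicks: int         # clicks on the title/thumbnail
    impressions: int    # times the title/thumbnail was shown

def ctr(s: VideoStats) -> float:
    """Click-through rate: clicks divided by impressions."""
    return s.clicks / s.impressions if s.impressions else 0.0

def score(s: VideoStats) -> float:
    """A toy ranking score with invented weights; videos with
    higher scores would be recommended more often."""
    return (1.0 * s.likes
            + 5.0 * s.watch_time_hours
            + 2.0 * s.interactions
            + 1000.0 * ctr(s))
```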
The problem with such a metric is that it rewards clickbait posts that use shocking or exaggerated statements to trick the reader into following the link, resulting in more radical and false journalism being promoted to the masses. However, while logical, this assumption must still be statistically proven.
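Continuing the hypothetical sketch above, consider two videos that are identical in every respect except that one earns more clicks through a sensational title and thumbnail. The numbers are invented, but they show the incentive: with all else equal, a CTR-sensitive ranker favours the more sensational framing.

```python
# Two hypothetical videos, identical except for the clicks earned
# by a more shocking title and thumbnail.
sensational = VideoStats(likes=300, watch_time_hours=50,
                         interactions=80, clicks=9_000, impressions=30_000)
measured = VideoStats(likes=300, watch_time_hours=50,
                      interactions=80, clicks=2_000, impressions=30_000)

print(f"sensational: CTR {ctr(sensational):.0%}, score {score(sensational):.0f}")
print(f"measured:    CTR {ctr(measured):.0%}, score {score(measured):.0f}")
# sensational: CTR 30%, score 1010
# measured:    CTR 7%, score 777
```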
As we discussed earlier, the algorithm builds a profile of the user in order to show them content they want to see. When this behaviour is applied to news, logical progression suggests that if an algorithm determines you are potentially right-wing, it will bombard you with right-wing news sources.
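This feedback loop can be simulated in a few lines. The sketch below is a toy model, not any platform's actual algorithm: the leaning scale, the engagement rule, and every constant are assumptions. It shows how a recommender that learns from clicks converges on an estimate of the user's politics and then serves content clustered around that estimate.

```python
import random

def recommend(estimated_bias: float) -> float:
    """Draw a story whose political leaning (-1 = left, +1 = right)
    clusters around the platform's current estimate of the user."""
    return max(-1.0, min(1.0, random.gauss(estimated_bias, 0.2)))

def engages(user_bias: float, story: float) -> bool:
    """Assume the user mostly clicks stories near their own views."""
    return abs(story - user_bias) < 0.3

user_bias = 0.2       # a user slightly right of centre (hypothetical)
estimated_bias = 0.0  # the platform starts with no information

for _ in range(1000):
    story = recommend(estimated_bias)
    if engages(user_bias, story):
        # Every click nudges the estimate toward the clicked story.
        estimated_bias += 0.05 * (story - estimated_bias)

print(f"estimated bias after 1000 stories: {estimated_bias:+.2f}")
# Typically lands near +0.2: the feed now mirrors the user's leaning.
```

Notice that nothing sinister is required here; a simple engagement-maximising loop is enough to produce the bombardment described above.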
For example, this happened in October 2018 when Ben Shapiro, who is classified as a radical conservative, became a popular internet meme. The way in which people derived entertainment from this meme was purely ironic; however, the algorithm falsely flagged accounts engaging with the meme as potential conservatives and filled those users' feeds with conservative content. The extent to which content focused on by the algorithm affects the average person's opinion has not been thoroughly researched; however, my personal bias likens the level of political bombardment to targeted propaganda. I believe that, unintentionally, the algorithm classifies your political leanings and reinforces them, in turn narrowing your mind. However, this belief is not scientifically supported, and as such, I must stress that the reader should form their own opinion.