Online Presence: Trump, censorship, power & ethical conundrums

anon1998
15 min read · May 27, 2021

Introduction
Everyone can agree that technology has profoundly restructured our way of life in the 21st century. If one were to compare contemporary life with life 20 years ago, the contrast would be striking, in terms of positive developments as well as some seemingly intractable issues that have arisen in the space of just two decades. Social media platforms have revolutionised communication, making connection effortless and instant regardless of geographic and political divides. The boundaries between political and non-political activities on social media have become blurred; these ‘porous boundaries’ increase political exposure and lower the barrier to political engagement (Ekström & Shehata, 2016). Because the platforms are accessible at virtually no cost to users, they are spaces ripe for content creation. This has resulted in the democratisation of expression and a variegated landscape of opinion and discourse (Laird & Jordan, 2020). However, recent circumstances have highlighted how these advantages can be exploited, most visibly through hate speech and disinformation.

For decades, academics have postulated different audience theories to understand how, when and why viewers process certain media the way they do. The hypodermic needle model has often been discredited for assuming that audiences are passive (Bonney et al., 2016). However, it is hard not to draw a parallel between this theory and the riot incited by Donald Trump at the Capitol in 2021.

Is Facebook’s decision to keep Trump away from its platform for a further six months correct, after the site was used by the former US President to incite a mob that stormed the Capitol on January 6th?

Trump has been notorious for rhetoric and bluster which some (like CNN) say borders on hate speech, disseminated through social media platforms such as Twitter and Facebook. On the 6th of January, however, Trump’s manipulative rhetoric provoked his supporters to storm the Capitol after claims (refuted by most of the mainstream media) that the election had been stolen from him following Biden’s victory: “I know your pain. I know you’re hurt. We had an election that was stolen from us” (as cited in Culliford, 2021). Trump’s message was rapidly dispersed on social media platforms and even on certain mainstream news outlets such as Fox News. As a result, at least five individuals were killed, and around 140 police officers suffered mild to severe injuries (Jackman, 2021). According to Siripurapu & Merrow (2021), national security experts contend that the riot was heavily instigated through social media platforms.

Consequently, Twitter permanently banned Trump from its platform, whilst Facebook temporarily banned him from both Facebook and Instagram pending further deliberation, leaving open the possibility that the decision could be reversed. Facebook’s Oversight Board, a panel of academics, lawyers and rights activists created to help Facebook tackle questions of freedom of expression amongst its users, disputed the decision, calling it “indeterminate and standardless” (Culliford, 2021; BBC News, 2021).

Naturally, some have argued that Facebook and Twitter tend to be ‘left-leaning’ in their political orientation, but assertions like this are difficult to prove or disprove (with some employees reportedly uncomfortable expressing conservative opinions). Such allegations might nonetheless cast doubt over the impartiality of Facebook’s Oversight Board, some of whose members may (rightly or wrongly) be construed as having ‘left’ or Democratic Party leanings.

This is where the problems enter the discussion.

Trump has utilised the affordances of social media platforms to propagate right-wing populist rhetoric. Kreis (2017) argues that politicians like Trump have effectively appropriated these platforms for their own political agendas, exploiting their instantaneous nature to communicate directly with followers, promote themselves and disseminate their ideologies. Interestingly, Kreis points out that Trump used his own personal social media accounts rather than official presidential accounts, allowing his followers to feel closer to him, as if he were one of them.

This behaviour had been prevalent for numerous years; the riot, however, was the cherry on the cake. Many argue that his incendiary and rabble-rousing comments, often containing falsehoods and half-truths, led to the acts of ‘indirect terrorism’ on Capitol Hill. Trump has proven very adept at instigating his core support base to take action while stopping short of blatantly inciting them; he does so in the knowledge that extreme elements within that base are likely to act.

Although Trump did not explicitly tell his supporters to attack and ransack, he probably knew that things could get out of hand. In this sense, the banishment of Trump from Twitter and other social media was probably the ‘right’ thing to do. One can argue that hate speech and inciting behaviour, together with this act of ‘indirect’ terrorism, merit a permanent ban, given the volatile, indoctrinated nature of his followers and the extremely violent outcome of the riot, which resulted in numerous deaths and injuries. Of course, the ban is essentially a form of silencing, or rather censorship, and goes against Trump’s right to freedom of expression. Having said this, legally speaking, social media platforms are privately owned organisations that are not regulated by the government but function in accordance with their own policies. Thus, the constitutional right to free expression does not extend to social media platforms. However, one can argue that the platforms take advantage of this lack of legal obligation.

The Oversight Board criticised Facebook’s decision to extend Trump’s ban, arguing that uniform action should be taken rather than exceptional punishments created that do not correspond with the company’s policies. Their argument definitely holds water: social media platforms should not be allowed to make the rules up as they go, but should instead regulate content according to established policies and codes.

Despite this, the lines between content moderation and censorship are blurred. Granted, punishments should be applied equally to all, but the dilemma remains: which content should be moderated, and which should be censored? The contemporary idea that social media platforms are the new gatekeepers is a thought-provoking one that revolves around questions of power and authority. There is an ongoing debate on content moderation. Some argue that it is impossible given the immense amount of content generated daily, pointing out that factors such as sarcasm, tone of voice and culture make harmful content all the more complicated to detect (Siripurapu & Merrow, 2021); the sketch below illustrates the problem. Others argue that platforms such as Facebook do not have very high operating costs compared with companies of similar scale, so more accurate moderation ultimately boils down to how much the company is willing to invest (Edelman, 2020).
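
To make the detection problem concrete, here is a minimal sketch in Python of a naive keyword filter, the simplest form of automated moderation. It is purely illustrative and not any platform’s real system; the blocklist and example posts are invented. The point is how easily such a filter flags harmless hyperbole and sarcasm while missing veiled incitement.

```python
# A naive keyword filter: the simplest form of automated moderation.
# Blocklist and example posts are invented for illustration only.

BLOCKLIST = {"attack", "destroy"}

def naive_moderate(post: str) -> bool:
    """Flag a post if any word matches the blocklist."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "We will destroy them at the ballot box!",    # political hyperbole: flagged
    "Oh sure, let's all 'attack' the buffet.",    # sarcasm: flagged
    "Meet at the Capitol. You know what to do.",  # veiled incitement: missed
]

for p in posts:
    print(naive_moderate(p), "-", p)
```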

It is evident from Facebook’s decision to extend Trump’s ban that the platform was not prepared for this kind of dilemma. The extension undoubtedly serves to buy the company more time, highlighting that its decision was reactive rather than proactive. Understandably, Twitter and Facebook have received backlash over their decisions, with many claiming it is unfair to single Trump out rather than apply uniform repercussions. However, if Facebook were to welcome Trump back, it is very likely that his rhetoric and hate speech would continue. And since the company will face criticism whether it reinstates him or not, the six-month extension looks rather futile. The platform should treat Trump as an example of the consequences of such behaviour, but also set a new standard by developing policies that safeguard users’ mental and physical safety.

It is worth pointing out that although it is positive that these problems are coming to light, they have been prevalent for a long time, particularly among other political figures. It is also worth wondering whether cultural imperialism comes into play when determining the newsworthiness of the Capitol riot. Facebook and Twitter have indeed taken action, but what about the political figures in India targeting Muslims (Satariano, 2021)? Or the member of the Slovak Parliament imprisoned for incitement and racist comments? The list goes on.

Are the major social media platforms such as Facebook and Twitter the new trusted arbiters of online speech?

One of the most attractive aspects of open publishing platforms is that there is no screening or application process. That said, this is a double-edged sword, since anyone who wishes to communicate and disperse content can do so at will. These platforms have created a media environment with no mediating channel or distribution limit (Gainous & Wagner, 2014). The dangerous side of this freedom is its manipulation, most visibly in the form of misinformation. Nowadays fake news and clickbait have become the new normal, and misinformation is becoming ever more convincing, making it difficult to differentiate fabrication from truth. As Chayko (2016) puts it, “we can no longer regulate truth in an era where fake news spreads so easily and that we, therefore, live in a “post-truth” society” (p. 73).

So, the question is: in an age of digital misinformation, are platforms such as Facebook and Twitter trusted arbiters of online speech? The short answer is no. There are a couple of ways to interpret the word ‘arbiter’ in this scenario. The first is ‘arbiter’ as ‘protector’.

Facebook and Twitter have already faced problems and criticism over their content moderation, or lack thereof. One could argue that the word ‘trust’ carries connotations of ‘truth’, yet the fact that content gets uploaded does not mean it is truthful or accurate (Wilber, 2017). But if it isn’t truthful, should it still be allowed in the name of freedom of expression? Things become even more complicated when truths, half-truths and non-truths are combined in the same messaging. In many instances, an individual piece of content may not appear harmful, but the accumulation and repetition of information and views based on inaccuracies, half-truths or distortions can create a dangerous ripple effect leading to racism, violence and hate speech.

Another interpretation of ‘arbiter’ is an authoritarian one. As communicative communities, Facebook and Twitter enable echo chambers that perpetuate this era of post-truth. These platforms cannot be trusted arbiters of online speech when they enable misinformation through echo chambers and when a large chunk of content is misleading (Yusuf et al., 2014). Echo chambers often consist of members who listen to and believe other like-minded individuals while closing their minds to notions that do not correspond with their version of the truth (Gainous & Wagner, 2014). Chayko (2016) contends that social media platforms such as Facebook and Twitter “are increasingly called out for encouraging the development of echo chambers on their platforms” (p. 73). By protecting these echo chambers, the platforms allow communities to exploit their freedom of expression. Furthermore, Sunstein (2002) points out that echo chambers are networks of individuals with shared, often extreme, consensuses, creating a polarised media environment. It is worrying to contemplate the danger of enabling extremists to come together online. Take, for example, anti-vaxxers or COVID-19 deniers: their communities disseminate information claiming that the virus isn’t real or that vaccines are more harmful than beneficial.
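
The mechanism can be sketched in a few lines of Python. This is an illustrative assumption, not any platform’s real recommender, and the stance scores, tolerance and example items are all invented: if a feed only surfaces items whose stance is close to the user’s, disagreement simply never reaches them.

```python
# Toy sketch of similarity-based filtering producing an echo chamber.
# Stances run from -1.0 to 1.0; all values here are invented for illustration.

def echo_feed(user_stance: float, items: list,
              tolerance: float = 0.2) -> list:
    """Return only items whose stance is within `tolerance` of the user's."""
    return [text for text, stance in items
            if abs(stance - user_stance) <= tolerance]

items = [
    ("Vaccines are safe and effective", 0.9),
    ("Ask questions before you vaccinate", -0.3),
    ("The virus is a hoax", -0.9),
]

# A user already at the extreme sees nothing but confirmation:
print(echo_feed(user_stance=-0.9, items=items))  # only the hoax claim survives
```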

Should their content be deleted? Facebook says yes: early in 2021 it took down a handful of Covid-denial videos on the grounds that they posed a risk of harm (Quinn & Bland, 2021). However, this posed risk is subjective, as these communities believe they are helping society by exposing the truth as they perceive it. A number of rhetorical questions arise:

  • Who defines the parameters of censorship?
  • Is it worth being a trusted arbiter of online speech that encourages freedom of expression, regardless of whether that freedom leads to death and violence, as in the Capitol riot?
  • How can Facebook be ‘trusted’ with decisions on freedom of speech when the company chose not to fact-check political advertising during the 2020 election, a choice that arguably contributed to a riot with numerous deaths and injuries?
  • Can social media platforms such as Facebook and Twitter be authoritative figures that fairly determine what stays and what goes? How can they be given such a responsibility when, as James Ball (2018) claims, social media has been weaponised?

Although investing resources in content moderation would ameliorate the current situation of racism, violence and hate speech, the phrase “new arbiters of online speech” is too bold a claim. One could argue that Facebook and Twitter cannot be trusted arbiters of speech because they have their own agendas, just like any other news source. The phrase “subjective arbiters of online speech” would be better fitting, as what counts as hateful, violent or racist varies from opinion to opinion. Furthermore, any algorithm created to prevent such content would be developed by humans, meaning that even the code is subjective, as the sketch below suggests.
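
Here is a minimal Python sketch of that subjectivity. The terms, weights and threshold are invented for illustration and not drawn from any real system; the point is that every number encodes a human judgment, so changing the threshold flips the same post between ‘allowed’ and ‘removable’.

```python
# Toy toxicity scorer: every value below is a human policy choice,
# not an objective fact (all terms and weights are invented).

HATE_WEIGHTS = {"vermin": 0.9, "invaders": 0.7, "traitors": 0.5}

def toxicity_score(post: str) -> float:
    """Sum the weights of any flagged terms appearing in the post."""
    text = post.lower()
    return sum(weight for term, weight in HATE_WEIGHTS.items() if term in text)

def is_removable(post: str, threshold: float = 0.8) -> bool:
    # The threshold is where subjectivity bites: at 0.8 a post containing
    # only "traitors" (score 0.5) stays up; at 0.5 it comes down.
    return toxicity_score(post) >= threshold

post = "These traitors must be stopped"
print(is_removable(post))                 # False at the default threshold
print(is_removable(post, threshold=0.5))  # True once a human lowers the bar
```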

Or is there too much power vested in Facebook and other social media platforms, whose business models will always find ways to highlight divisive content to drive advertising revenues?

To generate user engagement, Facebook uses algorithms that are partly hand-programmed and partly based on machine learning, a form of artificial intelligence that identifies patterns and makes judgements without direct human intervention. These algorithms identified that sensationalised and polarised content generated substantial amounts of engagement (Lovejoy, 2020). In ‘Post-Truth: How Bullshit Conquered the World’, Ball (2018) states that “one recent study suggested that fake and hyper-partisan sites got around three times more traffic through Facebook than did through what it called ‘real’ news sites. Social sharing is essential for such sites and seems to serve them better than it does most of the mainstream” (p. 110).
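
The underlying dynamic can be sketched in a few lines of Python. The posts, weights and scoring function below are assumptions for illustration, not Facebook’s actual ranking code: if a feed is ordered by predicted engagement, and divisive content reliably earns more comments and shares, it rises to the top.

```python
# Toy sketch of engagement-optimised feed ranking (all values invented).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int

def predicted_engagement(p: Post) -> float:
    # Comments and shares weighted above clicks, a common pattern in
    # engagement metrics; these particular weights are illustrative.
    return p.clicks + 3 * p.comments + 5 * p.shares

def rank_feed(posts: list) -> list:
    """Order the feed by predicted engagement, highest first."""
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local bake sale raises funds", clicks=120, comments=4, shares=2),
    Post("THEY are stealing YOUR country", clicks=90, comments=60, shares=45),
])
print([p.text for p in feed])  # the divisive post ranks first
```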

Such content thus gains increased visibility, because more engagement means more advertising revenue (approximately $3 billion). Although there are claims that this effect of the algorithm was unintended, Facebook’s lack of action says otherwise. Furthermore, Facebook’s defence hid behind the notion of freedom of expression: if polarised users wished to engage with content that did not breach its policies, then so be it. It may also be argued that this attitude has allowed politics in many countries, particularly the USA, to reach hitherto unseen levels of vitriol and polarisation, which, if left unchecked, could have serious social consequences by undermining the very fabric of society.

If one parallels Facebook’s decision to censor Trump and limit his freedom of expression to safeguard users with its freedom-of-expression defence for maintaining a revenue-generating algorithm, it is hard not to view freedom of expression as a get-out-of-jail-free card deployed whenever it benefits the company. This information came to light in 2020; however, it is not the first time Facebook has manipulated users for its own gain.

Social media sensationalism has been around for many years. When the Ebola virus reached New York, for example, platforms filled with headlines such as “Doctor in NY Tests Positive for Ebola”, which omitted that the doctor posed no threat to the public. Regardless, Twitter received approximately 6,000 tweets per second on the story, ultimately generating billions of impressions (Rose-Stockwell, 2017). One can argue that these impressions demonstrate herd behaviour, a theory posited by Simmel (1957), whereby the behaviour of individuals is adopted by others as a form of ‘social equalisation’ (Park & Jang, 2017). The misinformation brought terror to the masses and was exacerbated by unfounded claims uploaded to social media. It is somewhat ironic that in this era of advancement, societies panic just as they did over the War of the Worlds radio broadcast back in 1938 (Bonney et al., 2016). This example of social media sensationalism not only highlights the platforms’ exploitative and manipulative power but also underlines how users’ emotionality is used as a tool for profit. In other words, we are living in a society where user engagement is a currency of the attention economy.

Concluding Remarks
The Cambridge Analytica scandal exposed Facebook, unravelling just how much information the company obtains from its users and how it utilises it to reach specific objectives, whether for political gain or advertising. Additionally, the documentary ‘The Social Dilemma’ highlights the extremes of this information manipulation, to the point where users’ behaviour is unconsciously controlled. One could argue that asking whether social media platforms have too much power is rather rhetorical when the level of control and information they possess roughly equates to, if not exceeds, that of government agencies.

Undoubtedly, the dilemma of social media platforms and power is too recent a phenomenon to have a perfect solution just yet. And as previously mentioned, these companies are private, meaning that they are not governed by the state. More proactive and preventative legislation should be put in place to pressure platforms to behave more responsibly and ethically, rather than being purely motivated by revenue. After all, this is uncharted territory, where the social, political and psychological repercussions are unknown. The ‘how’ is the biggest question; however, for a company with annual revenue of approximately $84 billion, it is hard to call anything impossible (Statista, 2021).

References

Ball, J. (2018). Post-Truth: How Bullshit Conquered the World. Biteback Publishing.

BBC News. (2021). Facebook’s Trump ban upheld by Oversight Board for now. https://www.bbc.com/news/technology-56985583

Bonney, E., Cleasby, E., Keeley-Holden, S., Simpson, C., & Tate, R. (Eds.). (2016). A-Level Sociology: AQA Year 1 AS Complete Revision ‘Pract. Coordination Group Publications Ltd (CGP).

Chayko, M. (2016). Superconnected: The Internet, Digital Media, and Techno-Social Life (Sage Sociological Essentials) (1st ed.). SAGE Publications, Inc.

Culliford, E. (2021). Facebook has six months to determine if Trump returns. Reuters. https://www.reuters.com/world/us/facebook-oversight-board-rule-trumps-return-facebook-2021-05-05/

Edelman, G. (2020, July 29). Stop Saying Facebook Is ‘Too Big to Moderate.’ Wired. https://www.wired.com/story/stop-saying-facebook-too-big-to-moderate/

Ekström, M., & Shehata, A. (2016). Social media, porous boundaries, and the development of online political engagement among young citizens. New Media & Society, 20(2), 740–759. https://doi.org/10.1177/1461444816670325

Gainous, J., & Wagner, K. M. (2014). Tweeting to Power: The Social Media Revolution in American Politics (Oxford Studies in Digital Politics) (Illustrated ed.). Oxford University Press.

Jackman, T. (2021). Police union says 140 officers injured in Capitol riot. Washington Post. https://www.washingtonpost.com/local/public-safety/police-union-says-140-officers-injured-in-capitol-riot/2021/01/27/60743642-60e2-11eb-9430-e7c77b5b0297_story.html

Kreis, R. (2017). The “Tweet Politics” of President Trump. Journal of Language and Politics, 16, 607–618.

Laird, R. D., & Jordan, K. Y. (2020). Regulating Freedom of Speech on Social Media. Humanities and Social Sciences, 6(1), 14–20. INOSR Publications.

Lovejoy, B. (2020). Facebook algorithms promote divisive content, but company decided not to act. 9to5Mac. https://9to5mac.com/2020/05/27/facebook-algorithms/

O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Reprint ed.). Crown.

Park, Y., & Jang, S.M. (2017). Public attention, social media, and the Edward Snowden saga. First Monday, 22.

Persily, N. (2020). Social Media and Democracy (SSRC Anxieties of Democracy). Cambridge University Press.

Quinn, B., & Bland, A. (2021, January 29). Facebook removes Save Our Rights UK Covid denial videos. The Guardian. https://www.theguardian.com/world/2021/jan/28/coronavirus-denial-videos-are-removed-from-facebook

Rose-Stockwell, T. (2017, July 28). How Facebook’s news-feed algorithm sells our fear and outrage for profit. Quartz. https://qz.com/1039910/how-facebooks-news-feed-algorithm-sells-our-fear-and-outrage-for-profit/

Satariano, A. (2021, January 17). Facebook and Twitter Face International Scrutiny After Trump Ban. The New York Times. https://www.nytimes.com/2021/01/14/technology/trump-facebook-twitter.html

Siripurapu, A., & Merrow, W. (2021, February 9). Social Media and Online Speech: How Should Countries Regulate Tech Giants? Council on Foreign Relations. https://www.cfr.org/in-brief/social-media-and-online-speech-how-should-countries-regulate-tech-giants

Smith, R. E. (2019). Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All. Bloomsbury Business.

Statista. (2021). Facebook: annual revenue 2009–2020. https://www.statista.com/statistics/268604/annual-revenue-of-facebook/

Sunstein, C. (2002). Republic.com. Princeton University Press.

Wilber, K. (2017). Trump and a Post-Truth World. Shambhala.

Yusuf, N., Al-Banawi, N., & Al-Imam, H. A. R. (2014). The Social Media As Echo Chamber: The Digital Impact. Journal of Business & Economics Research (JBER), 12(1), 1. https://doi.org/10.19030/jber.v12i1.8369
