Is It Effective to Flag Social Media Content Made by a Prominent Individual? Evidence from Donald Trump’s Twitter Activity Says Otherwise
Wallace Chipidza and Jie (Kevin) Yan
Traditional media like print, radio, and television have long been implicated in facilitating extremism, political violence, genocide, and other forms of societal dysfunction. In response, governments across the world have enforced regulatory frameworks against content they deem inappropriate or destabilizing to important societal systems. Increasingly, however, social media has competed with traditional media for the public’s news consumption and attention. Yet, particularly in Western democracies, the regulations governing traditional media have not applied to content produced on social media. For example, in the United States, whereas traditional media are liable for content produced on their platforms, social media companies are not, under Section 230 of the Communications Decency Act. Thus, in the US, it has largely been left to social media companies to self-regulate the content produced on their platforms.
—To what extent should influential social media platforms like Twitter and Facebook censor objectionable posts by prominent individuals?—
But to what extent should influential social media platforms like Twitter and Facebook censor objectionable posts by prominent individuals, in the United States and elsewhere? Doing so might be seen as an affront to the tenets of free expression, but taking no action could be perceived as encouraging the spread of reprehensible content posted by prominent individuals. A tentative middle ground is content moderation that signals to social media audiences that certain posts may contain objectionable information: the mechanism of flagging. Content moderation involves “evaluating content, determining if it violates defined rules and standards, and making decisions about whether to keep or remove potentially offending material” [1, p. 2]. Since their inception, social media platforms have had to grapple with how to craft appropriate content moderation practices. Although the internet had been envisioned as a place of unfettered free speech, where ideas are judged on merit rather than offline status, interactions quickly devolved into antisocial activities such as flame wars, cyberbullying, and trolling. These developments required that platforms adopt more stringent moderation practices (e.g., flagging or deleting) in response to pressure from the public and/or advertisers. For a long time, however, moderation was applied largely to regular users rather than prominent personalities. But after social media was implicated multiple times in political violence, the spread of misinformation, and electoral disruption facilitated by prominent politicians and other influential personalities, debate intensified over the appropriate measures to take against prominent individuals who violate platform policies and terms of service.
Flagging has become a leading content moderation measure on social media: a mechanism that signals to users that content is objectionable or otherwise violates the platform’s terms of service, primarily in order to curb the spread of misinformation. However, the effectiveness of flagging has only been studied for regular users; we still know little about whether and how social media platforms should moderate content produced by prominent individuals. Would moderation such as flagging be effective in curbing the spread of objectionable content produced by prominent individuals? Our research therefore examined the following question: How do social media users react to flagged content, specifically when that content is produced by a prominent individual?
We employed explainable machine learning models to quantify the effect of flagging content produced by former US President Donald Trump on its subsequent resharing. Our results show that flagged tweets were retweeted substantially more than routine or normal tweets: the difference amounts to 5,138 retweets. We examined alternative explanations and found that our results held even when controlling for tweet topic and other factors. In addition, the effect strengthened over time, up until Trump’s Twitter account was suspended.
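To make this concrete, the sketch below illustrates in Python one way such an explainable analysis could be set up: fit a gradient-boosted tree to predict retweet counts, then use SHAP values to attribute predicted resharing to a flagging indicator. This is a minimal illustration rather than our actual pipeline; the feature names and the synthetic data are hypothetical assumptions.

```python
# Minimal illustrative sketch (not the study's actual pipeline or data):
# estimate how much a "flagged" indicator contributes to predicted retweets
# using a gradient-boosted tree and SHAP attributions. All column names and
# the synthetic data below are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "flagged": rng.integers(0, 2, n),          # 1 if the platform flagged the tweet
    "tweet_length": rng.integers(10, 280, n),  # characters
    "hour_of_day": rng.integers(0, 24, n),     # posting time
    "topic_id": rng.integers(0, 8, n),         # e.g., from a topic model
})
# Synthetic outcome mirroring the direction of the finding: flagged tweets
# are reshared more, all else equal.
y = 20000 + 5000 * X["flagged"] + 10 * X["tweet_length"] + rng.normal(0, 2000, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP decomposes each prediction into per-feature contributions; comparing
# the mean contribution of "flagged" across flagged vs. unflagged tweets
# estimates its marginal effect on (predicted) retweets.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n, n_features)
flag_col = X.columns.get_loc("flagged")
mask = X["flagged"].to_numpy() == 1
effect = shap_values[mask, flag_col].mean() - shap_values[~mask, flag_col].mean()
print(f"Retweet difference attributed to flagging: {effect:,.0f}")
```

The study itself worked with Trump’s actual tweets and richer controls, but the general idea is the same: attributing a model’s predictions to individual features lets one separate flagging’s association with resharing from confounds such as topic or timing.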
Various factors may explain these findings. For example, Trump’s status and extraordinary influence may play a role, given the polarized nature of US politics and Trump’s enthusiastic base of supporters. When Twitter flagged one of Trump’s tweets, the ensuing publicity may have led to the tweet being reshared even more than expected. Trump’s political opponents, as well as prominent journalists and news outlets (e.g., the New York Times, BBC, and CNN), may also have shared flagged tweets, either because the rationale for flagging painted him in a negative light or because of the tweets’ news value. All of this amounted to free publicity for the flagged content. Our findings suggest that flagging as a content moderation approach may only be effective for smaller accounts.
Based on our results, we suggest that flagging content deemed objectionable may be counterproductive until it is done so often that it is no longer newsworthy. Yet, since flagging content by prominent individuals, especially the US president, will almost inevitably generate news coverage and draw attention to the flagged content, a more effective measure might be to delete content that violates the platform’s terms of service. Another alternative would be to keep flagging objectionable content while also restricting the ability to reshare it, thereby limiting its spread.
We therefore suggest that flagging, the middle ground trodden by platforms as a compromise between those urging drastic punishment (e.g., banning errant user accounts) and those advocating laissez-faire participation (e.g., leaving objectionable content unmoderated), is not a sustainable solution for foiling misinformation.
This research is detailed in the following paper:
Chipidza, W., & Yan, J. (Kevin). (2022). The effectiveness of flagging content belonging to prominent individuals: The case of Donald Trump on Twitter. Journal of the Association for Information Science and Technology. Advance online publication. https://doi.org/10.1002/asi.24705
References
[1] Gilbert, S. A. (2020). “I run the world’s largest historical outreach project and it’s on a cesspool of a website.” Moderating a public scholarship site on Reddit: A case study of r/AskHistorians. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1), 1–27.

Cite this article in APA as: Chipidza, W., & Yan, J. (2022, August 11). Is it effective to flag social media content made by a prominent individual? Evidence from Donald Trump’s Twitter activity says otherwise. Information Matters, Vol. 2, Issue 8. https://informationmatters.org/2022/08/is-it-effective-to-flag-social-media-content-made-by-a-prominent-individual-evidence-from-donald-trumps-twitter-activity-says-otherwise/