Endangered Social Media Innovations Part 1: Usenet’s Small World Model Preventing Large Scale Content Manipulation
Christopher Lueg
Yet another troubling development in social media (as if there aren’t enough already, such as continuous surveillance to sell targeted advertisements) is research that looks at ways to help online discussions stay on-topic and to prevent them from descending into exchanges of hostilities. Researchers do so by computing scores that predict whether a given discussion is likely to deteriorate. Based on such a score, measures may be implemented to prevent the expected deterioration from ever happening, including manipulating the way discussions are ranked for certain participants. The idea is that reduced exposure to the respective discussions would take some heat off the encounters.
At first glance this kind of content manipulation may sound justified, even though questions remain as to how “discussion deterioration” would be calculated and how such algorithms would reflect the fact that robust discussion can be desirable (Andersen 2021).
Not showing content can be done deliberately, as described, or it can happen as collateral damage of algorithms designed to prioritize engagement. Tufekci (2017) describes such a scenario, in which political topics like the 2014 Ferguson unrest in the USA may become buried, attention-wise, by social media algorithms that prioritize showing their clientele “fun stuff” like the then-popular ice bucket challenge simply to prolong engagement, which in turn allows platforms to push more ads. Tufekci warns that the same mechanisms can also be used to bury information deliberately (plenary at ASIS&T 2018), describing such platforms as “suitable for potential authoritarianism.”
Today’s social media largely operate on technological infrastructure controlled by a small number of very powerful corporate entities. These entities are able to impose largely arbitrary rules as to what information can be shared using their infrastructure and what criteria are used when determining what content will be shown.
Imposing some restrictions is not fundamentally wrong, contrary to what some libertarians would argue (the prototypical argument for some level of restriction is the sharing of child sexual abuse material); what is deeply flawed is leaving control over information sharing to corporate entities that pursue, by definition, corporate interests.
It was not always like that, and I wonder why we seem to have been de-inventing in the social media space and how this came about. This interest made me reflect on past accomplishments in the design of social media that existed long before the term “social media” emerged in 2004 (Merriam-Webster 2022).
Back in 1979 (more than four decades, or about two generations, ago), two Duke University graduate students, Tom Truscott and Jim Ellis, put together the building blocks of what would later be known as Netnews or Usenet, and what has been called “a democratic and technological breakthrough” (Hauben and Hauben 1998; see also Pfaffenberger 2003).
In a nutshell, Usenet is a distributed network of so-called Usenet servers that host and share people’s discussion contributions. Using a floodfill approach, Usenet servers distribute new contributions to any other Usenet server they have agreed to share content with (such sharing arrangements are called “feeds” in Usenet land). Receiving servers may or may not accept contributions based on a variety of criteria, including whether they have already received the same contribution from another server.
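To make the distribution model concrete, here is a minimal sketch in Python of floodfill propagation with duplicate suppression keyed on each article’s unique Message-ID (the header real Usenet servers use to recognize articles they have already seen). The class and method names are made up for illustration and do not correspond to any actual news server software.

```python
# A minimal sketch of Usenet-style floodfill distribution. Class and method
# names are made up for illustration; they are not taken from any real news
# server implementation.

class Article:
    def __init__(self, message_id, newsgroup, body):
        self.message_id = message_id  # globally unique, e.g. "<abc123@host>"
        self.newsgroup = newsgroup
        self.body = body

class Server:
    def __init__(self, name):
        self.name = name
        self.feeds = []    # peers this server has agreed to exchange articles with
        self.seen = set()  # Message-IDs already received (duplicate suppression)
        self.articles = [] # locally stored articles

    def peer_with(self, other):
        # A feed is a mutual arrangement between two sites.
        self.feeds.append(other)
        other.feeds.append(self)

    def accept(self, article):
        # Hook for local content policy; the default server accepts everything.
        return True

    def receive(self, article):
        if article.message_id in self.seen:
            return  # already received via another feed; ignore the duplicate
        self.seen.add(article.message_id)
        if not self.accept(article):
            return  # dropped by local policy: neither stored nor passed on
        self.articles.append(article)
        for peer in self.feeds:  # flood the new article to all feeds
            peer.receive(article)
```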
An important aspect of this distribution model is that a server not accepting certain contributions does not necessarily affect other servers, since those servers would still be able to obtain the contributions from elsewhere. In this context Internet pioneer and Electronic Frontier Foundation founder John Gilmore is often quoted saying: “The Net interprets censorship as damage and routes around it.” (*) Gilmore (n.d.) explains, “In its original form, it meant that the Usenet software (which moves messages around in discussion newsgroups) was resistant to censorship because, if a node drops certain messages because it doesn’t like their subject, the messages find their way past that node anyway by some other route.”
This means that the Usenet ecosystem allows every single participating site (server) to set its own content filtering rules, and it can do so without affecting others.
To illustrate this by example, one could set up a Usenet server for a Facebook-like community where images of breastfeeding mums are banned for being “explicit.” While arguably a weird construct, banning any posting that shows a breastfeeding mum on that particular server would not affect any other Usenet server or its respective conceptualization of what constitutes explicit images. The impact of the corresponding world view would be limited to this particular site, which again shows how censorship is limited by design in the Usenet ecosystem.
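Continuing the sketch above, a hypothetical FilteringServer with a deliberately crude keyword filter illustrates both points: the filtering site drops the article locally without affecting anyone else, and a downstream site still receives the article via another feed, just as Gilmore describes.

```python
# Continuing the sketch: a server with a local content policy. The keyword
# filter is deliberately crude and purely illustrative.

class FilteringServer(Server):
    def __init__(self, name, banned_words):
        super().__init__(name)
        self.banned_words = banned_words

    def accept(self, article):
        return not any(word in article.body for word in self.banned_words)

# Topology: alpha feeds beta and gamma; beta and gamma both feed delta.
alpha = Server("alpha")
beta = FilteringServer("beta", banned_words=["breastfeeding"])
gamma = Server("gamma")
delta = Server("delta")
alpha.peer_with(beta)
alpha.peer_with(gamma)
beta.peer_with(delta)
gamma.peer_with(delta)

alpha.receive(Article("<1@alpha>", "misc.kids", "photo of a breastfeeding mum"))
print(len(beta.articles))   # 0 -- beta's local policy drops the article
print(len(delta.articles))  # 1 -- delta still gets it, routed via gamma
```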
That doesn’t mean that Usenet was inherently more peaceful than other online spaces. Pfaffenberger (1996) notes that Usenet’s overall culture “stems from a lengthy and often traumatic history, in which designers, users, and administrators struggled to conceptualize and control the growing network in the face of rapid and unpredictable technological change.” Kayany (1998) observes that the way people engage with each other depends much on the particular communities (newsgroups) in which they participate, a pattern we can also observe in contemporary social media.
Final thoughts
At a time when academics and politicians contemplate the idea of breaking up big tech to counter the risk of large-scale content manipulation, it might be worth revisiting the Usenet model and its time-tested foundations: a universe of small information worlds enabled by sharing among peers and the absence of central (corporate) control. Plus, endless opportunities for innovation in interface design, thanks to the decoupling of transport and presentation layers and unrestricted access to the content produced by participants.
From the point of view articulated in this article, the most recent wave of highly specialized chat apps is a solid step backwards: they push a content sharing model where content and interface are glued together in one package, with no scope for independent interface designs and no scope for reviewing presentation algorithms for possible interferences like the ones discussed earlier. The argument that they afford sheltered spaces for discussions does not hold, certainly not from a technical point of view, since a Usenet server can deliver exactly that, in a much more transparent way, as long as it does not trade articles with other servers.
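In terms of the earlier sketch, such a sheltered space is simply a server that never peers with anyone:

```python
# A sheltered community, in terms of the earlier sketch: a server with no
# feeds. Articles posted there are stored locally and propagate nowhere.
island = Server("island")  # no peer_with() calls, so island.feeds == []
island.receive(Article("<2@island>", "local.chat", "members-only discussion"))
print(len(island.articles))  # 1 -- stored locally only
```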
There is a lot that we need to remember in social media.
(*) Thanks to the folks at Quote Investigator for tracking this one down: https://quoteinvestigator.com/2021/07/12/censor/
References
Andersen, I. V. (2021). Hostility online: Flaming, trolling, and the public debate. First Monday, 26(3). https://doi.org/10.5210/fm.v26i3.11547
Gilmore, J. (n.d.). Things I’ve Said (That People Sometimes Remember). http://www.toad.com/gnu/. Last updated November 27, 2013.
Hauben, M., & Hauben, R. (1998). The Social Forces Behind the Development of Usenet (Chapter 3). First Monday, 3(7). https://doi.org/10.5210/fm.v3i7.609
Kayany, J. M. (1998). Contexts of uninhibited online behavior: Flaming in social newsgroups on Usenet. Journal of the American Society for Information Science, 49(12), 1135–1141.
Merriam-Webster. (2022). Social media. In Merriam-Webster.com dictionary. https://www.merriam-webster.com/dictionary/social%20media. Accessed 8 Mar. 2022.
Pfaffenberger, B. (2003). A standing wave in the web of our communications: USENET and the socio-technical construction of cyberspace values. In C. Lueg & D. Fisher (Eds.), From Usenet to CoWebs: Interacting with Social Information Spaces (pp. 20–44). London, UK: Springer.
Pfaffenberger, B. (1996). “If I Want It, It’s OK”: Usenet and the (outer) limits of free speech. The Information Society, 12(4), 365–386.
Tufekci, Z. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT: Yale University Press.
Vance, A. (2011, April 15). This tech bubble is different. Bloomberg Businessweek. https://www.bloomberg.com/news/articles/2011-04-14/this-tech-bubble-is-different
Zuboff, S. (2015). Big Other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5
Cite this article in APA as: Lueg, C. (2022, April 7). Endangered social media innovations part 1: Usenet’s small world model preventing large scale content manipulation. Information Matters, Vol. 2, Issue 4. https://informationmatters.org/2022/04/endangered-social-media-innovations-part-1-usenets-small-world-model-preventing-large-scale-content-manipulation/