How Everyone Can Agree on What Counts as Quality Information
Frans van der Sluis
It’s tricky for people to agree on what counts as quality information. We see this every day with echo chambers on social media, where everyone has their own idea of what’s true or important. And with the spread of fake news, it’s clear that not everyone assesses information quality the same way. In fact, studies show that people agree with each other in only 6–32% of cases when assessing quality. This suggests that information quality is a product of our own, individual minds, one best captured in the “like,” “love,” “haha,” and “wow” buttons on social media. But our research suggests there’s a way to get people on the same page about information quality, and it involves making some changes to how we assess and share information online.
Our research revealed a fascinating insight: people don’t view the quality assessment of information as merely a matter of personal opinion. Rather, they recognize the potential for assessments to hold universal validity—a concept we’ve termed “inter-subjective validity.” This implies that an assessment can be applicable not just to the individual making it but also to others, provided certain conditions are met. For instance, if someone deeply involved in football discusses a piece about football, their expertise makes their judgment more valid. Likewise, if someone actively uses the discussed information in their daily activities, their assessment gains additional validity.
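To make these conditions concrete, here is a minimal Python sketch of how a platform might record a quality assessment together with the context that bears on its inter-subjective validity. The Assessment fields, the validity_weight function, and the specific weights are illustrative assumptions on our part, not a model from the paper.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """A quality judgment plus the context that bears on its
    inter-subjective validity (all field names are illustrative)."""
    label: str              # e.g. "high" or "low" quality
    topic_expertise: bool   # assessor is deeply involved in the topic
    uses_information: bool  # assessor applies the information in daily life

def validity_weight(a: Assessment) -> float:
    """Hypothetical weighting: the contextual conditions named in the
    article strengthen how far an assessment extends beyond its maker."""
    weight = 1.0
    if a.topic_expertise:
        weight += 1.0  # e.g. a football insider judging a football piece
    if a.uses_information:
        weight += 0.5  # e.g. someone who acts on the information daily
    return weight

insider = Assessment(label="high", topic_expertise=True, uses_information=True)
passerby = Assessment(label="low", topic_expertise=False, uses_information=False)
print(validity_weight(insider), validity_weight(passerby))  # 2.5 1.0
```

The point of the sketch is not the numbers but the structure: an assessment travels with its context, so others can judge for whom, and under what conditions, it holds.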
In our first study, we explored online discussions on a variety of topics, from football and cooking to politics and fashion. We found that people often engage in discussions about the quality of information, especially when they feel knowledgeable. This willingness to discuss what makes information valid or not underscores the idea that information quality is a matter of collective concern, not solely personal judgment.

Our second study presented participants with scenarios to test whether people could agree on the quality of information. The results were encouraging. With a clear context, such as understanding the assessor’s expertise, the intention behind their assessment, and the reputation of the information source, the validity of quality claims significantly improved, with agreement rates increasing from 18% to 61%. This notable increase suggests that achieving consensus on information quality is indeed possible, especially when assessments are made with transparency and clarity.
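To make concrete what an agreement rate like 18% or 61% measures, the sketch below computes simple pairwise percent agreement: the fraction of assessor pairs that give an item the same quality label, pooled across items. The toy labels are invented for illustration, and the study may well use a different agreement measure.

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Fraction of assessor pairs that assign the same quality label,
    pooled across all items."""
    matches, pairs = 0, 0
    for item_ratings in ratings:          # one list of labels per item
        for a, b in combinations(item_ratings, 2):
            pairs += 1
            matches += (a == b)
    return matches / pairs

# Toy data: three assessors label two items as "high" or "low" quality.
without_context = [["high", "low", "low"], ["low", "high", "low"]]
with_context = [["high", "high", "high"], ["low", "low", "high"]]

print(f"without context: {pairwise_agreement(without_context):.0%}")  # 33%
print(f"with context:    {pairwise_agreement(with_context):.0%}")     # 67%
```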
Right now, search engines and social media platforms don’t do much to help users understand why some information is considered high-quality and other information isn’t. This opacity helps explain the disagreement we commonly observe: people agree about information quality only about 32% of the time at best. Our findings show that if users knew more about why information is judged the way it is, that agreement rate could jump to as much as 61%. It turns out that people can often agree on quality, provided the conditions are right.
So, what can we do about it? For starters, we can make the process of assessing information quality more open and collaborative. If search engines and social media platforms were more transparent about how they judge quality, and if they let users weigh in on those judgments, we could all start to have a better understanding of what makes information good or not. And if we’re going to let information spread far and wide, it should first be vetted for quality. This doesn’t mean censoring content; it just means encouraging a more thoughtful look at what we share and consume online.
With the rapid emergence of content-generating technologies and the increasing sophistication of deep fakes, making quality assessments more open and collaborative has never been more critical. Platforms like Twitter/X are beginning to explore these concepts through features like community notes, which let users share brief assessments of misleading content. This is a step in the right direction, but there’s room to expand these efforts to cover all types of information, not just the obviously false stuff. By working together to figure out what counts as quality information, we can start to build a shared understanding of the world that’s based on more than just personal opinion.
Want to read more on these ideas? The full paper is available open access at JASIST, the Journal of the Association for Information Science and Technology, via this link. A shorter workshop paper outlining some of these ideas is also available here.
Cite this article in APA as: van der Sluis, F. (2024, April 12). How everyone can agree on what counts as quality information. Information Matters, 4(4). https://informationmatters.org/2024/04/how-everyone-can-agree-on-what-counts-as-quality-information/
Author
Dr. Frans van der Sluis is a dedicated researcher and academic in the field of human-information interaction. With a background in computer science and cognitive psychology, Dr. Van der Sluis explores how people interact with digital information systems, from search engines and databases to innovative platforms like social media and interactive AI. His work primarily focuses on enhancing user experience by making digital interactions more intuitive and effective. He investigates how information systems can better incorporate and reflect quality considerations to foster engagement and understanding among users. By exploring how design and technology can better align with human thinking, Dr. Van der Sluis aims to make information retrieval not only more efficient but also more meaningful for users across the globe.