Information Integrity, Academic Integrity, and Generative AI
Ali Shiri
As generative AI tools and technologies penetrate various aspects of our digital and physical lives, it is particularly timely and important to consider the fundamental principles of information integrity and academic integrity. AI agents and algorithms are increasingly finding their way into social media platforms, search engines, learning management systems, and shopping and travel websites, to name just a few. While the concepts of information integrity and academic integrity have different connotations, contexts, and origins, the current AI development landscape calls for a critical examination of the ways in which they intertwine, overlap, and can contribute to a conceptual framework for the responsible development and use of AI technologies. Information science is well-positioned to lead and critically inform the development of theoretical and technological frameworks that support information integrity and academic integrity. If we agree that generative AI tools and technologies share with information science such high-level facets as people, information, data, technology, and their interactions, then it is our responsibility to embrace the opportunities and address the emerging informational, social, ethical, and cultural challenges.
Information integrity is conceptualized as one of the key facets of information security, which promotes the protection of information and information systems from threats to confidentiality, integrity, and availability throughout the life cycle, i.e., from creation or origin through storage, transmission, processing, and destruction (Harley & Cooper, 2021). Information integrity has been defined and approached from various perspectives and in various contexts, including information quality, data and information privacy, cybersecurity, information literacy, data governance, accounting, banking, telecommunication, electronic health records, voting, and autonomous vehicles. To ensure the integrity of information, digital preservation professionals and information security researchers have developed various mechanisms and frameworks, including cryptography, digital signatures, and watermarking, some of which are particularly worth re-imagining, repurposing, and reusing in the context of AI-generated text, images, and digital objects.
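To make the integrity-verification idea concrete, the minimal sketch below uses Python's standard hashlib and hmac modules to attach a keyed digest to a piece of content and later check whether it has been altered. The content string and secret key are hypothetical placeholders; this is only an illustration of the kind of mechanism that could be repurposed for AI-generated text and digital objects, not a prescribed implementation.

```python
# Minimal sketch: verifying that a digital object has not been altered,
# using a keyed hash (HMAC-SHA256) from Python's standard library.
# The content and secret key below are hypothetical placeholders.
import hashlib
import hmac

SECRET_KEY = b"example-shared-key"  # hypothetical key held by the publisher


def fingerprint(content: bytes) -> str:
    """Return a keyed SHA-256 digest that serves as an integrity tag."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify(content: bytes, expected_tag: str) -> bool:
    """Compare the stored tag against a freshly computed one."""
    return hmac.compare_digest(fingerprint(content), expected_tag)


if __name__ == "__main__":
    original = b"An AI-generated abstract submitted for review."
    tag = fingerprint(original)          # recorded at creation time
    tampered = original + b" (edited)"   # content altered after the fact
    print(verify(original, tag))   # True: integrity preserved
    print(verify(tampered, tag))   # False: integrity violated
```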
Academic integrity, on the other hand, is defined as a commitment to six fundamental values of honesty, trust, fairness, respect, responsibility, and courage by scholars, educators, students, researchers, learners, and administrators to inform and improve ethical decision making within scholarly and academic communities (ICAI, 2021). Academic integrity, and more specifically research and scholarship integrity, encompasses such key facets as originality, giving credit and acknowledgment, transparency, privacy, confidentiality, and the ethical and responsible generation, collection, analysis, and reporting of data and information. With the rise of generative AI tools such as ChatGPT, Perplexity.AI, and Google Bard, the discussion of academic integrity and information integrity has become more important than ever. In her discussion of post-plagiarism and writing in the age of artificial intelligence, Eaton (2021) notes that “hybrid human-AI writing will become the norm, humans can relinquish control but not responsibility, attribution remains important, and that traditional definitions of plagiarism no longer apply.” In fact, many academic and higher education institutions around the world are actively addressing emerging AI trends by reviewing and revising their policies and guidelines around admissions and awards, academic and scholarship integrity, students’ code of academic behaviour, research ethics protocols, and intellectual property and copyright.
A review of information integrity and academic integrity definitions shows that these two concepts share a number of common and overlapping principles and practices, such as trustworthiness, accountability, credibility, reliability, authenticity, ethical standards, and traceability of data and information. The shifting conceptual coverage of these two concepts and their emerging convergence across physical and digital boundaries demand a closer examination. Shneiderman’s thought-provoking book, Human-Centered AI (2022), clearly articulates a set of key principles that can guide the development of future responsible AI-enabled systems. He notes that “Human-Centered AI shows how to make successful technologies that amplify, augment, empower, and enhance human performance.” He further emphasizes the key principles of safety, trustworthiness, and reliability as particularly important for the development of any AI-enabled system that aims to be human-centred. These key principles can also inform the development of theoretical and technological frameworks that support information integrity and academic integrity.
It is necessary to develop conceptual and operational frameworks that allow us to apply these key principles to the ways in which we design, develop, and use AI-enhanced tools and technologies. An important question here is: how do we re-define and re-imagine information integrity and academic integrity in our increasingly AI-enhanced scholarly and academic environment to ensure innovation and, more importantly, ethical and social accountability? One important area for information science to lead in this emerging AI landscape is active engagement in AI literacy research and development. Drawing upon decades of research and development in the design and evaluation of a broad and diverse range of information access and retrieval systems, as well as in user-centred information search and information behaviour methodologies and frameworks, information science is well-positioned to contribute to information integrity and academic integrity in new ways that will have a significant and positive impact on higher education, research and scholarship, and society at large. The emerging confluence and integration of machine learning with human learning and cognition calls for critical pedagogies and literacies that can capture the nuances of the interplay between human and AI agents. A critical AI literacy framework that takes into consideration the key principles of information integrity and academic integrity may prove useful in supporting educators, scholars, students, and the general public in their informed and responsible approach to generative AI. Such an AI literacy framework should take a holistic approach to data, information, and algorithms, as well as to the associated social, cultural, legal, and ethical issues. As Bawden (2001) argued, the current digital information environment requires a broad and complex type of literacy that encompasses all types of skill-based competencies without being confined or attached to one particular technology or set of technologies. This approach will ensure that the information and technology literacy skills, knowledge, and competencies developed to date can organically and meaningfully contribute to the development of an AI literacy framework.
References
Bawden, D. (2001). Information and digital literacies: A review of concepts. Journal of Documentation, 57(2), 218–259.
Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. Bloomsbury Publishing USA.
Harley, K., & Cooper, R. (2021). Information integrity: Are we there yet? ACM Computing Surveys, 54(2), 1–35.
International Center for Academic Integrity (ICAI). (2021). The fundamental values of academic integrity (3rd ed.). https://academicintegrity.org/images/pdfs/20019_ICAI-FundamentalValues_R12.pdf
Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
Cite this article in APA as: Shiri, A. (2023, October 4). Information integrity, academic integrity, and generative AI. Information Matters, 3(10). https://informationmatters.org/2023/10/information-integrity-academic-integrity-and-generative-ai/
Author
Ali Shiri is a Professor in the School of Library and Information Studies at the University of Alberta, Edmonton, Alberta, Canada, and is currently serving as the Vice Dean of the Faculty of Graduate Studies and Research. He received his PhD in Information Science from the University of Strathclyde in Glasgow, Scotland, in 2004 and has been teaching, researching, and writing about digital libraries and digital information interaction for the past two decades. In his current research, funded by the Social Sciences and Humanities Research Council of Canada (SSHRC), he is developing cultural heritage digital libraries and digital storytelling systems for the Inuit communities in Canada’s Western Arctic. More recently, Ali has been researching and writing about AI and ethics.