Looking Backwards to See Ahead: The Case of Expert Systems Development in Libraries
Michael Ridley
They say history doesn’t repeat itself, but it rhymes. Perhaps more ominously, Winston Churchill warned, “those that fail to learn from history are doomed to repeat it.” The information science field is characterized as one always moving forward, striving for innovation. Looking to the past is not a particular strength. At the current moment, as generative AI dominates our thinking, both for its extraordinary performance and its serious flaws, a new direction is needed.
The way forward may involve looking backward.
—failure is success in progress—
Addressing the deficiencies of generative AI would benefit from reviewing, and incorporating, some of the lessons from expert systems development during the late 20th century. During this period, libraries attempted to design expert systems for one of their core, and most complex, services: the reference desk and the questions users ask.
In many ways, the library reference desk service is the OG of contemporary chatbots. Effective reference service requires dialog (at the level of expertise and need of the requestor), clarification (people never ask for what they actually want; go figure), extensive knowledge base (generalists and specialists), problem solving (navigating the complex information world), and referrals (know where to go when you don’t know). Essentially, all things AI chatbots need to have.
How can 50-year-old work on expert systems inform current AI development?
Expert systems were rule-based systems with knowledge bases drawn from experts in the field where the system was to be used. As it turns out, expert systems were brittle, hard to update, and applicable to only narrow domains. A failure. So why look to them for inspiration?
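To make the rule-based approach concrete, here is a minimal sketch of how such systems worked: if-then rules over a knowledge base of facts, applied by forward chaining until no new conclusions emerge. The rules and facts below are invented for illustration and are not drawn from any actual library expert system.

```python
# Illustrative forward-chaining rule engine in the style of a
# reference-desk expert system. Each rule pairs a set of conditions
# with a conclusion; inference repeats until the fact set stops growing.

rules = [
    ({"asks_for_author", "asks_for_title"}, "known_item_request"),
    ({"known_item_request"}, "search_catalogue"),
    ({"asks_for_topic"}, "subject_request"),
    ({"subject_request", "needs_scholarly"}, "search_databases"),
]

def forward_chain(facts):
    """Apply every rule whose conditions hold, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"asks_for_author", "asks_for_title"}))
# includes "known_item_request" and "search_catalogue"
```

The brittleness the text describes is visible even here: a question that doesn’t match any hand-written condition produces no conclusion at all, and every new kind of question requires a human expert to add and debug more rules.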
Because, as Albert Einstein reportedly said, “failure is success in progress.”
The lessons from expert systems developments in library reference service can be summed up in three words: knowledge, explanation, and flexibility.
Knowledge: not merely data but “reasoned information.” Symbolic knowledge, logic statements, and codified information that augments and supports the large data stores and learned information. Large Language Models (LLMs) are data dumps with inferred understanding. The knowledge representation in expert systems is codified human experience.
Explanation: explanations of the output of intelligence systems are essential to inform both system designers and system users. Tania Lombrozo underscores the importance of explanations noting that they “are more than a human preoccupation–they are central to our sense of understanding, and the currency in which we exchange beliefs.” Transparency and explainability remain outstanding problems for current AI (see The Explainability Imperative).
Flexibility: resist narrow or dogmatic approaches. Rigid beliefs in expert systems or in neural networks obscure other avenues. Seek input, advice, and partnership from other disciplines with different perspectives, methodologies, and assumptions. One size fits none; diversity is the strength of technical solutions as well as human creativity.
Expert systems relied on symbolic knowledge and were found wanting. Generative AI relies on neural networks and is now found wanting. However, integrating these perspectives suggests a way forward: neurosymbolic AI (see below for a link to the Substack post from Gary Marcus).
Fortunately, as William Faulkner said, “The past is never dead. It’s not even past.”
For more on this see:
Michael Ridley (2024). Prototyping expert systems in reference services (1980–2000): Experimentation, success, disillusionment, and legacy. Library & Information History, 40(1), 46–67. https://doi.org/10.3366/lih.2024.0165
Gary Marcus (2024, July 28). AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic. Marcus on AI. https://garymarcus.substack.com/p/alphaproof-alphageometry-chatgpt
Cite this article in APA as: Ridley, M. (2024, November 6). Looking backwards to see ahead: The case of expert systems development in libraries. Information Matters, Vol. 4, Issue 11. https://informationmatters.org/2024/11/a-new-critical-lens-to-examine-factors-influencing-differences-in-global-scholarly-communication-experiences/
Author
For many years, Michael Ridley was the Chief Librarian and Chief Information Officer (CIO) at the University of Guelph, where he is now Librarian Emeritus. Ridley recently completed a PhD at the Faculty of Information and Media Studies, Western University ("Folk Theories, Recommender Systems, and Human Centered Explainable AI (HCXAI)"). Prior to his appointment at Guelph, he held positions at the University of Waterloo (Associate University Librarian) and McMaster University (Head of Systems and Technical Services, Health Sciences Library). His professional career as a librarian ended where it began: Ridley's first appointment as an academic librarian was at Guelph, where he served as a Reference Librarian, Catalogue Librarian, and Library Systems Analyst.