Opposite Ends of the Tay: Collaboration between the NHS and Public Libraries in Tayside (Scotland)

Opposite Ends of the Tay explores a growing collaboration between NHS Tayside and public libraries to strengthen information literacy and support preventative, person‑centred healthcare. Set against stark health inequalities in Scotland, it argues that libraries are vital community infrastructure for enabling people to find, judge and use health information safely.

Why Slow Journal Decisions Hurt More Than We Think

Anyone who has submitted a research paper knows the feeling. After months or years of work, the manuscript disappears into the peer review system. Days become weeks, weeks become months. You keep checking the submission portal, waiting for an answer. The final outcome certainly matters. Acceptance brings relief, rejection brings disappointment. But our study finds that the wait itself also matters, and more than we usually admit.

Can AI Describe Art as We Do? A Case Study on a Pottery Collection

We evaluate two capabilities of current large language model (LLM)-based AI systems for improving the discoverability of library and museum collections, which are often searched using expert-defined keyword vocabularies organised into complex hierarchical categories: 1) vector search, which, unlike traditional keyword search, captures semantic relationships between words across a broader natural-language domain; and 2) multimodal large language models (MLLMs), which combine computer-vision image processing with LLMs to improve understanding of an image both textually and visually. We explore how vision-language models (VLMs) and MLLMs can bridge the vocabulary gap in search between expert-generated descriptions and the public.
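
The contrast between keyword and vector search can be sketched in a few lines of Python. This is a toy illustration only: the three-dimensional "embeddings" and the catalogue titles are invented stand-ins for what a real embedding model and a real pottery collection would provide.

```python
import math

# Toy vector search: catalogue items and the query are mapped to vectors,
# and ranking uses cosine similarity rather than exact keyword overlap.
# The 3-dimensional vectors below are hand-made stand-ins for real embeddings.
catalogue = {
    "celadon-glazed stoneware jar": [0.9, 0.1, 0.2],
    "blue-and-white porcelain vase": [0.8, 0.3, 0.1],
    "bronze ritual vessel": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def vector_search(query_vec, items):
    # Rank catalogue entries by similarity to the query vector, best first.
    return sorted(items, key=lambda k: cosine(query_vec, items[k]), reverse=True)

# A lay query such as "green pottery pot" shares no keywords with the expert
# term "celadon-glazed stoneware jar", yet its (hypothetical) embedding sits
# close to that item, so vector search still surfaces it first.
query = [0.85, 0.15, 0.25]
print(vector_search(query, catalogue))  # the celadon jar ranks first
```

A keyword search over the same catalogue would return nothing for "green pottery pot"; the ranking by semantic proximity is what closes the vocabulary gap the study describes.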

Can AI Really Understand Scientific Novelty? Insights from a New Benchmark

In academic research, novelty is one of the most important criteria for publication. A paper is expected to contribute something new, whether a method, a dataset, or a theoretical insight. But identifying novelty is not straightforward. Even experienced reviewers may disagree, and the rapid growth of scientific publications has made the task increasingly difficult. As the volume of submissions continues to rise, the peer review system faces growing pressure. This has sparked interest in whether artificial intelligence, particularly large language models (LLMs), can assist in evaluating research novelty. But before we can rely on AI for this task, a fundamental question must be answered: do LLMs actually understand novelty?

Beyond the Boolean: Is Natural Language search opening or closing the discovery gap for university e-library users?

For decades, the “search box” at the heart of the university library has been a gatekeeper. To unlock the vast treasures of academic databases, users had to speak a specific, rigid language: Boolean. For expert researchers, operators like AND, OR, and NOT are second nature. But for many students without appropriate training in information-searching skills, the traditional search interface has often acted more as a barrier than a bridge.
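
The rigidity of that Boolean language is easy to demonstrate. The sketch below (document titles and the query are invented for illustration; this is not any database's actual implementation) shows the exact-keyword matching that underlies a query like `literacy AND (health OR medical) NOT children's`:

```python
# Minimal sketch of the keyword matching behind a Boolean search box.
docs = [
    "health literacy in public libraries",
    "medical information seeking by adults",
    "children's health literacy programmes",
    "digital literacy and social media",
]

def matches(doc, all_of=(), any_of=(), none_of=()):
    # A document matches only if every AND term is present, at least one
    # OR term is present (when any are given), and no NOT term appears.
    words = set(doc.split())
    return (all(w in words for w in all_of)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in none_of))

# literacy AND (health OR medical) NOT children's
hits = [d for d in docs if matches(d, all_of=["literacy"],
                                   any_of=["health", "medical"],
                                   none_of=["children's"])]
print(hits)  # only the document containing the exact keywords survives
```

A student who instead types a natural question such as "studies about adults' health reading skills" gets nothing from this kind of matching, which is precisely the gap that natural-language search interfaces promise to close.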

From Content Generation to Content Validation: Why Human Judgment Still Matters in the AI Era

In the past year, the focus of AI in education has shifted from generating content to evaluating its quality. While large language models can now produce vast amounts of material in seconds, ensuring that this content is accurate, reliable, and pedagogically sound remains a challenge. Emerging research shows that using AI as an evaluator is still unreliable, making human judgment more essential than ever. In this new paradigm, the real bottleneck is no longer creation but validation.

AI-Native for Data Intelligence: Constructing a Conceptual System and Evolution Framework from an International Standardization Perspective

As artificial intelligence becomes deeply embedded in communication networks, software architectures, and industrial systems, AI-Native approaches have fundamentally changed how systems are designed, operated, and governed. Despite its growing influence, the concept of AI-Native remains ambiguously defined across domains, creating cognitive fragmentation and regulatory uncertainty. To harmonise different understandings and avoid confusion, this study develops a conceptual system and maturity evolution framework for AI-Native from an international standardisation perspective, offering a structured foundation for both theoretical clarification and practical governance.

Expert Colleague or Dancing Bear? The Mixed Responses to AI in Digital Humanities Research

A recent study explored how scholars in digital humanities are navigating the new and complex landscape of AI tools. Digital humanities is an interdisciplinary research field where scholars employ digital tools and computational methods to investigate cultural and humanities questions. Drawing on an international survey of 76 respondents and 15 in-depth interviews, the study found that scholars are not simply embracing or rejecting these tools. Instead, they are adopting AI systems cautiously, using them to speed up routine tasks, explore ideas, and build new skills, while navigating problems of accuracy, authorship, and what these systems might mean for the future of scholarship. The big question is no longer just whether AI is impressive, but whether it is becoming a genuine research partner, a useful tool, or, for some, still more of a “dancing bear” than a trusted collaborator. By tracing these mixed reactions and everyday practices, the study offers a grounded look at how AI is beginning to reshape academic life.

Explanation Singularity of Explainable Artificial Intelligence (XAI): An AIGC Information Adoption Perspective

Generative AI (GenAI) has rapidly become a common source of advice in high-stakes domains such as healthcare and in everyday decision-making. Yet its black-box nature often leaves users uncertain about how outputs are produced and whether they should be trusted. Explainable Artificial Intelligence (XAI) is widely viewed as a potential remedy. However, research and recent debates suggest an important tension: adding explanations does not always lead to better outcomes. This study addresses a central question for research on human–AI interaction and information science: when do explanations facilitate information adoption, and when do they hinder it?
