AI

Editorial, Featured

How Do You Like Them Agents?

As autonomous agent technologies rapidly permeate our digital landscape, a critical question emerges: what roles should computational agents fulfill to best augment human capabilities? Today's agents—from voice-activated personal assistants to code-generation systems—continue to expand dramatically in capability, prompting urgent questions about their optimal design, function, and integration into human activities. Despite significant technical advances, we lack a coherent framework for conceptualizing the different relationships humans might have with agents, hampering both the evaluation of existing technologies and the principled design of future systems.

Read More
Featured, Translation

Rethinking Reuse in Data Lifecycle in the Age of Large Language Models

In today's digital world, some data slips past our awareness, but very little data ever truly disappears. Because we, as information scientists, are concerned with the reproducibility and responsibility of research, data lifecycle models have been developed to manage this complexity. To foster open, transparent, and collaborative science, such models call for data to be archived in a repository at the end of a project. This is typically followed by the final step of the lifecycle: data reuse. Traditionally, the model is cyclical, with reused data raising new questions and fueling subsequent rounds of research.

Read More
Featured, InfoFire

LLMs, AI, and the Future of Research Evaluation: A Conversation with Mike Thelwall on Informetrics and Research Impact

In this episode of InfoFire, I sit down with Professor Mike Thelwall, a highly accomplished scholar of informetrics, to explore the intersections of Large Language Models (LLMs) and research evaluation. We delve into how LLMs are reshaping the landscape of research assessment, examining the promises they hold and the challenges they present in ensuring fair, meaningful, and context-aware evaluations.

Read More
Editorial, Featured

Here Come Agents

An agent is an autonomous entity or program that takes preferences, instructions, or other inputs from a user to accomplish specific tasks on their behalf. There is enormous hype around agents these days, thanks to advances in various GenAI technologies. Yet as companies large and small, along with individual developers, continue to invest heavily in developing and deploying agents, we often overlook basic considerations, including what problems we are solving and how users, their tasks, and their contexts are incorporated into these developments.

Read More
Editorial, Featured

Can We Really Control AI?

AI is playing an ever larger part in our lives and the world around us. By some projections, we will have AGI, or Artificial General Intelligence, in less than a decade; some even argue we are already there. Regardless of the timeline, it is clear that unchecked AI has the potential to cause great harm. Can we control or contain AI in a way that prevents those harms? It is not easy.

Read More
Translation

Exploring Women’s Health Information Literacy with AI: A South Asian Study

The relationship between AI and people's health information is increasingly significant, and AI chatbots can provide patients with increasingly accurate answers. However, while the technology can help, it is up to people to decide how they want to use it. Even an AI tool like ChatGPT warns, "ChatGPT can make mistakes. Consider checking important information." Using AI tools to make health-related decisions requires a good understanding of the information these tools provide. The project "AI and Health Information Literacy: A study exploring the perceived usefulness, and readiness among women in South Asia" aims to address questions such as "How do women in South Asia (SA) perceive the usefulness of AI in enhancing health information literacy?" and "What factors influence their readiness to adopt AI-driven health information technologies?"

Read More
Featured, Frontiers, Opinion

Looking Backwards to See Ahead: The Case of Expert Systems Development in Libraries

At the current moment, as generative AI dominates our thinking, both for its extraordinary performance and its serious flaws, a new direction is needed. The way forward may involve looking backward. Addressing the deficiencies of generative AI would benefit from reviewing, and incorporating, lessons from expert systems development in the late 20th century.

Read More