LLMs

Featured, Translation

Simulating Social Perceptions with LLMs: From a Policy Case to a Full-Pipeline Benchmark

People can experience the same public policy very differently. Some feel their lives are improving; others feel left behind. This is not simply disagreement; it reflects a core part of policy impact that is hard to capture with objective indicators alone: public perception. Traditional social surveys are designed for this purpose, but they are often slow, expensive, and hard to adapt quickly. They also face challenges such as fixed question formats, limited flexibility, and poor cross-cultural comparability.

Read More
Education, Featured

Lessons in AI Literacy and Explainability from Lucy and Ricky

In the classic 1950s TV sitcom I Love Lucy, when Lucy did something outrageous, her husband Ricky would exclaim, “Lucy, you’ve got some explainin’ to do!” Typically, Lucy would come up with some sort of implausible response. Hilarity ensued. Well, it’s not the 1950s anymore, but 70+ years later, Large Language Models (LLMs) and AI chatbots (e.g., ChatGPT, Gemini) are doing outrageous things (hallucinations, fabrications, misinformation, and worse), and the explanations, if there are any, are just as implausible. And it isn’t funny.

Read More
Featured, Opinion

Can AI Have a Conscience? A Look at Ethics in Machine Learning

Can AI have a conscience? Of course, today’s AI isn’t a sentient being with feelings or guilt. It won’t lose sleep over a tough decision. But as artificial intelligence plays a bigger role in our lives, we do expect it to act responsibly. In essence, we want AI to follow ethical principles, a sort of programmed “conscience,” so that it helps society without harming it. This is the crux of AI ethics, an increasingly important topic now that machine learning systems are making decisions that matter.

Read More
Featured, Translation

Rethinking Reuse in Data Lifecycle in the Age of Large Language Models

In today’s digital world, some data slips past our awareness, but very little data ever truly disappears. Because we information scientists are concerned with the reproducibility and responsibility of research, data lifecycle models have been developed to manage this complexity. To foster open, transparent, and collaborative science, such models prescribe that data be archived in a repository at the end of a project. This is often followed by the final step of the lifecycle: data reuse. Traditionally, the model is cyclical, with reused data raising new questions and fueling subsequent rounds of research.

Read More
Featured, InfoFire

LLMs, AI, and the Future of Research Evaluation: A Conversation with Mike Thelwall on Informetrics and Research Impact

In this episode of InfoFire, I sit down with Professor Mike Thelwall, an accomplished scholar of Informetrics, to explore the intersections of Large Language Models (LLMs) and research evaluation. We delve into how LLMs are reshaping the landscape of research assessment, examining the promises they hold and the challenges they present in ensuring fair, meaningful, and context-aware evaluations.

Read More