LLMs · Education

Lessons in AI Literacy and Explainability from Lucy and Ricky

In the classic 1950s TV sitcom I Love Lucy, whenever Lucy did something outrageous, her husband Ricky would exclaim "Lucy, you've got some explainin' to do!" Typically, Lucy would come up with some implausible response, and hilarity ensued. Well, it's not the 1950s anymore, but 70+ years later Large Language Models (LLMs) and AI chatbots (e.g., ChatGPT, Gemini) are doing outrageous things (hallucinations, fabrications, misinformation, and worse), and the explanations, if there are any, are just as implausible. And it isn't funny.

Read More
Featured · Opinion

Can AI Have a Conscience? A Look at Ethics in Machine Learning

Can AI have a conscience? Of course, today's AI isn't a sentient being with feelings or guilt. It won't lose sleep over a tough decision. But as artificial intelligence plays a bigger role in our lives, we do expect it to act responsibly. In essence, we want AI to follow ethical principles, a sort of programmed "conscience," so that it helps society without harming it. This is the crux of AI ethics, an increasingly important topic now that machine learning systems are making decisions that matter.

Read More
Featured · Translation

Rethinking Reuse in the Data Lifecycle in the Age of Large Language Models

In today's digital world, some data slips past our awareness, but very little data ever truly disappears. Because we, as information scientists, are concerned with the reproducibility and responsibility of research, data lifecycle models have been developed to manage this complexity. To foster open, transparent, and collaborative science, data is often archived in a repository at the end of a project, in accordance with such lifecycle models. This is typically followed by the last step of these models: data reuse. Traditionally, the model is cyclical, with reused data leading to new questions and fueling subsequent rounds of research.

Read More
Featured · InfoFire

LLMs, AI, and the Future of Research Evaluation: A Conversation with Mike Thelwall on Informetrics and Research Impact

In this episode of InfoFire, I sit down with Professor Mike Thelwall, an accomplished scholar of informetrics, to explore the intersections of Large Language Models (LLMs) and research evaluation. We delve into how LLMs are reshaping the landscape of research assessment, examining the promise they hold and the challenges they present in ensuring fair, meaningful, and context-aware evaluations.

Read More
InfoFire · Multimedia

The Power and the Pitfalls of Large Language Models: A Fireside Chat with Ricardo Baeza-Yates

Have you ever wondered how Google helps you complete your search query by suggesting its next terms? Large Language Models (LLMs) power this feature. But LLMs go well beyond autocomplete. Today, LLMs are used to build AI systems and applications ranging from recognizing speech to writing poetry. They have become very powerful, but there are also pitfalls.
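The next-term suggestion idea rests on predicting the most likely next token given what has been typed so far. As a rough intuition only, here is a toy bigram counter in Python; real LLMs learn these probabilities with neural networks trained on huge corpora, and the corpus and function names below are invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny sample corpus standing in for real query logs (hypothetical data).
corpus = [
    "large language models power search suggestions",
    "large language models write poetry",
    "language models recognize speech",
]

# Count which word follows which: bigrams["language"]["models"] == 3, etc.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev_word, next_word in zip(words, words[1:]):
        bigrams[prev_word][next_word] += 1

def suggest_next(word, k=2):
    """Return up to k words most frequently seen after `word`."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(suggest_next("language"))
```

An LLM replaces the frequency table with a learned model over entire contexts rather than single preceding words, which is what lets it complete whole queries and, beyond that, generate full passages of text.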

Read More