Editorial

Editorial · Featured

Trust, But Verify: Can We Make LLMs Trustworthy?

Large language models (LLMs) are becoming integrated into many of our daily information access activities. But they often hallucinate and provide information that can be harmful. We can’t and shouldn’t trust them blindly, but what choice do we have if they become our primary method of accessing information? We want to leverage the enormous potential of LLMs, but we also want them to be trustworthy. How do we do this? I offer three specific suggestions.

Editorial

AI and the Future of Information Access

AI offers tremendous benefits for deepening and widening access to information. But there are also many problems and challenges. The future of AI-driven information access is not yet set; it is constantly being shaped through our discourse, policies, and actions. What will we choose and do? Is it possible to leverage AI’s potential while minimizing its risks and harms? I propose three guiding ethos to make that happen.

Editorial · Featured

The Questions We Are Not Asking Enough About AI

There is plenty of hope and hype about AI these days, but what is often missing from these conversations is a set of questions more fundamental than what AI could, should, and would do. These questions concern what we consider intelligence, free will, and indeed what it is to be human. The answers will not come easily or quickly, but beginning to ask these questions could help pave the way not just for the future of AI, but for the future of humankind.
