Trust, But Verify: Can We Make LLMs Trustworthy?
Large language models (LLMs) are becoming integrated into many of our daily information access activities. But they often hallucinate and provide information that can be harmful. We can’t and shouldn’t trust them blindly, but what choice do we have if they become our primary method of accessing information? We want to leverage the enormous potential LLMs have, but we also want them to be trustworthy. How do we do this? I offer three specific suggestions.