Are Chatbots Safe? A Look at User Privacy Concerns
Ece Gumusel, Ph.D. Candidate, Indiana University Bloomington
The European Union has taken the lead in regulating data privacy with laws like the General Data Protection Regulation (GDPR), which establishes strict rules on data handling and user consent. Additionally, the EU AI Act aims to further regulate AI technologies. In contrast, the United States currently lacks comprehensive laws specifically addressing AI chatbots, leaving user privacy largely at the discretion of individual companies.
To better understand these privacy concerns, a literature review (https://doi.org/10.1002/asi.24898) was conducted. Out of a total of 894 papers from the past five years, 38 research papers addressed user privacy issues in conversational text-based AI chatbots. These selected studies were manually analyzed through the lens of social informatics, focusing on their theoretical and methodological approaches and their findings on user privacy concerns. But why was social informatics chosen for this analysis? Social informatics is crucial for understanding how users interact with AI chatbots because it highlights the social factors at play. This approach helps guide ethical design practices, inform policies and regulations, promote collaboration across disciplines, and raise public awareness of privacy issues.
—Social informatics is crucial for understanding how users interact with AI chatbots because it highlights the social factors at play—
The review identified a lack of a unified theoretical framework for understanding user privacy concerns in chatbot studies. Research in this area draws from three main disciplines: social science, information technology management, and cognitive science. Most studies rely on social science theories, such as innovation diffusion theory and the privacy/technology paradox, while technology-focused research often uses the Technology Acceptance Model (TAM).
Methodologically, there has been a surge in studies focusing on user privacy, particularly after 2020, with many employing quantitative or mixed methods and involving over 200 participants. However, qualitative studies remain rare, which suggests a gap in understanding lived user experiences. While existing research has provided valuable insights into user privacy issues, it also highlights limitations, calling for further exploration of how users navigate privacy risks when interacting with chatbots.
Research shows that chatbot users have various privacy concerns, including decision-making and manipulation, self-disclosure, trust, data collection and storage, secondary use, legal compliance, and data breach and security. Many studies examine how self-disclosure impacts user privacy, especially when chatbots are designed in ways that could manipulate users. However, there remains a significant gap in clearly defining the specific privacy harms and risks users face. More detailed information can be found in the privacy findings section of the paper.
To address user privacy concerns with AI chatbots, the literature suggests several key solutions. First, enhancing user control allows individuals to manage their data—deciding what to share and how long it should be stored—potentially through tools that enable data deletion or access restrictions. Second, improving transparency with clear explanations of data usage empowers users to make informed choices. Additionally, integrating privacy features into chatbot designs fosters ethical practices and builds user trust. Lastly, cross-disciplinary research that involves psychology, social science, and technology is essential for developing effective solutions to protect user privacy in the evolving AI landscape.
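To make the first two solutions more concrete, here is a minimal, hypothetical sketch in Python of how a chatbot backend might implement user control and transparency. All names here (ConversationStore, delete_user_data, and so on) are invented for illustration; this is not drawn from any of the reviewed studies or from any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


def _now() -> datetime:
    return datetime.now(timezone.utc)


@dataclass
class ChatRecord:
    user_id: str
    message: str
    stored_at: datetime = field(default_factory=_now)


class ConversationStore:
    """Hypothetical in-memory store illustrating privacy-by-design ideas:
    retention limits, user-initiated deletion, and data access/export."""

    def __init__(self, retention_days: int = 30):
        # Retention limit: records older than this window are purged.
        self.retention = timedelta(days=retention_days)
        self._records: list[ChatRecord] = []

    def save(self, user_id: str, message: str) -> None:
        self._records.append(ChatRecord(user_id, message))

    def purge_expired(self) -> None:
        # Enforce the retention policy on every stored record.
        cutoff = _now() - self.retention
        self._records = [r for r in self._records if r.stored_at >= cutoff]

    def delete_user_data(self, user_id: str) -> int:
        # User control ("right to erasure"): remove everything belonging
        # to this user and report how many records were deleted.
        before = len(self._records)
        self._records = [r for r in self._records if r.user_id != user_id]
        return before - len(self._records)

    def export_user_data(self, user_id: str) -> list[str]:
        # Transparency: let users inspect exactly what is stored about them.
        return [r.message for r in self._records if r.user_id == user_id]


if __name__ == "__main__":
    store = ConversationStore(retention_days=30)
    store.save("alice", "Hello, chatbot!")
    print(store.export_user_data("alice"))  # ['Hello, chatbot!']
    print(store.delete_user_data("alice"))  # 1
```

The point of the sketch is not the data structure but the interface: deletion, export, and retention are first-class operations rather than afterthoughts, which is what the literature means by integrating privacy features into chatbot design.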
In a world where technology plays a crucial role in our lives, addressing the privacy concerns associated with conversational AI chatbots is more important than ever. Ensuring that users can engage with these technologies without compromising their personal information is a challenge that requires collaboration between researchers, developers, and policymakers. As the conversation around privacy in AI continues, it is important to prioritize understanding user privacy harms and risks, user trust, and protection in the design and deployment of these innovative tools.
This article is a translation of: Gumusel, E. (2024). A literature review of user privacy concerns in conversational chatbots: A social informatics approach: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology, 1–34. https://doi.org/10.1002/asi.24898
Cite this article in APA as: Gumusel, E. (2024, October 16). Are chatbots safe? A look at user privacy concerns. Information Matters, Vol. 4, Issue 10. https://informationmatters.org/2024/10/are-chatbots-safe-a-look-at-user-privacy-concerns/