When YouTube Meets Grandma: How Older Adults Perceive and Influence AI Recommendations
Yuhao Zhang, Jiqun Liu
Imagine your grandma watching YouTube for gardening tips. Within days, her homepage blossoms with videos about fertilizers, pruning techniques, and even miracle plants. Who decided that?
Behind every “You May Also Like” video sits an algorithm — an invisible curator quietly shaping what we see and what we don’t. Most of us accept this invisible hand as a given. We know it’s there, even if we don’t fully understand it. But for older adults, who first learned about the world through newspapers, radio, and television, the logic of these recommendations can be confusing.
Algorithms have become the new gatekeepers of information, but people experience their power in very different ways. That puzzle inspired our studies of how older adults understand and interact with personalized recommendation algorithms.
—Algorithm literacy is not a fixed skill, but a lived practice—
What Should We Know First?
Before we can talk about how people interact with algorithms, we need to introduce two key ideas.
- Algorithm literacy — knowing that algorithms exist, understanding (at least partly) how they make decisions, and learning how to influence them.
- Mental models — the internal “stories” we construct to explain how a technological system works. If you think “YouTube shows me what’s popular” or “Facebook listens to my conversations,” you’ve built a mental model, whether right or wrong. These models influence how we interact with technology and how much we trust it.
Once we understand these ideas, a question emerges: how do older adults make sense of algorithms? Using a qualitative, visual method, we invited participants aged 60 to 75 to draw diagrams of how they thought YouTube selected videos for them.
How Older Adults Learn What Algorithms Do
Our first study explored where older adults’ mental models of video recommender systems come from. Three distinct patterns emerged.
- From old media to new. Those who grew up with newspapers or television often explain YouTube in those same terms. Some imagine it works like a free newspaper — full of ads that pay for the stories. Others see it as a kind of TV, where commercials are made to appeal to whoever happens to be watching. These ways of thinking aren’t entirely wrong, but they belong to a time when audiences were receivers, not contributors.
- Borrowing ideas from other apps. People bring lessons, expectations, and interaction strategies from other apps into how they think about YouTube. If Facebook seems to “listen” when you talk about refrigerators, it’s easy to assume YouTube does too. If you can adjust your Pinterest board or Amazon suggestions, you might expect to have the same control over your YouTube homepage. These shortcuts make new systems feel more familiar — even when they only tell part of the story.
- Learning through use. Many describe “teaching” YouTube through watching and searching, observing how the system seems to learn their preferences over time. They notice that the more gardening tutorials they click, the more similar ones appear the next day. They realize that switching from a phone to a tablet can change what shows up on the homepage. Bit by bit, they piece together their own understanding of how it “learns”; the sketch below plays out that feedback loop.
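To make that intuition concrete, here is a minimal sketch, in Python, of the feedback loop participants described: clicks accumulate into a preference profile, and the profile reorders the feed. It illustrates the mental model, not YouTube’s actual system; the topics and the simple click-counting scheme are invented for this example.

```python
from collections import Counter

# Toy preference profile: one weight per topic (invented for illustration).
profile = Counter()

def record_click(topic: str) -> None:
    """Each click nudges the profile toward that topic."""
    profile[topic] += 1

def rank_feed(candidates: list[str]) -> list[str]:
    """Order candidate topics by accumulated interest; unseen topics count as zero."""
    return sorted(candidates, key=lambda t: profile[t], reverse=True)

# A week of gardening clicks...
for _ in range(5):
    record_click("gardening")
record_click("news")

# ...and the homepage tilts toward gardening.
print(rank_feed(["news", "cooking", "gardening"]))
# ['gardening', 'news', 'cooking']
```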
From Awareness to Action
That is not the whole story. Our second study examined algorithm literacy — how older adults recognize, interpret, and sometimes influence personalization systems.
Most were aware that algorithms were deciding what appeared on their screens. They could name the clues: what they watched, searched for, or subscribed to, sometimes even their age or location. Some believed the system worked by comparing videos — showing “more like this” based on past clicks, an item-based way of thinking (Fig. 1a). Others saw it as learning about the person behind the screen — guessing tastes from age, gender, or where they lived, a user-based logic (Fig. 1b).
Beyond that, they developed strategies to manage their feeds. Some tried to “train the algorithm” by consciously clicking on certain videos and ignoring others. A few resisted personalization altogether — using YouTube without logging in, or clearing their history to “start fresh.”
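For readers curious how those two logics differ in practice, here is a hedged sketch in Python. The similarity scores and viewing counts are invented for illustration; real recommenders blend many more signals than either mental model captures.

```python
# Contrast of the item-based and user-based logics described above
# (toy data and scoring, not YouTube's method).

watched = {"pruning basics", "rose care"}        # this viewer's history
candidates = ["fertilizer tips", "stock picks"]

# Hypothetical item-to-item similarity table.
item_sim = {
    ("pruning basics", "fertilizer tips"): 0.9,
    ("rose care", "fertilizer tips"): 0.8,
    ("pruning basics", "stock picks"): 0.1,
    ("rose care", "stock picks"): 0.0,
}

def item_based_score(candidate: str) -> float:
    """Item-based logic: 'more like what you clicked'."""
    return sum(item_sim.get((past, candidate), 0.0) for past in watched)

# Hypothetical viewing counts among users in the same demographic bucket.
peer_views = {"fertilizer tips": 40, "stock picks": 5}

def user_based_score(candidate: str) -> int:
    """User-based logic: 'people like you watched this'."""
    return peer_views.get(candidate, 0)

for video in candidates:
    print(video, item_based_score(video), user_based_score(video))
# fertilizer tips 1.7 40
# stock picks 0.1 5
```

Either logic would surface the fertilizer video here, but for different reasons, which is exactly the distinction participants drew.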

Together, these habits show that older adults aren’t passive receivers of algorithms. They are active interpreters, drawing on past experience to build new understanding. They learn through trial, conversation, and observation, proving that algorithm literacy is not a fixed skill, but a lived practice.
Why It Matters
Why should we care about how older adults see algorithms? Because their experiences tell us something about how everyone learns to live with technology. When systems assume that users are young, fast, and fluent, they overlook the many ways people actually learn and adapt.
In practice, this calls for clear design and explicit teaching. Designers can help by showing how recommendations are made and giving users simple tools to adjust them. Educators and libraries can support this by explaining these systems in everyday language. Making algorithms easier to see and shape is not just good design — it is a step toward a more open, fairer digital world for everyone.
This article is based on the following papers:
Zhang, Y., & Liu, J. (2025). Falling behind again? Characterizing and assessing older adults’ algorithm literacy in interactions with video recommendations. Journal of the Association for Information Science and Technology, 76(3), 604–620.
Zhang, Y., & Liu, J. (2024, December). Unpacking older adults’ mental models of video recommender systems: A qualitative study. In Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries (pp. 1–5).
Cite this article in APA as: Zhang, Y., & Liu, J. (2025, October 29). When YouTube meets grandma: How older adults perceive and influence AI recommendations. Information Matters. https://informationmatters.org/2025/10/when-youtube-meets-grandma-how-older-adults-perceive-and-influence-ai-recommendations/
Authors
Jiqun Liu is currently an assistant professor of data science at the University of Oklahoma. He holds a PhD in Information Science from Rutgers University-New Brunswick. His Human-Computer Interaction and Recommendation (HCIR) research lab focuses on the intersection of human-computer interaction (HCI), interactive information seeking/retrieval (IS&R), and cognitive psychology. Dr. Liu’s research projects seek to apply knowledge about how people interact with information to bias-aware user modeling, proactive fair recommendation, and intelligent nudging. More information about Dr. Liu can be found at: https://jiqunl.github.io/me/.