Opinion

Co-Pilot, Not Autopilot: Navigating the Future of Qualitative Research

Pathum B Rathnayake

The Elephant in the Room

In the world of research, there are two main camps. There are the “numbers people” (quantitative), who measure things, and there are people like me, the “story people” (qualitative). We interview people, observe cultures, and read diaries to understand why humans act the way they do. For decades, our tools were simple: highlighters, index cards, and our own brains. Then came software to help us organize. Now we have Generative AI (Gen AI): tools like ChatGPT, Gemini, and Claude.

Recently, a group of three scholars published an open letter signed by over 400 qualitative researchers. Their message was clear: “Keep AI away from our work.” They argue that using AI to find themes in data kills the “reflexive” process, the deeply human act of interpreting nuance, emotion, and meaning. I have spent my career championing the human touch in EdTech work and research, and I understand their fear. However, I believe this total rejection is a mistake. We should not be banning Gen AI; we should be teaching researchers how to master it. History shows that banning technology rarely works. When tape recorders were invented, some researchers feared we would stop taking good notes. When analysis software arrived, they feared we would stop reading the texts. In both cases, we adapted. Gen AI is the next evolution. It is powerful, and yes, it is risky if used lazily. But if we use it with authority, transparency, and wisdom, it does not erase the human researcher.

—Gen AI is the sous-chef, not the head chef—

The Sous-Chef, Not the Head Chef

The biggest misconception is that if we use AI, we are asking it to do the thinking for us. This is wrong. Imagine a master chef. They are defined by their palate, their creativity, and their ability to develop a menu. They are not defined by how fast they can chop an onion. Gen AI is the sous-chef. It can chop the onions (process large amounts of text), organize the pantry (categorize data), and suggest flavor pairings (identify potential patterns). But the head chef (the qualitative researcher) must still taste the sauce, adjust the seasoning, and decide if the dish is ready to serve.

When I analyze interview transcripts today, I don’t ask the AI: “Tell me the answer.” Instead, I ask: “Here are ten transcripts. Summarize the participants’ views on workplace stress. Then, tell me what I might be missing.” That does not replace my interpretation; it challenges it. It acts as a conversational partner that never gets tired.
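For readers who want to see the mechanics, here is a minimal sketch of that prompt as an API call. It uses the OpenAI Python SDK purely as one example backend; the model name, the `transcripts` placeholder, and the prompt wording are illustrative assumptions, not a prescription, and any chat-capable model, including a locally hosted one, would serve the same role.

```python
# Illustrative sketch only: one way to pose the "summarize, then tell me
# what I'm missing" prompt described above. The model name and the
# `transcripts` placeholder are hypothetical; swap in your own.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

transcripts = ["<transcript 1>", "<transcript 2>"]  # de-identified text

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[
        {
            "role": "system",
            "content": "You assist a qualitative researcher. Summarize views "
                       "and flag possible gaps. Do not draw final conclusions.",
        },
        {
            "role": "user",
            "content": "Summarize the participants' views on workplace stress. "
                       "Then tell me what I might be missing.\n\n"
                       + "\n\n---\n\n".join(transcripts),
        },
    ],
)
print(response.choices[0].message.content)
```

Note the deliberate division of labour in the system message: the model summarizes and flags gaps, while the interpretive verdict stays with the researcher. One caution: sending real participant data to a hosted service raises consent and privacy questions, so check your ethics approval first; a locally run model may be the safer default.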

Speeding Up to Slow Down

One of the greatest enemies of good qualitative research is fatigue. Analyzing complex data is exhausting. By the time a researcher has read their fiftieth interview transcript, their brain is tired. They might miss a subtle connection simply because they are burnt out. Gen AI handles the “grunt work.” It can standardize formatting, correct typos in transcripts, and pull out specific quotes in seconds, tasks that used to take me weeks. By handing these mechanical tasks to the AI, I “buy back” my time. I can spend those saved hours doing what humans do best: thinking deeply, feeling empathy for the participants, and constructing a meaningful narrative. In this way, the machine doesn’t make the work faster and shallower; it makes the process efficient enough to allow for deeper reflection.

The “Reflexivity” Argument

The signatories of the 2025 open letter worry about “reflexivity.” In simple English, reflexivity means being aware of your own biases. If I am a middle-aged man, I might interpret a teenage boy’s interview differently than a teenage boy would. Good researchers constantly check themselves: “Am I seeing this pattern because it’s there, or because I expect to see it?” 

Critics argue AI kills this self-reflection. I argue it saves it. We all live in echo chambers. When I work alone, I am often trapped in my own perspective. Gen AI can act as a “devil’s advocate.” I can feed it my preliminary findings and say: “Critique this logic. What alternative explanations exist?” The AI might say, “You have focused heavily on financial stress, but three participants mentioned social isolation more often.” Suddenly, the AI has forced me to be more reflexive. It shines a light on my blind spots. As David L. Morgan suggests, the AI becomes a dialogue partner that helps us see our data from a new angle.
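The devil’s-advocate move is just a second prompt in the same pattern. Here is a minimal sketch, reusing the hypothetical `client` from the earlier example and a made-up `my_findings` string:

```python
# Illustrative sketch: asking the model to critique preliminary findings.
# `client` is the chat client from the earlier sketch; `my_findings` is
# a hypothetical string holding the researcher's draft themes.
my_findings = (
    "Theme 1: Financial stress dominates participants' accounts.\n"
    "Theme 2: Managers are seen as unsupportive."
)

critique = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": "Act as a devil's advocate. Critique the logic of these "
                   "preliminary findings and list alternative explanations "
                   "the data might support:\n\n" + my_findings,
    }],
)
print(critique.choices[0].message.content)
```

Whatever the model returns is a provocation to re-read the transcripts, not a finding in itself.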

The Three Golden Rules

If we are to ignore the ban and use these tools, we must do so responsibly. I propose three rules for the modern researcher:

  1. Researcher Authority (The “Human in the Loop”)

    The human must always have the final say. AI is known to “hallucinate.” It can be biased. A researcher should never copy-paste an AI’s analysis into a final report. You must verify every claim against the original data. The AI suggests; the human decides.

  2. Transparency

    We must be honest. In the past, if I used a research assistant, I thanked them in the acknowledgments. Today, if I use AI to help brainstorm codes, I must state that clearly in the methodology or acknowledgments section. We must tell our readers exactly how the tool or tools were used.

  3. AI Literacy

    We cannot use tools that we do not understand. Researchers need to learn how Large Language Models work. We need to know that they are predicting the next word in a sentence based on probability, not “understanding” the human soul; the toy sketch below makes that concrete. Knowing the tool’s limits prevents us from over-trusting it.
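To see what “predicting the next word based on probability” literally means, here is a toy sketch using the small, open GPT-2 model through the Hugging Face transformers library; the choice of model and prompt is purely illustrative.

```python
# Toy illustration of "next-word prediction": the model outputs a
# probability distribution over its vocabulary, nothing more.
# Requires the `transformers` and `torch` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The participant said the main cause of her stress was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token

probs = torch.softmax(logits[0, -1], dim=-1)  # scores -> probabilities
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p:.3f}")
```

The output is nothing more than the five most probable next tokens and their probabilities. There is no comprehension behind the ranking, which is exactly why interpretive judgment cannot be delegated to it.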

Closing Reflections

The fear expressed by the open letter’s signatories comes from a good place: a desire to protect the integrity of human research. But history shows that banning technology rarely works. Gen AI is the next evolution. If we use it with authority, transparency, and wisdom, it does not erase the human researcher. It liberates us. Let us not lock the door on this technology. Let us invite it into the room, sit it down, and supervise it closely.

TL;DR (too long; didn’t read)

In October 2025, an open letter signed by hundreds of scholars called for a rejection of Generative AI in qualitative research, fearing the loss of human depth. This article argues the opposite. By viewing AI not as a replacement but as a collaborative tool, researchers can enhance their analysis, reduce administrative fatigue, and actually deepen their critical thinking. The future of research is not human versus machine, but human with machine, provided the human remains firmly in charge.

Cite this article in APA as: Rathnayake, P. B. (2025, December 4). Co-pilot, not autopilot: Navigating the future of qualitative research. Information Matters. https://informationmatters.org/2025/12/co-pilot-not-autopilot-navigating-the-future-of-qualitative-research/

Author

  • Pathum B Rathnayake is a passionate Educational Technologist and E-Learning Consultant dedicated to transforming learning experiences. He graduated in IT and Information Management and obtained a Doctor of Education degree. His primary research interests encompass educational technology, e-learning, gamification, and social media.
