AI-Powered Peer Review: How Review Reveal Can Detect Bias, Improve Fairness, and Transform Scholarly Publishing
Anita Sundaram Coleman
When I submitted my book proposal, Infophilia Unbound: A Positive Psychology of Information, to a highly respected LIS publisher, I had to provide contact details for at least three reviewers. The editor approached all three: two agreed to review, one declined citing “exhaustion.” Within the promised time frame, the editor sent me the two reviews. One was a glowing endorsement; the other was a harsh critique that felt more personal than professional and recommended the book not be signed for publication. That encounter prompted me to ask: how many authors face similar bias? How often does reviewer language reinforce power hierarchies? What tools can expose these patterns?
I found that this is part of a broader, systemic problem. Reviewer language, it turns out, can be a subtle weapon that de‑legitimizes research and researchers who fall outside dominant norms. However, if we treat each review as a signal and the manuscript as a receiver, we can scan the transmission for distortions—biases, power imbalances, and information‑style mismatches—using a tri‑lens theoretical framework.
—Reviewer language can be a subtle weapon that de‑legitimizes research and researchers who fall outside dominant norms—
Three lenses that reveal hidden bias
Language can reinforce or create power hierarchies. When reviewers employ demeaning metaphors or dismissive qualifiers, they signal that the author’s identity or epistemic stance is less legitimate. This is an act of discursive violence.
Scholars hold multiple identities, and whether they experience privilege, marginalization, or oppression depends on factors such as race, gender, country of residence, and institutional position. This is intersectionality.
Information engagement (infophilia) exists on a spectrum from healthy to disordered, observable at individual, institutional, systemic, and societal levels. Adaptive infophilia allows us to quantify how reviewer language aligns with or diverges from the author’s communicative approach. By mapping sentiments across reviewer comments, we can identify “information‑style mismatches” that often signal bias: a reviewer who expects dense, jargon‑heavy prose may penalize a clear, accessible style, thereby disadvantaging scholars from non‑Western contexts who prioritize readability.
Together, these lenses transform reviewer feedback from opaque judgments into measurable, actionable data.
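To suggest how the adaptive-infophilia lens could turn a review into measurable data, here is a minimal sketch in Python of one way an information-style mismatch might be quantified. The jargon list, the density measure, and the threshold are all invented for illustration; they are not part of any existing tool.

```python
import re

# Illustrative jargon list; a real analysis would use a validated domain lexicon.
JARGON = {"epistemological", "ontological", "paradigmatic", "heuristically"}

def jargon_density(text: str) -> float:
    """Share of words drawn from the (illustrative) jargon list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in JARGON for w in words) / max(len(words), 1)

def style_mismatch(review: str, manuscript: str, threshold: float = 0.05) -> bool:
    """Flag a possible information-style mismatch when the reviewer's prose
    is markedly more jargon-heavy than the manuscript it critiques."""
    return jargon_density(review) - jargon_density(manuscript) > threshold

review = "The ontological and epistemological framing is heuristically thin."
manuscript = "We explain, in plain language, how readers seek and use information."
print(style_mismatch(review, manuscript))  # True: a possible mismatch worth a closer look
```

A real measure would need validation against human judgment; the point here is only that reviewer language and manuscript language can be compared on the same scale.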

A personal story that turns theory into reality
I’m a woman from the Global South who lives on the coast of Southern California. Throughout my career in the US I’ve faced both subtle and overt attacks in a variety of academic settings—comments that reinforce stereotypes about my competence and undermine my voice. I earned promotion and tenure, resigned from several positions, and even as a satisfied pro bono community informatics leader and independent scholar, I routinely practice self‑censorship. For example, after unanimously favorable reviews, another major publisher offered me a contract for Infophilia Unbound; I declined.
These experiences underscore how institutional pressures can shape and silence scholarly work.
In reflecting on these encounters, I applied the lens of adaptive infophilia, a framework that treats reviewer language as signals of information-engagement style. For example, a reviewer called my subject “pseudo-multicultural female dominant circles of privileged scholarly networks.” Mapping my own experiences through this framework reveals a spectrum of engagement, from constructive critique to hostile over‑analysis, highlighting the ways in which power dynamics distort scholarly communication.
I set out to build a tool, Review Reveal, that treats reviewer comments as a data stream. First, it scans for sentiment and bias, flagging hostile or exclusionary language. Second, it maps each comment to the specific section of the manuscript it critiques, so authors can see precisely where concerns arise. Third, it audits for equity‑related phrasing—terms that may reflect gendered, racialized, or decolonial bias—and surfaces hidden power dynamics. Finally, it offers counter‑narratives, suggesting more inclusive ways to phrase the same critique. It not only highlights problematic language but also explains why it matters and how to rewrite it. In effect, Review Reveal turns a source of frustration into a constructive learning tool.
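Because the tool remains unbuilt, the following is only a minimal sketch of how such a pipeline might be wired together in Python. The flagged-term lists, the section-matching heuristic, and the suggested rewrites are invented placeholders, not Review Reveal's actual rules.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative word lists; a real tool would rely on validated lexicons and models.
HOSTILE_TERMS = {"naive", "amateurish", "pseudo"}
EQUITY_TERMS = {"exotic", "articulate", "third-world"}

# Hypothetical counter-narratives keyed by the flagged term.
REWRITES = {
    "naive": "Consider asking the author to engage specific counter-evidence instead.",
    "amateurish": "Name the concrete methodological step that is missing.",
}

@dataclass
class Finding:
    comment: str
    section: Optional[str]     # which part of the manuscript the comment targets
    hostile: List[str]         # flagged hostile or exclusionary terms
    equity: List[str]          # flagged equity-related phrasing
    suggestion: Optional[str]  # a more constructive way to raise the same concern

def map_to_section(comment: str, sections: dict) -> Optional[str]:
    """Crude mapping: return the first section whose heading words appear in the comment."""
    for name, heading in sections.items():
        if any(word.lower() in comment.lower() for word in heading.split()):
            return name
    return None

def analyze(comment: str, sections: dict) -> Finding:
    words = {w.strip(".,;:!?\"'").lower() for w in comment.split()}
    hostile = sorted(words & HOSTILE_TERMS)
    equity = sorted(words & EQUITY_TERMS)
    suggestion = next((REWRITES[t] for t in hostile if t in REWRITES), None)
    return Finding(comment, map_to_section(comment, sections), hostile, equity, suggestion)

sections = {"ch2": "Positive Psychology of Information", "ch5": "Decolonial Methods"}
print(analyze("The decolonial methods chapter is naive and amateurish.", sections))
```

Even this toy version shows the shape of the output: each comment becomes a record an author or editor can inspect, rather than a judgment they can only absorb.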
Review Reveal is presently a conceptual vision; it has not yet been implemented or validated, largely due to a lack of collaborators and my decision to pause the project. Its underlying analytic framework, however, was first applied to my book proposal reviews and later corroborated by two published book reviews from the same negative reviewer.
Creating a dataset of reviewer comments paired with manuscripts is technically demanding and ethically sensitive. Peer review, idealized as “organized skepticism,” is reflected in today’s open corpora such as PeerRead and MOPRD. These are useful for machine‑learning tasks, but they lack the fine‑grained lenses needed to expose ethical violations and discursive violence. Collecting this data requires strict confidentiality protocols: reviewers expect anonymity, and journals have rigorous privacy policies. For instance, anonymized reviewer IDs must be stored separately from manuscript identifiers.
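As one possible shape for such a protocol, here is a small sketch with invented identifiers, in which reviewer IDs are replaced by salted hashes and the only table linking pseudonyms to manuscripts is held separately under its own data-use agreement.

```python
import hashlib
import json
import os

def pseudonymize(reviewer_id: str, salt: bytes) -> str:
    """Replace a reviewer ID with an irreversible salted hash."""
    return hashlib.sha256(salt + reviewer_id.encode()).hexdigest()[:16]

salt = os.urandom(16)  # held offline by a data steward, never published

# Two separate stores: comments keyed by pseudonym, manuscripts keyed by their own ID.
comments_store = {pseudonymize("reviewer-042", salt): ["The framing is unconvincing."]}
manuscript_store = {"ms-2025-0117": {"title": "Infophilia Unbound"}}

# The link table is the only place where pseudonyms and manuscript IDs meet,
# and it would live in a third, access-controlled location.
link_table = [{"pseudonym": next(iter(comments_store)), "manuscript": "ms-2025-0117"}]
print(json.dumps(link_table, indent=2))
```

This is a sketch of one design choice, not a complete privacy solution; a real protocol would also need governance, retention limits, and review by the journals involved.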
One solution is to invite reviewers to voluntarily share anonymized comments after the final decision is published. Another is to generate synthetic datasets that mimic real reviewer language without revealing personal details. Beyond the data, we must also address the community’s willingness to participate. Many scholars fear retaliation or reputational harm if their critiques are exposed. Building trust — through transparent data‑use agreements, guarantees of anonymity, and clear communication of the tool’s benefits — will be essential to gather the evidence needed to improve peer review.
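For the synthetic route, a template-filling sketch like the one below could produce reviewer-like comments that contain no real reviewer text; the templates and vocabulary here are invented for illustration only.

```python
import random

# Invented templates and fillers; a real generator would be designed with the
# community and checked so that it never leaks actual reviewer language.
TEMPLATES = [
    "The {section} is {tone}, and the author should {action}.",
    "I found the {section} {tone}; please {action} before resubmission.",
]
FILLERS = {
    "section": ["methods chapter", "literature review", "theoretical framing"],
    "tone": ["underdeveloped", "promising but uneven", "dismissively brief"],
    "action": ["engage non-Western scholarship", "clarify the framework", "cite recent work"],
}

def synthetic_review(rng: random.Random) -> str:
    """Fill one randomly chosen template with randomly chosen phrases."""
    template = rng.choice(TEMPLATES)
    return template.format(**{key: rng.choice(values) for key, values in FILLERS.items()})

rng = random.Random(7)  # fixed seed so the synthetic corpus is reproducible
for _ in range(3):
    print(synthetic_review(rng))
```

Synthetic comments can never capture the full texture of real reviews, but they would allow a tool like Review Reveal to be prototyped before any sensitive data changes hands.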
Review Reveal’s impact extends beyond individual authors:
- For reviewers, it serves as a reflective aid, encouraging more mindful language and helping them recognize their own unconscious biases.
- For editors, it offers a transparent lens to assess reviewer fairness and consistency, potentially reducing the need for manual oversight.
- And for the broader scholarly community, aggregating flagged patterns across journals can reveal systemic issues—such as recurring stereotypes or exclusionary practices—that merit policy changes.
By turning reviewer comments into data, we can begin to track trends, identify problematic language, and develop guidelines that promote equity. The ultimate goal is a peer‑review ecosystem that values clarity, respect, and diversity as much as it values rigor, ensuring that every scholar’s voice is heard and respected.
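To suggest what that aggregation could look like, here is a minimal sketch over fabricated example records (not real data) that counts flagged-language categories per journal so recurring patterns become visible.

```python
from collections import Counter, defaultdict

# Fabricated example records of flagged review language; the fields are illustrative.
flags = [
    {"journal": "Journal A", "category": "dismissive qualifier"},
    {"journal": "Journal A", "category": "gendered phrasing"},
    {"journal": "Journal A", "category": "dismissive qualifier"},
    {"journal": "Journal B", "category": "dismissive qualifier"},
]

by_journal = defaultdict(Counter)
for flag in flags:
    by_journal[flag["journal"]][flag["category"]] += 1

for journal, counts in sorted(by_journal.items()):
    for category, count in counts.most_common():
        print(f"{journal}: {category} x{count}")
```

Patterns that recur across journals, rather than within a single hostile review, are what would justify policy-level responses.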
The debate around AI in peer review is heating up. Major publishers such as Wiley and Springer Nature champion AI, while bodies like the National Science Foundation and the Committee on Publication Ethics (COPE) urge caution. The National Institutes of Health (NIH) explicitly prohibits reviewers from using AI in grant evaluations, citing concerns about transparency and fairness. Yet AI-generated content is flooding the literature, making it harder to distinguish rigorous research from noise. In this environment, the stakes of discursive violence—bias embedded in reviewer language—are higher than ever.
Call to Action
If you’ve ever felt that a review was too harsh, or that a comment seemed to hinge on your perceived identity rather than your work, you’re not alone. I invite you to share your own experiences, join the conversation, and help shape a fairer peer‑review system.
Cite this article in APA as: Coleman, A. S. (2025, October 16). AI-powered peer review: How review reveal can detect bias, improve fairness, and transform scholarly publishing. Information Matters. https://informationmatters.org/2025/10/ai-powered-peer-review-how-review-reveal-can-detect-bias-improve-fairness-and-transform-scholarly-publishing/
Author
Anita S. Coleman is the publisher of Infophilia: A Positive Psychology of Information, a weekly publication and lab developing adaptive infophilia, her integrative theory for unifying the library and information sciences. From 2015–2025 she led the Anti Racism Digital Library and the International Anti Racism Thesaurus, and earlier worked in technology-related change management as a librarian (Rancho Santiago), researcher (UCLA, UC Santa Barbara), and LIS faculty member (University of Arizona, Tucson). Born in Tamil Nadu, India, she holds an M.L.I.S. from the University of Madras (whose Dept. of LIS was founded by S.R. Ranganathan), an M.S.Ed. in Curriculum & Instruction and Educational Technology from Southern Illinois University Carbondale, and a Ph.D. from the University of Illinois Urbana-Champaign. She founded dLIST (Digital Library of Information Science and Technology), the first interdisciplinary open access repository in the field. Named a 2007 “Mover and Shaker” by Library Journal, Coleman has served as an NSF grants reviewer and held roles with ACM, ALA, ASIS&T, ATLA, ISKO, and the Learning Resources Association of the California Community Colleges. Her research on bibliometrics, digital libraries, metadata, evaluation, information behaviors, HCI, and EDIA has appeared in venues including JASIST, Journal of Documentation, Knowledge Organization, D-Lib Magazine, and Theological Librarianship.