Assess Novelty in Academic Research: A Human-AI Collaborative Approach
Wenqing Wu, Chengzhi Zhang, Yi Zhao
In academia, one of the key criteria for determining whether a research paper is publishable is its novelty. Novelty means that the paper is bringing something new to the table—new ideas, new methods, or new findings that haven’t been seen before. It’s like asking, “Does this paper tell us something we didn’t already know?” But figuring out whether a paper is truly novel can be tricky. Traditionally, experts in the field have been responsible for making such judgements, but even they have their limitations. Another way is to measure the novelty of a study through unusual combinations of references or journals in its bibliography, but this is not always reliable either. So, can we combine the strengths of humans and machines to improve this process?
Our recent paper, “Automated Novelty Evaluation of Academic Paper: A Collaborative Approach Integrating Human and Large Language Model Knowledge,” explores a new way to evaluate the novelty of academic papers by combining human expertise with the vast knowledge of large language models (LLMs). Imagine having a super-smart assistant who can read and process lots of information, working together with human experts who contribute insights and judgment. Together, they can make a better assessment of whether a paper truly offers something new.
Why Novelty Matters
Imagine you’re reading a book, and you want to know if it’s worth your time. You might ask yourself, “Is this a story I’ve heard before, or does it offer a fresh perspective?” The same question applies in academic research. If a paper merely repeats what others have already said, it is unlikely to be useful. But if it introduces a novel idea or a new method for solving a problem, that’s exciting! It means progress in our understanding of the world.
The Challenge of Measuring Novelty
Measuring novelty isn’t easy. Experts can sometimes miss things because they can’t know everything in their field. Citation patterns can also be misleading, because sometimes authors cite papers just to demonstrate thoroughness, not because the cited works directly inform their new ideas. Moreover, certain disciplines tend to favor well-established, frequently cited classics, while newer and potentially more innovative research often gets overlooked.
How We Did It
To address this challenge, we combined human knowledge with artificial intelligence. We collected papers from a leading machine learning conference and examined expert reviews. These reviews often include explicit comments about the paper’s novelty. We also used a large language model called ChatGPT to summarize the methods section of each paper. This gave us two different perspectives: one from human experts and one from the AI.
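To make the LLM-summarization step concrete, here is a minimal sketch of how one might prompt an LLM to condense a paper’s methods section. The function name and prompt wording are illustrative assumptions, not the exact prompt used in the paper:

```python
# Hypothetical sketch: build a prompt asking an LLM (e.g., ChatGPT) to
# summarize a paper's methods section. The returned string would be sent
# to a chat-completion API; the model's response then serves as the
# AI-side input to the novelty classifier.

def build_method_summary_prompt(methods_text: str, max_sentences: int = 3) -> str:
    """Construct a summarization prompt for a paper's methods section."""
    return (
        f"Summarize the following methods section in at most "
        f"{max_sentences} sentences, focusing on the core technique:\n\n"
        f"{methods_text}"
    )

prompt = build_method_summary_prompt("We propose a transformer-based model ...")
```

In practice the prompt would be sent to the model via an API call, and the returned summary stored alongside the human review text for that paper.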
Note: Att-Reduction denotes the Self-Attention Reduction module; PLM denotes a pretrained language model.

Figure 1. Overview of our study.
We then fed both sets of information into a deep learning model to see if it could predict how novel the paper’s methods were. The specific model framework is shown in Figure 1. To make sure our model was learning meaningful information, we designed a special module called the “knowledge-guided fusion module.” This module enables the model to focus on the most important parts of the human and AI inputs, integrating them in a way that makes the most sense.
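The idea of weighting and combining two knowledge sources can be sketched with a toy attention-style fusion. This is a simplified stand-in for the paper’s knowledge-guided fusion module, using plain NumPy rather than the actual architecture; the vectors and the `fuse` helper are illustrative assumptions:

```python
import numpy as np

# Toy sketch of attention-style fusion of two text representations:
# a human-review embedding and an LLM-summary embedding. The two sources
# are weighted by their similarity to a query vector (e.g., the paper's
# own representation) and combined into one fused vector.

def fuse(human_vec: np.ndarray, llm_vec: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Return the attention-weighted combination of the two knowledge sources."""
    inputs = np.stack([human_vec, llm_vec])          # shape (2, d)
    scores = inputs @ query / np.sqrt(query.size)    # scaled dot-product scores
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over the 2 sources
    return weights @ inputs                          # shape (d,) fused vector

rng = np.random.default_rng(0)
d = 8
fused = fuse(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d))
```

The fused vector would then be passed to a classifier head that predicts the novelty level of the paper’s methods.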
What We Found
Our experiments showed that combining human and AI knowledge works well. Compared to using either human or AI inputs alone, our integrated approach achieved better performance in predicting whether a paper’s methods were truly novel.
What This Means for the Future
Our research shows that AI can help us evaluate academic papers more effectively. By combining the strengths of humans and machines, we can get a clearer understanding of what’s truly novel and impactful in research. This could be especially useful in fast-evolving fields or domains with an overwhelming volume of information.
Practical Implications
While our approach is promising, it’s important to remember that human judgment is still crucial. AI can provide valuable insights, but it can’t replace the expertise and intuition of human reviewers. Our goal is to enhance the efficiency and accuracy of novelty evaluation, not to eliminate the human element in the process.
In the future, we hope to expand our research to additional domains and explore different forms of novelty. We also want to develop better strategies to combine human and AI knowledge, so we can keep improving how we assess research contributions.
In conclusion, our research shows that by working together, humans and AI can provide us a deeper understanding of what’s truly novel in academic research. This collaboration could lead to new discoveries and advancements in many fields, enriching our collective knowledge of the world.
For more on this see:
Wu, W., Zhang, C., & Zhao, Y. (2025). Automated novelty evaluation of academic paper: A collaborative approach integrating human and large language model knowledge. Journal of the Association for Information Science and Technology, 1–18. https://doi.org/10.1002/asi.70005
About the authors
Wenqing Wu is a PhD student at the School of Economics and Management, Nanjing University of Science and Technology in China. His research interests include natural language processing, novelty evaluation of academic papers and peer review text mining.
Chengzhi Zhang is a professor at the iSchool of Nanjing University of Science and Technology (NJUST). He received his PhD in Information Science from Nanjing University, China. He has published more than 100 papers in venues including JASIST, IPM, Aslib JIM, JOI, OIR, SCIM, ACL, and NAACL. He serves as an Editorial Board Member and Managing Guest Editor for 10 international journals (Patterns, IPM, OIR, Aslib JIM, TEL, IDD, NLE, JDIS, DIM, DI, etc.) and as a PC member of several international conferences (ACL, IJCAI, EMNLP, NAACL, AACL, IJCNLP, NLPCC, ASIS&T, JCDL, iConference, ISSI, etc.) in the fields of natural language processing and scientometrics. His research fields include information retrieval, information organization, text mining, and natural language processing. Currently, he focuses on scientific text mining, knowledge entity extraction and evaluation, and social media mining. He was a visiting scholar in the School of Computing and Information at the University of Pittsburgh and in the Department of Linguistics and Translation at the City University of Hong Kong.
Yi Zhao is a lecturer at the School of Management, Anhui University, China. He holds a PhD in Management from Nanjing University of Science and Technology and was a Visiting Scholar in the Department of Library and Information Science at Yonsei University. He has published more than 10 articles, including JASIST, IPM, JOI, SCIM, TFSC, etc. His research primarily focuses on team science, bibliometrics, and scientific text mining, with a particular interest in exploring the impact of artificial intelligence (AI) on scientific collaboration, gender equality, and scientific evaluation.
Cite this article in APA as: Wu, W., Zhang, C., & Zhao, Y. (2025, July 25). Assess novelty in academic research: A human-AI collaborative approach. Information Matters. https://informationmatters.org/2025/07/assess-novelty-in-academic-research-a-human-ai-collaborative-approach/