Welcome! I'm Bron, an avid explorer of digital technology and its potential to enrich our lives. Today's exciting era offers countless opportunities to advance our careers, from scaling corporate heights to embracing entrepreneurship or even enjoying a leisurely lifestyle. My website aims to be a valuable resource in supporting you on your journey towards success.♡

Supercharging Peer Review: Can AI Replace Human Expertise in Evaluating Academic Journal Articles?

Imagine an artificial intelligence (AI) tool that could read, analyse, and review an academic paper for you! While this may seem like science fiction, advancements in AI technologies are bringing this vision to reality.

Welcome to the brave new world of AI-assisted academic publishing, with Generative Pre-trained Transformers (GPTs) spearheading the revolution.

In this blog post, I explore how custom GPT models could be used to automate and enhance the academic journal article review process. I’ll discuss some of the strengths and limitations of this technology and touch on some implications for researchers, peer reviewers, and journal editors.

If you’d like to test out such a tool, I created one! Try it out (it’s free!) by visiting Journal Article Peer Review Assistant (JAPRA). The tool’s interface will look familiar to anyone who’s used a chatbot before. It’s modelled after ChatGPT, but it’s built to perform the specialised function of reviewing academic papers. It was built using ChatGPT’s GPT Building tool which is available on the paid plan.

To see how it works, upload a journal article (note: ensure you have permission, or that it is publicly available) by clicking the paperclip (upload) icon in the chat box. It will provide a high-level summary of the paper for you. Then, you can pose questions (i.e., type a prompt) just as you would if you were using ChatGPT. For example, you could ask, “Can you please critique the methodological approach taken in this paper?” The GPT will respond with a summary answer based on its critique of the paper.

The Journal Article Peer Review Assistant (JAPRA) has been configured solely for the task of reviewing journal articles, rather than drawing on ChatGPT’s more generalist capabilities.

The tool isn’t perfect, but I’ve found it does a pretty good job, noting that I’ve only used it to critique articles in my field of Business (i.e., I have no idea how it might perform when reviewing papers in other disciplines, especially those that are quant-heavy).

If you’re interested in learning how to build a custom GPT yourself, please let me know you’d like me to write a ‘how to’ blog post about it and I’ll make it happen! Or, you could check out one of the many videos about GPT Builder on YouTube, or check out this (very short) course on Udemy.

What are GPTs?

GPTs are a type of natural language processing AI trained on vast datasets of text data. This allows them to generate human-like writing and engage in dialogue. The ‘pre-trained’ part means they already have base language comprehension before being customised for specific tasks.

Benefits of GPTs for Article Reviewing

For academics, using a custom GPT to review papers before submission might assist in boosting the quality and speed of the reviewing process.

Some key benefits include:

  • Efficiency: GPTs can read and process text thousands of times faster than humans. This enables rapid analysis of your paper.
  • Assessment: GPTs can simultaneously assess language, data, structure, potential impact and more. They might pick up errors you’ve overlooked or be used to critique the article to anticipate some reviewer comments you might wish to address before submission.
  • Accessibility: GPT review tools can help you analyse the paper and enhance it for accessibility and/or non-expert readers.

Despite these advantages, GPTs also come with limitations that you should keep in mind:

  • Lack of Specialised Knowledge: GPTs may miss nuances that a domain expert would likely recognise.
  • Difficulty Interpreting Novel Methods: GPTs are only as good as the information underlying their understanding. One implication is that if you’re using a novel methodology, or one with little published use to date, the GPT may be unfamiliar with the method and therefore not be informed enough to adequately judge its merits.
  • Subtlety and Subjectivity: GPTs may not grasp implied meanings or subjective aspects of papers.

Overall, GPTs can prove excellent at providing a first pass review and identifying areas for improvement. However, human expertise is still vital for deeper, more nuanced analysis.

The ideal approach is combining GPT speed and consistency with human judgment and creativity.

Streamlining the Journal Submission Review Process

For journal editors and peer reviewers, GPTs show promise in dramatically improving the efficiency and consistency of assessing incoming submissions.

Here are some examples of criteria a GPT could use to filter paper submissions, assessing:

  • Scope and relevance to the journal
  • Adherence to submission guidelines
  • Language quality
  • Basic methodology
  • Impact
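To make the filtering idea above concrete, here is a minimal sketch of what an automated pre-screen might look like in code. The scope keywords, word-count limits, and the abstract heuristic are all invented for illustration; they aren’t any real journal’s guidelines, and a real system would pass these checks to a GPT rather than hard-code them.

```python
# Minimal sketch of an automated pre-screen for incoming submissions.
# The thresholds and keyword list are illustrative assumptions only.

JOURNAL_SCOPE_KEYWORDS = {"management", "strategy", "leadership", "organisation"}
MIN_WORDS, MAX_WORDS = 4000, 12000

def pre_screen(title: str, abstract: str, word_count: int) -> list[str]:
    """Return a list of screening flags; an empty list means 'pass'."""
    flags = []
    text = f"{title} {abstract}".lower()
    # Scope and relevance: does the paper mention any journal themes?
    if not any(kw in text for kw in JOURNAL_SCOPE_KEYWORDS):
        flags.append("possibly out of scope")
    # Adherence to submission guidelines: basic length check.
    if not MIN_WORDS <= word_count <= MAX_WORDS:
        flags.append(f"word count {word_count} outside {MIN_WORDS}-{MAX_WORDS}")
    # Crude language-quality proxy: very few sentences in the abstract.
    if abstract.count(".") < 2:
        flags.append("abstract may need restructuring")
    return flags

flags = pre_screen(
    title="Leadership styles in hybrid teams",
    abstract="We study leadership. We survey 300 managers. Results follow.",
    word_count=8500,
)
print(flags)  # prints [] — this toy submission passes every check
```

Even a shallow rule-based pass like this shows where the time savings come from: papers that fail these checks never need to reach a human desk.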

Using a GPT could automate the initial screening process, allowing editors and reviewers to repurpose some of their time for other tasks.

Integrating Knowledge Bases

A ‘knowledge base’ refers to a structured database that contains a wealth of information related to the task assigned to a GPT. A knowledge base used in the context of reviewing a journal article submission could include, but not be limited to, previously published articles, citation data, and key research topics. Essentially, a knowledge base could serve as a repository of knowledge (i.e., documents/files) pertinent to the journal.

Journals could create a knowledge base by compiling their most popular and highly cited articles, along with emerging research trends and seminal works in the field. This database then becomes a valuable resource for understanding the evolving landscape of the discipline and a ‘sense-check’ for new submissions, ensuring they are in line with what the journal is seeking to publish.
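One very simple way to picture such a knowledge base is as a term-frequency profile built from the journal’s back catalogue. The sketch below uses plain word counts over two invented abstracts purely to illustrate the idea; a real system would use document embeddings and the journal’s actual archive.

```python
# Toy 'knowledge base': a term-frequency profile of a journal's back catalogue.
from collections import Counter
import re

def profile(texts):
    """Build a single term-frequency profile from a collection of documents."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts

# Illustrative back catalogue of abstracts (invented for the example).
catalogue = [
    "Dynamic capabilities and firm performance in turbulent markets.",
    "Leadership, trust, and team performance in remote work.",
]
kb = profile(catalogue)
print(kb["performance"])  # prints 2 — a recurring theme across the catalogue
```

Terms that recur across the catalogue (here, ‘performance’) are exactly the kind of signal a GPT could use to judge whether a new submission is on-theme.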

Knowledge Base Considerations for GPT Article Reviews

  1. Fit and Relevance Assessment:
    • When a new article is submitted, a GPT could cross-reference the content of the submission with the journal’s knowledge base.
    • This process might help assess whether the article fits within the thematic and quality parameters that characterise the journal’s most impactful publications.
  2. Predicting Citation Potential:
    • By analysing patterns in the knowledge base, such as topics of highly cited papers, the GPT could estimate the potential citation impact of the submitted article.
    • This prediction could be based on factors like alignment with current research trends, the novelty of the work, or its relevance to ongoing debates in the field and the journal.
  3. Enhanced Editorial Decision-Making:
    • This system could aid editors in making more informed decisions, prioritising articles that not only align with the journal’s scope but also hold promise for significant academic impact.

Importantly, while promising, relying solely on GPT screening does carry risks:

  • Novel approaches may be overlooked if they don’t fit neatly into existing frameworks. Journals should ensure a diversity of ideas.
  • Success metrics like citations take time to accumulate, making them lagging predictors of impact, especially for pioneering research.
  • Bias in AI training may lead to problematic decisions regarding desk rejects.

For maximum benefit, the role of GPTs should focus on assisting, not replacing, human editors and peer reviewers. There’s no substitute for taking the time to thoroughly critique a paper yourself!

Mitigating Dataset Bias

One crucial concern around GPTs is that biases in their training data get propagated into their outputs. I’ve written about this with co-authors in a recent paper in Organizational Dynamics on gender bias in AI-generated data.

For example, a model trained only on English language datasets might have limited utility for reviewing submissions written in other languages or may discriminate against the ideas presented in a paper simply because the paper was written by a non-native English speaker.

To maximise the integrity and quality of GPT-assisted reviews, journals should seek to vet the composition of the datasets used to train custom models, or find ways to prevent those biases from unjustly influencing publishing decisions.

Performing bias audits, using techniques like sentiment analysis, can help to surface systematic skews.
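A bias audit along these lines could start as simply as comparing the tone of a model’s review comments across groups of submissions. The sketch below uses a tiny hand-made sentiment lexicon and invented comments purely for illustration; a genuine audit would use a proper sentiment model, real review outputs, and matched samples of papers.

```python
# Toy bias audit: compare the average sentiment of AI-generated review
# comments across two groups of submissions. The lexicon and comments
# are invented for illustration only.

POSITIVE = {"clear", "rigorous", "novel", "strong", "compelling"}
NEGATIVE = {"unclear", "weak", "flawed", "confusing", "poor"}

def sentiment(comment: str) -> int:
    """Crude lexicon score: positive words minus negative words."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def average_sentiment(comments):
    return sum(sentiment(c) for c in comments) / len(comments)

group_a = ["A clear and rigorous design.", "Novel framing, strong evidence."]
group_b = ["Somewhat unclear argument.", "Weak and confusing structure."]

gap = average_sentiment(group_a) - average_sentiment(group_b)
print(f"sentiment gap between groups: {gap:+.1f}")
```

If the two groups differed only in, say, the author’s language background, a persistently large gap for papers of comparable quality would be exactly the systematic skew worth investigating.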

Additionally, journals may consider disclosing AI screening processes and/or making those screening models available for academics to use to evaluate the suitability of their submissions.

Striking the Optimal Balance of AI and Human Insight

The rise of AI in academic journal publishing sparks both excitement about new efficiencies and concerns about preserving rigor.

GPT capabilities should be viewed as amplifiers, not replacements, for human expertise.

Like any tool, GPTs can be used wisely or poorly.

Journals that deploy GPTs as initial screeners and advisors while retaining human peer review at the core stand to reap substantial benefits, the most notable being time. Automating repetitive tasks frees skilled editors and established experts to better leverage their strategic abilities and disciplinary knowledge.

Researchers who submit papers may also be able to gain more rapid self-generated feedback to strengthen their work before submitting their paper to a potentially lengthy review process.

Importantly though, relying wholly on AI review risks squeezing out innovative thinking, nuanced criticism, and knowledge advancements that don’t fit tidy algorithms.

Disciplined governance and continuous tuning will be imperative as these technologies are integrated into the publishing workflow.

The future of academic publishing will undoubtedly involve a symbiotic relationship between human and machine intelligence.

GPTs are powerful research assistants, but human peers remain the source of creative inspiration, constructive debate, and penetrating insights that drive disciplines forward. By embracing the strengths of both, we have an opportunity to foster greater efficiency alongside continued knowledge-generation progress, open discourse, and diversity of thought.

The path ahead will require vigilance, care, and constant reassessment to realise that balanced vision. Despite these hurdles, the possibilities of leveraging the technology make it a journey well worth pursuing.

Journal Article Peer Review Assistant (JAPRA)

If you have a ChatGPT paid account, you can have a go at reviewing an academic journal article with my AI bot!

If you’ve enjoyed this blog post, consider signing up to my newsletter ‘T3’, where I share tips, tricks, and tools on AI + Technology for the higher education sector.
