Adelman, C., Jenkins, D., & Kemmis, S. (1976). Re‐thinking case study: Notes from the second Cambridge Conference. Camb. J. Educ., 6(3), 139–150.
Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing peer review. Science, 321(5885), 15.
Ambrosino, A., Bellino, E., Cedrini, M., Deleidi, M., & Gahn, S. J. (2025). Introduction to the special issue on the 20th STOREP conference: Rethinking economic policies: The role of the state in the post-Covid-19. Rev. Political Econ., 37(2), 331–333.
Arsenault, A. C., Heffernan, A., & Murphy, M. P. (2021). What is the role of graduate student journals in the publish-or-perish academy? Three lessons from three editors-in-chief. Int. Stud., 58(1), 98–115.
Beckley, A., Netherton, C., & Singh, S. (2015). Closing the gap through bridges to higher education. In Research and Development in Higher Education: Learning for Life and Work in A Complex World (Vol. 38, pp. 416–435).
Bedeian, A. G. (2004). Peer review and the social construction of knowledge in the management discipline. Acad. Manag. Learn. Educ., 3(2), 198–216.
Candal-Pedreira, C., Rey-Brandariz, J., Varela-Lema, L., Pérez-Ríos, M., & Ruano-Ravina, A. (2023). Challenges in peer review: How to guarantee the quality and transparency of the editorial process in scientific journals. An. Pediatr. (Engl. Ed.), 99(1), 54–59.
De Picker, M. (2020). Rethinking inclusion and disability activism at academic conferences: Strategies proposed by a PhD student with a physical disability. Disabil. Soc., 35(1), 163–167.
Gaughf, N. W., & Foster, P. S. (2016). Implementing a centralized institutional peer tutoring program. Educ. Health, 29(2), 148–151.
Gottlieb, M., Egan, D. J., Krzyzaniak, S. M., Wagner, J., Weizberg, M., & Chan, T. (2020). Rethinking the approach to continuing professional development conferences in the era of COVID-19. J. Contin. Educ. Health Prof., 40(3), 187–191.
Grainger, D. W. (2007). Peer review as professional responsibility: A quality control system only as good as the participants. Biomaterials, 28(34), 5199–5203.
Jefferson, T., Alderson, P., Wager, E., & Davidoff, F. (2002). Effects of editorial peer review: A systematic review. JAMA, 287(21), 2784–2786.
Köhler, T., González-Morales, M. G., Banks, G. C., O’Boyle, E. H., Allen, J. A., Sinha, R., Woo, S. E., & Gulick, L. M. (2020). Supporting robust, rigorous, and reliable reviewing as the cornerstone of our profession: Introducing a competency framework for peer review. Ind. Organ. Psychol., 13(1), 1–27.
Kousha, K., & Thelwall, M. (2024). Artificial intelligence to support publishing and peer review: A summary and review. Learn. Publ., 37(1), 4–12.
Lamont, M. (2012). How professors think: Inside the curious world of academic judgment. Reis, 140, 173–184.
Liang, W., Izzo, Z., Zhang, Y., Lepp, H., Cao, H., Zhao, X., Chen, L., Ye, H., Liu, S., Huang, Z., et al. (2024). Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews. arXiv. https://arxiv.org/abs/2403.07183
Lightman, B. (2016). Popularizers, participation and the transformations of nineteenth-century publishing: From the 1860s to the 1880s. Notes Rec. R. Soc. Hist. Sci., 70(4), 343–359.
Mah, P. M. (2023). The art of deep learning and natural language processing for emotional sentiment analysis on the academic scholars’ peer review process. In Proceedings of the 24th Annual Conference on Information Technology Education (pp. 186–198).
Mah, P. M. (2024). Predicting emotional impact on peer review, peer assessment, and self assessments using deep learning and NLP in STEM education. In International Conference on Computers in Education.
Mah, P. M., Skalna, I., & Offiong, U. P. (2022). Virtual monitoring as a digital delivery and assessment impact on students’ learning. In Communications of International Proceedings (Vol. 2022, Issue 3). https://ibimapublishing.com/p-articles/COVID40EDU/2022/3915022/3915022-2.pdf
Musselin, C. (2013). How peer review empowers the academic profession and university managers: Changes in relationships between the state, universities and the professoriate. Res. Policy, 42(5), 1165–1173.
Olcott IV, C., Mitchell, R. S., Steinberg, G. K., & Zarins, C. K. (2000). Institutional peer review can reduce the risk and cost of carotid endarterectomy. Arch. Surg., 135(8), 939–942.
Pol, R. A. V. D., Reid, D. H., & Fuqua, R. W. (1983). Peer training of safety‐related skills to institutional staff: Benefits for trainers and trainees. J. Appl. Behav. Anal., 16(2), 139–156.
Saad, A., Jenko, N., Ariyaratne, S., Birch, N., Iyengar, K. P., Davies, A. M., Vaishya, R., & Botchu, R. (2024). Exploring the potential of ChatGPT in the peer review process: An observational study. Diabetes Metab. Syndr. Clin. Res. Rev., 18(2), 102946.
Tama, J., Barma, N. H., Durbin, B., Goldgeier, J., & Jentleson, B. W. (2023). Bridging the gap in a changing world: New opportunities and challenges for engaging practitioners and the public. Int. Stud. Perspect., 24(3), 285–307.
Tang, M., Ren, P., & Zhao, Z. (2024). Bridging the gap: The role of educational technology in promoting educational equity. Educ. Rev. (USA), 8(8), 1077–1086.
Tennant, J. P. (2018). The state of the art in peer review. FEMS Microbiol. Lett., 365(19), fny204.
Trueblood, J. S., Allison, D. B., Field, S. M., Fishbach, A., Gaillard, S. D., Gigerenzer, G., Holmes, W. R., Lewandowsky, S., Matzke, D., Murphy, M. C., et al. (2025). The misalignment of incentives in academic publishing and implications for journal reform. Proc. Natl. Acad. Sci. U.S.A., 122(5), e2401231121.
Tufano, R., Dabić, O., Mastropaolo, A., Ciniselli, M., & Bavota, G. (2024). Code review automation: Strengths and weaknesses of the state of the art. IEEE Trans. Softw. Eng., 50(2), 338–353.
Yu, S., & Zhang, L. (2025). The impacts of “publish or perish” on Chinese and Canadian academics. In Portraits of Academic Life in Higher Education (pp. 91–105). Brill.
Yuan, N. P., Gaines, T. L., Jones, L. M., Rodriguez, L. M., Hamilton, N., & Kinnish, K. (2016). Bridging the gap between research and practice by strengthening academic-community partnerships for violence research. Psychol. Violence, 6(1), 27–33.
Open Access
Research article

AI Meta-Audit Test Case: Impact of PhD Candidates and Postdoctoral Fellows on Publishing Activities via Academic Partnerships for Peer Review Services

Pascal Muam Mah 1,2*, John Muzam 3, Tambi Daniel Mbu 4, Polycap Mudoh 5, Mahamane Moutari Abdou Baoua 6, Lilian Kuyiena Song 7, John Akoko 8, Eric Munyeshuri 8, Janet Awino Okello 8, Selestine John Salema 9

1 Department of Information and Communication Technology, AGH University of Krakow, 30-059 Krakow, Poland
2 OPIT-Open Institute of Technology, SWQ 3334 St. Julian’s, Malta
3 Department of Organization and Management, Silesian University of Technology, 44-100 Zabrze, Poland
4 Department of Economics, University of Bamenda, Bambili, Cameroon
5 Department of Political Science and Administration, University of Szczecin, 70-453 Szczecin, Poland
6 Department of Economics, Djibo Hamani University of Tahoua, 5000 Tahoua, Niger
7 Department of Science of Education, Higher Institute for Professionalism and Excellence, Yaoundé, Cameroon
8 Department of Organization and Management, Silesian University of Technology, 44-100 Gliwice, Poland
9 Department of Biochemistry, Biophysics and Biotechnology, Jagiellonian University, 30-387 Kraków, Poland
Education Science and Management | Volume 4, Issue 1, 2026 | Pages 1-19
Received: 01-09-2026, Revised: 02-25-2026, Accepted: 03-11-2026, Available online: 03-18-2026

Abstract:

The peer review process is a cornerstone of academic publishing, responsible for ensuring the quality and integrity of scholarly work. However, it faces daunting challenges, including a lack of formal peer review training, a scarcity of reviewers, and bias in reviewer selection. This paper explores the potential of PhD candidates and postdoctoral fellows as peer reviewers through academic partnerships with publishing houses. An “AI meta-audit test case” was employed to analyze “if there exists any publishing activities, in particular peer review services, involved in academic partnerships as well as the impact of PhD candidates and postdoctoral fellows on peer review activities”. Our objective is to evaluate the extent to which PhD candidates and postdoctoral fellows contribute to academic publishing through peer review services facilitated by academic partnerships. The project team defined key metrics within a set scope, determined data sources, designed an AI-powered analysis approach, proposed hypotheses, identified risks and challenges, and, above all, provided evaluations and recommendations. The study highlights the need for structured peer review training, incentives for early-career researchers, and institutional collaborations to enhance the quality and efficiency of the peer review process.

Keywords: Academic peer review, PhD candidates and postdoctoral fellows, Publishing partnerships, AI meta-audit test case, Reviewer

1. Introduction

Academic publishing has undergone remarkable growth, but this progress is accompanied by various challenges, such as retractions, instances of misconduct, and complaints about the peer review process. Tennant (2018) analyzed the current peer review system, outlining its benefits, shortcomings, and new developments. The study delved into conventional and open peer review approaches, drawing attention to challenges such as bias and inefficiency, and offered recommendations to bolster transparency, accountability, and efficacy in scholarly publishing. While research output is a primary focus of institutions, insufficient attention is given to the training and recognition of peer reviewers. Mah (2023) examined the integration of deep learning and natural language processing (NLP) techniques for emotional sentiment analysis in the context of academic peer review. That study assessed the influence of sentiment on review outcomes and introduced AI-based methodologies to evaluate reviewers’ bias, tone, and fairness, with the aim of promoting transparency and objectivity in scholarly publishing practices. The lack of structured education in peer review leads to inconsistencies in evaluations, which are further intensified by dependence on senior researchers. This paper suggests a comprehensive strategy for integrating PhD candidates and postdoctoral fellows into peer review systems through partnerships between universities and publishers, thereby enhancing both the integrity of research and the professional development of early-career researchers.

1.1 Growing Challenges in Academic Publishing

The landscape of academic publishing confronts significant challenges that increasingly threaten the integrity and accessibility of research. Predatory journals exploit authors by levying fees without providing authentic peer review, thereby compromising scholarly standards. The peer review mechanism is overwhelmed by a surge in submissions, leading to delays and inconsistent evaluations.

Access to research remains uneven due to the conflict between paywalled materials and the high costs associated with open access (OA). In addition, plagiarism, data manipulation, and unethical authorship practices, which undermine trust in the system, are becoming more prevalent. Finally, a metrics-oriented culture pressures researchers to publish more frequently, often at the cost of quality, and promotes questionable practices. These issues collectively call for urgent reforms in academic publishing. Trueblood et al. (2025) investigated the detrimental effects of misaligned incentives in academic publishing, particularly the focus on prestige and metrics, which compromises the quality of research. They advocated comprehensive journal reforms to realign these incentives with the principles of scientific integrity, thereby fostering transparency, replication, and significant scholarly contributions. Similarly, Yu & Zhang (2025) investigated the impact of the “publish or perish” culture on academics in China and Canada. Their findings indicated that this culture induces stress, diminishes the quality of research, and leads to inequities. Although academics in both nations experience pressure to publish, differing institutional policies and cultural contexts result in varied academic experiences and coping mechanisms.

The landscape of academic publishing is under considerable strain due to retractions, unethical academic behavior, peer review disputes, and the emergence of predatory publishing. A systematic review by Jefferson et al. (2002) examined the impact of editorial peer review on publication quality. Their research found limited empirical evidence supporting the effectiveness of the process and revealed biases and inconsistencies. They called for more stringent research efforts to evaluate and improve peer review practices in academic publishing. Despite extensive discussion of these challenges, there has been little effective action to improve the peer review process. While there is growing institutional emphasis on teaching, mentoring, and career opportunities, the critical function of peer review remains largely undervalued and insufficiently developed. The challenges of peer review were explored by Alberts et al. (2008), who identified issues such as bias, inefficiency, and the imperative for reform. They advocated a more rigorous evaluation framework, greater transparency, and innovative strategies to improve both the reliability and fairness of the peer review system in scientific publishing. Maintaining the integrity of scholarly work relies heavily on peer review, which assesses, validates, and refines contributions to knowledge. Yet, despite the importance of academic integrity, universities provide no formal training courses that effectively prepare students to evaluate academic contributions. This gap raises a question central to students’ career development: shouldn’t peer review training be a priority for academic institutions?

2. Literature Review

The process of peer review, serving as a gatekeeper of quality and credibility, is indispensable to academic publishing. Research has emphasized the shortage of available reviewers; in addition, seasoned reviewers face an escalating workload, leading to delays. Publons, an initiative by Web of Science (WoS), has sought to acknowledge the contributions of peer reviewers; however, there is still a notable deficiency in organized training programs. Compensation-based peer review systems have produced inequalities, benefitting established researchers at the expense of those in the early stages of their careers. The rise of open-access publishing has further complicated the review landscape, with certain journals placing greater emphasis on institutional affiliations than on the quality of the content.

Kousha & Thelwall (2024) provided a comprehensive review of the function of artificial intelligence (AI) in publishing and peer review. They pointed out the potential of AI to improve efficiency, recognize biases, and refine manuscript quality. Their discussion included ethical considerations, existing limitations, and future prospects for the application of AI in academic communication. In an observational study, Saad et al. (2024) examined the role of ChatGPT in the peer review process, analyzing its effectiveness in evaluating manuscripts, spotting errors, and delivering feedback. Despite the promising potential of AI to enhance efficiency, challenges related to accuracy, bias, and ethics remain significant concerns. Liang et al. (2024) delved into the influence of ChatGPT on peer reviews at AI conferences, offering a large-scale analysis of AI-modified content. They assessed the ramifications of AI-generated reviews on evaluation quality, bias, and consistency, while also shedding light on the obstacles to detecting AI-assisted reviews and the importance of fairness in the peer review process. Tufano et al. (2024) explored the strengths and weaknesses of automated code review tools. Their research surveyed current strategies and measured their success in identifying issues, enhancing software quality, and reducing developer workload. While automation offers significant improvements in efficiency, it faces challenges in managing complex code patterns and ensuring review accuracy. Mah (2024) analyzed the emotional ramifications of peer review, peer assessment, and self-assessment in STEM education, employing deep learning and NLP techniques. The study aimed to predict emotional responses, assess sentiment trends, and promote fairness in evaluations, ultimately offering insights to enhance feedback processes within educational frameworks.
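To make the sentiment-analysis idea discussed above concrete, the sketch below shows a deliberately minimal lexicon-based scorer for review comments. This is not the deep learning approach used in the cited studies; the word lists and the sample review text are purely illustrative assumptions.

```python
# Illustrative only: a minimal lexicon-based sentiment scorer for peer
# review comments. Word lists are hypothetical, not from the cited studies.
POSITIVE = {"clear", "rigorous", "novel", "convincing", "thorough"}
NEGATIVE = {"unclear", "flawed", "weak", "incomplete", "biased"}

def sentiment_score(review: str) -> float:
    """Return a score in [-1, 1]: (positive - negative) / matched words."""
    words = [w.strip(".,;:!?").lower() for w in review.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The argument is clear and rigorous but the data section is weak."))
```

A production system would replace the lexicon with a trained NLP model, but the scoring interface (review text in, bounded score out) would look much the same.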

2.1 Demand for Institutionalized Peer Review Training

Academic rankings significantly impact universities, faculty advancement, and student opportunities, yet the importance of peer review is often understated and not fully appreciated in the academic sphere. While initiatives such as Publons (by WoS) have made headway in highlighting the contributions of peer reviewers, there is a need for broader initiatives. By establishing formal partnerships between universities and publishing houses, PhD candidates and postdoctoral fellows could engage more directly in the peer review process, thereby enhancing the quality and sustainability of the review ecosystem.

Gaughf & Foster (2016) analyzed the rollout of a centralized institutional peer tutoring program. They pointed out its positive effects on student learning, academic performance, and engagement, while also recognizing obstacles related to coordination, tutor preparation, and program sustainability within educational frameworks. Pol et al. (1983) focused on peer training aimed at enhancing safety-related skills among institutional staff. The mutual benefits for trainers and trainees were highlighted, encompassing improved skill acquisition, retention, and workplace safety; the study emphasized the effectiveness of peer training in promoting a supportive and effective learning environment. Olcott IV et al. (2000) focused on the role of institutional peer review in shaping the outcomes of carotid endarterectomy. Their analysis revealed that peer review could effectively lower surgical risks and costs through improved decision-making, standardization of procedures, and increased patient safety, highlighting its critical role in quality assurance for vascular surgical practice.

2.2 Bridging the Gap: Partnerships between Universities and Publishing

Serving as a peer reviewer offers the opportunity to witness the significant pressure exerted by editorial boards, particularly in meeting deadlines. Over the course of their PhD or postdoctoral journeys, student researchers often perform informal peer reviews across numerous academic fields and departments. Nevertheless, the lack of formal peer review training programs in universities indicates a deficiency in the commitment to transparent academic publishing, despite the fact that faculty and students face mounting pressure to publish their work.

Yuan et al. (2016) addressed the need to enhance academic-community collaborations to effectively connect research with practical applications in violence studies. They pointed out the value of joint efforts in refining data collection, intervention strategies, and policy implementation, while emphasizing the importance of integrating community perspectives to achieve more effective and relevant outcomes in violence prevention research. Tama et al. (2023) delved into the changing opportunities and challenges of connecting academia, policymakers, and the public in international studies. They underscored the critical need for interdisciplinary collaboration, effective communication, and engagement approaches to strengthen policy effectiveness and improve public understanding in a swiftly transforming global landscape. Mah et al. (2022) explored the role of virtual monitoring systems in shaping digital teaching delivery and student evaluation. The analysis focused on effects on learning effectiveness, engagement, and performance, underlining how technology-enhanced monitoring contributes to better academic outcomes and promotes adaptive, data-driven educational frameworks. Beckley et al. (2015) investigated the “Bridges to Higher Education” initiative, which aimed to address educational disparities via collaborative programs. Their analysis focused on methods to enhance access, engagement, and achievement for underrepresented students, highlighting the importance of lifelong learning and workforce preparedness within a multifaceted educational environment. Tang et al. (2024) examined in detail how educational technology could enhance educational equity by broadening access, personalizing learning pathways, and improving resource distribution. They identified challenges, including the digital divide and barriers to effective implementation, while also emphasizing the transformative potential of technology in addressing educational gaps across various student populations.

A well-defined partnership for peer review between universities and publishing entities would serve the interests of all stakeholders. By embedding peer review training within doctoral and postdoctoral programs, universities would ensure that their students acquire crucial skills in evaluation. Consequently, publishing houses would benefit from a consistent source of qualified reviewers.

2.3 Publishing Issues and Their Leading Academic Challenges

The following paragraphs explain some of the most pressing challenges leading to issues of academic publishing:

Gap between academic institutions and publishing houses: A notable tension is developing between academic institutions and commercial publishing entities. Academic institutions prioritize OA, transparency, and the spread of knowledge, while publishers are more focused on profit-driven strategies, frequently implementing paywalls and high article processing charges (APCs). This misalignment creates barriers to equitable access, strains library budgets, and diminishes the global visibility of institutional research, particularly in regions with limited funding. Despite being the main creators of content, academics often have to “buy back” their own research or contend with accessibility issues, which reinforces a structural dependency that institutions are increasingly challenging through mandates, preprint repositories, and open-access negotiations.

Predatory journals and conferences: Predatory journals and conferences take advantage of the academic imperative to publish by imposing fees without adhering to appropriate peer review or editorial standards. They frequently present themselves as legitimate entities yet disseminate subpar or unverified research. This situation particularly misleads early-career researchers, undermines authentic scholarship, and squanders institutional resources, ultimately damaging the credibility and advancement of academic fields.

Overload and quality of peer reviews: The process of peer review plays a crucial role in upholding the quality of research; however, the influx of submissions has placed an excessive burden on reviewers. Numerous scholars encounter a rise in requests without adequate acknowledgment or compensation, leading to hasty or cursory reviews. This situation undermines the precision, equity, and promptness of feedback, thus threatening the dependability of published research and diminishing academic integrity.

Access and affordability (OA vs. paywalls): Academic knowledge frequently remains inaccessible due to paywalls, which restrict access for scholars lacking substantial institutional funding. Although OA seeks to democratize information, the imposition of high APCs transfers financial responsibilities to authors. This situation fosters inequality, particularly for researchers situated in low- and middle-income areas, and prompts inquiries regarding the sustainability and equity of academic publishing frameworks.

Plagiarism and research integrity: The growing competition and mounting pressures in the academic field have led to a significant rise in cases of plagiarism, data manipulation, and authorship disputes. These breaches of integrity severely damage the trustworthiness of scholarly works, harm reputations, and waste valuable resources in peer review and publishing processes. It is imperative to maintain high ethical standards to preserve credibility and promote responsible advancement of knowledge. This can be effectively achieved through academic collaborations between publishers and educational institutions.

Recruitment and employment conditions: PhD candidates and postdoctoral researchers experience significant pressure to publish, as their career advancement is largely contingent upon the production of numerous influential papers. Academic institutions require ongoing research output to improve their standings and draw in funding, which compels postdoctoral researchers into a cycle of temporary contracts and elevated performance expectations.

Although the academic framework depends on their contributions, postdoctoral researchers frequently lack career stability and acknowledgment despite their publishing input, resulting in a disparity between their essential role and their precarious employment circumstances. International students encounter the most pronounced pressure to publish in order to secure a university position.

Prestigious indexed journals vs. non-indexed journals in respect of OA charge ratio: Once journals achieve esteemed indexing recognition such as Scopus, WoS, and the Directory of Open Access Journals (DOAJ), they frequently transition to a profit-oriented business model by substantially raising APCs. This change emphasizes revenue generation over the accessibility of research. As APCs escalate, occasionally surpassing $5,000 per article, early-career researchers, particularly those from economically disadvantaged areas, find themselves excluded from engaging in significant academic discussions.

Table 1 presents the estimated APCs for prestigious indexed journals compared with non-indexed journals. Indexers are crucial in determining the legitimacy of journals. Consequently, they ought to implement limits or standards on permissible APCs to guarantee that indexing serves as a symbol of quality rather than a means to monetized exclusivity. In the absence of regulation, indexing may inadvertently exacerbate inequity, allowing only financially secure authors to gain visibility, which hinders diversity, global representation, and the fundamental academic objective of disseminating inclusive knowledge.

Table 1. Estimates of article processing charge (APC) for prestigious indexed journals vs. non-indexed journals: open access (OA) charge ratio (capped at $4.5k)

Publisher | Indexing (Scopus, WoS) | APC (USD) | OA Ratio | Remarks
Springer Nature Portfolio | Indexed | 2k–4.5k | 8:1 | General Springer journals; wide coverage, hybrid/Gold OA.
Springer Nature (Nature-branded) | Indexed | 3k–4.5k | 10:1 | High-impact titles like Nature, Nature Communications.
Elsevier | Indexed | 2k–4.5k | 8:1 | OA options like Cell Reports; hybrid OA.
Wiley | Indexed | 2k–4.5k | 7:1 | Broad subject areas; hybrid OA.
Taylor & Francis | Indexed | 2k–4.5k | 6:1 | Humanities and social sciences; APC varies.
Oxford Univ. Press | Indexed | 1.5k–4.5k | 5:1 | Strong in law, medicine, humanities.
Cambridge Univ. Press | Indexed | 1.2k–3.8k | 4:1 | Hybrid and full OA journals.
SAGE | Indexed | 1.8k–4.5k | 6:1 | Health/social science focus.
IEEE | Indexed | 1.8k–2.5k | 5:1 | Computer science and engineering.
MDPI | Indexed | 1.2k–2.3k | 3:1 | Rapid peer review; broad topics.
Frontiers | Indexed | 2k–4.5k | 5:1 | Fully OA; community review.
Hindawi (Wiley) | Indexed | 1.5k–3k | 4:1 | Affordable OA; fast publication.
ACM journals | Indexed | 600–2.5k | 4:1 | ACM Open; discounts for SIG members.
MIT Press journals | Indexed | 300–2k | 3:1 | OA in niche areas; supported by institutions.
IoP Publishing | Indexed | 1.5k–3k | 4:1 | Physics, materials science, engineering.
Inderscience | Indexed (hybrid) | 800–2.5k | 3:1 | Offers both subscription and OA; technology/business.
Emerald | Partly indexed | 1.3k–2.5k | 3:1 | Management/social science journals.
IGI Global | Mixed | 1.2k–2.5k | 3:1 | OA in tech/education; also book chapters.
Univ. Press journals | Mixed/non-indexed | 100–800 | 2:1 | OA supported by universities.
Local/national journals | Non-indexed | 50–300 | 1:1 | Free or nominal APCs; regional scope.

Note: WoS = Web of Science.
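The APC ranges in Table 1 can be compared programmatically. The sketch below parses a few of the table's range strings and reports a midpoint APC per publisher; the parsing conventions (the "k" suffix meaning thousands, the en dash separator) are our assumptions about the table's notation, and only a subset of rows is included.

```python
# A sketch for analyzing Table 1's APC ranges. Parsing assumptions
# (the "k" suffix, the en dash separator) are ours, not the journal's.
def parse_apc(rng: str) -> tuple[float, float]:
    """Parse an APC range like '2k–4.5k' or '600–2.5k' into USD values."""
    def to_usd(tok: str) -> float:
        return float(tok[:-1]) * 1000 if tok.endswith("k") else float(tok)
    lo, hi = rng.split("–")
    return to_usd(lo), to_usd(hi)

# A subset of Table 1, copied verbatim.
table = {
    "Springer Nature Portfolio": "2k–4.5k",
    "ACM journals": "600–2.5k",
    "Local/national journals": "50–300",
}
for publisher, rng in table.items():
    lo, hi = parse_apc(rng)
    print(f"{publisher}: midpoint APC ~ ${(lo + hi) / 2:,.0f}")
```

The midpoint is a crude summary; a fuller analysis would weight by publication volume, which Table 1 does not report.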

3. Applied Method and Materials

In order to tackle these challenges, a systematic peer review framework was proposed that incorporates PhD candidates and postdoctoral researchers, guided by faculty mentors. This framework encompasses several key components.

3.1 A Curriculum for Peer Review Model Introduced for Better Quality Control
  • Formal training: The inclusion of peer review courses in PhD and postdoctoral programs is a necessary step for universities to take.
  • Institutional partnerships: Journals and higher education institutions ought to collaborate to implement structured peer reviews for those in the early stages of their research careers.
  • Supervised peer review: PhD candidates and postdoctoral scholars ought to engage in peer review activities while being mentored by senior academic professionals.
  • Recognition and incentives: Institutions must recognize peer review contributions and integrate them into their academic promotion and ranking systems.
  • Two-tier review system: Initial assessments carried out by doctoral candidates and postdoctoral fellows should be followed by validation from experts in the field.
3.2 A Two-Tiered Peer Review Model for Better Quality Control

To improve the effectiveness of peer review and address bias, the introduction of a two-tiered system could be considered:

(1) Academic peer review by PhD candidates and postdocs: Guided by faculty mentorship, these scholars will engage in preliminary assessments within their disciplines, to ensure that the evaluations are of high quality and conducted by experts in the subject matter.

(2) Expert advisory review: External independent experts, not affiliated with academic institutions, would conduct secondary assessments to guarantee objectivity and enhance transparency.
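The two tiers described above can be sketched as a simple routing function (an illustrative sketch only; the `Review`/`Manuscript` types and the escalation rules are hypothetical, not part of the proposed framework):

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer_role: str   # e.g., "phd", "postdoc", "external_expert"
    recommendation: str  # "accept", "minor_revision", "major_revision", "reject"

@dataclass
class Manuscript:
    title: str
    tier1_reviews: list = field(default_factory=list)
    tier2_reviews: list = field(default_factory=list)

def tier1_assess(ms, reviews):
    """Tier 1: preliminary assessment by mentored PhD candidates/postdocs."""
    ms.tier1_reviews.extend(reviews)
    # Escalate to tier 2 only if no tier-1 reviewer recommends rejection.
    return all(r.recommendation != "reject" for r in reviews)

def tier2_validate(ms, reviews):
    """Tier 2: secondary validation by external, unaffiliated experts."""
    ms.tier2_reviews.extend(reviews)
    return all(r.recommendation in ("accept", "minor_revision") for r in reviews)

def two_tier_decision(ms, tier1, tier2):
    """Route a manuscript through both tiers and return a decision."""
    if not tier1_assess(ms, tier1):
        return "reject"
    return "accept" if tier2_validate(ms, tier2) else "revise"
```

For example, a manuscript whose tier-1 reviews are favorable and whose external validation recommends acceptance would be routed to `accept`, while a tier-1 rejection stops the process before the external tier is engaged.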

3.3 Journal Rankings Based on the Quality of Peer Review of Published Manuscripts under Partner Institutions

Prominent indexing services such as WoS, Scopus, and Scimago, together with Digital Object Identifier (DOI) registries, have the capacity to assess scientific quality, detect instances of misconduct, and evaluate journals according to their compliance with the ethical review standards established by affiliated institutions. Universities that participate in producing fraudulent review feedback directly affect the standing of the journals they collaborate with and should themselves be subject to penalties, including the termination of partnerships with esteemed publishers.

3.4 Academic Rankings Based on the Quality of Peer Review of Published Manuscripts under Ministerial Oversight

The Higher Education ministry should take responsibility for monitoring the scientific quality of publications, investigating any misconduct, and evaluating the articles that universities approve for dissemination, to ensure the institutions adhere to ethical review protocols. Universities found to be involved in the fraudulent endorsement of manuscripts or dishonest review processes should face consequences, such as being barred from partnerships with reputable publishers and restricted from applying for national grants.

4. AI Meta-Audit Test Case

4.1 Source of Data

The dataset for this heatmap was constructed based on evaluations of publishers in categories such as peer review partnerships, early-career contributions, review quality, and ethical concerns. Partial data sources and a completed data table were converted into Figure 1. The data sources include:

(1) Reports of journal publishers: Evaluation data from publishers such as Springer, Elsevier, Wiley-Blackwell, etc.

(2) Publons/ORCID data: Peer review activity and engagement rates.

(3) Institutional reports: Universities’ internal assessments of journal collaborations.

(4) Responses to surveys: Peer reviewers’ opinions on transparency, efficiency, and fairness.

(5) Text mining & NLP analysis: Extracting key phrases from journal review systems using:

  • Topic modeling (Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF)); and

  • Predictive analytics for detection of bias.
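As an illustration of the topic-modeling step, a minimal NMF can be written with multiplicative updates (a sketch assuming a small document-term count matrix; production work would use a library implementation of LDA or NMF):

```python
import numpy as np

def nmf(X, n_topics, n_iter=200, seed=0, eps=1e-9):
    """Minimal NMF via multiplicative updates (Frobenius objective).
    X: (documents x terms) non-negative count matrix.
    Returns W (document-topic) and H (topic-term) factors."""
    rng = np.random.default_rng(seed)
    n_docs, n_terms = X.shape
    W = rng.random((n_docs, n_topics)) + eps
    H = rng.random((n_topics, n_terms)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy document-term matrix with two visible themes (illustrative counts):
X = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [0, 0, 4, 5],
              [0, 1, 5, 4]], dtype=float)
W, H = nmf(X, n_topics=2)
```

Each row of `H` then gives a topic's weight over the terms, from which key phrases can be read off.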

Figure 1. Scores of publishers across multiple evaluation criteria and categories
4.2 Normalization (Min-Max Scaling)

To standardize the scores, Min-Max scaling was applied:

$X^{\prime}=\frac{X-X_{\min }}{X_{\max }-X_{\min }}$
(1)

where $X$ is the original score, and $X_{\min}$ and $X_{\max}$ are the minimum and maximum scores in the dataset.
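Eq. (1) can be applied directly to a column of publisher scores; a minimal sketch (the example row is taken from Table 2):

```python
def min_max_scale(scores):
    """Eq. (1): rescale a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # constant column: avoid division by zero
        return [0.0 for _ in scores]
    return [(x - lo) / (hi - lo) for x in scores]

# "Identification of peer review partnerships" row from Table 2:
scaled = min_max_scale([10, 10, 9, 8, 7, 6, 6, 5, 8, 7])
# highest score (10) maps to 1.0, lowest (5) maps to 0.0
```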

4.2.1 Mean and variance

To compute the statistical summary, the mean ($\mu$) and variance ($\sigma^2$) were calculated:

$\mu=\frac{1}{N} \sum_{i=1}^N X_i$
(2)
$\sigma^2=\frac{1}{N} \sum_{i=1}^N\left(X_i-\mu\right)^2$
(3)

where $\mu$ is the mean score for a given category, and $\sigma^2$ is the variance.
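Applied to the Turnaround Time row of Table 2 (values transcribed from the table), Eqs. (2) and (3) give a mean of 7.4 and a variance of 1.24:

```python
def mean_variance(xs):
    """Eqs. (2)-(3): population mean and variance of a score list."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

mu, var = mean_variance([9, 8, 9, 8, 7, 6, 6, 8, 7, 6])  # Turnaround time row
# mu ≈ 7.4, var ≈ 1.24 (up to floating-point rounding)
```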

4.2.2 Correlation between categories and publishers

To evaluate the correlation between different categories and publishers, Pearson’s correlation coefficient was used:

$r=\frac{\sum\left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right)}{\sqrt{\sum\left(X_i-\bar{X}\right)^2 \sum\left(Y_i-\bar{Y}\right)^2}}$
(4)

where $X_i$ and $Y_i$ are the scores of the two publishers being compared, and $\bar{X}$ and $\bar{Y}$ are the respective means.
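Eq. (4) in the same stdlib style (a sketch; any two equal-length score lists from Table 2 could be passed in):

```python
import math

def pearson_r(xs, ys):
    """Eq. (4): Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g., the Springer and Elsevier columns of Table 2 can be compared:
r = pearson_r([10, 9, 9, 10, 9, 9, 9, 10, 9],
              [9, 9, 10, 9, 10, 9, 10, 9, 10])
```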

The values in Figure 1 represent categorical scores assigned to different publishers across various criteria of evaluation. More details of evaluation can be found in the supplementary file.

Figure 1 illustrates the features of academic publishers by categorizing them into different evaluation criteria with respective scores. It offers a clear and color-coded visualization of data trends, which facilitates comparative analysis and supports decision making regarding academic collaborations and the impact of peer reviews.

5. Results

This section divides the analysis into two stages: theoretical analysis and computational analysis. The theoretical stage provides proposals to address the challenges confronting peer review. The computational stage presents the AI meta-audit test case and the analysis of its results.

The introduction of an organized peer review partnership would help:

(1) Address the scarcity of reviewers: By engaging PhD candidates and postdoctoral associates, the number of potential reviewers would be broadened.

(2) Improve review quality: The accuracy and reliability of peer reviews could be enhanced through training and mentorship initiatives led by faculty members.

(3) Ensure fairer publishing: Minimizing reliance on fee-based peer review mechanisms would contribute to a more balanced environment within academic publishing.

(4) Foster career growth: Researchers in the early stages of their careers would acquire significant experience and acknowledgment for their contributions.

5.1 Overcoming Challenges of Peer Review Compensation

An additional concern involves the monetization of peer review, as it often leads to a preference for experienced reviewers rather than early-career scholars. Some publishers have developed compensation frameworks that offer:

  • Discounted publication fees (e.g., 50% off for reviewers);
  • Review-for-publication exchanges. For instance, reviewing two articles earns a free publication slot;
  • Expedited peer review services for premium fees; and
  • Recognition for renowned conference peer review activities.

These frameworks create a skewed advantage for veteran researchers, sidelining early-career scholars. The development of organized partnerships between universities and publishing entities could facilitate just compensation for PhD candidates and postdoctoral fellows, thus allowing them to acquire significant experience while mitigating financial pressures, particularly in countries with restricted PhD funding opportunities.

5.2 Institutional Impact on Peer Review

The expansion of open-access publishing has introduced a notable issue: The presence of institutional bias in the peer review process. Certain journals may give preference to manuscripts based on the authors’ institutional ties rather than the actual quality of the research. By integrating universities into the peer review system, institutions could formulate credibility-based rankings that are in line with the standards of academic publishing, similar to the methodologies used for ranking universities.

6. Results of Analyses from the AI Meta-Audit Test Case

The data illustrating the performance of academic publishers, based on various evaluation criteria in peer review, are displayed in the bar charts shown in Figure 2.

A different structure was utilized to analyze the information. The data produce several plots, each featuring unique colors for easier comparison, thereby providing deeper insights into journal collaborations, review efficiency, quality, and inclusivity, as well as trends in acceptance and rejection, to support informed decision making.

Figure 2. Results of analyses from the AI meta-audit test case

7. Testing of Hypotheses

This is to evaluate the extent to which PhD candidates and postdoctoral fellows contribute to academic publishing activities through peer review services facilitated by academic partnerships.

7.1 Scope and Scores of Publishers

Table 2 represents the scores assigned to different publishers based on various evaluation criteria.

7.2 Hypotheses and Representation of Basic Formulae

Each hypothesis was evaluated based on the aggregated scores from the Table 2.

Table 2. Scores of different publishers based on different evaluation criteria

Criteria | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Uni. Press
Identification of peer review partnerships | 10 | 10 | 9 | 8 | 7 | 6 | 6 | 5 | 8 | 7
Assessment of peer review contributions | 9 | 9 | 9 | 8 | 7 | 7 | 5 | 8 | 7 | 6
Measurement of review quality and efficiency | 9 | 7 | 10 | 8 | 8 | 8 | 4 | 5 | 8 | 7
Impact on journal acceptance/rejection | 10 | 6 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 5
Review engagement rate | 9 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7
Turnaround time | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6
Quality of reviews | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 8 | 7
Trends of acceptance/rejection | 10 | 4 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 7
Diversity & inclusion | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 7 | 6

7.3 $\mathrm{H}_1$: PhD Candidates and Postdoctoral Fellows Help Improve the Speed and Efficiency of Peer Review

PhD candidates and postdoctoral fellows contribute to the efficiency of peer review by reducing the turnaround time. To evaluate this, we calculated the average Turnaround Time score across different publishers.

$S_{\text{efficiency}}=\frac{\sum_{i=1}^N \text{Score}_{\text{TurnaroundTime},\,i}}{N}$
(5)

where:

  • $S_{\text{efficiency}}$ represents the overall efficiency score;

  • $\text{Score}_{\text{TurnaroundTime},\,i}$ denotes the turnaround time score for the $i$-th publisher;

  • N is the total number of publishers (10).

$S_{\text{efficiency}}=\frac{(9+8+9+8+7+6+6+8+7+6)}{10}=7.4$
(6)

Interpretation: An average score of 7.4 indicated that early-career researchers were contributing positively to the efficiency of peer review. The relatively high score suggests that involving PhD candidates and postdoctoral fellows could lead to a faster review process.
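Eq. (5) can be checked directly against the Table 2 rows used in Eqs. (6)–(9) (row values transcribed from Table 2; the dictionary keys are illustrative labels):

```python
# Publisher order: Springer, Harvard Edu. Press, Elsevier, Wiley-Blackwell,
# MDPI, Emerald, Sage, MIT Press, Taylor & Francis, Oxford Uni. Press
SCORES = {
    "turnaround_time":     [9, 8, 9, 8, 7, 6, 6, 8, 7, 6],   # H1, Eq. (6)
    "diversity_inclusion": [9, 3, 10, 8, 7, 6, 6, 5, 7, 6],  # H2, Eq. (7)
    "quality_of_reviews":  [9, 3, 10, 8, 7, 6, 6, 5, 8, 7],  # H3, Eq. (8)
    "acceptance_trends":   [10, 4, 9, 8, 7, 6, 6, 5, 7, 7],  # H4, Eq. (9)
}

def category_score(category):
    """Eq. (5): average a category's scores over the N = 10 publishers."""
    xs = SCORES[category]
    return sum(xs) / len(xs)

# category_score("turnaround_time") → 7.4; the other three rows give
# 6.7, 6.9, and 6.9, matching Eqs. (7)-(9).
```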

7.4 $\mathrm{H}_2$: Academic Partnerships Help Enhance the Inclusivity of Peer Review

Academic partnerships play a crucial role in improving inclusivity by engaging a diverse group of early-career reviewers. To assess this aspect, the Diversity and Inclusion scores were averaged across publishers using the general scoring formula given in Eq. 5.

$S_{inclusivity}=\frac{(9+3+10+8+7+6+6+5+7+6)}{10}=6.7$
(7)

Interpretation: An average score of 6.7 showed that academic partnerships moderately enhanced inclusivity. However, significant variations existed among publishers (e.g., Elsevier scored 10, while Harvard Edu Press scored only 3). This suggests that while partnerships promote diversity, their effectiveness depends on the specific policies of each publisher.

7.5 $\mathrm{H}_3$: Early-Career Reviewers Provide Comparable or Superior Review Quality to Senior Reviewers

Early-career researchers, including PhD candidates and postdoctoral fellows, are increasingly involved in the peer review process. To evaluate whether their contributions are associated with comparable or improved review quality, the Quality of Reviews scores were averaged across publishers according to the general formulation in Eq. 5.

$S_{quality}=\frac{(9+3+10+8+7+6+6+5+8+7)}{10}=6.9$
(8)

Interpretation: An average score of 6.9 suggested that early-career reviewers delivered review quality that was, on average, similar to or slightly below senior reviewers. The presence of high scores (e.g., 10 for Elsevier) supports the argument that early-career researchers can provide high-quality reviews.

7.6 $\mathrm{H}_4$: Involvement of PhD Candidates/Postdoctoral Fellows Influences Patterns of Acceptance and Rejection

PhD candidates and postdoctoral fellows may have a distinct approach to evaluating manuscripts, potentially impacting trends of acceptance and rejection. To examine this possibility, the Acceptance/Rejection Trends scores were averaged across publishers using Eq. 5.

$S_{acceptance}=\frac{(10+4+9+8+7+6+6+5+7+7)}{10}=6.9$
(9)

Interpretation: An average score of 6.9 indicated that the involvement of PhD candidates and postdoctoral fellows played a measurable role in the patterns of manuscript acceptance and rejection. The variation in scores suggests that different publishers exhibit different tendencies in how they incorporate early-career reviewers into decision-making processes.

Based on the statistical results,

  • Peer review efficiency (H1) was relatively high (7.4), indicating that early-career researchers contributed effectively to the speed of peer review.

  • Inclusivity (H2) was lower (6.7), suggesting that academic partnerships could be enhanced to improve diversity.

  • Review quality (H3) and acceptance/rejection trends (H4) both scored 6.9, showing that early-career reviewers performed comparably to senior reviewers.

These results supported the hypothesis that PhD candidates and postdoctoral fellows had a positive impact on academic publishing through contributions to peer review.

8. Hypothetical Evaluations

The hypotheses explored the impact of PhD candidates and postdoctoral fellows on the efficiency, inclusivity, and decision making of peer review. They assessed how academic partnerships enhanced review quality, contributions of early-career researchers, and their influence on acceptance/rejection rates. Understanding these factors helps optimize processes of peer review, to ensure fairness, speed, and quality in academic publishing while addressing biases and promoting diversity within scholarly evaluation systems. Figure 3 presents the hypotheses and the corresponding scores for each publisher.

Testing of hypotheses revealed that PhD candidates and postdoctoral fellows could enhance the efficiency, inclusivity, and quality of peer review. Early-career reviewers provide competitive assessments, and academic partnerships improve diversity. These insights help optimize peer review, reduce biases, and promote fair and high-quality academic publishing practices.

Figure 3. Hypotheses and scores per publisher

9. Discussion

The following steps constitute the discussion in the current study for interested academicians, researchers, students, editors, and administrators.

9.1 Peer Review as a Recognized Academic Profession

Similar to the other competencies involved in academic writing and research, peer review should be recognized as a professional skill. Journal editors, who themselves frequently engage in writing and research, are professionals who determine the outcomes of submitted papers. Universities should take a more active role in the peer review process by training their PhD candidates and postdoctoral fellows to serve as academic reviewers.

Köhler et al. (2020) proposed a peer review competency framework to enhance rigor and reliability in industrial and organizational psychology, emphasizing the training of reviewers. Lamont (2012) examined academic decision making, revealing biases, networks, and institutional norms in research assessment and peer review. Grainger (2007) framed peer review as a professional duty, stressing competence, ethics, and accountability. Musselin (2013) explored the role of peer review in university governance, balancing institutional autonomy and accountability. Furthermore, Bedeian (2004) analyzed the influence of peer review on the construction of knowledge in management studies, exposing biases and power dynamics. Together, these studies highlight the impact of peer review on research quality, academic norms, and institutional governance.

9.2 Rethinking Peer Review for Conferences

Academic conferences typically employ a distinct peer review model compared to academic journals. Many of these conferences utilize a committee-based approach for peer review, and the submission fees they collect contribute to the financial support of the event. However, conference papers generally receive less thorough examination than journal articles, which are subject to more stringent evaluations and tend to have higher rates of retraction.

Adelman et al. (1976) critically assessed the case study methodology, advocating stronger validity, generalizability, and researchers' contributions to educational research. Ambrosino et al. (2025) analyzed post-COVID-19 economic policies, emphasizing government intervention, resilience, and sustainable recovery. Gottlieb et al. (2020) explored the impact of COVID-19 on professional development conferences, promoting virtual and hybrid models for accessibility and flexibility. De Picker (2020) discussed inclusion and disability activism in academia, highlighting structural reforms for equitable participation. Collectively, these studies underscore the necessity of methodological rigor, adaptive policies, technological innovation, and inclusivity in research, policymaking, and academic engagement.

To enhance the peer review process for conferences, a partnership between universities and publishers could be beneficial. By involving academic institutions in the review process, conference organizers could work alongside universities to uphold rigorous peer review standards, similar to those applied in journal publications.

9.3 Professionalizing Editorial Roles in Publishing

University recruitment teams are known for their careful selection of students based on academic qualifications, while corporate HR specialists evaluate employees according to the needs of their organizations. In a parallel manner, publishing companies should implement a systematic strategy for selecting reviewers by collaborating with universities to identify qualified PhD candidates and postdoctoral fellows.

Arsenault et al. (2021) explored the importance of journal papers to graduate students in academia, highlighting challenges and benefits such as academic development and visibility. Candal-Pedreira et al. (2023) emphasized the need for quality and transparency in peer review, advocating better training and unbiased decision making for reviewers. Lightman (2016) examined the popularization of science publishing in the 19th century, demonstrating how editors and writers expanded public engagement and reshaped science communication. Together, these studies underscore the significance of transparency, inclusivity, and accessibility in academic publishing, reinforcing the need for structural improvements to maintain the integrity and effectiveness of scholarly dissemination.

A number of universities already engage in partnerships with publishers for various publishing models, including open-access, subscription-based, and gold-standard approaches. These established collaborations could be leveraged to create structured peer review services, thus promoting a more transparent and just academic publishing system.

10. AI Meta-Audit Engine

This section examines the evaluation of academic knowledge using an AI meta-audit engine based on the following themes:

  • Academic closed-loop structure for the AI meta-audit workflow;
  • Multi-stakeholder AI meta-audit workflow;
  • AI meta-audit flow;
  • Academic assessment scorecard; and
  • AI meta-audit risk signals.

10.1 Academic Closed-Loop Structure for AI Meta-Audit Workflow

Figure 4 outlines a circular and closed-loop structure for the AI meta-audit workflow, emphasizing that academic evaluation is a perpetual process rather than a one-time occurrence.

At the top, Academic Context + Evidence represents the inputs: institutional goals, policies, manuscripts, data, and review records. These components feed into the center, the AI meta-audit engine, which operates as the main processing entity. The engine carries out integrity checks, fairness analyses, quality control, and verifications of compliance.

On the right, Scoring & Explainability converts the analysis into transparent metrics, weighted scores, and rationales. At the bottom-left, Decision Actions translate the findings into operational steps such as acceptance, revision, rejection, or escalation.

Figure 4. Academic context + evidence

Finally, the cycle leads to Audit Outputs, generating reports and recommendations that guide future evidence and policy revisions. The circular arrows indicate feedback and learning, thereby ensuring governance, accountability, and sustained enhancement of academic quality.

10.2 Workflow of Multi-Stakeholder AI Meta-Audit

Figure 5 represents a multi-stakeholder AI meta-audit workflow that integrates evidence from all levels of the scholarly publishing ecosystem prior to the commencement of automated evaluation. It expands upon the previous single-source model by introducing governance layers from publishers and editorial boards.

First, Academic Context + Evidence comprises manuscripts, datasets, methodologies, and supervision records. Following that, Publisher Context + Evidence contributes policies, ethical guidelines, transparency standards, and compliance metrics. Subsequently, Editorial Board Context + Evidence adds peer-review logs, integrity checks for reviewers, and editorial decisions.

Figure 5. Academic context-publishers-editorials + evidence

All three streams in Figure 5 converge into the AI meta-audit engine, which carries out integrity verification, fairness testing, and quality control. The engine outputs, Scoring & Explainability metrics, support Decision Actions (accept, revise, reject, and escalate). Audit Outputs provide reports, alerts, and recommendations. In essence, the flow illustrates a system of layered accountability, where mentorship, editorial governance, and publisher oversight collectively enhance the reliability of research and the measurable performance of science.

10.3 AI Meta-Audit Flow

The Input → Check → Test → Score → Decide → Report model structures an AI-assisted audit into clear and accountable phases. Input gathers context and evidence. Check verifies quality, integrity, and fairness. Test uses scenario-based and adversarial probes to challenge reliability. Score applies weighted rubrics with explainable rationales and uncertainty flags. Decide translates findings into actions such as accept, revise, reject, or escalate. Finally, Report provides transparent summaries, visual scorecards, risks, and recommendations, ensuring consistent governance, reproducibility, and reliable decision making across academic and organizational evaluations at scale.
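The six phases can be sketched as one chained function (purely illustrative; the stage logic, field names, and thresholds below are hypothetical placeholders for the checks described above):

```python
def run_meta_audit(evidence):
    """Illustrative Input -> Check -> Test -> Score -> Decide -> Report chain.
    Every stage below is a hypothetical placeholder, not a real audit rule."""
    # Check: verify integrity of the supplied evidence.
    integrity_ok = bool(evidence.get("data_available"))
    # Test: scenario-based probe (here just a reproducibility flag).
    stress_passed = bool(evidence.get("reproducible"))
    # Score: weighted rubric on a 0-100 scale.
    score = 50 + 25 * integrity_ok + 25 * stress_passed
    # Decide: map the score onto an action.
    decision = "accept" if score >= 90 else "revise" if score >= 60 else "escalate"
    # Report: transparent summary of the run.
    return {"score": score, "decision": decision,
            "report": f"score={score}, decision={decision}"}
```

Under these placeholder weights, `run_meta_audit({"data_available": True, "reproducible": True})` yields a score of 100 and an "accept" decision.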

Academic Context + Evidence → AI Meta-Audit Engine (Scoring & Explainability + Decision Actions) → Audit Outputs.

Figure 6 presents a conceptual workflow diagram for an AI-based meta-audit system intended for the evaluation of academic or research outputs. It showcases a well-organized pipeline in which inputs are analyzed by a research team composed of students, research advisors, and an AI engine; they are subsequently tested, scored, and translated into decisions and final audit reports for global ranking.

Figure 6. AI meta-audit flow
10.4 Academic Assessment Scorecard

The radar (spider) chart named AI Meta-Audit Academic Assessment Scorecard (Example) visualizes multidimensional evaluation outcomes across six criteria on a scale from 0 to 100: Research Rigor, Teaching Evidence, Policy Compliance, Integrity, Equity/Fairness, and Impact & Outcomes. Research Rigor and Impact reflect the highest performance, while Integrity is comparatively lower.

The shaded polygon in Figure 7 highlights strengths, gaps, and balance among the dimensions, hence providing rapid comparative insights. It communicates a transparent and explainable scoring system that supports evidence-based academic auditing, benchmarking, governance compliance, and focused improvement decisions within an AI-assisted evaluation framework.

Mentorship and ranking impact: Figure 7 depicts how a systematic approach to student-supervisor mentorship bolsters integrity in both research and teaching practices.

Increased scores in integrity, compliance, and rigor indicate ethical guidance, reproducibility, and responsible conduct. These advancements elevate departmental quality, enhance university performance metrics, and contribute to a stronger institutional reputation, ultimately impacting national competitiveness and promoting the rise of global university rankings through sustained academic excellence.

Figure 7. AI meta-audit radar
10.5 AI Meta-Audit Risk Signals

The AI meta-audit risk signals heatmap provides a visual framework for the assessment of research integrity and scientific quality in the peer review conducted by PhD candidates and postdoctoral fellows under the supervision of academic mentors. By monitoring risk indicators related to authorship, data availability, reproducibility, and ethical review practices, the heatmap identifies weaknesses at an early stage and supports corrective guidance. The aggregated signals inform departmental performance metrics, strengthen responsible scholarship, and translate the quality of supervised research into quantifiable outputs that contribute to institutional benchmarking, university rankings, and ultimately enhance global scientific competitiveness and impact.

Figure 8 presents a heatmap of risk signals, which encapsulates potential issues related to research integrity across various audit dimensions. The rows illustrate eight test cases (TC1–TC8), which encompass ghost authorship, absent data/code, irreproducibility, manipulation of reviews, stacking of citations, undisclosed conflicts of interest, cloning of journals, and signals from predatory venues. The columns are aligned with governance areas: authorship, availability of data, reproducibility, integrity of peer review, hygiene of citations, and conflicts of interest.

The color intensity varies from low (dark/blue) to high (yellow), with numerical scores ranging from 0 to 100 that signify the severity of risk. Clusters of high risk are evident for missing data/code, irreproducible findings, and signals from predatory venues, whereas certain cells, such as the reproducibility score in the review-manipulation case, exhibit minimal concern. This heatmap facilitates the swift identification of vulnerabilities, prioritization of inquiries, and informed decision making for mitigation within an AI-enhanced academic audit framework.

Figure 8. AI meta-audit risk signals heatmap

11. Conclusions

Academic publishing is fundamentally reliant on peer review, yet there is a notable lack of formal training for this process within universities. By integrating PhD candidates and postdoctoral fellows into organized peer review systems through institutional collaborations, we could enhance the integrity of research, address the shortage of reviewers, and promote fairer opportunities in academic publishing. It is imperative for universities, publishing companies, and indexing platforms to collaborate in creating sustainable peer review partnerships that benefit both reviewers and the wider scholarly community.

The academic publishing industry should recognize the significant role that PhD candidates and postdoctoral fellows play in the peer review process. Establishing partnerships between universities and publishers would not only improve the quality of reviews but also provide essential experience to early-career researchers, leading to a more equitable publishing environment. By professionalizing the peer review process, academia could ensure transparency and accountability in the academic publishing industry.

Author Contributions

Conceptualization, P.M.M.; methodology, P.M.M.; software, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S.; validation, P.M.M.; formal analysis, P.M.M.; investigation, P.M.M.; resources, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S.; data curation, P.M.M.; writing—original draft preparation, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S.; writing—review and editing, P.M.M.; visualization, P.M.M.; supervision, P.M.M.; project administration, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S. All authors have read and agreed to the published version of the manuscript.

Data Availability

The results of the analysis are based on information obtained from the data source: https://publons.com/static/Publons-Global-State-Of-Peer-Review-2018.pdf?utm_source=chatgpt.com. The accumulated source datasets can be found in Table A1 in the Appendix: AI meta-audit test case query questionnaire form.

Acknowledgments

This study was supported, edited, and formatted by a team of researchers from Inventive Creativity Foundation: https://inventivecreativity.org/.

Conflicts of Interest

The authors declare no conflict of interest.

References
Adelman, C., Jenkins, D., & Kemmis, S. (1976). Re‐thinking case study: Notes from the second Cambridge Conference. Camb. J. Educ., 6(3), 139–150. [Google Scholar] [Crossref]
Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing peer review. Science, 321(5885), 15–15. [Google Scholar] [Crossref]
Ambrosino, A., Bellino, E., Cedrini, M., Deleidi, M., & Gahn, S. J. (2025). Introduction to the special issue on the 20th STOREP conference: Rethinking economic policies: The role of the state in the post-Covid-19. Rev. Political Econ., 37(2), 331–333. [Google Scholar] [Crossref]
Arsenault, A. C., Heffernan, A., & Murphy, M. P. (2021). What is the role of graduate student journals in the publish-or-perish academy? Three lessons from three editors-in-chief. Int. Stud., 58(1), 98–115. [Google Scholar] [Crossref]
Beckley, A., Netherton, C., & Singh, S. (2015). Closing the gap through bridges to higher education. In Research and Development in Higher Education: Learning for Life and Work in A Complex World (Vol. 38, pp. 416–435). [Google Scholar]
Bedeian, A. G. (2004). Peer review and the social construction of knowledge in the management discipline. Acad. Manag. Learn. Educ., 3(2), 198–216. [Google Scholar] [Crossref]
Candal-Pedreira, C., Rey-Brandariz, J., Varela-Lema, L., Pérez-Ríos, M., & Ruano-Ravina, A. (2023). Challenges in peer review: How to guarantee the quality and transparency of the editorial process in scientific journals. An. Pediatr. (Engl. Ed.), 99(1), 54–59. [Google Scholar] [Crossref]
De Picker, M. (2020). Rethinking inclusion and disability activism at academic conferences: Strategies proposed by a PhD student with a physical disability. Disabil. Soc., 35(1), 163–167. [Google Scholar] [Crossref]
Gaughf, N. W. & Foster, P. S. (2016). Implementing a centralized institutional peer tutoring program. Educ. Health, 29(2), 148–151. [Google Scholar] [Crossref]
Gottlieb, M., Egan, D. J., Krzyzaniak, S. M., Wagner, J., Weizberg, M., & Chan, T. (2020). Rethinking the approach to continuing professional development conferences in the era of COVID-19. J. Contin. Educ. Health Prof., 40(3), 187–191. [Google Scholar] [Crossref]
Grainger, D. W. (2007). Peer review as professional responsibility: A quality control system only as good as the participants. Biomaterials, 28(34), 5199–5203. [Google Scholar] [Crossref]
Jefferson, T., Alderson, P., Wager, E., & Davidoff, F. (2002). Effects of editorial peer review: A systematic review. JAMA, 287(21), 2784–2786. [Google Scholar] [Crossref]
Köhler, T., González-Morales, M. G., Banks, G. C., O’Boyle, E. H., Allen, J. A., Sinha, R., Woo, S. E., & Gulick, L. M. (2020). Supporting robust, rigorous, and reliable reviewing as the cornerstone of our profession: Introducing a competency framework for peer review. Ind. Organ. Psychol., 13(1), 1–27. [Google Scholar] [Crossref]
Kousha, K. & Thelwall, M. (2024). Artificial intelligence to support publishing and peer review: A summary and review. Learn. Publ., 37(1), 4–12. [Google Scholar] [Crossref]
Lamont, M. (2012). How professors think: Inside the curious world of academic judgment. Reis, 140, 173–184. [Google Scholar]
Liang, W., Izzo, Z., Zhang, Y., Lepp, H., Cao, H., Zhao, X., Chen, L., Ye, H., Liu, S., & Huang, Z. et al. (2024). Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews. arXiv. https://arxiv.org/abs/2403.07183 [Google Scholar]
Lightman, B. (2016). Popularizers, participation and the transformations of nineteenth-century publishing: From the 1860s to the 1880s. Notes Rec. R. Soc. Hist. Sci., 70(4), 343–359. [Google Scholar] [Crossref]
Mah, P. M. (2023). The art of deep learning and natural language processing for emotional sentiment analysis on the academic scholars’ peer review process. In Proceedings of the 24th Annual Conference on Information Technology Education (pp. 186–198). [Google Scholar] [Crossref]
Mah, P. M. (2024). Predicting emotional impact on peer review, peer assessment, and self assessments using deep learning and NLP in STEM education. In International Conference on Computers in Education. [Google Scholar] [Crossref]
Mah, P. M., Skalna, I., & Offiong, U. P. (2022). Virtual monitoring as a digital delivery and assessment impact on students’ learning. In Communications of International Proceedings (Vol. 2022, Issue 3). https://ibimapublishing.com/p-articles/COVID40EDU/2022/3915022/3915022-2.pdf [Google Scholar]
Musselin, C. (2013). How peer review empowers the academic profession and university managers: Changes in relationships between the state, universities and the professoriate. Res. Policy, 42(5), 1165–1173. [Google Scholar]
Olcott IV, C., Mitchell, R. S., Steinberg, G. K., & Zarins, C. K. (2000). Institutional peer review can reduce the risk and cost of carotid endarterectomy. Arch. Surg., 135(8), 939–942. [Google Scholar] [Crossref]
Van den Pol, R. A., Reid, D. H., & Fuqua, R. W. (1983). Peer training of safety‐related skills to institutional staff: Benefits for trainers and trainees. J. Appl. Behav. Anal., 16(2), 139–156. [Google Scholar] [Crossref]
Saad, A., Jenko, N., Ariyaratne, S., Birch, N., Iyengar, K. P., Davies, A. M., Vaishya, R., & Botchu, R. (2024). Exploring the potential of ChatGPT in the peer review process: An observational study. Diabetes Metab. Syndr. Clin. Res. Rev., 18(2), 102946. [Google Scholar] [Crossref]
Tama, J., Barma, N. H., Durbin, B., Goldgeier, J., & Jentleson, B. W. (2023). Bridging the gap in a changing world: New opportunities and challenges for engaging practitioners and the public. Int. Stud. Perspect., 24(3), 285–307. [Google Scholar] [Crossref]
Tang, M., Ren, P., & Zhao, Z. (2024). Bridging the gap: The role of educational technology in promoting educational equity. Educ. Rev. (USA), 8(8), 1077–1086. [Google Scholar] [Crossref]
Tennant, J. P. (2018). The state of the art in peer review. FEMS Microbiol. Lett., 365(19), fny204. [Google Scholar] [Crossref]
Trueblood, J. S., Allison, D. B., Field, S. M., Fishbach, A., Gaillard, S. D., Gigerenzer, G., Holmes, W. R., Lewandowsky, S., Matzke, D., & Murphy, M. C. et al. (2025). The misalignment of incentives in academic publishing and implications for journal reform. Proc. Natl. Acad. Sci. U.S.A., 122(5), e2401231121. [Google Scholar] [Crossref]
Tufano, R., Dabić, O., Mastropaolo, A., Ciniselli, M., & Bavota, G. (2024). Code review automation: Strengths and weaknesses of the state of the art. IEEE Trans. Softw. Eng., 50(2), 338–353. [Google Scholar] [Crossref]
Yu, S. & Zhang, L. (2025). The impacts of “publish or perish” on Chinese and Canadian academics. In Portraits of Academic Life in Higher Education (pp. 91–105). Brill. [Google Scholar] [Crossref]
Yuan, N. P., Gaines, T. L., Jones, L. M., Rodriguez, L. M., Hamilton, N., & Kinnish, K. (2016). Bridging the gap between research and practice by strengthening academic-community partnerships for violence research. Psychol. Violence, 6(1), 27–33. [Google Scholar] [Crossref]
Appendix

Table A1. AI meta-audit test case query questionnaire form

AI Meta Audit Test Case

Impact of PhD and Postdoc Fellows on Publishing Activities via Academic Partnerships for Peer Review Services

Statistical data scored on an 11-point scale (0–10)

1. Objective

To evaluate, via an AI meta-audit, the extent to which PhD and postdoc fellows contribute to academic publishing activities through peer review services facilitated by academic partnerships.

2. Scope

| Criterion | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| Identification of peer review academic partnerships involving PhD and postdoc fellows | 10 | 10 | 9 | 8 | 7 | 6 | 6 | 5 | 8 | 7 |
| Assessment of peer review contributions from early-career researchers | 9 | 9 | 9 | 8 | 7 | 7 | 5 | 8 | 7 | 6 |
| Measurement of quality, efficiency, and inclusivity in peer review processes | 9 | 7 | 10 | 8 | 8 | 8 | 4 | 5 | 8 | 7 |
| Impact on journal acceptance/rejection rates, turnaround time, and review quality | 10 | 6 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 5 |
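To make the scope scores above easier to compare, a short sketch can aggregate them per publisher. The numbers are transcribed directly from Table A1; the abbreviated publisher labels and the choice of a simple arithmetic mean are our own illustrative assumptions, not part of the original audit.

```python
from statistics import mean

# Section 2 "Scope" scores (0-10) per publisher, transcribed from Table A1.
publishers = ["Springer", "Harvard Edu. Press", "Elsevier", "Wiley-Blackwell",
              "MDPI", "Emerald", "Sage", "MIT Press", "Taylor & Francis",
              "Oxford Univ. Press"]
scope_scores = {
    "Identification of partnerships": [10, 10, 9, 8, 7, 6, 6, 5, 8, 7],
    "Early-career contributions":     [9, 9, 9, 8, 7, 7, 5, 8, 7, 6],
    "Quality/efficiency/inclusivity": [9, 7, 10, 8, 8, 8, 4, 5, 8, 7],
    "Impact on decisions/turnaround": [10, 6, 9, 8, 7, 6, 6, 9, 7, 5],
}

# Mean score per publisher across the four scope criteria.
per_publisher = {
    pub: mean(row[i] for row in scope_scores.values())
    for i, pub in enumerate(publishers)
}
for pub, m in sorted(per_publisher.items(), key=lambda kv: -kv[1]):
    print(f"{pub:20s} {m:.2f}")
```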

3. Key metrics (Publons, WoS, ORCID)

| Metric | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| Review engagement rate: number of peer reviews completed per PhD/postdoc fellow | 9 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Turnaround time: average review duration compared to senior reviewers | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 |
| Quality of reviews: editor rating and author feedback on review quality | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 8 | 7 |
| Acceptance/rejection trends: correlation between early-career reviewer involvement and decision outcomes | 10 | 4 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 7 |
| Diversity and inclusion: representation of PhD/postdoc fellows across disciplines and demographics | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |

4. Data sources (journals)

| Data source | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| Journal databases and peer review platforms | 10 | 4 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| ResearchGate, ORCID, Publons (reviewer recognition data) | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 6 |
| Institutional partnerships and funding reports | 10 | 9 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Survey responses from editors, reviewers, and authors | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |

5. AI-powered analysis approach

| Analysis approach | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| NLP keyword analysis: evaluating entity connections and constructiveness in reviews | 9 | 6 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Topic modelling: identifying key themes in peer reviews by early-career researchers | 9 | 7 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 |
| Predictive analytics: forecasting review efficiency and impact based on past data | 10 | 4 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Bias detection: examining disparities in acceptance rates based on reviewer seniority | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 6 |
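The NLP keyword analysis named above can be illustrated with a minimal sketch. The review snippets and the stopword list here are entirely hypothetical (the audit's underlying review corpus is confidential), and simple word-frequency counting stands in for whatever keyword-extraction pipeline was actually used.

```python
from collections import Counter
import re

# Hypothetical review excerpts -- the audited review texts are not public.
reviews = [
    "The methodology is sound but the sample size limits generalisability.",
    "Clear contribution; however, the statistical analysis needs more detail.",
    "Sample selection is biased and the methodology section is underspecified.",
]

# Tiny illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "is", "but", "and", "a", "an", "more", "needs", "however"}

def keywords(texts, top=5):
    """Count content words across reviews as a crude keyword signal."""
    tokens = []
    for t in texts:
        tokens += [w for w in re.findall(r"[a-z]+", t.lower())
                   if w not in STOPWORDS]
    return Counter(tokens).most_common(top)

print(keywords(reviews))  # recurring terms such as "methodology", "sample"
```

A production version would typically add lemmatisation and named-entity recognition to capture the "entity connections" the table row refers to.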

6. Risks and challenges (specialty, journal type, place of publication, ResearchGate)

| Risk or challenge | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| Bias in reviewer selection: institutional biases affecting opportunities for early-career researchers | 9 | 5 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |
| Ethical concerns: potential conflicts of interest or reviewer inexperience | 10 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Data gaps: limited access to confidential peer review data | 9 | 7 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |

7. Evaluation & recommendations (ResearchGate)

| Recommendation | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| Strengthen academic partnerships to provide structured training for early-career reviewers | 9 | 3 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Encourage journals to implement double-blind peer review to mitigate bias | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |
| Utilize AI-driven reviewer assignment to balance expertise and diversity | 10 | 3 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| Develop recognition frameworks (e.g., Publons credits) to motivate PhD/postdoc participation | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |

What hypothesis is supported by each set of scores above?

8. Hypotheses

| Hypothesis | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emerald | Sage | MIT Press | Taylor & Francis | Oxford Univ. Press |
|---|---|---|---|---|---|---|---|---|---|---|
| H1: PhD and postdoc fellows improve the speed and efficiency of peer review | 9 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
| H2: Academic partnerships enhance the inclusivity of peer review | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 |
| H3: Early-career reviewers provide comparable or superior review quality to senior reviewers | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 8 | 7 |
| H4: Involvement of PhD/postdoc fellows influences acceptance and rejection patterns | 10 | 4 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 7 |
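The per-hypothesis scores can be summarised in a few lines. The values are taken from the hypothesis rows of Table A1; the short hypothesis labels and the use of an unweighted mean across publishers as a rough "support" figure are our own illustrative choices.

```python
from statistics import mean

# Hypothesis scores per publisher (Springer ... Oxford Univ. Press), Table A1.
hypotheses = {
    "H1 speed/efficiency":  [9, 7, 10, 8, 8, 7, 7, 6, 8, 7],
    "H2 inclusivity":       [9, 8, 9, 8, 7, 6, 6, 8, 7, 6],
    "H3 review quality":    [9, 3, 10, 8, 7, 6, 6, 5, 8, 7],
    "H4 decision patterns": [10, 4, 9, 8, 7, 6, 6, 5, 7, 7],
}

# Unweighted mean across the ten publishers as a crude support score.
support = {h: mean(scores) for h, scores in hypotheses.items()}
for h, m in sorted(support.items(), key=lambda kv: -kv[1]):
    print(f"{h:22s} {m:.1f}/10")
```

On this reading, H1 (speed and efficiency) receives the strongest aggregate support, with H3 and H4 tied lowest, pulled down mainly by the Harvard Edu. Press scores.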


Cite this (APA):
Mah, P. M., Muzam, J., Mbu, T. D., Mudoh, P., Baoua, M. M. A., Song, L. K., Akoko, J., Munyeshuri, E., Okello, J. A., & Salema, S. J. (2026). AI Meta-Audit Test Case: Impact of PhD Candidates and Postdoctoral Fellows on Publishing Activities via Academic Partnerships for Peer Review Services. Educ. Sci. Manag., 4(1), 1-19. https://doi.org/10.56578/esm040101
©2026 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.