AI Meta-Audit Test Case: Impact of PhD Candidates and Postdoctoral Fellows on Publishing Activities via Academic Partnerships for Peer Review Services
Abstract:
The peer review process is a cornerstone of academic publishing, shouldering the responsibility of ensuring the quality and integrity of scholarly work. However, it faces daunting challenges, including a lack of formal peer review training, a scarcity of reviewers, and bias in reviewer selection. This paper explores the potential of PhD candidates and postdoctoral fellows to serve as peer reviewers through academic partnerships with publishing houses. An AI meta-audit test case was employed to analyze whether publishing activities, in particular peer review services, are involved in academic partnerships, and to assess the impact of PhD candidates and postdoctoral fellows on peer review activities. Our objective is to evaluate the extent to which PhD candidates and postdoctoral fellows contribute to academic publishing activities through peer review services facilitated by academic partnerships. The project team defined key metrics within a set scope, determined data sources, designed an AI-powered analysis approach, proposed hypotheses, identified risks and challenges, and, above all, provided evaluations and recommendations. The study highlights the demand for structured peer review training, incentives for early-career researchers, and institutional collaborations to enhance the quality and efficiency of the peer review process.
1. Introduction
Academic publishing has undergone remarkable growth, but this progress is accompanied by various challenges, such as the occurrence of retractions, instances of misconduct, and complaints about the peer review process. Tennant (2018) provided an analysis of the current peer review system, outlining its benefits, shortcomings, and new developments. The study delved into conventional and open peer review approaches, drawing attention to challenges such as bias and inefficiency, and offered recommendations for improvements that could bolster transparency, accountability, and efficacy in scholarly publishing. While research output is a primary focus of institutions, there is insufficient attention given to the training and recognition of peer reviewers. Mah (2023) examined the integration of deep learning and natural language processing (NLP) techniques for conducting emotional sentiment analysis in the context of academic peer reviews. This study assessed the influence of sentiment on the outcomes of reviews and introduced AI-based methodologies to evaluate aspects such as reviewers’ bias, tone, and fairness, with the aim of promoting transparency and objectivity in scholarly publishing practices. The lack of structured education in peer review has led to inconsistencies in evaluations, which are further intensified by dependence on senior researchers. This paper proposes a comprehensive strategy for integrating PhD candidates and postdoctoral fellows into peer review systems through partnerships between universities and publishers, thereby enhancing both the integrity of research and the professional development of early-career researchers.
The landscape of academic publishing has been confronting significant challenges that increasingly threaten the integrity of research and its accessibility. Predatory journals exploit authors by levying fees without providing authentic peer review, thereby compromising the standards of scholarship. The mechanism of peer review is overwhelmed by a surge in the number of submissions, leading to delays and variable evaluations.
Access to research remains uneven due to conflicts between paywalled materials and the high costs associated with open access (OA). In addition, plagiarism, data manipulation, and unethical authorship practices, which undermine trust in the system, are becoming more prevalent. Finally, a metrics-oriented culture pressures researchers to publish more frequently, often at the cost of quality, and promotes questionable practices. These issues collectively call for urgent reforms in the systems of academic publishing. Trueblood et al. (2025) investigated the detrimental effects of misaligned incentives in academic publishing, particularly the focus on prestige and metrics, which compromised the quality of research. They advocated comprehensive reforms in journals to realign these incentives with the principles of scientific integrity, thereby fostering transparency, replication, and significant scholarly contributions. Besides, Yu & Zhang (2025) investigated the impact of the “publish or perish” culture on academics in both China and Canada. Their findings indicated that this culture induced stress, diminished the quality of research, and led to inequities. Although both nations experience pressure to publish, the institutional policies and cultural contexts result in varied academic experiences and coping mechanisms.
The landscape of academic publishing is under considerable strain due to various issues, such as incidents of retractions, unethical academic behavior, peer review disputes, and the emergence of predatory publishing. A systematic review by Jefferson et al. (2002) examined the impact of editorial peer review on the quality of publications. Their research indicated that there was limited empirical evidence supporting the effectiveness of this process, and revealed biases and inconsistencies. They called for more stringent research efforts to evaluate and improve the practices of peer review in the realm of academic publishing. Despite extensive conversations about these challenges, there has been little effective action to enhance the peer review process. While there is growing institutional emphasis on teaching, mentoring, and career opportunities, the critical function of peer review remains largely undervalued and insufficiently evolved. The challenges of peer review were explored by Alberts et al. (2008), who identified issues such as bias, inefficiency, and the imperative for reform. They advocated a more rigorous evaluation framework, greater transparency, and innovative strategies to improve both the reliability and fairness of the peer review system in scientific publishing. Maintaining the integrity of scholarly work relies heavily on peer review, which assesses, validates, and refines contributions to knowledge. Despite the importance of academic integrity, there are no formal training courses provided at universities to effectively prepare students for the evaluation of academic contributions. This gap raises a question central to students’ career development: “Shouldn’t peer review training be a priority for academic institutions?”
2. Literature Review
The process of peer review, serving as a quality and credibility gatekeeper, is indispensable to academic publishing. Research has emphasized the shortage of available reviewers; in addition, seasoned reviewers are facing an escalating workload leading to delays. Publons, an initiative by Web of Science (WoS), has sought to acknowledge the contributions of peer reviewers; however, there is still a notable deficiency in organized training programs. Compensation-based peer review systems have resulted in inequalities, benefitting established researchers at the expense of those in the early stages of their careers. The rise of open-access publishing has further complicated the review landscape, with certain journals placing greater emphasis on institutional affiliations rather than the quality of the content.
Kousha & Thelwall (2024) provided a comprehensive review of the function of artificial intelligence in the fields of publishing and peer review. They pointed out the potential of AI to improve efficiency, recognize biases, and refine the quality of manuscripts. Their discussion included ethical considerations, existing limitations, and future prospects for the application of AI in academic communication. In an observational study, Saad et al. (2024) examined the role of ChatGPT in the peer review process. They analyzed its effectiveness in evaluating manuscripts, spotting errors, and delivering feedback. Despite the promising potential of AI to enhance efficiency, challenges related to accuracy, bias, and ethical issues continue to be significant concerns. The research conducted by Liang et al. (2024) delved into the influence of ChatGPT on peer reviews in AI conferences, thus offering a large-scale analysis of AI-modified content. They assessed the ramifications of AI-generated reviews on the quality of evaluation, the presence of bias, and consistency, while also shedding light on the obstacles in detecting AI-assisted reviews and the importance of fairness in the peer review process. Tufano et al. (2024) explored the benefits and drawbacks of automated code review tools. Their research reviewed current strategies and measured their success in identifying issues, enhancing the quality of software, and reducing the workload for developers. While automation offers significant improvements in efficiency, it faces challenges in managing complex code patterns and ensuring the accuracy of the review process. Mah (2024) analyzed the emotional ramifications of peer review, peer assessment, and self-assessment in the context of STEM education, employing deep learning and NLP techniques.
This study aims to predict emotional responses, assess the trends of sentiment, and promote fairness in evaluations, ultimately offering insights to enhance the processes of feedback within educational frameworks.
Academic rankings significantly impact universities, faculty advancement, and student opportunities, yet the importance of peer review is often understated and not fully appreciated in the academic sphere. While initiatives such as Publons (by WoS) have made headway in highlighting the contributions of peer reviewers, there is a need for broader initiatives. By establishing formal partnerships between universities and publishing houses, PhD candidates and postdoctoral fellows could engage more directly in the peer review process, thereby enhancing the quality and sustainability of the review ecosystem.
Gaughf & Foster (2016) analyzed the rollout of a centralized institutional peer tutoring program. They pointed out its positive effects on student learning, academic performance, and engagement, while also recognizing the obstacles related to coordination, tutor preparation, and the sustainability of the program within educational frameworks. The research conducted by Pol et al. (1983) focused on peer training aimed at enhancing safety-related skills among institutional staff. The mutual benefits for trainers and trainees were highlighted, encompassing improved skill acquisition, retention, and workplace safety. This study emphasized the effectiveness of peer training in promoting a nurturing and effective learning environment. The research conducted by Olcott IV et al. (2000) focused on the role of institutional peer review in shaping the outcomes of carotid endarterectomy. Their analysis revealed that peer review could effectively lower surgical risks and costs through improved decision making, standardization of procedures, and increased safety of patients, thus highlighting its critical role in quality assurance for vascular surgical practices.
Serving as a peer reviewer offers the opportunity to witness first-hand the significant pressure exerted by editorial boards, particularly in adhering to deadlines. During their PhD or postdoctoral journeys, student researchers may perform peer reviews on an informal basis across numerous academic fields and departments. Nevertheless, the lack of formal peer review training programs in universities indicates a deficiency in the commitment to transparent academic publishing, even as faculty and students face mounting pressure to publish their work.
Yuan et al. (2016) addressed the need to enhance academic-community collaborations to effectively connect research with practical applications in violence studies. They pointed out the value of joint efforts in refining data collection, intervention strategies, and implementation of policies, while emphasizing the importance of integrating community perspectives to achieve more effective and relevant outcomes in violence prevention research. Research conducted by Tama et al. (2023) delved into the changing opportunities and challenges of connecting academia, policymakers, and the public in international studies. They underscored the critical need for interdisciplinary collaboration, effective communication, and engagement approaches to strengthen the effectiveness of a policy and improve public understanding in a swiftly transforming global landscape. The study by Mah et al. (2022) explored the role of virtual monitoring systems in shaping digital teaching delivery and student evaluation. The analysis focused on the effects on learning effectiveness, engagement, and performance, thus underlining how technology-enhanced monitoring contributes to better academic outcomes and promotes adaptive and data-driven educational frameworks. Beckley et al. (2015) investigated the “Bridges to Higher Education” initiative, which aimed to address educational disparities via collaborative programs. Their analysis focused on methods to enhance access, engagement, and achievement for underrepresented students, highlighting the importance of lifelong learning and preparedness for the workforce within a multifaceted educational environment. The research conducted by Tang et al. (2024) examined in detail how educational technology could enhance educational equity by broadening access, personalizing learning pathways, and improving the distribution of resources.
They identified challenges, including the digital divide and barriers to effective implementation, while also emphasizing the transformative potential of technology in addressing educational gaps across various student populations.
A well-defined partnership for peer review between universities and publishing entities would serve the interests of all stakeholders. By embedding peer review training within doctoral and postdoctoral programs, universities would ensure that their students acquire crucial skills in evaluation. Consequently, publishing houses would benefit from a consistent source of qualified reviewers.
The following paragraphs explain some of the most pressing challenges leading to issues of academic publishing:
Gap between academic institutions and publishing houses: A notable tension is developing between academic institutions and commercial publishing entities. Academic institutions prioritize OA, transparency, and the spread of knowledge, while publishers are more focused on profit-driven strategies, e.g., frequently implementing paywalls and high article processing charges (APCs). This misalignment results in barriers to equitable access, strains library budgets, and diminishes the global visibility of institutional research, particularly in regions with limited funding. Despite being the main creators of content, academics often have to “buy back” their own research or deal with issues of accessibility, reinforcing a structural dependency that institutions are increasingly challenging through mandates, preprint repositories, and open-access negotiations.
Predatory journals and conferences: Predatory journals and conferences take advantage of the academic imperative to publish by imposing fees without adhering to appropriate peer review or editorial standards. They frequently present themselves as legitimate entities yet disseminate subpar or unverified research. This situation particularly misleads early-career researchers, undermines authentic scholarship, and squanders institutional resources, ultimately damaging the credibility and advancement of academic fields.
Overload and quality of peer reviews: The process of peer review plays a crucial role in upholding the quality of research; however, the influx of submissions has placed an excessive burden on reviewers. Numerous scholars encounter a rise in requests without adequate acknowledgment or compensation, leading to hasty or cursory reviews. This situation undermines the precision, equity, and promptness of feedback, thus threatening the dependability of published research and diminishing academic integrity.
Access and affordability (OA vs. paywalls): Academic knowledge frequently remains inaccessible due to paywalls, which restrict access for scholars lacking substantial institutional funding. Although OA seeks to democratize information, the imposition of high APCs transfers financial responsibilities to authors. This situation fosters inequality, particularly for researchers situated in low- and middle-income areas, and prompts inquiries regarding the sustainability and equity of academic publishing frameworks.
Plagiarism and research integrity: The growing competition and mounting pressures in the academic field have led to a significant rise in cases of plagiarism, data manipulation, and authorship disputes. These breaches of integrity severely damage the trustworthiness of scholarly works, harm reputations, and waste valuable resources in peer review and publishing processes. It is imperative to maintain high ethical standards to preserve credibility and promote responsible advancement of knowledge. This can be effectively achieved through academic collaborations between publishers and educational institutions.
Recruitment and employment conditions: PhD candidates and postdoctoral researchers experience significant pressure to publish, as their career advancement is largely contingent upon the production of numerous influential papers. Academic institutions require ongoing research output to improve their standings and draw in funding, which compels postdoctoral researchers into a cycle of temporary contracts and elevated performance expectations.
Although the academic framework depends on their contributions, postdoctoral researchers frequently encounter a lack of career stability and acknowledgment despite their publishing output, resulting in a disparity between their essential role and their unstable employment circumstances. International students encounter the most pronounced pressure to publish in order to secure a position at a university.
Prestigious indexed journals vs. non-indexed journals with respect to OA charge ratio: Once journals achieve esteemed indexing recognition such as Scopus, WoS, and the Directory of OA Journals, they frequently transition to a profit-oriented business model by substantially raising APCs. This change emphasizes revenue generation over the accessibility of research. As APCs escalate, occasionally surpassing $5k per article, early-career researchers, particularly those from economically disadvantaged areas, find themselves excluded from engaging in significant academic discussions.
Table 1 presents the estimated APCs for prestigious indexed journals compared with non-indexed journals. Indexers are crucial in determining the legitimacy of journals. Consequently, they ought to implement limits or standards on permissible APCs to guarantee that indexing serves as a symbol of quality rather than a means to monetized exclusivity. In the absence of regulation, indexing may inadvertently exacerbate inequity, allowing only financially secure authors to gain visibility, which hinders diversity, global representation, and the fundamental academic objective of disseminating inclusive knowledge.
Publisher | Indexing (Scopus, WoS) | APC (USD) | OA Ratio | Remarks |
Springer Nature Portfolio | Indexed | 2k–4.5k | 8:1 | General Springer journals; wide coverage, hybrid/Gold OA. |
Springer Nature (Nature-branded) | Indexed | 3k–4.5k | 10:1 | High-impact titles like Nature, Nature Communications. |
Elsevier | Indexed | 2k–4.5k | 8:1 | OA options like Cell Reports; hybrid OA. |
Wiley | Indexed | 2k–4.5k | 7:1 | Broad subject areas; hybrid OA. |
Taylor & Francis | Indexed | 2k–4.5k | 6:1 | Humanities and social sciences; APC varies. |
Oxford Univ. Press | Indexed | 1.5k–4.5k | 5:1 | Strong in law, medicine, humanities. |
Cambridge Univ. Press | Indexed | 1.2k–3.8k | 4:1 | Hybrid and full OA journals. |
SAGE | Indexed | 1.8k–4.5k | 6:1 | Health/social science focus. |
IEEE | Indexed | 1.8k–2.5k | 5:1 | Computer science and engineering. |
MDPI | Indexed | 1.2k–2.3k | 3:1 | Rapid peer review; broad topics. |
Frontiers | Indexed | 2k–4.5k | 5:1 | Fully OA; community review. |
Hindawi (Wiley) | Indexed | 1.5k–3k | 4:1 | Affordable OA; fast publication. |
ACM journals | Indexed | 600–2.5k | 4:1 | ACM Open; discounts for SIG members. |
MIT Press journals | Indexed | 300–2k | 3:1 | OA in niche areas; supported by institutions. |
IoP Publishing | Indexed | 1.5k–3k | 4:1 | Physics, materials science, engineering. |
Inderscience | Indexed (hybrid) | 800–2.5k | 3:1 | Offers both subscription and OA; technology/business. |
Emerald | Partly indexed | 1.3k–2.5k | 3:1 | Management/social science journals. |
IGI Global | Mixed | 1.2k–2.5k | 3:1 | OA in tech/education; also book chapters. |
Univ. Press journals | Mixed/non-indexed | 100–800 | 2:1 | OA supported by universities. |
Local/national journals | Non-indexed | 50–300 | 1:1 | Free or nominal APCs; regional scope. |
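The APC gap in Table 1 can be made concrete with a small comparison script. The sketch below uses range midpoints as crude point estimates; the figures are transcribed from a subset of the table, and the ranking logic is illustrative rather than part of the study's analysis pipeline.

```python
# Illustrative sketch: compare APC ranges from Table 1 using midpoints.
# The (low, high) APC values in USD are transcribed from the table above.
apc_ranges = {
    "Springer Nature Portfolio": (2000, 4500),
    "Springer Nature (Nature-branded)": (3000, 4500),
    "Elsevier": (2000, 4500),
    "Wiley": (2000, 4500),
    "MDPI": (1200, 2300),
    "ACM journals": (600, 2500),
    "Univ. Press journals": (100, 800),
    "Local/national journals": (50, 300),
}

def midpoint(low: int, high: int) -> float:
    """Midpoint of an APC range, a crude point estimate for comparison."""
    return (low + high) / 2

# Rank publishers by estimated APC, most expensive first.
ranked = sorted(apc_ranges, key=lambda p: midpoint(*apc_ranges[p]), reverse=True)
for name in ranked:
    print(f"{name}: ~${midpoint(*apc_ranges[name]):,.0f}")
```

Even this rough estimate shows an order-of-magnitude spread between Nature-branded titles (~$3,750) and local or national journals (~$175), which is the affordability gap the text describes.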
3. Applied Method and Materials
In order to tackle these challenges, the implementation of a systematic peer review framework was proposed to incorporate PhD candidates and postdoctoral researchers, to be guided by faculty mentors. This framework encompasses several key components.
- Formal training: The inclusion of peer review courses in PhD and postdoctoral programs is a necessary step for universities to take.
- Institutional partnerships: Journals and higher education institutions ought to collaborate to implement structured peer reviews for those in the early stages of their research careers.
- Supervised peer review: PhD candidates and postdoctoral scholars ought to engage in peer review activities while being mentored by senior academic professionals.
- Recognition and incentives: Institutions must recognize and integrate the role played by the contributions of peer review in their academic promotion and ranking systems.
- Two-tier review system: Initial assessments carried out by doctoral candidates and postdoctoral fellows should be followed up by validation from experts in the field.
To improve the effectiveness of peer review and address bias, the introduction of a two-tiered system could be considered:
(1) Academic peer review by PhD candidates and postdocs: Guided by faculty mentorship, these scholars will engage in preliminary assessments within their disciplines, to ensure that the evaluations are of high quality and conducted by experts in the subject matter.
(2) Expert advisory review: External independent experts, not affiliated with academic institutions, would conduct secondary assessments to guarantee objectivity and enhance transparency.
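The two-tier flow above can be sketched as a minimal data model: tier 1 records mentored assessments by PhD candidates or postdocs, and tier 2 is an independent expert sign-off. All names, scales, and the 6.0 threshold are hypothetical illustrations, not part of the proposed framework's specification.

```python
# Minimal sketch of the proposed two-tier review flow (hypothetical API):
# tier 1 is a mentored assessment by a PhD candidate or postdoc, tier 2 is
# an independent expert validation. Names and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class Manuscript:
    title: str
    tier1_scores: list = field(default_factory=list)   # early-career assessments
    tier2_validated: bool = False                      # external expert sign-off

def tier1_review(ms: Manuscript, score: int) -> None:
    """Record a mentored preliminary assessment (0-10 scale)."""
    ms.tier1_scores.append(score)

def tier2_validate(ms: Manuscript, min_avg: float = 6.0) -> bool:
    """External expert confirms only if the tier-1 average clears a bar."""
    avg = sum(ms.tier1_scores) / len(ms.tier1_scores)
    ms.tier2_validated = avg >= min_avg
    return ms.tier2_validated

ms = Manuscript("Sample submission")
tier1_review(ms, 7)
tier1_review(ms, 8)
print(tier2_validate(ms))  # True: average 7.5 clears the 6.0 bar
```

The key design point is that tier 2 never replaces tier 1; it validates it, which mirrors the mentorship-plus-oversight structure described above.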
Prominent indexing and registration services such as WoS, Scopus, Scimago, and the Digital Object Identifier (DOI) system have the capacity to assess scientific quality, detect instances of misconduct, and evaluate journals according to their compliance with ethical review standards established by affiliated institutions. Universities that participate in the publication of fraudulent review feedback directly affect the journals they collaborate with, and should be subject to penalties, including the termination of partnerships with esteemed publishers.
National higher education ministries should take responsibility for monitoring the scientific quality of publications, investigating any misconduct, and evaluating the articles that universities approve for dissemination, to ensure that institutions adhere to ethical review protocols. Universities found to be involved in the fraudulent endorsement of manuscripts or dishonest review processes should face consequences, such as being barred from partnerships with reputable publishers and restricted from applying for national grants.
4. AI Meta Audit Test Case
The dataset for this heatmap was constructed from evaluations of publishers in categories such as peer review partnerships, early-career contributions, review quality, and ethical concerns. Partial data sources and a completed data table were consolidated into Figure 1. The data sources include:
(1) Reports of journal publishers: Evaluation data from publishers such as Springer, Elsevier, Wiley-Blackwell, etc.
(2) Publons/ORCID data: Peer review activity and engagement rates.
(3) Institutional reports: Universities’ internal assessments of journal collaborations.
(4) Responses to surveys: Peer reviewers’ opinions on transparency, efficiency, and fairness.
(5) Text mining & NLP analysis: Extracting key phrases from journal review systems using:
- Topic modeling (Latent Dirichlet Allocation and non-negative matrix factorization); and
- Predictive analytics for the detection of bias.
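Before topic models such as LDA or NMF can run, review text must be tokenized and ranked by term frequency. The stdlib-only sketch below illustrates that preprocessing and term-ranking stage; it is a simplified stand-in for the study's pipeline, and the stopword list and sample text are invented for the example.

```python
# Simplified sketch of the key-phrase extraction step. The study's pipeline
# names topic modeling (LDA/NMF); this stdlib-only stand-in illustrates the
# preprocessing and term-ranking stage that would feed such models.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "on", "for", "was", "were"}

def top_terms(text: str, k: int = 3) -> list:
    """Tokenize, drop stopwords, and return the k most frequent terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

sample = ("Peer review quality depends on reviewer training. "
          "Reviewer bias undermines peer review fairness.")
print(top_terms(sample))  # ['peer', 'review', 'reviewer']
```

In the full pipeline, the resulting term counts would form the document-term matrix that LDA or NMF factorizes into topics.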

To standardize the scores, Min-Max scaling was applied:

$$X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}} \quad (1)$$

where $X$ is the original score, $X'$ is the normalized score, and $X_{\min}$ and $X_{\max}$ are the minimum and maximum scores in the dataset.
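The Min-Max scaling step described above can be sketched in a few lines of pure Python (an illustrative stand-in, not the study's actual code):

```python
# Min-Max scaling of category scores into [0, 1] (sketch; pure Python).
def min_max_scale(scores: list) -> list:
    """Map each score X to (X - Xmin) / (Xmax - Xmin)."""
    x_min, x_max = min(scores), max(scores)
    return [(x - x_min) / (x_max - x_min) for x in scores]

print(min_max_scale([3, 7, 10]))  # 3 -> 0.0, 10 -> 1.0
```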
To compute the statistical summary, the mean (µ) and variance (σ²) were calculated:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} X_i \quad (2), \qquad \sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (X_i - \mu)^2 \quad (3)$$

where $\mu$ is the mean score for a given category, $\sigma^2$ is the variance, $X_i$ is the score of the $i$-th publisher in that category, and $N$ is the number of publishers.
To evaluate the correlation between different categories and publishers, Pearson’s correlation coefficient was used:

$$r = \frac{\sum_{i=1}^{N}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{N}(X_i - \bar{X})^2}\,\sqrt{\sum_{i=1}^{N}(Y_i - \bar{Y})^2}} \quad (4)$$

where $X$ and $Y$ are the score vectors of two different publishers, and $\bar{X}$ and $\bar{Y}$ are their respective means.
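The summary statistics and correlation described above amount to a few lines of pure Python; the sketch below is an illustrative implementation for readers who want to reproduce the calculations, not the study's own code.

```python
# Mean, population variance, and Pearson correlation (sketch; pure Python).
import math

def mean(xs: list) -> float:
    """mu: arithmetic mean of a score list."""
    return sum(xs) / len(xs)

def variance(xs: list) -> float:
    """sigma^2: population variance around the mean."""
    mu = mean(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def pearson(xs: list, ys: list) -> float:
    """Pearson's correlation coefficient between two score vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related vectors correlate at exactly 1.0.
print(pearson([1, 2, 3], [2, 4, 6]))
```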
The values in Figure 1 represent categorical scores assigned to different publishers across various criteria of evaluation. More details of evaluation can be found in the supplementary file.
Figure 1 illustrates the features of academic publishers by categorizing them into different evaluation criteria with respective scores. It offers a clear and color-coded visualization of data trends, which facilitates comparative analysis and supports decision making regarding academic collaborations and the impact of peer reviews.
5. Results
This section divides the analyses into two stages: theoretical analysis and computational analysis. The theoretical path provides proposals to address the challenges confronting peer review, while the computational path presents the analyses of results from the AI meta-audit test case.
The introduction of an organized peer review partnership would help:
(1) Address the scarcity of reviewers: By engaging PhD candidates and postdoctoral associates, the number of potential reviewers would be broadened.
(2) Improve review quality: The accuracy and reliability of peer reviews could be enhanced through training and faculty-led mentorship initiatives.
(3) Ensure fairer publishing: Minimizing reliance on fee-based peer review mechanisms would contribute to a more balanced environment within academic publishing.
(4) Foster career growth: Researchers in the early stages of their careers would acquire significant experience and acknowledgment for their contributions.
An additional concern involves the monetization of peer review, as it often leads to a preference for experienced reviewers rather than early-career scholars. Some publishers have developed compensation frameworks that offer:
- Discounted publication fees (e.g., 50% off for reviewers);
- Review-for-publication exchanges. For instance, reviewing two articles earns a free publication slot;
- Expedited peer review services for premium fees; and
- Recognition for renowned conference peer review activities.
These frameworks create a skewed advantage for veteran researchers, sidelining early-career scholars. The development of organized partnerships between universities and publishing entities could facilitate just compensation for PhD candidates and postdoctoral fellows, thus allowing them to acquire significant experience while mitigating financial pressures, particularly in countries with restricted PhD funding opportunities.
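The exchange schemes listed above can be modeled as a simple credit ledger. The sketch below is hypothetical: the two-reviews-per-slot ratio mirrors the review-for-publication example in the text, but the class and its rules are illustrative, not any publisher's actual policy.

```python
# Hypothetical sketch of a review-credit ledger modeling the exchange
# schemes above (e.g., two completed reviews earn one free publication
# slot). Figures are illustrative, not any publisher's policy.
class ReviewLedger:
    REVIEWS_PER_FREE_SLOT = 2

    def __init__(self):
        self.completed_reviews = 0
        self.redeemed_slots = 0

    def log_review(self) -> None:
        """Record one completed peer review."""
        self.completed_reviews += 1

    def available_slots(self) -> int:
        """Free publication slots earned but not yet redeemed."""
        earned = self.completed_reviews // self.REVIEWS_PER_FREE_SLOT
        return earned - self.redeemed_slots

    def redeem_slot(self) -> bool:
        """Spend one earned slot on a free publication, if available."""
        if self.available_slots() > 0:
            self.redeemed_slots += 1
            return True
        return False

ledger = ReviewLedger()
for _ in range(3):
    ledger.log_review()
print(ledger.available_slots())  # 1 slot earned from 3 reviews
```

A transparent ledger of this kind, administered through a university-publisher partnership, is one way early-career reviewers' contributions could be tracked and compensated on equal terms with senior researchers.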
The expansion of open-access publishing has introduced a notable issue: The presence of institutional bias in the peer review process. Certain journals may give preference to manuscripts based on the authors’ institutional ties rather than the actual quality of the research. By integrating universities into the peer review system, institutions could formulate credibility-based rankings that are in line with the standards of academic publishing, similar to the methodologies used for ranking universities.
6. Results of Analyses from AI Meta Audit Test Case
The performance of academic publishers, based on various evaluation criteria in peer review, is displayed in the bar charts shown in Figure 2.
A complementary structure was used to analyze the information: the data are rendered as several plots, each with distinct colors for easier comparison, providing deeper insights into journal collaborations, review efficiency, quality, and inclusivity, as well as trends in acceptance and rejection, to support informed decision making.

7. Testing of Hypotheses
This is to evaluate the extent to which PhD candidates and postdoctoral fellows contribute to academic publishing activities through peer review services facilitated by academic partnerships.
Table 2 represents the scores assigned to different publishers based on various evaluation criteria.
Each hypothesis was evaluated based on the aggregated scores from Table 2.
Criteria | Springer | Harvard Edu. Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & Francis | Oxford Uni. Press |
Identification of peer review partnerships | 10 | 10 | 9 | 8 | 7 | 6 | 6 | 5 | 8 | 7 |
Assessment of peer review contributions | 9 | 9 | 9 | 8 | 7 | 7 | 5 | 8 | 7 | 6 |
Measurement of review quality and efficiency | 9 | 7 | 10 | 8 | 8 | 8 | 4 | 5 | 8 | 7 |
Impact on journal acceptance/rejection | 10 | 6 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 5 |
Review engagement rate | 9 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 |
Turnaround time | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 |
Quality of reviews | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 8 | 7 |
Trends of acceptance/rejection | 10 | 4 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 7 |
Diversity & inclusion | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 7 | 6 |
PhD candidates and postdoctoral fellows contribute to the efficiency of peer review by reducing the turnaround time. To evaluate this, we calculated the average Turnaround Time score across different publishers:

$$S_{\text{efficiency}} = \frac{1}{N}\sum_{i=1}^{N} \text{Score}_{\text{TurnaroundTime},i} \quad (5)$$

where $S_{\text{efficiency}}$ represents the overall efficiency score, $\text{Score}_{\text{TurnaroundTime},i}$ denotes the turnaround time score for the $i$-th publisher, and $N$ is the total number of publishers (10).
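The per-category averaging (referred to in the text as Eq. 5) can be verified directly against Table 2. The sketch below transcribes the four hypothesis-relevant rows and reproduces the averages quoted in the interpretations that follow.

```python
# Applying the per-category averaging (Eq. 5) to the scores in Table 2.
# Rows are transcribed from the table; each list holds one score per
# publisher, in the table's column order (N = 10 publishers).
table2 = {
    "Turnaround time":                [9, 8, 9, 8, 7, 6, 6, 8, 7, 6],
    "Diversity & inclusion":          [9, 3, 10, 8, 7, 6, 6, 5, 7, 6],
    "Quality of reviews":             [9, 3, 10, 8, 7, 6, 6, 5, 8, 7],
    "Trends of acceptance/rejection": [10, 4, 9, 8, 7, 6, 6, 5, 7, 7],
}

def category_average(scores: list) -> float:
    """S = (1/N) * sum of scores, as in Eq. 5."""
    return sum(scores) / len(scores)

for category, scores in table2.items():
    print(f"{category}: {category_average(scores):.1f}")
# Turnaround time: 7.4, Diversity & inclusion: 6.7,
# Quality of reviews: 6.9, Trends of acceptance/rejection: 6.9
```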
Interpretation: An average score of 7.4 indicated that early-career researchers were contributing positively to the efficiency of peer review. The relatively high score suggests that involving PhD candidates and postdoctoral fellows could lead to a faster review process.
Academic partnerships play a crucial role in improving inclusivity by engaging a diverse group of early-career reviewers. To assess this aspect, the Diversity and Inclusion scores were averaged across publishers using the general scoring formula given in Eq. 5.
Interpretation: An average score of 6.7 showed that academic partnerships moderately enhanced inclusivity. However, significant variations existed among publishers (e.g., Elsevier scored 10, while Harvard Edu Press scored only 3). This suggests that while partnerships promote diversity, their effectiveness depends on the specific policies of each publisher.
Early-career researchers, including PhD candidates and postdoctoral fellows, are increasingly involved in the peer review process. To evaluate whether their contributions are associated with comparable or improved review quality, the Quality of Reviews scores were averaged across publishers according to the general formulation in Eq. 5.
Interpretation: An average score of 6.9 suggested that early-career reviewers delivered review quality that was, on average, similar to or slightly below senior reviewers. The presence of high scores (e.g., 10 for Elsevier) supports the argument that early-career researchers can provide high-quality reviews.
PhD candidates and postdoctoral fellows may have a distinct approach to evaluating manuscripts, potentially impacting trends of acceptance and rejection. To examine this possibility, the Acceptance/Rejection Trends scores were averaged across publishers using Eq. 5.
Interpretation: An average score of 6.9 indicated that the involvement of PhD candidates and postdoctoral fellows played a measurable role in the patterns of manuscript acceptance and rejection. The variation in scores suggests that different publishers exhibit different tendencies in how they incorporate early-career reviewers into decision-making processes.
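The four interpretations above can be checked by reproducing the averages directly from the per-publisher score rows. The following is a minimal sketch, assuming the publisher order given in the appendix scope row; the `mean_score` helper simply applies the general per-metric averaging formula.

```python
# Sketch: reproducing the average metric scores reported above by
# averaging each metric's row over the N = 10 publishers. Scores are
# copied from the per-publisher table; publisher order follows the
# scope row in the appendix (Springer first, Oxford Uni Press last).
scores = {
    "Turnaround time":                 [9, 8, 9, 8, 7, 6, 6, 8, 7, 6],
    "Quality of reviews":              [9, 3, 10, 8, 7, 6, 6, 5, 8, 7],
    "Trends of acceptance/rejection":  [10, 4, 9, 8, 7, 6, 6, 5, 7, 7],
    "Diversity & inclusion":           [9, 3, 10, 8, 7, 6, 6, 5, 7, 6],
}

def mean_score(values):
    """Average score across publishers (the general scoring formula)."""
    return sum(values) / len(values)

for metric, values in scores.items():
    print(f"{metric}: {mean_score(values):.1f}")
# Turnaround time: 7.4, Quality of reviews: 6.9,
# Trends of acceptance/rejection: 6.9, Diversity & inclusion: 6.7
```

The computed means match the interpretation scores quoted in the text (7.4, 6.9, 6.9, and 6.7).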
Based on the statistical results:
- Peer review efficiency (H1) was relatively high (7.4), indicating that early-career researchers contributed effectively to the speed of peer review.
- Inclusivity (H2) was lower (6.7), suggesting that academic partnerships could be enhanced to improve diversity.
- Review quality (H3) and acceptance/rejection trends (H4) both scored 6.9, showing that early-career reviewers performed comparably to senior reviewers.
These results supported the hypothesis that PhD candidates and postdoctoral fellows had a positive impact on academic publishing through contributions to peer review.
8. Evaluation of Hypotheses
The hypotheses explored the impact of PhD candidates and postdoctoral fellows on the efficiency, inclusivity, and decision making of peer review. They assessed how academic partnerships enhanced review quality, contributions of early-career researchers, and their influence on acceptance/rejection rates. Understanding these factors helps optimize processes of peer review, to ensure fairness, speed, and quality in academic publishing while addressing biases and promoting diversity within scholarly evaluation systems. Figure 3 presents the hypotheses and the corresponding scores for each publisher.
Testing of the hypotheses revealed that PhD candidates and postdoctoral fellows could enhance the efficiency, inclusivity, and quality of peer review. Early-career reviewers provide competitive assessments, while academic partnerships improve diversity. These insights help optimize peer review, reduce biases, and promote fair and high-quality academic publishing practices.

9. Discussion
The following points constitute the discussion of the current study, addressed to interested academicians, researchers, students, editors, and administrators.
Similar to the other competencies involved in academic writing and research, peer review should be recognized as a professional skill. Journal editors, who themselves engage frequently in writing and research, are the professionals who determine the outcomes of submitted papers. Universities should take a more active role in the peer review process by training their PhD candidates and postdoctoral fellows to serve as academic reviewers.
Köhler et al. (2020) proposed a peer review competency framework to enhance rigor and reliability in industrial and organizational psychology, thus emphasizing the training of reviewers. Lamont (2012) examined academic decision-making, in order to reveal biases, networks, and institutional norms in research assessment and peer review. Grainger (2007) framed peer review as a professional duty, in order to stress competence, ethics, and accountability. Musselin (2013) explored the role of peer review in university governance, to balance institutional autonomy and accountability. Furthermore, Bedeian (2004) analyzed the influence of peer review on the construction of knowledge in management studies, to expose biases and power dynamics. Together, these studies highlighted the impact of peer review on research quality, academic norms, and institutional governance.
Academic conferences typically employ a distinct peer review model compared to academic journals. Many of these conferences utilize a committee-based approach for peer review, and the submission fees they collect contribute to the financial support of the event. However, it is important to note that conference papers generally receive less thorough examination than journal articles, which are subject to more stringent evaluations and tend to have higher rates of retraction.
Adelman et al. (1976) critically assessed the methodology of case study, to advocate stronger validity, generalizability, and researchers’ contributions to educational research. Ambrosino et al. (2025) analyzed post-COVID-19 economic policies, to emphasize government intervention, resilience, and sustainable recovery. Gottlieb et al. (2020) explored the impact of COVID-19 on conferences related to professional development, in order to promote virtual and hybrid models for accessibility and flexibility. De Picker (2020) discussed inclusion and disability activism in academia, to highlight structural reforms for equitable participation. Collectively, these studies underscored the necessity of methodological rigor, adaptive policies, technological innovation, and inclusivity in research, policymaking, and academic engagement.
To enhance the peer review process for conferences, a partnership between universities and publishers could be beneficial. By involving academic institutions in the review process, conference organizers could work alongside universities to uphold rigorous peer review standards, similar to those applied in journal publications.
University recruitment teams are known for their careful selection of students based on academic qualifications, while corporate HR specialists evaluate employees according to the needs of their organizations. In a parallel manner, publishing companies should implement a systematic strategy for selecting reviewers by collaborating with universities to identify qualified PhD candidates and postdoctoral fellows.
Arsenault et al. (2021) explored the importance of journal papers to graduate students in academia, so as to highlight challenges and benefits like academic development and visibility. Candal-Pedreira et al. (2023) emphasized the need for quality and transparency in peer review, thus advocating better training and unbiased decision making for reviewers. Lightman (2016) examined the popularity of science publishing in the 19th century, to demonstrate how editors and writers expanded public engagement and reshaped science communication. Together, these studies underscored the significance of transparency, inclusivity, and accessibility in academic publishing, thus reinforcing the need for structural improvements to maintain the integrity and effectiveness of scholarly dissemination.
A number of universities already engage in partnerships with publishers under various publishing models, including open-access, subscription-based, and gold open-access approaches. These established collaborations could be leveraged to create structured peer review services, thus promoting a more transparent and just academic publishing system.
10. AI Meta-Audit Engine
This section examines the evaluation of academic knowledge using an AI meta-audit engine based on the following themes:
- Academic closed-loop structure for the AI meta-audit workflow;
- Multi-stakeholder AI meta-audit workflow;
- AI meta-audit flow;
- Academic Assessment Scorecard; and
- AI meta-audit risk signals.
Figure 4 outlines a circular and closed-loop structure for the AI meta-audit workflow, emphasizing that academic evaluation is a perpetual process rather than a one-time occurrence.
At the top, Academic Context + Evidence represents the inputs: institutional goals, policies, manuscripts, data, and review records. These components feed into the center, the AI meta-audit engine, which operates as the main processing entity. The engine carries out integrity checks, fairness analyses, quality control, and verifications of compliance.
On the right, Scoring & Explainability converts the analysis into transparent metrics, weighted scores, and rationales. At the bottom-left, Decision Actions translate the findings into operational steps such as acceptance, revision, rejection, or escalation.

Finally, the cycle leads to Audit Outputs, generating reports and recommendations that guide future evidence and policy revisions. The circular arrows indicate feedback and learning, thereby ensuring governance, accountability, and sustained enhancement of academic quality.
Figure 5 represents a multi-stakeholder AI meta-audit workflow that integrates evidence from all levels of the scholarly publishing ecosystem prior to the commencement of automated evaluation. It expands upon the previous single-source model by introducing governance layers from publishers and editorial boards.
First, Academic Context + Evidence comprises manuscripts, datasets, methodologies, and supervision records. Following that, Publisher Context + Evidence contributes policies, ethical guidelines, transparency standards, and compliance metrics. Subsequently, Editorial Board Context + Evidence adds peer-review logs, integrity checks for reviewers, and editorial decisions.

All three streams in Figure 5 converge into the AI meta-audit engine, which carries out integrity verification, fairness testing, and quality control. The engine outputs, Scoring & Explainability metrics, support Decision Actions (accept, revise, reject, and escalate). Audit Outputs provide reports, alerts, and recommendations. In essence, the flow illustrates a system of layered accountability, where mentorship, editorial governance, and publisher oversight collectively enhance the reliability of research and the measurable performance of science.
The Input → Check → Test → Score → Decide → Report model structures an AI-assisted audit into clear and accountable phases. Input gathers context and evidence. Check verifies quality, integrity, and fairness. Test uses scenario-based and adversarial probes to challenge reliability. Score applies weighted rubrics with explainable rationales and uncertainty flags. Decide translates findings into actions such as accept, revise, reject, or escalate. Finally, Report provides transparent summaries, visual scorecards, risks, and recommendations, ensuring consistent governance, reproducibility, and reliable decision-making across academic and organizational evaluations at scale.
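The six-phase model above can be sketched as a single pipeline function. This is a minimal illustration under stated assumptions: the evidence fields, the flat 20-point penalty per flag, and the decision thresholds are all hypothetical choices for demonstration, not the behavior of any specific meta-audit engine.

```python
# Sketch of the Input -> Check -> Test -> Score -> Decide -> Report
# phases as one audit cycle. All field names, weights, and thresholds
# are illustrative assumptions.
def run_meta_audit(evidence):
    """Run one audit cycle over a dict of evidence fields."""
    # Input: gather context and evidence into a single audit record.
    record = {"evidence": evidence, "flags": []}

    # Check: verify quality/integrity prerequisites are present.
    for field in ("manuscript", "data", "review_log"):
        if field not in evidence:
            record["flags"].append(f"missing:{field}")

    # Test: a trivial scenario-based probe (reproducibility challenge).
    if not evidence.get("reproducible", False):
        record["flags"].append("irreproducible")

    # Score: weighted rubric with an uncertainty flag.
    record["score"] = max(100 - 20 * len(record["flags"]), 0)
    record["uncertain"] = len(record["flags"]) >= 3

    # Decide: translate the score into an action.
    if record["score"] >= 80:
        record["decision"] = "accept"
    elif record["score"] >= 50:
        record["decision"] = "revise"
    else:
        record["decision"] = "escalate"

    # Report: transparent summary of score, decision, and risks.
    record["report"] = (f"score={record['score']} "
                        f"decision={record['decision']} "
                        f"flags={record['flags']}")
    return record

result = run_meta_audit({"manuscript": "...", "data": "...",
                         "review_log": "...", "reproducible": True})
print(result["report"])  # score=100 decision=accept flags=[]
```

The design point the sketch makes is that each phase reads only the record produced by the previous one, so every decision in the report can be traced back to a named flag, which is what makes the workflow accountable.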
Academic Context + Evidence → AI Meta-Audit Engine → (Scoring & Explainability + Decision Actions) → Audit Outputs.
Figure 6 presents a conceptual workflow diagram for an AI-based meta-audit system intended for the evaluation of academic or research outputs. It showcases a well-organized pipeline in which inputs are analyzed by a research team composed of students, research advisors, and an AI engine; they are subsequently tested, scored, and translated into decisions and final audit reports for global ranking.

The radar (spider) chart named AI Meta-Audit Academic Assessment Scorecard (Example) visualizes multidimensional evaluation outcomes across six criteria on a scale from 0 to 100: Research Rigor, Teaching Evidence, Policy Compliance, Integrity, Equity/Fairness, and Impact & Outcomes. Research Rigor and Impact reflect the highest performance, while Integrity is comparatively lower.
The shaded polygon in Figure 7 highlights strengths, gaps, and balance among the dimensions, hence providing rapid comparative insights. It communicates a transparent and explainable scoring system that supports evidence-based academic auditing, benchmarking, governance compliance, and focused improvement decisions within an AI-assisted evaluation framework.
Mentorship and ranking impact: Figure 7 depicts how a systematic approach to student-supervisor mentorship bolsters integrity in both research and teaching practices.
Increased scores in integrity, compliance, and rigor indicate ethical guidance, reproducibility, and responsible conduct. These advancements elevate departmental quality, enhance university performance metrics, and contribute to a stronger institutional reputation, ultimately impacting national competitiveness and promoting the rise of global university rankings through sustained academic excellence.

The AI meta-audit risk signals heatmap provides a visual framework for the assessment of research integrity and scientific quality in the peer review conducted by PhD candidates and postdoctoral fellows under the supervision of academic mentors. By monitoring risk indicators related to authorship, data availability, reproducibility, and ethical review practices, the heatmap identifies weaknesses at an early stage and supports corrective guidance. The aggregated signals inform departmental performance metrics, strengthen responsible scholarship, and translate the quality of supervised research into quantifiable outputs that contribute to institutional benchmarking, university rankings, and ultimately enhance global scientific competitiveness and impact.
Figure 8 presents a heatmap of risk signals, which encapsulates potential issues related to research integrity across various audit dimensions. The rows illustrate eight test cases (TC1–TC8), which encompass ghost authorship, absent data/code, irreproducibility, manipulation of reviews, stacking of citations, undisclosed conflicts of interest, cloning of journals, and signals from predatory venues. The columns are aligned with governance areas: authorship, availability of data, reproducibility, integrity of peer review, hygiene of citations, and conflicts of interest.
The color intensity varies from low (dark/blue) to high (yellow), with numerical scores ranging from 0 to 100 that signify the severity of risk. Clusters of high risk are evident for missing data/code, irreproducible findings, and signals from predatory venues, whereas certain cells, such as the reproducibility score for the review-manipulation case, exhibit minimal concern. This heatmap facilitates the swift identification of vulnerabilities, the prioritization of inquiries, and informed decision making for mitigation within an AI-enhanced academic audit framework.
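The core operation behind such a heatmap, scanning a test-case-by-governance-area matrix for cells above a severity threshold, can be illustrated with a short sketch. The test cases, governance areas, and numeric values below are illustrative assumptions on the paper's 0-100 scale, not the actual audit data.

```python
# Sketch: flagging high-risk cells in a risk-signals matrix like the
# heatmap described above. Rows are test cases, columns are governance
# areas; all values are hypothetical examples on a 0-100 severity scale.
test_cases = ["TC1 ghost authorship", "TC2 missing data/code",
              "TC3 irreproducibility", "TC4 review manipulation"]
areas = ["authorship", "data availability", "reproducibility",
         "peer-review integrity", "citation hygiene",
         "conflicts of interest"]

risk = [
    [85, 20, 15, 30, 25, 40],   # TC1: high authorship risk
    [10, 95, 60, 20, 15, 10],   # TC2: high data-availability risk
    [15, 70, 90, 25, 20, 10],   # TC3: high reproducibility risk
    [20, 10,  5, 88, 35, 30],   # TC4: high peer-review-integrity risk
]

def high_risk_cells(matrix, threshold=80):
    """Return (test_case, area, value) for cells at/above threshold."""
    hits = []
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value >= threshold:
                hits.append((test_cases[i], areas[j], value))
    return hits

for tc, area, value in high_risk_cells(risk):
    print(f"{tc} / {area}: {value}")
```

In this toy matrix each test case triggers exactly one high-severity cell, mirroring how the described heatmap lets an auditor prioritize inquiries cell by cell rather than per manuscript.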

11. Conclusions
Academic publishing is fundamentally reliant on peer review, yet there is a notable lack of formal training for this process within universities. By integrating PhD candidates and postdoctoral fellows into organized peer review systems through institutional collaborations, we could enhance the integrity of research, address the shortage of reviewers, and promote fairer opportunities in academic publishing. It is imperative for universities, publishing companies, and indexing platforms to collaborate in creating sustainable peer review partnerships that benefit both reviewers and the wider scholarly community. The academic publishing industry should recognize the significant role that PhD candidates and postdoctoral fellows play in the peer review process. Establishing partnerships between universities and publishers would not only improve the quality of reviews but also provide essential experience to early-career researchers, leading to a more equitable publishing environment. By professionalizing the peer review process, academia could ensure transparency and accountability in the academic publishing industry.
Conceptualization, P.M.M.; methodology, P.M.M.; software, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S.; validation, P.M.M.; formal analysis, P.M.M.; investigation, P.M.M.; resources, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S.; data curation, P.M.M.; writing—original draft preparation, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S.; writing—review and editing, P.M.M.; visualization, P.M.M.; supervision, P.M.M.; project administration, P.M.M., J.M., T.D.M., P.M., M.M.A.B., L.K.S., J.A., E.M., J.A.O., & S.J.S. All authors have read and agreed to the published version of the manuscript.
The results of the analysis are derived from the data source: https://publons.com/static/Publons-Global-State-Of-Peer-Review-2018.pdf. The compiled source datasets can be found in Table A1 in the Appendix: AI meta-audit test case query questionnaire form.
This study was supported, edited, and formatted by a team of researchers from Inventive Creativity Foundation: https://inventivecreativity.org/.
The authors declare no conflict of interest.
Table A1. AI meta-audit test case query questionnaire form
AI Meta Audit Test Case | Impact of PhD and Postdoc Fellows on Publishing Activities via Academic Partnerships for Peer Review Services | Statistical data according to levels (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
1. Objective | To evaluate, via the AI meta-audit, the extent to which PhD and postdoc fellows contribute to academic publishing activities through peer review services facilitated by academic partnerships.
2. Scope | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press | |
Identification of peer review academic partnerships involving PhD and Postdoc fellows. | 10 | 10 | 9 | 8 | 7 | 6 | 6 | 5 | 8 | 7 | |
Assessment of peer review contributions from early-career researchers. | 9 | 9 | 9 | 8 | 7 | 7 | 5 | 8 | 7 | 6 | |
Measurement of quality, efficiency, and inclusivity in peer review processes. | 9 | 7 | 10 | 8 | 8 | 8 | 4 | 5 | 8 | 7 | |
Impact on journal acceptance/rejection rates, turnaround time, and review quality. | 10 | 6 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 5 | |
3. Key metrics (Publon, WoS, ORCIDs) | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press | |
Review engagement rate: Number of peer reviews completed per PhD/Postdoc fellow. | 9 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Turnaround time: Average review duration compared to senior reviewers. | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 | |
Quality of reviews: Editor rating and author feedback on review quality. | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 8 | 7 | |
Acceptance/rejection trends: Correlation between early-career reviewer involvement and decision outcomes. | 10 | 4 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 7 | |
Diversity and inclusion: Representation of PhD/Postdoc fellows across disciplines and demographics. | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 7 | 6 | |
4. Data sources (journals) | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press | |
Journal databases and peer review platforms. | 10 | 4 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
ResearchGate, ORCID, Publons (reviewer recognition data). | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 6 | |
Institutional partnerships and funding reports. | 10 | 9 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Survey responses from editors, reviewers, and authors. | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 | |
5. AI-powered analysis | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press | |
NLP Keywords analysis: Evaluating review Entity connections and constructiveness. | 9 | 6 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Topic modelling: Identifying key themes in peer reviews by early-career researchers. | 9 | 7 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 | |
Predictive analytics: Forecasting review efficiency and impact based on past data. | 10 | 4 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Bias detection: Examining disparities in acceptance rates based on reviewer seniority. | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 9 | 7 | 6 | |
6. Risks and challenges (specialty, type of journal, place of publication ResearchGate) | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press | |
Bias in reviewer selection: Institutional biases affecting opportunities for early-career researchers. | 9 | 5 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 | |
Ethical concerns: Potential conflicts of interest or reviewer inexperience. | 10 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Data gaps: Limited access to confidential peer review data. | 9 | 7 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 | |
7. Evaluation & recommendations (ResearchGate) | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press |
Strengthen academic partnerships to provide structured training for early-career reviewers. | 9 | 3 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Encourage journals to implement double-blind peer review to mitigate bias. | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 | |
Utilize AI-driven reviewer assignment to balance expertise and diversity. | 10 | 3 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
Develop recognition frameworks (e.g., Publons credits) to motivate PhD/Postdoc participation. | 9 | 3 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 6 | |
What is the hypothesis based on each score above? | |||||||||||
8. Hypotheses | Springer | Harvard Edu Press | Elsevier | Wiley-Blackwell | MDPI | Emeralds | Sage | MIT Press | Taylor & F | Oxford Uni Press |
H1: PhD and Postdoc fellows improve the speed and efficiency of peer review. | 9 | 7 | 10 | 8 | 8 | 7 | 7 | 6 | 8 | 7 | |
H2: Academic partnerships enhance the inclusivity of peer review. | 9 | 8 | 9 | 8 | 7 | 6 | 6 | 8 | 7 | 6 | |
H3: Early-career reviewers provide comparable or superior review quality to senior reviewers. | 9 | 3 | 10 | 8 | 7 | 6 | 6 | 5 | 8 | 7 | |
H4: Involvement of PhD/Postdoc fellows influences acceptance and rejection patterns. | 10 | 4 | 9 | 8 | 7 | 6 | 6 | 5 | 7 | 7 |
