EUROPEAN JOURNAL OF SPECIAL NEEDS EDUCATION, vol. 40, no. 5, pp. 809-823, 2025 (SSCI, Scopus)
Peer review plays a pivotal role in validating research and upholding academic excellence. However, it grapples with problems such as reviewer reluctance, variable review durations and extended publication decision timelines. Artificial intelligence (AI), particularly ChatGPT, holds promise for augmenting the peer review process by enhancing efficiency and objectivity. This study compared peer reviews by human experts and ChatGPT for 18 single-case research design (SCRD) manuscripts in special education and psychology. Human reviewers and ChatGPT were evaluated for concordance in manuscript quality assessments and publication decisions. Findings revealed substantial agreement in quality assessments, suggesting ChatGPT's potential to assist when guided by structured rubrics. However, low agreement in publication recommendations highlights the nuanced nature of these decisions, which are influenced by subjectivity, domain expertise and contextual understanding. These results underscore the necessity of a balanced approach that leverages AI's strengths while respecting human expertise in peer review practices. Implications for practice and recommendations for future research are provided.