Comparing ChatGPT and human expertise: exploring new avenues in peer review of single-case experimental research


Rakap S., Balikci S., Gülboy E.

EUROPEAN JOURNAL OF SPECIAL NEEDS EDUCATION, vol. 40, no. 5, pp. 809-823, 2025 (SSCI, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 40 Issue: 5
  • Publication Date: 2025
  • DOI: 10.1080/08856257.2025.2533555
  • Journal Name: EUROPEAN JOURNAL OF SPECIAL NEEDS EDUCATION
  • Journal Indexes: Social Sciences Citation Index (SSCI), Scopus, Academic Search Premier, ASSIA, FRANCIS, Periodicals Index Online, EBSCO Education Source, Education Abstracts, Educational Research Abstracts (ERA), EMBASE, ERIC (Education Resources Information Center), Index Islamicus, Linguistics & Language Behavior Abstracts, PsycINFO
  • Page Numbers: pp. 809-823
  • Affiliated with Ondokuz Mayıs University: Yes

Abstract

Peer review plays a pivotal role in validating research and upholding academic excellence. However, it grapples with problems such as reviewer reluctance, variable review durations and extended publication decision timelines. Artificial intelligence (AI), particularly ChatGPT, holds promise for augmenting the peer review process by enhancing efficiency and objectivity. This study compared peer reviews by human experts and ChatGPT for 18 single-case research design (SCRD) manuscripts in special education and psychology. The two sets of reviews were compared for concordance in manuscript quality assessments and publication decisions. Findings reveal substantial agreement in quality assessments, suggesting ChatGPT's potential to assist when guided by structured rubrics. However, low agreement in publication recommendations highlights the nuanced nature of these decisions, which are influenced by subjectivity, domain expertise and contextual understanding. This underscores the necessity of a balanced approach that leverages AI's strengths while respecting human expertise in peer review practices. Implications for practice and recommendations for future research are provided.
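The abstract reports agreement between human reviewers and ChatGPT without naming the statistic used. A common chance-corrected measure for this kind of two-rater comparison on categorical publication decisions is Cohen's kappa; the minimal sketch below illustrates how such agreement might be computed. The decision coding scheme and the data for 18 manuscripts are hypothetical placeholders, not values from the study.

```python
# Illustrative sketch only: Cohen's kappa for two raters (human vs. ChatGPT)
# on categorical publication decisions. All data below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Hypothetical decisions for 18 manuscripts:
# 0 = reject, 1 = major revision, 2 = minor revision, 3 = accept
human_decisions   = [1, 0, 2, 1, 3, 0, 1, 2, 2, 1, 0, 3, 1, 2, 0, 1, 2, 1]
chatgpt_decisions = [1, 1, 2, 2, 3, 0, 1, 2, 1, 1, 0, 3, 2, 2, 0, 1, 1, 1]

kappa = cohen_kappa_score(human_decisions, chatgpt_decisions)
# By the common Landis & Koch reading, 0.61-0.80 is "substantial" agreement.
print(f"Cohen's kappa: {kappa:.2f}")
```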