European Annals of Dental Sciences (Online), vol. 52, no. 2, pp. 97-102, 2025 (TRDizin)
Purpose: Artificial intelligence (AI)-enabled systems such as ChatGPT offer benefits in many areas of dentistry, including patient education, counseling, appointment management, and professional development. Used correctly and effectively, such technologies can improve the experience of both patients and dentists. This study aimed to determine the accuracy and readability of ChatGPT responses to common patient questions about general dentistry.

Materials and Methods: The questions most frequently asked by patients were collected using web-based tools. The accuracy and relevance of the responses were assessed subjectively by two observers using a 5-point Likert scale, and objectively by comparing the responses with the Clinical Practice Guidelines and Dental Evidence published by the American Dental Association (ADA) and with the literature. Readability was assessed using the Simple Measure of Gobbledygook (SMOG), the Flesch-Kincaid Grade Level (FKGL), and the Flesch Reading Ease Score (FRES).

Results: ChatGPT produced responses written above the reading level recommended for the average patient (SMOG: 17.91; FRES: 43.98; FKGL: 10.29). The mean Likert score was 4.55, indicating that most responses were correct apart from minor inaccuracies or omissions. The FKGL and FRES scores correspond to a difficult reading level for patients seeking answers to general dental questions.

Conclusions: ChatGPT has the potential to be a beneficial, decision-supportive tool for patients. However, it should not replace dentists, because incorrect and/or incomplete answers can negatively affect patient care.
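For context, the three readability indices are computed from word, sentence, and syllable counts. Their standard published definitions, given here as background rather than drawn from the article itself, are:

\[
\mathrm{FRES} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}
\]
\[
\mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59
\]
\[
\mathrm{SMOG} = 1.0430\,\sqrt{\text{polysyllable count} \times \frac{30}{\text{number of sentences}}} + 3.1291
\]

Higher FRES values indicate easier text (scores of roughly 60-70 are conventionally read as plain English), while FKGL and SMOG approximate the U.S. school grade level required to understand the text; this is why the reported FRES of 43.98 and SMOG of 17.91 correspond to difficult reading for the average patient.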