Debate continues about which result interpretation aids should be used to examine experimental effects in single-subject experimental research. In this study, we examined seven nonoverlap methods and compared the results of each method to the judgments of two visual analysts. The data sources were 36 studies of naturalistic instruction interventions for young children with disabilities and the 222 A-B graphs available in those studies. Two visual analysts judged whether a functional relationship was evident in each of the 222 A-B graphs, and a graphing program was used to derive the data for calculating each nonoverlap method. Results showed that (a) estimates of experimental effect varied across the seven nonoverlap methods and (b) the nonoverlap methods that corresponded most closely with the visual analysts' judgments of a change in data patterns differed from those that corresponded most closely with their judgments of no change in data patterns. Based on these findings, we discuss considerations for selecting and using nonoverlap methods as result interpretation aids in single-subject experimental research and offer directions for future research.