Maggie Dochnal
If you are neurodivergent (like me), then participating in cognitive neuroscience and psychology experiments can be a very unpleasant experience. In my first semester at George Washington University, I signed up for a study where I had to perform a distance perception task: I would be shown pairs of dots and judge whether they were farther apart than the last pair. I sat in a sterile room in my school’s psychology department building for what felt like hours, staring at the same stimuli over and over and over again. At some point, I zoned out and carelessly answered on autopilot until I finished the last trial. The experience was so negative that, two years later, I still have not participated in another experiment. My experience is by no means uncommon. The history of cognitive psychology is riddled with experiment designs that create a painfully dull experience for participants (Chandler et al., 2020; Siritzky et al., 2023). However, if you are neurodivergent or score high on measures of attentional deficit, then you may feel these effects more acutely than neurotypical participants do. It can almost feel as if the experiments are designed to weed you out. With roughly 6% of American adults diagnosed with attention-deficit/hyperactivity disorder (ADHD; Staley et al., 2024) and 2% diagnosed with autism spectrum disorder (ASD; Dietz et al., 2020), which aspects of experiment design drive this exclusion of neurodivergent participants, and how can they be addressed?
Neurodivergent exclusion in experiment design
Cognitive psychology experiments tend to be long. While most studies break their trials into multiple blocks to prevent fatigue from affecting outcomes, the sheer number of trials can still erode a participant’s concentration (Siritzky et al., 2023). Disengaged participants can fall into the trap of carelessly completing trials just to get the experience over with, which significantly damages the quality of the data collected (Chandler et al., 2020). To remedy this issue, researchers often clean their data to remove inattentive participants. Common data cleaning practices include removing trials where participants responded implausibly fast, removing participants whose accuracy fell below a given threshold, and removing participants who failed attention checks: trials or survey items meant to flag inattentive participants (Grady et al., 2022; Siritzky et al., 2023; Vaknin et al., 2023).
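To make these practices concrete, here is a minimal sketch of how such exclusion rules might be implemented. The column names, thresholds, and pandas-based approach are illustrative assumptions on my part, not a reconstruction of any cited study’s pipeline:

```python
import pandas as pd

# Illustrative thresholds -- real studies set these per design.
MIN_RT_MS = 200           # responses faster than this are treated as careless
MIN_ACCURACY = 0.60       # participants below this overall accuracy are dropped
MIN_CHECKS_PASSED = 0.75  # proportion of attention checks that must be passed

def clean_dataset(trials: pd.DataFrame) -> pd.DataFrame:
    """Apply three common exclusion rules to a trial-level dataset.

    Assumes (hypothetical) columns: participant_id, rt_ms,
    correct (0/1), is_attention_check (bool), passed_check (0/1).
    """
    # Rule 1: drop individual trials with implausibly fast responses.
    trials = trials[trials["rt_ms"] >= MIN_RT_MS]

    # Rule 2: drop participants whose overall accuracy falls below threshold.
    accuracy = trials.groupby("participant_id")["correct"].mean()
    keep_acc = accuracy[accuracy >= MIN_ACCURACY].index

    # Rule 3: drop participants who failed too many attention checks.
    checks = trials[trials["is_attention_check"]]
    pass_rate = checks.groupby("participant_id")["passed_check"].mean()
    keep_checks = pass_rate[pass_rate >= MIN_CHECKS_PASSED].index

    return trials[
        trials["participant_id"].isin(keep_acc)
        & trials["participant_id"].isin(keep_checks)
    ]
```

Note that each rule is indifferent to why a participant was fast or inaccurate, which is exactly how attentional differences can be filtered out silently.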
Disengagement stemming from long study procedures may even cause participants to remove themselves from a study. Hoerger (2024) found that dropout rates can climb by as much as 2% of participants for every 100 survey items. Neurodivergent participants, especially those with ADHD or ASD, are more likely to be negatively affected by the length of experimental procedures (Ptacek et al., 2019; Siritzky et al., 2023), and the problem is made worse when stimuli are repetitive and testing environments are dull and unstimulating. While there is little data on dropout specifically among neurodivergent participants, the presence of neurodivergent symptoms likely exacerbates the effects of an adverse experiment environment. This does not even account for prospective neurodivergent participants like me, who are dissuaded from participating in research altogether by uncomfortable environments.
Neurodivergent exclusion in sampling
Even when neurodivergent participants complete all of their trials and survey items attentively, their data may still be removed from the dataset through selective sampling, in which members of a sample are included or excluded at the researcher’s discretion (McIntyre, 1952). In practice, this means removing participants from a sample after data have been collected. It is common for cognitive psychology studies to remove participants with traits thought to disrupt data collection, such as having a native language other than English, having a visual or cognitive impairment, or even being left-handed (Stoianov et al., 2024). Participants with ASD are sometimes on this list, as was the case in Stoianov et al. (2024), although participants with ADHD or obsessive-compulsive disorder are less commonly excluded at this stage.
Significance
This kind of bias is called systematic bias: the participants who are removed from a dataset because they failed an attention check, dropped out, or did not meet the researcher’s inclusion criteria are not evenly distributed across the population (Siritzky et al., 2023). Because of the nature of their symptoms, neurodivergent participants are disproportionately likely to be removed from the dataset, and the generalizability of many cognitive psychology studies suffers because of it (Chandler et al., 2020). The reason I can only say “likely” is that many researchers do not report neurodiversity demographic data, especially for participants who are excluded or who drop out (Siritzky et al., 2023). For this reason, Siritzky et al. (2023) described this as a “shadow” bias underlying the field: one that researchers can make inferences about but cannot understand in its full scope because of a lack of information.
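As a rough illustration of why such exclusions are systematic rather than random, the following sketch simulates a sample in which higher attentional-difficulty scores raise the chance of failing an attention check. All numbers here are invented for demonstration and do not come from any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical attentional-difficulty score per participant (0 = low).
attention_difficulty = rng.uniform(0, 1, n)

# Assumed model: failure probability rises with the difficulty score.
p_fail = 0.05 + 0.40 * attention_difficulty
failed = rng.random(n) < p_fail

retained = attention_difficulty[~failed]
print(f"Mean difficulty, full sample:      {attention_difficulty.mean():.3f}")
print(f"Mean difficulty, after exclusions: {retained.mean():.3f}")
# The retained sample skews toward low-difficulty participants, so any
# estimate computed from it generalizes poorly to the full population.
```

Under these assumptions, the cleaned sample has a visibly lower mean difficulty score than the population it was drawn from, even though every exclusion followed a neutral-looking rule.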
Toward more inclusive experiment design
The first step toward more inclusive experiment design is for researchers to report demographic data about participants who are removed from the dataset because of failed attention checks or poor performance. Although selective sampling can be exclusionary, researchers consistently state which characteristics they are selecting for, and that transparency allows for grounded, accurate critique. Extending the same transparency to attention-check and performance-based exclusions would let the field measure its “shadow” bias rather than merely infer it.
Another step researchers can take toward making their designs more inclusive is to use virtual data collection platforms such as Amazon Mechanical Turk (MTurk) and Prolific for behavioral studies (Siritzky et al., 2023). While there are valid criticisms of data quality on these platforms, with MTurk workers excluded at higher rates than Prolific workers (Vaknin et al., 2023), participating in experiments remotely may give participants greater agency over pacing and environment (Siritzky et al., 2023).
Conclusion
Traditional experimental designs and sampling practices in cognitive psychology create systematic biases against neurodivergent participants, harming the generalizability of findings to a neurodiverse population. To make designs more inclusive of neurodivergent participants, researchers can report demographic data for excluded participants and diversify data collection methods to include remote data collection and self-paced trials.
References
Chandler, J., Sisso, I., & Shapiro, D. (2020). Participant carelessness and fraud: Consequences for clinical research and potential solutions. Journal of Abnormal Psychology, 129(1), 49–55.
Dietz, P. M., Rose, C. E., McArthur, D., & Maenner, M. (2020). National and state estimates of adults with autism spectrum disorder. Journal of Autism and Developmental Disorders, 50(12), 4258–4266. https://doi.org/10.1007/s10803-020-04494-4
Grady, J. N., Cox, P. H., Nag, S., & Mitroff, S. R. (2022). Conscientiousness protects visual search performance from the impact of fatigue. Cognitive Research: Principles and Implications, 7(1), 56. https://doi.org/10.1186/s41235-022-00410-9
McIntyre, G. (1952). A method for unbiased selective sampling, using ranked sets. Australian Journal of Agricultural Research, 3(4), 385. https://doi.org/10.1071/AR9520385
Ptacek, R., Weissenberger, S., Braaten, E., Klicperova-Baker, M., Goetz, M., Raboch, J., Vnukova, M., & Stefano, G. B. (2019). Clinical implications of the perception of time in attention deficit hyperactivity disorder (ADHD): A review. Medical Science Monitor, 25, 3918–3924. https://doi.org/10.12659/MSM.914225
Siritzky, E. M., Cox, P. H., Nadler, S. M., Grady, J. N., Kravitz, D. J., & Mitroff, S. R. (2023). Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples. Cognitive Research: Principles and Implications, 8(1), 66. https://doi.org/10.1186/s41235-023-00520-y
Staley, B. S., Robinson, L. R., Claussen, A. H., Katz, S. M., Danielson, M. L., Summers, A. D., Farr, S. L., Blumberg, S. J., & Tinker, S. C. (2024). Attention-deficit/hyperactivity disorder diagnosis, treatment, and telehealth use in adults — National Center for Health Statistics Rapid Surveys System, United States, October–November 2023. MMWR. Morbidity and Mortality Weekly Report, 73. https://doi.org/10.15585/mmwr.mm7340a1
Stoianov, D., Kemp, N., Wegener, S., & Beyersmann, E. (2024). Emojis and affective priming in visual word recognition. Cognition and Emotion, 1–15. https://doi.org/10.1080/02699931.2024.2402492
Vaknin, D., Raz-Groman, Z., Scheuer, A., & Sadeh, T. (2023). Contextual reinstatement affects semantic organization. Frontiers in Psychology, 14, 1199039. https://doi.org/10.3389/fpsyg.2023.1199039