Emily Simon, Lauren Oey, Crystal Lee, T. Florian Jaeger, and Xin Xie
Technology has made the world an increasingly interconnected sphere, one in which conversations can occur seamlessly while speakers sit oceans apart. However, with increasing globalization come increasing demands on listeners to comprehend extensive variability in speech, particularly that of foreign-accented speakers. Nevertheless, evidence suggests that listeners rapidly adapt to accented speech across varying speaker backgrounds, differing levels of intelligibility, and relatively brief exposures (Clarke & Garrett, 2004; Bradlow & Bent, 2008; Sidaras et al., 2009). After further exposure, listeners can generalize such adaptation to novel speakers with whom they have not previously interacted (Bradlow & Bent, 2008; Baese-Berk et al., 2013). The scope of this generalization, as well as its underlying mechanism, remains unknown, largely because of the inherent difficulty of quantifying variability within and across speakers. We examine the generalizability of adaptation to accented speech in cases of exposure to multiple foreign accents. Using an online crowdsourcing paradigm, we will measure listeners' transcription accuracy after exposure to accented speech to assess generalized adaptation. During Exposure, listeners are assigned to one of three listening conditions: 5 native English speakers, 5 Mandarin-accented speakers, or 5 speakers of varying language backgrounds (Korean, Thai, Hindi, Russian, and Mandarin). After Exposure, all listeners will be tested on a novel speaker of a familiar accent and, critically, a novel speaker of a novel accent. We hypothesize that transcription accuracy for novel foreign-accented utterances will be greatest when listeners have been exposed to the most systematic variability in accented speech. Under this hypothesis, we predict that listeners exposed to multiple foreign accents will perform best when tested on a novel accent.