RYSLING COLLOQUIUM
This Thursday, March 2nd, Amanda Rysling (UMass Amherst) will be giving a colloquium talk at 1:30pm in Humanities 2, Room 259. Her talk is entitled “Preferential early attribution in segmental perception” and the abstract is given below.
Recognizing the speech we hear as the sounds of the languages we speak requires solving a parsing problem: mapping from the acoustic input we receive to the sounds and words we recognize as our language. How listeners solve this problem shapes the phonologies of the world’s languages.
Most work on segmental perception has focused on how listeners successfully disentangle the effects of segmental coarticulation. An assumption of this literature is that listeners almost always attribute the acoustic products of articulation to the sounds whose articulation created those products. As a result, listeners usually judge two successive phones to be maximally distinct from each other in clear listening conditions. Few studies (Fujimura, Macchi, & Streeter, 1978; Kingston & Shinya, 2003; Repp, 1983) have examined cases in which listeners seem to systematically “mis-parse” (Ohala, 1981, et seq.), hearing two sounds in a row as similar to each other, and apparently failing to disentangle the acoustic blend that their articulations produced. I advance the hypothesis that listeners default to attributing incoming acoustic material to the first of two phones in a sequence, even when that material includes the products of the second phone’s articulation. I report studies showing that listeners persist in attributing the acoustic products of a second sound’s articulation to a first sound even when the signal conveys early, explicit evidence about the identity of that second sound. Thus, in cases in which listeners could have leveraged articulatory information to begin disentangling the first sound from the second, they did not do so. I argue that this behavior arises from a domain-general perceptual bias to construe temporally distributed input as evidence of one event rather than two.
These results support a new conceptualization of the segmental parsing problem. Since listeners necessarily perceive events in the world at a delay from when those events occurred, it may be adaptive to attribute the incoming signal to an earlier speech sound when no other determining information is available. In some cases, listeners do not disentangle the coarticulated acoustics of two sequential sounds simply because nothing compels them to. Finally, I argue that this bias has shaped the phonologies of the world’s languages, resulting in, for example, predominantly regressive assimilation of major place features.