Mark your calendars for this year’s edition of Linguistics at Santa Cruz (LASC), the annual UCSC linguistics research conference at which second- and third-year graduate students present their research. The all-day event will take place on Saturday, March 18th, in Hum 1, Room 210. Eight talks are on the slate this year, covering a diverse range of subfields, including syntax, semantics, pragmatics, morphophonology, psycholinguistics, and numerous combinations thereof, and spanning such languages as English, Cebuano, Georgian, Estonian, Chamorro, Hawai’i Creole, and Japanese. This year’s Distinguished Alumnus Lecture will be given by Kyle Rawlins (Johns Hopkins), entitled “Unary ‘or'”. The full program can be found here. Don’t miss it!
This Tuesday, March 7th, there will be a colloquium talk given by Jennifer Smith (UNC), at 1:30pm in Hum 1, Room 210. Her talk is entitled “Unpacking the asymmetries in category-specific phonology,” and the abstract is given below:
Lexical category (N, A, V) has long been important for morphology and syntax. However, it turns out that even phonological phenomena—processes or phonotactics—sometimes apply differently to words of different lexical categories.
A typological survey of languages with category-specific phonology finds two striking asymmetries. First, category-specific phonology is skewed toward prosodic phenomena (accent, tone, word shape) rather than segmental or featural phenomena. Second, there is a hierarchy of phonological privilege N > A > V, where ‘privilege’ essentially means the ability to support greater phonological complexity.
This talk presents results from both formal phonological analysis and experimental-phonology data, arguing for the view that the skew toward prosodic phenomena comes about through extragrammatical factors (such as acquisition and diachronic change), while the hierarchy of privilege is a linguistic universal (though a ‘soft’ one that can be overcome in the face of data). Category-specific phonology has implications for theories of positional privilege in phonology; approaches to the phonology/morphosyntax interface; the formal modeling of markedness scales in natural language; and the investigation of learning biases in language acquisition and their effect on diachrony and typology.
Next week, on Wednesday, March 15th, Ivy Sichel will be giving this quarter’s Distinguished Faculty Lecture for Stevenson College. The talk begins at 4:30pm in the Stevenson Fireside Lounge, with a reception to follow. Her lecture is entitled “Ideology and Identity in the Revival of Spoken Hebrew”, and you can read more about it in the blurb below.
The revival of Spoken Hebrew took place in Palestine in the early 20th century, and is often seen as a historically unique example of successful language revival. In this talk I suggest that Hebrew is also exemplary of the ways in which our languages speak through us. What is special about Hebrew is that key properties of the revival process – its rapidity and recency – make it possible to track the mechanisms by which broader ideologies (of nation, ancestry, class, gender, etc.) come to be embedded in the languages we speak. The talk will focus on East-West diasporic dynamics in the negotiation of accent for the new spoken Hebrew, and on the shifting values of authenticity and sincerity in the construction of the new native-born speech style.
This Thursday, March 2nd, Amanda Rysling (UMass Amherst) will be giving a colloquium talk at 1:30pm in Humanities 2, Room 259. Her talk is entitled “Preferential early attribution in segmental perception” and the abstract is given below.
Recognizing the speech we hear as the sounds of the languages we speak requires solving a parsing problem: mapping from the acoustic input we receive to the sounds and words we recognize as our language. The way that listeners do this impacts the phonologies of the world’s languages.
Most work on segmental perception has focused on how listeners successfully disentangle the effects of segmental coarticulation. An assumption of this literature is that listeners almost always attribute the acoustic products of articulation to the sounds whose articulation created those products. As a result, listeners usually judge two successive phones to be maximally distinct from each other in clear listening conditions. Few studies (Fujimura, Macchi, & Streeter, 1978; Kingston & Shinya, 2003; Repp, 1983) have examined cases in which listeners seem to systematically “mis-parse” (Ohala, 1981; et seq.), hearing two sounds in a row as similar to each other, and apparently failing to disentangle the blend of their production. I advance the hypothesis that listeners default to attributing incoming acoustic material to the first of two phones in a sequence, even when that material includes the products of the second phone’s articulation. I report studies which show that listeners persist in attributing the acoustic products of a second sound’s articulation to a first sound even when the signal conveys early explicit evidence about the identity of that second sound. Thus, in cases in which listeners could have leveraged articulatory information to begin disentangling the first sound from the second, they did not do so. I argue that this behavior arises from a domain-general perceptual bias to construe temporally distributed input as evidence of one event, rather than two.
These results support a new conceptualization of the segmental parsing problem. Since listeners necessarily perceive events in the world at a delay from when those events occurred, it may be adaptive to attribute the incoming signal to an earlier speech sound when no other determining information is available. There are cases in which listeners do not disentangle the coarticulated acoustics of two sequential sounds, because they are not compelled to do so. Finally, I argue that this has affected the phonologies of the world’s languages, resulting in, for example, predominantly regressive assimilation of major place features.
The program for the next SALT (Semantics and Linguistic Theory) meeting is live, and Pranav Anand and alumnus Chris Barker (now Professor and Chair of Linguistics at NYU) are among the four invited speakers. SALT 27 will be hosted by the Linguistics Department of the University of Maryland, College Park, and will take place from May 12 through May 14, 2017. For more info, see the website here.
Are you interested in pursuing a career in computational linguistics or thinking about applying your linguistic skills in the tech industry? If so, come to the Computational Linguistics/Linguistics in the High Tech Field career workshop!
When: Thursday, March 2nd, 5:00pm-6:30pm
Where: Linguistics Common Room (Stevenson 249)
Light refreshments will be provided.
Join us this Tuesday, February 21st, for a colloquium talk by Sam Zukoff (MIT), at 1:30pm in Hum 2, Room 259. His talk is entitled “Stress Restricts Reduplication” and the abstract is given below:
This paper considers the typology of reduplicant shape, and argues that a system with freely rankable templatic constraints on reduplicant size/shape over-generates. A survey of Australian languages with quantity-insensitive, left-to-right alternating cyclic stress systems finds that monosyllabic prefixal reduplicants are not attested; all prefixal partial reduplication patterns in such languages are disyllabic. The disyllabic pattern allows for complete satisfaction of all otherwise undominated stress constraints, whereas any monosyllabic reduplicant would induce violation of one of these constraints. The typological absence of the monosyllabic pattern in these languages thus follows only if templatic constraints (“Reduplicant Size”) must be subordinated to otherwise undominated stress constraints (“Stress Requirements”). This is captured through a meta-ranking condition on the phonological grammar: StressReq >> RedSize (S>>R). The paper further explores how this meta-ranking is compatible with prosodically variable yet predictable reduplicant shape in Ponapean, and an apparently problematic case of monosyllabic reduplication in Ngan’gityemerri which turns out to be the exception that proves the rule.