Saturday, 07 November 2015

Department of Linguistics and Cognitive Science

University of Delaware


Announcements of Opportunities for Researchers in Computational Phonology


11:00 LUNCH (provided to visiting researchers) @ the department at 125 E Main Street
11:40 Move to PURNELL 118 for talks (see logistics below)
13:30 BREAK
  • Adam Jardine (UD)
  • Learning local constraints over autosegmental representations
  • slides (pdf)
  • Richard Futrell, Adam Albright, Peter Graff*, and Timothy J. O'Donnell (MIT, *Intel Corporation)
  • A Generative Interpretation of Feature Hierarchies
15:00 BREAK
  • Aleksei Nazarov and Joe Pater (UMass)               (CANCELLED)
  • Learning opacity in a stratal version of MaxEnt
  • Jeffrey Heinz, Hyun Jin Hwangbo, and Adam Jardine (UD)
  • Some implications of representing gradual oppositions directly
  • slides (pdf)
17:00 BREAK
  • Alexandra Nyman and Bruce Tesar (Rutgers)
  • Learning underlying forms in Basic CV Syllable Theory
  • slides (pdf)
  • Ryan Cotterell, Nanyun Peng and Jason Eisner (JHU)
  • Modeling Word Forms Using Latent Underlying Morphs and Phonology
18:30 BIZZ-NISS followed by informal DINNER gatherings



NECPHON begins at 12 noon and will take place in PURNELL 118 at the University of Delaware.

Beginning at 11am, a casual lunch will be provided to the visiting researchers at the Department of Linguistics and Cognitive Science, whose address is 125 E. Main Street. This webpage explains how to find the department. Around 11:40am, we should head over to Purnell, which is about a 10-minute walk away. This map shows the two locations.

[Image: walking map for NECPHON 2015]

For visitors, here is information on how to travel to the University of Delaware by car, plane, or train, along with local lodging options.



Aletheia Cui and Charles Yang (Penn)

Minimal pairs and vowel categories

An important first step in language acquisition is the discovery of vowel categories. Much current work in modeling vowel categorization assumes that vowels follow Gaussian distributions. We evaluate the Gaussian assumption and present results from a non-parametric model of vowel category acquisition. We apply a multivariate normality test to the vowel tokens from four corpora of spoken English: Hillenbrand et al. (1995), the TIMIT corpus, the SCOTUS corpus, and infant-directed speech from the CHILDES corpus. The results show that the Gaussian assumption is not always valid. We propose a non-parametric model of vowel categorization using minimal pairs. The model has a limited memory buffer that retains recently presented tokens. When a newly drawn token forms a minimal pair with a token in the buffer, the model assimilates the token to the closest cluster if the token does not contrast with the most recent word assimilated to that cluster. Otherwise, a new cluster is created. When learning stops, the tokens are classified to the closest category. Pairwise precision, recall, and F-score are calculated to evaluate the performance of the learning outcome. Results are compared to classification based on the true category centers and to classification based on a K-means model.
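The pairwise evaluation mentioned in the abstract can be sketched briefly. This is a generic illustration of pairwise precision/recall/F-score over cluster assignments, not the authors' own evaluation code: every pair of tokens counts as predicted-positive if the model puts both in one cluster, and gold-positive if both belong to one true vowel category.

```python
from itertools import combinations

def pairwise_scores(gold, pred):
    """Pairwise precision, recall, and F-score for a clustering.

    gold, pred: sequences of cluster labels, aligned by token index.
    A pair is predicted-positive if the two tokens share a predicted
    cluster, and gold-positive if they share a gold category.
    """
    tp = fp = fn = 0
    for i, j in combinations(range(len(gold)), 2):
        same_gold = gold[i] == gold[j]
        same_pred = pred[i] == pred[j]
        if same_pred and same_gold:
            tp += 1
        elif same_pred:
            fp += 1
        elif same_gold:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

Because the score only compares pairs, it is insensitive to how clusters are labeled, which makes it suitable for comparing the minimal-pair learner, the true-center classifier, and the K-means baseline on equal footing.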


Kasia Hitczenko (UMD)

Modeling adaptation to a novel accent

Listeners adapt to novel accents quickly and effortlessly. Various hypotheses have been put forth about how people deal with unfamiliar speech. On the one hand, it has been suggested that listeners 'expand' their phonemic categories when exposed to a novel accent, generally allowing more variability in how a particular sound can be pronounced. This is contrasted with a strategy in which listeners 'shift' their categories and only accept deviations in the direction of the accent. One major piece of evidence against 'expansion' comes from Maye et al. (2008), which showed that after exposure to an accent with lowered vowels, participants were more likely to accept non-words that were lowered versions of real words in a lexical decision task, but not raised versions of real words. In this work, we apply the ideal adapter model from Kleinschmidt & Jaeger (2015) to this data to reexamine which processes are involved in accent adaptation. We compare three models of adaptation: one in which category representations are shifted, one in which they are expanded, and one in which they are both shifted and expanded. We show that a model in which categories are both shifted and expanded is best able to capture data reported in Maye et al. (2008).
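The shift/expand contrast can be illustrated with a toy one-dimensional Gaussian category. This is not the Bayesian belief-updating model of Kleinschmidt & Jaeger (2015), only a sketch of the two hypotheses; the F1 values, means, and standard deviations below are hypothetical placeholders.

```python
import math

def gauss_logpdf(x, mu, sd):
    """Log density of a one-dimensional Gaussian."""
    return (-0.5 * math.log(2 * math.pi * sd * sd)
            - (x - mu) ** 2 / (2 * sd * sd))

# Hypothetical baseline category for a mid vowel: F1 mean 500 Hz, sd 50 Hz.
MU, SD = 500.0, 50.0
SHIFT = 100.0  # the accent lowers vowels, raising F1 by about 100 Hz

def shifted(x):
    """'Shift' hypothesis: the category mean moves with the accent."""
    return gauss_logpdf(x, MU + SHIFT, SD)

def expanded(x):
    """'Expand' hypothesis: the mean stays put but variability grows."""
    return gauss_logpdf(x, MU, 2 * SD)

# Tokens deviating in the accent's direction (lowered) and against it (raised).
lowered, raised = 600.0, 400.0
```

Under the shifted category, the lowered token scores much higher than the raised one, matching the asymmetry Maye et al. (2008) observed; under the expanded category, the two tokens are equally acceptable by symmetry, which is the problem for pure expansion.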


Itamar Kastner and Frans Adriaans (NYU)

Language-specific representations: learning segmentation and cooccurrence restrictions in Arabic

The problem of segmenting speech into words has received much attention in the computational modeling literature. Yet it is not clear to what extent the segmentation mechanism differs across languages, nor is it well understood to what extent the segmented proto-lexicon aids in learning phonological patterns. We hypothesized that acquiring Arabic---where morphology is built around consonantal roots---is facilitated by dividing the input into consonants and vowels. Simulations comparing consonant-only representations and "full" (consonant + vowel) representations in English and Arabic quantified how useful consonant representations are for segmentation and phonological learning. In Experiment 1, we found that a consonant-only representation aided segmentation in Arabic but hampered segmentation of English. Experiment 2 tested the emergence of a phonological restriction against homorganic consonant pairs in Arabic ("OCP-Place"). Our results suggest that for a child learning a Semitic language, separating consonants from vowels is beneficial for segmentation and phonological learning.
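The consonant-only representation can be sketched as a projection onto the consonantal tier. This is a minimal illustration, not the paper's implementation: the vowel inventory below is a hypothetical placeholder for a romanized transcription, and `consonant_tier` is an invented helper name.

```python
# Hypothetical vowel inventory for a romanized transcription.
VOWELS = set("aeiou")

def consonant_tier(word):
    """Project a transcribed word onto its consonants, deleting vowels."""
    return "".join(seg for seg in word if seg not in VOWELS)
```

For example, the Arabic surface forms "kataba" and "kutiba" both project to "ktb": distinct words built on the same consonantal root collapse onto one skeleton, which is the intuition behind why this representation helps segmentation and the learning of root-based restrictions such as OCP-Place in a Semitic language.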


Gaja Jarosz (UMass)

Learning Opaque and Transparent Process Interactions in Harmonic Serialism

There are two long-standing hypotheses based on language change regarding the relative naturalness and learnability of process interactions. One (Kiparsky 1971) posits a preference for transparent over opaque process interactions, and another (Kiparsky 1968) posits a preference for interactions that give rise to maximal utilization of the individual processes. However, the learning principles and theoretical assumptions that might give rise to such preferences are not well understood, and the potential impact on phonological theory of these central debates has been limited by a lack of explicit computational models capable of learning opaque interactions and making precise and testable predictions for language acquisition and change. Building on recent developments in phonological theory and learnability that enable the modeling of opaque interactions in Harmonic Serialism (Serial Markedness Reduction; Jarosz 2014b) and the learning of hidden structure in phonology (Expectation Driven Learning; Jarosz 2015), respectively, this paper presents initial modeling results comparing the relative learnability of four basic types of process interactions: bleeding, feeding, counterfeeding and counterbleeding. The overall findings reveal an intricate sensitivity to multiple factors that give rise to preferences for transparency in some situations and maximal utilization in other contexts. This talk presents an analysis of these results.
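The interaction types under comparison can be illustrated with a toy ordered-rule derivation. This is not Harmonic Serialism (which evaluates candidates with ranked constraints rather than ordered rules), and the forms and rules are hypothetical; the sketch only shows how ordering yields a transparent (bleeding) versus an opaque (counterbleeding) outcome.

```python
import re

def umlaut(form):
    """Hypothetical rule A: a -> e when an i follows later in the word."""
    return re.sub(r"a(?=[^aeiou]*i)", "e", form)

def apocope(form):
    """Hypothetical rule B: delete word-final i."""
    return re.sub(r"i$", "", form)

def derive(underlying, rules):
    """Apply rules to the underlying form in the given order."""
    form = underlying
    for rule in rules:
        form = rule(form)
    return form
```

From hypothetical /pani/, the order umlaut-then-apocope yields "pen": apocope counterbleeds umlaut, and the surface form is opaque because umlaut's trigger has been deleted. The opposite order yields "pan": apocope bleeds umlaut, and the result is transparent. A learner's relative success on such pairs of orders is the kind of comparison the four-way bleeding/feeding/counterfeeding/counterbleeding study makes precise.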

Last modified: Mon Nov 9 15:19:15 EST 2015