Hello there!
My name is Yongqing Ye [joŋtɕʰiŋ je], which means "evergreen foliage". I also go by Anthea [ˈæn.θi.ə]. I am a computational linguist and speech scientist, employing both behavioral and computational methods to explore human phonological knowledge and its intersections with phonetics and performance systems.
I work with both experimental and corpus-based data, using evidence from multiple languages to better understand speech perception. My current research develops formal models of subsegmental speech perception, with a particular focus on how listeners process the temporal dynamics of vowel nasalization.
To investigate this, I train acoustic models on data from languages that vary in their nasalization patterns. Building on this foundation, I develop Bayesian comparative and hypothesis-testing models to explore the mechanisms of speech perception. These models reveal how different sources of knowledge, such as acoustic cues, phoneme likelihood, and underspecification, work together to shape listeners’ decisions.
The broader aim is to create explicit computational models of the acoustic and auditory processes underlying perception. Such models make it possible to clearly define and evaluate the assumptions, mechanisms, and predictions that inform our phonological and perceptual theories.
I received my Ph.D. in linguistics from the Department of Linguistics, Languages and Cultures at Michigan State University, where I worked primarily with Karthik Durvasula. I was also advised by Betsy Sneller, Suzanne Wagner, Silvina Bongiovanni, and Yen-Hwei Lin.

When I am not doing linguistics, I enjoy hiking, rock climbing, archery, and going to music festivals.

Recent posts