Hello there!
My name is Yongqing Ye [joŋtɕʰiŋ je] (meaning "evergreen foliage"). I am a computational linguist and speech scientist, employing both behavioral and computational methods to explore human phonological knowledge and its intersections with phonetics and performance systems.
I work with both experimental and corpus data. My current research develops formal models of the temporal dynamics of vowel nasalization perception. Specifically, I have been training acoustic models on data from languages with differing nasalization patterns, including Hindi, Contemporary American English, and Peninsular Spanish. On top of these acoustic models, I build Bayesian (comparative) and hypothesis-testing perceptual models to explore how different sources of knowledge, such as acoustic cues, phoneme likelihood, and underspecification, shape listeners' decisions when perceiving vowel nasalization. The goal is to make the acoustic/auditory representations and the perceptual computations over them fully explicit, so that the assumptions, decisions, and predictions underlying our phonological and perceptual theories can be clearly defined, articulated, and evaluated.
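For a rough flavor of the kind of comparative Bayesian computation involved, here is a deliberately simplified toy sketch: two perceptual hypotheses are compared given a single acoustic nasality cue. The specific cue, priors, and parameter values are all hypothetical and are not drawn from my actual models.

```python
# Toy sketch: Bayesian comparison of two perceptual hypotheses for a vowel
# token, based on one acoustic nasality cue. All numbers are made up for
# illustration and do not reflect any real model or dataset.
from dataclasses import dataclass
from math import exp, pi, sqrt


def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    """Likelihood of cue value x under a Gaussian cue distribution."""
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))


@dataclass
class Hypothesis:
    name: str        # e.g., "oral vowel" vs. "nasalized vowel"
    prior: float     # hypothetical phoneme-likelihood prior
    cue_mean: float  # expected nasality cue value under this hypothesis
    cue_sd: float    # cue variability under this hypothesis


def posterior(cue_value: float, hypotheses: list[Hypothesis]) -> dict[str, float]:
    """Posterior probability of each hypothesis given one observed cue value."""
    joint = {h.name: h.prior * gaussian_pdf(cue_value, h.cue_mean, h.cue_sd)
             for h in hypotheses}
    total = sum(joint.values())
    return {name: p / total for name, p in joint.items()}


# Example: a cue value partway between the two hypothesized distributions.
hyps = [
    Hypothesis("oral vowel", prior=0.6, cue_mean=0.2, cue_sd=0.15),
    Hypothesis("nasalized vowel", prior=0.4, cue_mean=0.7, cue_sd=0.15),
]
print(posterior(0.5, hyps))
```

The real models operate over richer acoustic representations and time-varying cues, but the basic logic of weighing cue likelihoods against prior knowledge is the same.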
I am currently a PhD candidate in the Department of Linguistics, Languages and Cultures at Michigan State University. My main advisor is Karthik Durvasula. I am also advised by Betsy Sneller, Suzanne Wagner, Silvina Bongiovanni, and Yen-Hwei Lin.
When I am not doing linguistics, I enjoy hiking, rock climbing, and going to Renaissance fairs. I also took up archery in the past year.