

Tutorials

Speech Prosody 2024 includes three exciting tutorials. Information on how to register for tutorials will follow later.

Description:

Several theories of prosody use dynamical systems analysis, the mathematical framework behind many scientific investigations of natural phenomena. Examples of the use of dynamical systems theory in prosody include Byrd and Saltzman’s 𝛑-gesture theory of boundary lengthening, the speech-timing and prosodic-hierarchy theories of O’Dell and Saltzman, Goldsmith and Prince’s theories of stress assignment, and the theories of prosodic prominence advanced by Roessig, Mücke, Grice, Iskarous, Steffman, and Cole. Unfortunately, the language of dynamical systems is not usually part of a phonetician’s or phonologist’s training, which makes these theories inaccessible, despite their importance in providing a unifying framework for many aspects of prosody and in allowing hypotheses and predictions about prosodic structure to be stated precisely. This tutorial will start with a from-scratch exposition of the concepts of dynamical systems theory and will then discuss a few examples of how the theory is used to describe prosodic structure. This part will take two hours. As an option, one additional hour will be devoted to simple Python scripts that simulate the theories.
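To give a concrete flavour of what such a simulation can look like, the short Python sketch below (a generic illustration, not taken from the tutorial materials) integrates a critically damped point attractor, the kind of second-order system used in task-dynamic and 𝛑-gesture style models of gestures; lowering the stiffness parameter k produces a slower, longer movement toward the same target.

import numpy as np
import matplotlib.pyplot as plt

def point_attractor(x0, target, k, dt=0.001, n_steps=1000):
    """Integrate x'' = -k*(x - target) - b*x' with critical damping b = 2*sqrt(k)."""
    b = 2.0 * np.sqrt(k)                 # critical damping: smooth approach, no overshoot
    x, v = x0, 0.0
    trajectory = np.empty(n_steps)
    for i in range(n_steps):
        a = -k * (x - target) - b * v    # acceleration from stiffness and damping
        v += a * dt                      # simple Euler integration
        x += v * dt
        trajectory[i] = x
    return trajectory

dt, n_steps = 0.001, 1000
t = np.arange(n_steps) * dt
# A lower stiffness gives a slower, longer movement toward the same target,
# loosely analogous to the local slowing associated with prosodic boundaries.
plt.plot(t, point_attractor(0.0, 1.0, k=200.0, dt=dt, n_steps=n_steps), label="k = 200 (faster)")
plt.plot(t, point_attractor(0.0, 1.0, k=50.0, dt=dt, n_steps=n_steps), label="k = 50 (slower)")
plt.xlabel("time (s)")
plt.ylabel("position")
plt.legend()
plt.show()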

Description:

This tutorial introduces multidimensional functional PCA and landmark registration. These two techniques enable the researcher to perform effective statistical analysis of time-varying contours originating from acoustic or articulatory measurements of speech production, such as F0, intensity, formants, EMA contours, etc.

Compared to popular methods like GAMs (Generalised Additive Models), the techniques presented in this tutorial expand the possibilities of statistical analysis of contours in two important directions. First, they allow for the modelling of bundles of contours jointly, such as
pairs of F0 and intensity contours measured on the same speech material, capturing their patterns of co-variation. Second, they allow for the integration of information about the position of segmental or syllabic boundaries along (bundles of) contours. In this way one can
capture patterns of co-variation between, for example, the shape of a pitch accent and the duration of the syllable it is associated with.
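For orientation, the core idea of (multivariate) functional PCA can be written roughly as follows; the notation below is standard and is not taken from the tutorial materials:

\[
x_i(t) \;\approx\; \mu(t) + \sum_{m=1}^{M} \rho_{im}\, \psi_m(t)
\]

where x_i(t) is the i-th contour or bundle of contours (for example an F0 and an intensity contour from the same utterance), \mu(t) is the mean contour or bundle, the \psi_m(t) are principal component functions capturing the main joint patterns of variation, and the scores \rho_{im} are per-utterance numbers that can subsequently be analysed with ordinary statistical models.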

The tutorial offers both theory and practice. The theory essentially provides the foundations for the interpretation of graphical and numerical results and is conveyed by means of examples. Mathematical formalisms are discussed, but prior knowledge is not required. The practical part illustrates the application of functional PCA and landmark registration to a set of artificial and real data sets chosen to expose the participant to diverse scenarios.

The code to run statistical models is written in R and based on a few libraries available on CRAN (fda, funData, MFPCA) as well as one written by the presenter (landmarkregUtils). Due to time limitations, the practical part of the workshop is 'hands-off', i.e. the code will be
executed live by the presenter. However, the material used in the tutorial will be available at:
https://github.com/uasolo/FPCA-phonetics-workshop
so that participants can run the code on the example data or on their own data after the tutorial. Participants are expected to possess basic knowledge of R and the tidyverse ecosystem (dplyr, ggplot2, etc.), as well as of linear regression.

Description:

Generalized linear mixed-effects and generalized additive models (GLMMs and GAMs) are popular and powerful tools for analyzing linguistic data. They make it possible to account for intra-individual variation and to include intra- and extralinguistic covariates of known relevance. Yet, researchers may often be interested in explaining variation that has not yet been accounted for by the model. For example, researchers may want to explore whether there are subgroups of
observations that show different patterns of association, or different patterns of growth over time.

GLMM trees (Fokkema et al., 2018) allow for detecting such subgroups in GLMMs. The algorithm is based on model-based recursive partitioning (MOB; Zeileis et al., 2008), a framework for subgroup detection in a wide range of parametric models. The rationale of MOB - and thereby GLMM trees - is that a global model may not fit all observations equally well. If additional covariates are available, these can be used for identifying subgroups with better-fitting, subgroup-specific models. As such,
GLMM trees can identify predictors and moderators in an exploratory manner, from a potentially large number of intra- and extralinguistic covariates. The method is implemented in the open-source R package glmertree. Since their introduction, GLMM trees have been fruitfully applied in a range of linguistic studies. We are currently extending the GLMM tree approach to GAMs, in order to allow for the detection of subgroups with differently shaped curves.
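Schematically, and only as an illustration not taken from the tutorial materials, a fitted GLMM tree for a continuous outcome can be written as

\[
y_i = \mathbf{x}_i^{\top} \beta_{j(i)} + \mathbf{z}_i^{\top} b_{c(i)} + \varepsilon_i
\]

where j(i) is the terminal node (subgroup) to which the partitioning covariates assign observation i, \beta_{j(i)} are the subgroup-specific fixed-effects coefficients, and b_{c(i)} are the random effects of cluster c(i) (e.g., a speaker or participant), estimated globally across subgroups.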


Format: This tutorial will combine lectures and practical sessions, with ample room for discussion and questions. In the lectures, the GLMM tree model and algorithm will be explained, empirical examples will be presented, and the extension to GAMs will be introduced. In the practical sessions, participants will gain hands-on experience with fitting GLMM and GAM trees in R, using example datasets and code that will be provided.
