- Thursday, July 13th 2023, 16:00–17:00 UK time
- Participate online via Zoom, or phone in: Meeting ID 972 4297 4350, Passcode 205293. Find your local number: https://ucl.zoom.us/u/aedyEiW1A6
Large-scale, transformer-based language models (LLMs) have proven their capacity to generate highly realistic text, successfully expressing a wide spectrum of sentiments, from the obvious, like valence, to more nuanced emotions such as determination and admiration. In this talk, I’ll present a novel method for analyzing LLM representations, exploring its potential to illuminate the dynamics of sentiment within single sentences. This method extends traditional sentiment analysis by training models to predict the quantiles of distributions of end sentiment ratings. After establishing that predictors of the distributions of valence, determination, admiration, anxiety and annoyance are well calibrated, I’ll give examples of using the predicted distributions to analyze sentences, illustrating, for instance, how even ordinary conjunctions (e.g., “but”) can dramatically shift the emotional trajectory of an utterance. The discussion will then turn to exploiting these distributional predictions to generate text targeting specific affective quantiles, via a method related to long-run risk-sensitive choice, and to how inverting this generative method can reveal individual preferences for specific quantiles relative to the general population. To conclude, I’ll suggest potential extensions for analyzing dysfunctional thought, along with other prospective applications in psychiatry.
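The abstract's core technical idea, training models to predict quantiles of a distribution of end sentiment ratings, rests on quantile regression. As a minimal sketch (not the speaker's code; the ratings below are hypothetical), minimizing the asymmetric "pinball" loss over a constant predictor recovers the corresponding empirical quantile of the ratings:

```python
# Pinball (quantile) loss: for target quantile q, under-prediction is
# weighted by q and over-prediction by (1 - q).
def pinball_loss(y_true, y_pred, q):
    err = y_true - y_pred
    return q * err if err >= 0 else (q - 1) * err

# The constant that minimizes the average pinball loss over a sample
# is (an element of) the empirical q-quantile of that sample.
def best_constant(ratings, q, grid):
    return min(grid, key=lambda c: sum(pinball_loss(y, c, q) for y in ratings))

# Hypothetical end-of-sentence valence ratings on a 1-9 scale.
ratings = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]
grid = [x / 10 for x in range(10, 91)]  # candidate constants 1.0 .. 9.0

median = best_constant(ratings, 0.5, grid)  # recovers the sample median
lower = best_constant(ratings, 0.1, grid)   # a low-tail quantile
upper = best_constant(ratings, 0.9, grid)   # a high-tail quantile
```

A model trained this way for several values of q yields a set of predicted quantiles per input, i.e., a sketch of the whole distribution of end ratings rather than a single point estimate, which is what makes the downstream quantile-targeted generation described in the abstract possible.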
Research Scientist at Hume AI
Visiting Scholar at the Max Planck Institute for Biological Cybernetics