• Tuesday, February 4th 2020 at 16:00 - 17:00 UK (Other timezones)

Valence is a fundamental concept in the learning and decision-making literature. However, in the reinforcement learning context, different facets of this psychological concept are often confounded and/or ill-defined. In the present talk we theoretically propose, mathematically formalize, and experimentally investigate two different facets of valence: 'contextual' and 'informational'. Combining imaging and modeling techniques in a series of studies, we found that these different aspects have dissociable computational and neural correlates, and exert powerful effects on human learning behavior. We conclude by showing results from recent online experiments featuring large-scale testing and assessment of psychiatric traits.

Stefano Palminteri
Head of Human Reinforcement Learning Team
Laboratoire de Neurosciences Cognitives & Computationnelles
École normale supérieure
Département d’études cognitives

Stefano Palminteri – State-dependent valuation and confirmatory biases in learning from rewards and punishments