• Monday, February 22nd 2016 at 17:00 - 18:00 UK (Other timezones)
  • General participation info   |   Participate online
  • Phone access: Access Code 731-636-357; Meeting ID 731-636-357; Audio PIN shown after joining the meeting
    – United States: +1 (571) 317-3129
    – Australia: +61 2 8355 1020
    – Toll-free numbers: Australia 1 800 193 385; Austria 0 800 202148; Belgium 0 800 78884; Canada 1 888 455 1389; Denmark 8090 1924; France 0 805 541 047; Germany 0 800 184 4222; Israel 1 809 454 830; Italy 800 793887; Netherlands 0 800 020 0182; New Zealand 0 800 47 0011; Norway 800 69 046; Poland 00 800 1213979; Portugal 800 819 575; Spain 800 900 582; Switzerland 0 800 740 393; United Kingdom 0 800 169 0432; United States 1 877 309 2073

We will continue last month’s discussion but focus on how to establish the robustness of modelling approaches and how to choose among them. This includes the following issues:

  • Inference validation: what common standards should be set for the tools used for inference? Should this include inference of covariates? (A toy parameter-recovery sketch follows this list.)
  • Should an attempt be made to provide one single inference machinery for all tasks, or should ‘tasks’ be ‘packages’ that include a purpose-built and tested inference machinery?
  • Model validation: what standards should be met on surrogate data generated from the model, and on real experimental data? How should different approximations to model evidence be treated? Should we instead rely on a version of cross-validation? (A small comparison of these scoring approaches follows the list.)
  • Between-subject variation: should variation in how individuals solve the task be captured by nesting models (assuming individuals use a mixture of strategies) or by random effects (assuming individuals employ one fixed strategy)? (A generative sketch contrasting the two assumptions follows the list.)
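
To make the inference-validation question concrete, here is a toy parameter-recovery check: choices are simulated from a simple Rescorla-Wagner learner with known parameters, the same model is refit by maximum likelihood, and the recovered estimates are compared to the generating values. The model, task settings and parameter values are assumptions made for this sketch only, not a proposed standard.

```python
# A minimal parameter-recovery check: simulate an agent with known
# parameters, refit the same model, and compare recovered estimates.
# The Rescorla-Wagner + softmax model and the task settings are
# illustrative assumptions, not an agreed standard.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(alpha, beta, n_trials=200, p_reward=(0.8, 0.2)):
    """Simulate choices and rewards on a two-armed bandit."""
    q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        c = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_lik(params, choices, rewards):
    """Negative log-likelihood of the same model, used for refitting."""
    alpha, beta = params
    q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        nll -= np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])
    return nll

true_alpha, true_beta = 0.3, 4.0
choices, rewards = simulate(true_alpha, true_beta)
fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(0.01, 0.99), (0.1, 20.0)])
print("true:", (true_alpha, true_beta), "recovered:", np.round(fit.x, 2))
```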
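
On model validation, the contrast between approximations to model evidence and cross-validation can likewise be sketched. The snippet below scores a one-parameter and a two-parameter account of the same simulated binary choices by BIC (a rough approximation to log model evidence) and by 5-fold cross-validated log-likelihood; the data and both candidate models are placeholders.

```python
# A toy comparison of BIC (a rough approximation to log model evidence)
# with k-fold cross-validated log-likelihood. The data and the two
# candidate models are placeholders assumed for this illustration.
import numpy as np

rng = np.random.default_rng(1)

# Simulated binary responses in two conditions with different rates.
cond = rng.integers(2, size=400)
y = (rng.random(400) < np.where(cond == 0, 0.7, 0.4)).astype(float)

def log_lik(p, y):
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def fit_null(y, cond):
    """Model 0: a single response rate for both conditions (1 parameter)."""
    p = np.clip(y.mean(), 1e-6, 1 - 1e-6)
    return (lambda c: np.full(len(c), p)), 1

def fit_cond(y, cond):
    """Model 1: a separate response rate per condition (2 parameters)."""
    ps = [np.clip(y[cond == k].mean(), 1e-6, 1 - 1e-6) for k in (0, 1)]
    return (lambda c: np.where(c == 0, ps[0], ps[1])), 2

for name, fit in [("null", fit_null), ("per-condition", fit_cond)]:
    predict, k = fit(y, cond)
    bic = k * np.log(len(y)) - 2 * log_lik(predict(cond), y)

    # 5-fold cross-validation: fit on training folds, score held-out trials.
    folds = np.array_split(rng.permutation(len(y)), 5)
    cv = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        pred, _ = fit(y[train], cond[train])
        cv += log_lik(pred(cond[test]), y[test])
    print(f"{name}: BIC = {bic:.1f}, CV log-likelihood = {cv:.1f}")
```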
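
Finally, the two views of between-subject variation can be written down as generative assumptions: under random effects each subject applies one fixed strategy whose parameter is drawn from a population distribution, whereas under a mixture each subject switches between discrete strategies from trial to trial. The particular strategies, distributions and bandit settings below are invented purely for illustration.

```python
# Two generative assumptions about between-subject variation:
# (a) random effects -- each subject uses one fixed strategy (here,
#     Q-learning with softmax) with a subject-level parameter drawn
#     from a population distribution;
# (b) a mixture -- each subject switches trial by trial between
#     discrete strategies (here, win-stay/lose-shift vs random choice).
# All strategies, distributions and settings are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_trials = 20, 100
reward_prob = (0.8, 0.2)          # two-armed bandit reward rates

def simulate_subject(alpha=None, wsls_weight=None):
    """Simulate one subject under (a) if alpha is given, else under (b)."""
    q = np.zeros(2)
    choice = int(rng.integers(2))
    reward = rng.random() < reward_prob[choice]
    choices = [choice]
    for _ in range(n_trials - 1):
        if alpha is not None:
            # (a) fixed strategy: softmax over learned Q-values
            p = np.exp(3 * q) / np.exp(3 * q).sum()
            choice = int(rng.choice(2, p=p))
        elif rng.random() < wsls_weight:
            # (b) strategy 1: win-stay / lose-shift
            choice = choice if reward else 1 - choice
        else:
            # (b) strategy 2: random responding
            choice = int(rng.integers(2))
        reward = rng.random() < reward_prob[choice]
        if alpha is not None:
            q[choice] += alpha * (reward - q[choice])
        choices.append(choice)
    return np.array(choices)

# (a) random effects: one learning rate per subject, fixed within subject
alphas = np.clip(rng.normal(0.3, 0.1, n_subjects), 0.01, 0.99)
random_effects_data = [simulate_subject(alpha=a) for a in alphas]

# (b) mixture of strategies: per-subject probability of using WSLS on a trial
weights = rng.beta(2, 2, n_subjects)
mixture_data = [simulate_subject(wsls_weight=w) for w in weights]
print(len(random_effects_data), "subjects per scheme,", n_trials, "trials each")
```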

During the last meeting, a series of tasks were very briefly presented, including:

A few key features for judging and choosing between the tasks were highlighted:

– Relevance of main construct examined
– Ease of performance
– Ease of understanding
– Ease of implementation
– Probabilistic vs deterministic task structure
– Deception
– Ease of manipulation
– Translatability to animal research
– Pre-existing data on target populations
– Pre-existing data on test/re-test
– Model approaches
– Model stability
– Model parameters
– Evidence for model superiority (over simple summary statistics)
– Can test-retest effects be modelled as learning?

