The Model-based Neuroscience Summer School will be followed by a 3-day Model-based Neuroscience BrainHack.

Dates: August 12 – August 14, 2019

Location: Amsterdam

Cost: €100

The BrainHack offers participants the opportunity to apply the analysis techniques covered in the Model-based Neuroscience Summer School to their own data under the guidance of experts. Applicants are required to provide a one-page explanation of the project they would like to bring to the BrainHack. If you are interested in joining the BrainHack but do not have data of your own, you may apply as a team member, stating your technical expertise, interests, and familiarity with various data, tools, and software. Space is limited, however, and priority will be given to applicants who bring a project.

As described below, we offer three streams: Behavioral modeling, Imaging neuroscience, and Model-based neuroscience. The goals of the BrainHack are to 1) familiarize researchers with the techniques from the Summer School in a practical, applied context; 2) foster interactions between scientists from different backgrounds with an interest in formally modeling brain and behavior; 3) share data, expertise, and try out new ideas!

Participating experts

Birte Forstmann (University of Amsterdam)

Dora Matzke (University of Amsterdam)

Andrew Heathcote (University of Tasmania)

Pierre-Louis Bazin (University of Amsterdam)

Steven Miletic (University of Amsterdam)

Udo Boehm (University of Amsterdam)

Bernadette van Wijk (University of Amsterdam)


For questions and information regarding the BrainHack, please contact Mark Zrubka at

BrainHack streams

Behavioral modeling stream

This stream will focus on applying evidence-accumulation models to response-time data.

  • We welcome data sets from standard tasks requiring choices among two or more options (e.g., lexical decision or random dot motion task) and conflict tasks (e.g., Stroop, Simon, or Flanker task) that can be analyzed using standard evidence-accumulation models, such as the diffusion decision model (limited to binary choices), the linear ballistic accumulator, and the lognormal race.
  • A key enabler of successful modeling is sufficient data quality (Kolossa & Kopp, 2018; Smith & Little, 2018), which is primarily determined not by the number of subjects per experimental group (N) but by the number of trials for each participant in each within-subject condition (n). The required number of trials depends on the type and complexity of the model: some models require higher-quality data than others, and complex designs require more trials overall. It is impossible to say definitively what number is sufficient in general, although we will teach you simulation techniques that can be used at the planning stage to provide guidance for particular design/model combinations. For instance, smaller n’s are acceptable if only a few model parameters differ over conditions. As a rule of thumb, n = 50 is a minimum per within-subject condition, and n > 100 is desirable. The hierarchical models we teach can help support smaller n’s, but these require a sufficient number of participants (N > 30 is advisable).
  • We will use the Dynamic Models of Choice software (Heathcote et al., 2018) for model fitting.
  • The description of your data set should include the following information: your research questions; the task; the design, including the number of trials (n) and the number of participants (N); the type of model you are interested in applying; and (if known) your thoughts on which model parameters map to which manipulations and how you see that as addressing your research questions.
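The simulation-based planning mentioned above can be illustrated with a minimal parameter-recovery sketch. The example below (an assumption of ours, not part of the DMC materials; DMC itself is R software, so this is a simplified Python stand-in) simulates data from a two-accumulator lognormal race with a known non-decision time, refits it by maximum likelihood, and lets you compare how well the parameters are recovered at different trial counts n.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

def simulate_lnr(n, mu, sigma, t0):
    """Two lognormal racers; the faster finishing time determines the choice."""
    finish = rng.lognormal(mean=mu, sigma=sigma, size=(n, 2))
    choice = finish.argmin(axis=1)
    rt = finish.min(axis=1) + t0          # observed RT = winner's time + non-decision time
    return rt, choice

def neg_loglik(params, rt, choice, t0):
    """Likelihood of (rt, choice): winner's density times loser's survival function."""
    mu0, mu1, sigma = params
    t = rt - t0
    if sigma <= 0 or np.any(t <= 0):
        return np.inf
    mus = np.array([mu0, mu1])
    win, lose = mus[choice], mus[1 - choice]
    logpdf = stats.lognorm.logpdf(t, s=sigma, scale=np.exp(win))
    logsurv = stats.lognorm.logsf(t, s=sigma, scale=np.exp(lose))
    return -(logpdf + logsurv).sum()

def recovery(n, mu=(-0.6, -0.2), sigma=0.5, t0=0.2):
    """Simulate n trials, refit, and return the recovered (mu0, mu1, sigma)."""
    rt, choice = simulate_lnr(n, np.array(mu), sigma, t0)
    fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 1.0],
                            args=(rt, choice, t0), method="Nelder-Mead")
    return fit.x

# Compare recovery at a small and a larger n per condition:
for n in (50, 500):
    print(n, np.round(recovery(n), 2))
```

Repeating such recoveries many times for your intended design shows how estimation error shrinks with n, which is the kind of guidance the planning-stage simulations provide. The true parameter values here are arbitrary illustrations, and t0 is treated as known for simplicity.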

Imaging neuroscience stream

This stream will focus on neuroimaging data and tools to combine with cognitive models. We will explore techniques to assess data quality, to combine measurements of different types, and to define structures and nodes of interest for more abstract modeling.

  • We welcome data sets of functional MRI, EEG, MEG, brain connectivity, or microstructure that would serve as neural measures to correlate with behavioral variability.
  • As in the behavioral stream, high-quality data are key, and this is also where many of the interesting challenges reside. Large numbers of task repetitions, multiple types of contrasts, and high spatial and temporal resolution all make brain data more interesting. Knowing in advance about your data, its size, processing needs, and so on will also help ensure we have the ability to handle it.

Model-based neuroscience stream

This stream will focus on modeling both response-time and neural data. When it comes to linking the behavioral and brain data, please note that at present we are limited to the assessment of pairwise correlations between person-level (not trial-by-trial) neural measures and model parameters using plausible values (Ly et al., 2018).

  • We welcome data sets that jointly fulfill the requirements for the Behavioral modeling and Imaging neuroscience streams.
  • The description of your data set should include the same information as for the behavioral modeling stream and the Imaging neuroscience stream. Also, please describe how you think the neural data might be related to the behavioral model mechanisms and parameters.
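The plausible-values idea can be sketched in a few lines. The example below is a deliberately simplified illustration with simulated data (the actual approach in Ly et al., 2018, is hierarchical and more principled): for each posterior draw of a model parameter across participants, compute one correlation with a person-level neural measure, so the spread of the resulting correlations reflects uncertainty in the parameter estimates rather than a single point estimate.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: posterior draws of one model parameter per participant
# (n_draws x n_subjects) and one person-level neural measure per participant.
n_subjects, n_draws = 40, 1000
true_param = rng.normal(1.0, 0.3, n_subjects)
posterior_draws = true_param + rng.normal(0, 0.1, (n_draws, n_subjects))
neural = 0.5 * true_param + rng.normal(0, 0.2, n_subjects)

def plausible_value_correlations(draws, neural):
    """Pearson correlation between the neural measure and each posterior draw."""
    z_neural = (neural - neural.mean()) / neural.std()
    z_draws = (draws - draws.mean(axis=1, keepdims=True)) / draws.std(axis=1, keepdims=True)
    return (z_draws * z_neural).mean(axis=1)   # one r per draw

rs = plausible_value_correlations(posterior_draws, neural)
print(f"correlation: median {np.median(rs):.2f}, "
      f"95% interval [{np.percentile(rs, 2.5):.2f}, {np.percentile(rs, 97.5):.2f}]")
```

All variable names and the data-generating values here are made up for illustration; in practice the posterior draws would come from fitting a cognitive model to your behavioral data.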


Heathcote, A., Lin, Y. S., Reynolds, A., Strickland, L., Gretton, M., & Matzke, D. (in press). Dynamic models of choice. Behavior Research Methods.

Kolossa, A., & Kopp, B. (2018). Data quality over data quantity in computational cognitive neuroscience. NeuroImage, 172, 775–785.

Ly, A., Boehm, U., Heathcote, A., Turner, B.M., Forstmann, B., Marsman, M. & Matzke, D. (2018). A flexible and efficient hierarchical Bayesian approach to the exploration of individual differences in cognitive-model-based neuroscience. In A.A. Moustafa (Ed.) Computational models of brain and behavior (pp. 467-480). Wiley Blackwell.

Smith, P. L., & Little, D. R. (2018). Small is beautiful: In defense of the small-N design. Psychonomic Bulletin & Review, 25(6), 2083–2101.
