Organised by Alain Content and Axel Cleeremans (CRCN, Université libre de Bruxelles, BE).
09:00 Introduction by Axel Cleeremans
09:15 Learning via instructions
Jan De Houwer (Ghent University, BE)
Learning can be defined as the impact of regularities in the environment on behavior (De Houwer et al., 2013, PB&R). This definition highlights that behavior can also be a function of verbal instructions about regularities. I summarize research demonstrating the impact of instructions about regularities involving a single stimulus (nonassociative learning), regularities involving two stimuli (classical conditioning), and regularities involving behavior and stimuli (operant conditioning). Although the effects of instructions about regularities (e.g., “the tone will be followed by a shock”) are often similar to the effects of the actual events that those instructions describe (e.g., an actual tone followed by an actual shock), other findings show that experiencing the events that constitute the regularity can add to the effects of instructions about that regularity. These findings set the stage for new research on the unique effects of actual experience and on ways to super-charge instructions so that they mimic those unique effects.
09:45 Statistical learning and visual word processing: Development and impact
Fabienne Chetail (Université libre de Bruxelles, BE)
Individuals rapidly become sensitive to recurrent patterns in the environment, and this occurs in many situations. However, evidence for a role of statistical learning of orthographic regularities in reading is mixed, and this learning has only a peripheral status in current theories of visual word recognition. Moreover, it remains unclear exactly which regularities readers become sensitive to, and how rapidly this sensitivity develops. To address these two issues, we conducted experiments with both natural and artificial scripts, collecting behavioural and EEG data. First, in a wordlikeness task with pseudowords containing letter groups (n-grams) of high or low frequency, readers were sensitive to the positional frequency of letter clusters and to bigram frequency beyond single-letter frequency. Consistently, we found a specific neural response to n-gram frequency in an oddball paradigm with fast periodic visual stimulation. Second, we showed that this knowledge about n-gram frequency affects visual word processing: frequent letters were detected more rapidly than low-frequency letters in a letter detection task, and words containing low-frequency bigrams were recognized more rapidly than words with high-frequency bigrams in a lexical decision task. Third, we exposed readers to a stream of artificial words for a few minutes, with some bigrams occurring very frequently. Both behavioural and EEG data showed that participants very rapidly became sensitive to these new regularities. We discuss the implications of these results for models of orthographic encoding and reading.
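As a purely illustrative sketch (not part of the talk), the bigram frequencies referred to above can be estimated from a lexicon by counting adjacent letter pairs; the toy lexicon and the token-count measure here are assumptions, as real studies use large word corpora:

```python
from collections import Counter

def bigram_frequencies(lexicon):
    """Count how often each adjacent letter pair (bigram) occurs
    across a list of words (token count, position-independent)."""
    counts = Counter()
    for word in lexicon:
        for i in range(len(word) - 1):
            counts[word[i:i + 2]] += 1
    return counts

# Toy lexicon for illustration only.
lexicon = ["chant", "chien", "chose", "niche"]
freqs = bigram_frequencies(lexicon)
print(freqs["ch"])  # "ch" occurs once in each of the four words -> 4
```

A positional variant would key the counter on (bigram, position) pairs instead, which is the kind of positional letter-cluster frequency the abstract mentions.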
10:15 Learning to read and hemispheric specialization for faces in children
Aliette Lochy (Université catholique de Louvain, BE)
The developmental origin of the human right-hemispheric lateralization for face perception remains unclear. According to a recent hypothesis, the increase in left-lateralized posterior neural activity during reading acquisition contributes to, or even determines, the right-hemispheric lateralization for face perception (Behrmann & Plaut, 2013). This view contrasts with the right-hemispheric advantage observed in infants only a few months old. Recently, a Fast Periodic Visual Stimulation (FPVS) paradigm in EEG showed that faces presented periodically among objects elicit strongly right-lateralized face-selective responses in 4- to 6-month-old infants (de Heering & Rossion, 2015). Here we used the same paradigm in EEG to study the lateralization of responses to faces in a group (N=50) of 5-year-old preschool children who showed left-lateralized responses to letters (Lochy et al., 2016). Rather surprisingly, we found bilateral face-selective responses in this population, with a positive correlation between preschool letter knowledge and right-hemispheric lateralization for faces (rho=0.30; p<0.04), but no correlation between the left lateralization to letters and the right lateralization to faces. However, discrimination of facial identity with FPVS (Liu-Shuang et al., 2014) in these pre-reading children was strongly right-lateralized and unrelated to their letter knowledge. These findings suggest that factors other than reading acquisition, such as the maturation of the posterior corpus callosum during early childhood and the level required by the perceptual categorization process (i.e., generic face categorization vs. face individualization), play a key role in the right-hemispheric lateralization for face perception in humans.
10:45 Coffee break
11:15 The implementation of novel instructions
Marcel Brass (Ghent University, BE)
It has been argued that there is a fundamental difference between processing an instruction in order to memorize it and processing it in order to implement it. In the neuropsychological literature, this distinction has been referred to as the distinction between knowing and doing. I will present a set of studies in which we investigated this distinction with functional brain imaging, using univariate and multivariate approaches. Our data suggest that instructions that are going to be implemented can be distinguished from merely memorized instructions on the basis of the pattern of brain activation. Furthermore, we show that the implementation of instructions relies on a complex interplay of executive, motor, and sensory brain areas. We relate these findings to recent cognitive models of working memory and instruction following.
11:45 Conscious and unconscious influences on the sense of agency
Nura Sidarus (Institut Jean Nicod, Ecole Normale Supérieure – PSL Research University, FR)
Sense of agency (SoA) refers to the feeling that we are in control of our own actions and, through them, of events in the outside world. One influential view claims that the SoA depends on a retrospective matching between the expected and actual outcome of an action. However, recent studies have revealed an additional, prospective component to the SoA, related to a metacognitive signal about action selection. When action selection is fluent, and we “just know what to do”, the sense of agency over the consequences of our actions is stronger than when action selection is dysfluent, or difficult. We present evidence that these effects are present across various conscious and unconscious manipulations of action selection. Additionally, event-related potentials (ERPs) indexing action monitoring processes, which signal disruptions in action selection, were linked to a reduction in SoA. Thus, action monitoring signals influence SoA prospectively: they emerge already at the time of the action, long before the outcome is known. Importantly, SoA is best understood as resulting from an integration of prospective signals, related to action monitoring, with retrospective signals, based on outcome monitoring. Yet prospective and retrospective components may make independent contributions to SoA.
12:15 Title to be announced
Peter Lush (University of Sussex, UK)
12:45 Lunch time
14:00 Domain generality vs. domain specificity in statistical learning: new evidence, new perspectives
Ram Frost, Noam Siegelman & Louisa Bogaerts (Hebrew University of Jerusalem, IL)
Two main assumptions regarding individual statistical learning (SL) abilities drove our recent theoretical model of SL performance (Frost, Armstrong, Siegelman, & Christiansen, TICS, 2015). First, we assumed that the lack of correlation between individual performance on visual and auditory SL tasks stems from general differences in the constraints on processing visual vs. auditory information, consistent with a view of modality specificity in SL. Second, we assumed that variance in SL performance can be split into two distinct and independent sources: one reflecting individual efficiency in encoding representations in a given modality, and the other reflecting efficiency in learning the distributional properties of the encoded representations. The model thus offered clear, testable predictions, and we will present recent empirical data bearing on them. First, we show that the lack of correlation between visual and auditory SL performance is related to the type of material used in the different tasks (i.e., verbal/non-verbal) and its interaction with the modality of presentation, rather than to a simple pattern of modality specificity. Second, we show that, at least in the visual modality, the encoding of events and the learning of their regularities (e.g., the transitional probabilities, TPs, between elements) are not temporally modular, independent processes. Rather, our findings suggest that a single underlying processing principle – the rate of information (the amount of available information per second in the sensory stream) – drives SL performance, thereby blurring the distinction between encoding and regularity learning. We discuss the implications of these new data for the domain generality/specificity of SL.
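As a purely illustrative sketch (not drawn from the talk), the transitional probabilities (TPs) that SL studies track can be estimated from a sequence by counting adjacent-element transitions; the toy stream here is an assumption for illustration:

```python
from collections import Counter, defaultdict

def transitional_probabilities(stream):
    """Estimate P(next | current) for adjacent elements in a sequence:
    the transitional probabilities (TPs) tracked in SL studies."""
    pair_counts = defaultdict(Counter)
    for cur, nxt in zip(stream, stream[1:]):
        pair_counts[cur][nxt] += 1
    return {
        cur: {nxt: n / sum(nxts.values()) for nxt, n in nxts.items()}
        for cur, nxts in pair_counts.items()
    }

# Toy stream: "A" is always followed by "B" (TP = 1.0), while "B" is
# followed by "A" once and by "C" twice (TPs 1/3 and 2/3).
stream = ["A", "B", "A", "B", "C", "A", "B", "C"]
tps = transitional_probabilities(stream)
print(tps["A"]["B"])  # 1.0
```

In word-segmentation paradigms, dips in TP between adjacent elements are taken as candidate word boundaries; high-TP transitions fall within words.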
15:00 Final remarks
Registration: All welcome, participation is free of charge (including a coffee break and a light lunch), but registration is mandatory.
To register, click here.