Scheepers, C. (2019). What’s the syntax behind syntactic priming? Keynote at 25th Architectures and Mechanisms for Language Processing (AMLaP), Moscow, September 6-8, 2019. DOI: 10.13140/RG.2.2.25922.53440.
Scheepers, C., Galkina, A., Shtyrov, Y., & Myachykov, A. (2019). Hierarchical structure priming from mathematics to two- and three-site relative clause attachment. Cognition, 189, 155-166. DOI: 10.1016/j.cognition.2019.03.021.
Abstract: A number of recent studies found evidence for shared structural representations across different cognitive domains such as mathematics, music, and language. For instance, Scheepers et al. (2011) showed that English speakers’ choices of relative clause (RC) attachments in partial sentences like The tourist guide mentioned the bells of the church that … can be influenced by the structure of previously solved prime equations such as 80 − (9 + 1) × 5 (making high RC-attachments more likely) versus 80 − 9 + 1 × 5 (making low RC-attachments more likely). Using the same sentence completion task, Experiment 1 of the present paper fully replicated this cross-domain structural priming effect in Russian, a morphologically rich language. More interestingly, Experiment 2 extended this finding to more complex three-site attachment configurations and showed that, relative to a structurally neutral baseline prime condition, N1-, N2-, and N3-attachments of RCs in Russian were equally susceptible to structural priming from mathematical equations such as 18 + (7 + (3 + 11)) × 2, 18 + 7 + (3 + 11) × 2, and 18 + 7 + 3 + 11 × 2, respectively. The latter suggests that cross-domain structural priming from mathematics to language must rely on detailed, domain-general representations of hierarchical structure.
Keywords Priming, cross-domain, mathematics, syntax
Toivo, W., & Scheepers, C. (2019). Pupillary responses to affective words in bilinguals’ first versus second language. PLoS ONE, 14(4), e0210450. DOI: 10.1371/journal.pone.0210450.
Abstract: Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness, both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants’ first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants’ second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence.
Keywords Bilingualism, word processing, emotion, pupillometry
Myachykov, A., Chapman, A. J., Beal, J., & Scheepers, C. (2019). Random word generation reveals spatial encoding of syllabic word length. British Journal of Psychology, DOI: 10.1111/bjop.12399.
Abstract: Existing random number generation studies demonstrate the presence of an embodied attentional bias in spontaneous number production corresponding to the horizontal Mental Number Line: Larger numbers are produced on right‐hand turns and smaller numbers on left‐hand turns (Loetscher et al., 2008, Curr. Biol., 18, R60). Furthermore, other concepts were also shown to rely on horizontal attentional displacement (Di Bono and Zorzi, 2013, Quart. J. Exp. Psychol., 66, 2348). In two experiments, we used a novel random word generation paradigm combined with two different ways to orient attention in horizontal space: Participants randomly generated words on left and right head turns (Experiment 1) or following left and right key presses (Experiment 2). In both studies, syllabically longer words were generated on right‐hand head turns and following right key strokes. Importantly, variables related to semantic magnitude or cardinality (whether the generated words were plural‐marked, referred to uncountable concepts, or were associated with largeness) were not affected by lateral manipulations. We discuss our data in terms of the ATOM (Walsh, 2015, The Oxford handbook of numerical cognition, 552), which suggests a general magnitude mechanism shared by different conceptual domains.
Keywords SNARC, random word generation, syllabic length, ATOM
Pozniak, C., Hemforth, B., & Scheepers, C. (2018). Cross-domain priming from mathematics to relative-clause attachment: A visual-world study in French. Frontiers in Psychology, 9:2056. DOI: 10.3389/fpsyg.2018.02056
Thompson, D., Ferreira, F., & Scheepers, C. (2018). One step at a time: Representational overlap between active voice, be-passive, and get-passive forms in English. Journal of Cognition, 1(1): 35, pp. 1–24. DOI: 10.5334/joc.36.
Jicol, C., Proulx, M. J., Pollick, F. E., & Petrini, K. (2018). Long-term music training modulates the recalibration of audiovisual simultaneity. Experimental Brain Research, 1–12.
Abstract: To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed together with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
Pollick, F. E., Vicary, S., Noble, K., Kim, N., Jang, S., & Stevens, C. J. (2018). Exploring collective experience in watching dance through intersubject correlation and functional connectivity of fMRI brain activity. Progress in Brain Research. DOI: 10.1016/bs.pbr.2018.03.016.
Abstract: How the brain contends with naturalistic viewing conditions when it must cope with concurrent streams of diverse sensory inputs and internally generated thoughts is still largely an open question. In this study, we used fMRI to record brain activity while a group of 18 participants watched an edited dance duet accompanied by a soundtrack. After scanning, participants performed a short behavioral task to identify neural correlates of dance segments that could later be recalled. Intersubject correlation (ISC) analysis was used to identify the brain regions correlated among observers, and the results of this ISC map were used to define a set of regions for subsequent analysis of functional connectivity. The resulting network was found to be composed of eight subnetworks and the significance of these subnetworks is discussed. While most subnetworks could be explained by sensory and motor processes, two subnetworks appeared related more to complex cognition. These results inform our understanding of the neural basis of common experience in watching dance and open new directions for the study of complex cognition.
The overarching aim of this PhD project is to develop a library of naturalistic emotional movements generated by expert dancers, and then to implement and test the communicative value of these movements in artificial agents in naturalistic social settings. This studentship is richly interdisciplinary in nature, drawing on the social sciences, performing arts, and engineering to tackle a major challenge within the remit of the RCUK Digital Economy theme: namely, to improve artificial agents’ social acceptance and usability by providing them with emotionally expressive behaviours that are instantly readable by human interaction partners. The project comprises three main studies, with the first two primarily involving social sciences research (with performing arts elements as well), and the third combining knowledge generated from the social sciences and performing arts with computing science. For the first third of the project, the student will work closely with the Scottish National Ballet and use motion tracking technology to create and validate a rich library of emotions expressed via bodily movement. Next, the student will develop expertise with quantitative and qualitative behavioural methods (including eye tracking and self-report measures of affective valence), working with different participant samples (expert and naïve dancers) to further identify how emotion is expressed via bodily movements, and which elements of a body in motion convey the most meaningful information about a mover’s emotion. The final third of the project applies the insights gained from the first two parts to computing science and robotics, implementing them in the movements and behaviour of physically present robots and virtual avatars.
Together, the project provides an ideal and exciting opportunity to train a PhD student who is equipped with the theoretical and technical skills to work between the social sciences, arts, and technology.