Bodig, E., Toivo, W., & Scheepers, C. (2019). Investigating the foreign language effect as a mitigating influence on the ‘optimality bias’ in moral judgements. Journal of Cultural Cognitive Science. DOI: 10.1007/s41809-019-00050-4.
Abstract Bilinguals often display reduced emotional resonance in their second language (L2) and therefore tend to be less prone to decision-making biases in their L2 (e.g., Costa, Foucart, Arnon, Aparici, & Apesteguia, 2014; Costa, Foucart, Hayakawa, et al., 2014) – a phenomenon coined the Foreign Language Effect (FLE). The present pre-registered experiments investigated whether the FLE can mitigate a special case of cognitive bias, called optimality bias, which occurs when observers erroneously blame actors for making “suboptimal” choices, even when there was not sufficient information available for the actor to identify the best choice (De Freitas & Johnson, 2018). In Experiment 1, L1 English speakers (N=63) were compared to L2 English speakers from various L1 backgrounds (N=56). In Experiment 2, we compared Finnish bilinguals completing the study in either Finnish (L1, N=103) or English (L2, N=108). Participants read a vignette describing the same tragic outcome resulting from either an optimal or suboptimal choice made by a hypothetical actor with insufficient knowledge. Their blame attributions were measured using a 4-item scale. A strong optimality bias was observed; participants assigned significantly more blame in the suboptimal choice conditions, despite being told that the actor did not know which choice was best. However, no clear interaction with language was found. In Experiment 1, bilinguals gave reliably higher blame scores than natives. In Experiment 2, no clear influence of target language was found, but the results suggested that the FLE is actually more detrimental than helpful in the domain of blame attribution. Future research should investigate the benefits of emotional involvement in blame attribution, including factors such as empathy and perspective-taking.
Keywords Bilingualism, Foreign Language Effect, attribution, decision-making, blame
Scheepers, C., Galkina, A., Shtyrov, Y., & Myachykov, A. (2019). Hierarchical structure priming from mathematics to two- and three-site relative clause attachment. Cognition, 189, 155-166. DOI: 10.1016/j.cognition.2019.03.021.
Abstract A number of recent studies found evidence for shared structural representations across different cognitive domains such as mathematics, music, and language. For instance, Scheepers et al. (2011) showed that English speakers’ choices of relative clause (RC) attachments in partial sentences like The tourist guide mentioned the bells of the church that … can be influenced by the structure of previously solved prime equations such as 80–(9 + 1) × 5 (making high RC-attachments more likely) versus 80–9 + 1 × 5 (making low RC-attachments more likely). Using the same sentence completion task, Experiment 1 of the present paper fully replicated this cross-domain structural priming effect in Russian, a morphologically rich language. More interestingly, Experiment 2 extended this finding to more complex three-site attachment configurations and showed that, relative to a structurally neutral baseline prime condition, N1-, N2-, and N3-attachments of RCs in Russian were equally susceptible to structural priming from mathematical equations such as 18+(7+(3 + 11)) × 2, 18 + 7+(3 + 11) × 2, and 18 + 7 + 3 + 11 × 2, respectively. The latter suggests that cross-domain structural priming from mathematics to language must rely on detailed, domain-general representations of hierarchical structure.
Keywords Priming, cross-domain, mathematics, syntax
Toivo, W., & Scheepers, C. (2019). Pupillary responses to affective words in bilinguals’ first versus second language. PLoS ONE, 14(4), e0210450, DOI: 10.1371/journal.pone.0210450.
Abstract Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness–both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants’ first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants’ second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence.
Keywords Bilingualism, word processing, emotion, pupillometry
Myachykov, A., Chapman, A. J., Beal, J., & Scheepers, C. (2019). Random word generation reveals spatial encoding of syllabic word length. British Journal of Psychology, DOI: 10.1111/bjop.12399.
Abstract Existing random number generation studies demonstrate the presence of an embodied attentional bias in spontaneous number production corresponding to the horizontal Mental Number Line: Larger numbers are produced on right‐hand turns and smaller numbers on left‐hand turns (Loetscher et al., 2008, Curr. Biol., 18, R60). Furthermore, other concepts were also shown to rely on horizontal attentional displacement (Di Bono and Zorzi, 2013, Quart. J. Exp. Psychol., 66, 2348). In two experiments, we used a novel random word generation paradigm combined with two different ways to orient attention in horizontal space: Participants randomly generated words on left and right head turns (Experiment 1) or following left and right key presses (Experiment 2). In both studies, syllabically longer words were generated on right‐hand head turns and following right key strokes. Importantly, variables related to semantic magnitude or cardinality (whether the generated words were plural‐marked, referred to uncountable concepts, or were associated with largeness) were not affected by lateral manipulations. We discuss our data in terms of the ATOM (Walsh, 2015, The Oxford handbook of numerical cognition, 552), which suggests a general magnitude mechanism shared by different conceptual domains.
Keywords SNARC, random word generation, syllabic length, ATOM
Thompson, D., Ferreira, F., & Scheepers, C. (2018). One step at a time: Representational overlap between active voice, be-passive, and get-passive forms in English. Journal of Cognition, 1(1): 35, pp. 1–24, DOI: https://doi.org/10.5334/joc.36.
Jicol, C., Proulx, M. J., Pollick, F. E., & Petrini, K. (2018). Long-term music training modulates the recalibration of audiovisual simultaneity. Experimental Brain Research, 1-12.
Abstract: To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked groups of drummers, non-drummer musicians, and non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed together with both increased music training and increased perceptual accuracy (i.e., ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
Until 6 July 2018, there is free access to the paper/chapter at:
Pollick, F. E., Vicary, S., Noble, K., Kim, N., Jang, S., & Stevens, C. J. (2018). Exploring collective experience in watching dance through intersubject correlation and functional connectivity of fMRI brain activity. Progress in Brain Research. https://doi.org/10.1016/bs.pbr.2018.03.016
Abstract: How the brain contends with naturalistic viewing conditions when it must cope with concurrent streams of diverse sensory inputs and internally generated thoughts is still largely an open question. In this study, we used fMRI to record brain activity while a group of 18 participants watched an edited dance duet accompanied by a soundtrack. After scanning, participants performed a short behavioral task to identify neural correlates of dance segments that could later be recalled. Intersubject correlation (ISC) analysis was used to identify the brain regions correlated among observers, and the results of this ISC map were used to define a set of regions for subsequent analysis of functional connectivity. The resulting network was found to be composed of eight subnetworks and the significance of these subnetworks is discussed. While most subnetworks could be explained by sensory and motor processes, two subnetworks appeared related more to complex cognition. These results inform our understanding of the neural basis of common experience in watching dance and open new directions for the study of complex cognition.
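The intersubject correlation (ISC) analysis described above can be illustrated with a minimal leave-one-out sketch: each subject's voxel time series is correlated with the average time series of the remaining subjects, so that only stimulus-driven activity shared across observers yields high values. All names, array shapes, and the synthetic data below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def isc_leave_one_out(data):
    """Leave-one-out ISC for data of shape (n_subjects, n_timepoints, n_voxels).

    For each subject and voxel, correlate that subject's time series with
    the mean time series of all remaining subjects."""
    n_subj, n_time, n_vox = data.shape
    isc = np.zeros((n_subj, n_vox))
    total = data.sum(axis=0)
    for s in range(n_subj):
        others_mean = (total - data[s]) / (n_subj - 1)
        a = data[s] - data[s].mean(axis=0)          # demean subject s
        b = others_mean - others_mean.mean(axis=0)  # demean the group average
        num = (a * b).sum(axis=0)                   # covariance term per voxel
        den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
        isc[s] = num / den                          # Pearson r per voxel
    return isc

# Synthetic demo: a shared "stimulus-driven" signal plus subject-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 1))
data = shared[None] + 0.5 * rng.standard_normal((10, 200, 3))
group_isc = isc_leave_one_out(data).mean(axis=0)   # high ISC in every voxel
```

Averaging the leave-one-out values over subjects gives a group ISC map; in the paper, thresholding such a map defined the regions used for the subsequent functional-connectivity analysis.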
Hortensius, R., & Cross, E. S. (in press). From automata to animate beings: the scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Sciences.
Abstract: Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent’s visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the “like me” hypothesis, and discuss the key role played by the Theory‐of‐Mind network, especially the temporoparietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long‐term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.
Frank Pollick will present his new research with Yashar Moshfeghi at the World Wide Web Conference in Lyon. This is part of the ongoing research collaboration joining together cognitive neuroscience and information retrieval/data science approaches to understand search.
Moshfeghi, Y., & Pollick, F. E. (2018, April). Search Process as Transitions Between Neural States. In Proceedings of the 2018 World Wide Web Conference on World Wide Web (pp. 1683-1692). International World Wide Web Conferences Steering Committee.
Abstract: Search is one of the most performed activities on the World Wide Web. Various conceptual models postulate that the search process can be broken down into distinct emotional and cognitive states of searchers while they engage in a search process. These models significantly contribute to our understanding of the search process. However, they are typically based on self-report measures, such as surveys, questionnaires, etc., and therefore only indirectly monitor the brain activity that supports such a process. With this work, we take one step further and directly measure the brain activity involved in a search process. To do so, we break down a search process into five time periods: a realisation of Information Need, Query Formulation, Query Submission, Relevance Judgment, and Satisfaction Judgment. We then investigate the brain activity between these time periods. Using functional Magnetic Resonance Imaging (fMRI), we monitored the brain activity of twenty-four participants during a search process that involved answering questions carefully selected from the TREC-8 and TREC 2001 Q/A Tracks. This novel analysis, which focuses on transitions rather than states, reveals the contrasting brain activity between time periods, enabling the identification of the distinct parts of the search process as the user moves through them. This work therefore provides an important first step in representing the search process based on the transitions between neural states. Discovering more precisely how brain activity relates to different parts of the search process will enable the development of brain-computer interactions that better support search and search interactions; we believe our study and conclusions advance this goal.