Title Participants Abstract "Visual and Affective Multimodal Models of Word Meaning in Language and Mind" "Simon De Deyne" "One of the main limitations of natural language-based approaches to meaning is that they do not incorporate multimodal representations the way humans do. In this study, we evaluate how well different kinds of models account for people's representations of both concrete and abstract concepts. The models we compare include unimodal distributional linguistic models as well as multimodal models which combine linguistic with perceptual or affective information. There are two types of linguistic models: those based on text corpora and those derived from word association data. We present two new studies and a reanalysis of a series of previous studies. The studies demonstrate that both visual and affective multimodal models better capture behavior that reflects human representations than unimodal linguistic models. The size of the multimodal advantage depends on the nature of semantic representations involved, and it is especially pronounced for basic-level concepts that belong to the same superordinate category. Additional visual and affective features improve the accuracy of linguistic models based on text corpora more than those based on word associations; this suggests systematic qualitative differences between what information is encoded in natural language versus what information is reflected in word associations. Altogether, our work presents new evidence that multimodal information is important for capturing both abstract and concrete words and that fully representing word meaning requires more than purely linguistic information. Implications for both embodied and distributional views of semantic representation are discussed." "Quantifying the Structure of Free Association Networks Across the Life Span" "Haim Dubossarsky, Simon De Deyne, Thomas T Hills" "We investigate how the mental lexicon changes over the life span using free association data from over 8,000 individuals, ranging from 10 to 84 years of age, with more than 400 cue words per age group. Using network analysis, with words as nodes and edges defined by the strength of shared associations, we find that associative networks evolve in a nonlinear (U-shaped) fashion over the life span. During early life, the network converges and becomes increasingly structured, with reductions in average path length, entropy, clustering coefficient, and small world index. Into late life, the pattern reverses but shows clear differences from early life. The pattern is independent of the increasing number of word types produced per cue across the life span, consistent with a network encoding an increasing number of relations between words as individuals age. Lifetime variability is dominantly driven by associative change in the least well-connected words. (PsycINFO Database Record" "Thoughts About Disordered Thinking: Measuring and Quantifying the Laws of Order and Disorder" "Simon De Deyne" "LARGE-SCALE NETWORK REPRESENTATIONS OF SEMANTICS IN THE MENTAL LEXICON" "Simon De Deyne" "© 2017 Taylor & Francis. The mental lexicon contains the knowledge about words acquired over a lifetime. A central question is how this knowledge is structured and changes over time. Here we propose to represent this lexicon as a network consisting of nodes that correspond to words and links reflecting associative relations between two nodes, based on free association data. 
"Thoughts About Disordered Thinking: Measuring and Quantifying the Laws of Order and Disorder" "Simon De Deyne"

"Large-Scale Network Representations of Semantics in the Mental Lexicon" "Simon De Deyne" "The mental lexicon contains the knowledge about words acquired over a lifetime. A central question is how this knowledge is structured and how it changes over time. Here we propose to represent this lexicon as a network consisting of nodes that correspond to words and links that reflect associative relations between two nodes, based on free association data. A network view of the mental lexicon is inherent to many cognitive theories, but the predictions of a working model strongly depend on a realistic scale, covering most words used in daily communication. Combining a large network with recent methods from network science allows us to answer questions about its organization at different scales simultaneously, such as: How efficiently and robustly is lexical knowledge represented, given the global network architecture? What are the organizing principles of words in the mental lexicon (i.e., thematic versus taxonomic)? How does the local connectivity with neighboring words explain why certain words are processed more efficiently than others? Networks built from word associations are specifically suited to address prominent psychological phenomena such as developmental shifts, individual differences in creativity, or clinical states like schizophrenia. Various future challenges, and ways in which this proposal complements other perspectives, are also discussed."

"Predicting human similarity judgments with distributional models: The value of word associations" "Simon De Deyne" "To represent the meaning of a word, most models use external language resources, such as text corpora, to derive the distributional properties of word usage. In this study, we propose that internal language models, which are more closely aligned with the mental representations of words, can be used to raise new theoretical questions regarding the structure of the mental lexicon. A comparison with internal models also puts into perspective a number of assumptions underlying recently proposed distributional text-based models and could provide important insights for cognitive science, including linguistics and artificial intelligence. We focus on word-embedding models, which have been proposed to learn aspects of word meaning in a manner similar to humans, and contrast them with internal language models derived from a new, extensive data set of word associations. An evaluation using relatedness judgments shows that internal language models consistently outperform current state-of-the-art text-based external language models. This suggests alternative approaches to representing word meaning using properties that are not encoded in text."
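The evaluation logic in the abstract above, scoring word pairs by similarity in a vector space and correlating those scores with human relatedness judgments, can be sketched briefly. This is a minimal illustration with made-up vectors and ratings, not the authors' data or models; in practice the vectors would come from a text-based embedding model or from association-based representations.

    # Minimal sketch with made-up data: score word pairs by cosine
    # similarity in a vector space and correlate with human ratings.
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical word vectors; text-based or association-based vectors
    # would be plugged in here.
    vectors = {
        "cup": np.array([0.9, 0.1, 0.3]),
        "mug": np.array([0.8, 0.2, 0.4]),
        "tea": np.array([0.5, 0.7, 0.1]),
        "car": np.array([0.1, 0.2, 0.9]),
    }

    # Human relatedness ratings for word pairs (illustrative values).
    judgments = [("cup", "mug", 9.1), ("cup", "tea", 7.4),
                 ("mug", "car", 1.8), ("tea", "car", 1.2)]

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    model_scores = [cosine(vectors[a], vectors[b]) for a, b, _ in judgments]
    human_scores = [r for _, _, r in judgments]

    # Rank correlation between model and human scores: higher means the
    # model captures human relatedness better.
    rho, _ = spearmanr(model_scores, human_scores)
    print(f"Spearman rho = {rho:.2f}")

Comparing the resulting correlations across models is the basis for the claim above that association-based representations outperform text-based ones.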
"Rich semantic networks applied to schizophrenia: A new framework" "Simon De Deyne"

"Predicting human similarity judgments with distributional models: The value of word associations" "Simon De Deyne, A. Perfors, D. J. Navarro" "Most distributional lexico-semantic models derive their representations from external language resources such as text corpora. In this study, we propose that internal language models, which are more closely aligned with the mental representations of words, could provide important insights into cognitive science, including linguistics. Doing so allows us to reflect upon theoretical questions regarding the structure of the mental lexicon, and also puts into perspective a number of assumptions underlying recently proposed distributional text-based models. In particular, we focus on word-embedding models, which have been proposed to learn aspects of word meaning in a manner similar to humans. These are contrasted with internal language models derived from a new, extensive data set of word associations. Using relatedness and similarity judgments, we evaluate these models and find that the word-association-based internal language models consistently outperform current state-of-the-art text-based external language models, often by a large margin. These results are not just a performance improvement; they also have implications for our understanding of how distributional knowledge is used by people."

"Single-trial ERP component analysis using a spatio-temporal LCMV beamformer" "Nikolay Chumerin, Simon De Deyne, Gert Storms, Marc Van Hulle" "For the statistical analysis of event-related potentials (ERPs), there are convincing arguments against averaging across stimuli or subjects. Multivariate filters can be used to isolate an ERP component of interest without the averaging procedure, but we need some assurance that the output of the filter accurately represents the component. Methods: We extended the linearly constrained minimum variance (LCMV) beamformer, which is traditionally used as a spatial filter for source localization, to a flexible spatio-temporal filter for estimating the amplitude of ERP components in sensor space. In a comparison study on both simulated and real data, we demonstrated the strengths and weaknesses of the beamformer as well as of a range of supervised learning approaches. Results: In the context of measuring the amplitude of a specific ERP component on a single-trial basis, we found that the spatio-temporal LCMV beamformer accurately captures the component of interest, even in the presence of both structured noise (e.g., other overlapping ERP components) and unstructured noise (e.g., ongoing brain activity and sensor noise). Conclusion: The spatio-temporal LCMV beamformer provides an accurate and intuitive way to analyze a known ERP component without averaging across trials or subjects. Significance: Eliminating averaging allows us to test more detailed hypotheses and to apply more powerful statistical models. For example, it allows the use of multilevel regression models that can incorporate between-subject and between-stimulus variation as random effects, test multiple effects simultaneously, and control for confounding effects by partial regression."
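The core of the spatio-temporal LCMV beamformer described above can be stated compactly: stack each trial's channels-by-time window into a vector x, and with data covariance C and a component template h, the weights w = C^{-1}h / (h^T C^{-1}h) give a unit-gain, minimum-variance estimate of the component amplitude w^T x per trial. The numpy sketch below uses synthetic data and an assumed ridge regularizer; it follows the standard LCMV formula rather than the authors' exact implementation.

    # Minimal numpy sketch of a spatio-temporal LCMV beamformer: estimate
    # the single-trial amplitude of a known ERP template. All shapes and
    # the synthetic data are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_times = 200, 32, 50

    # Spatio-temporal template of the component, flattened to one vector
    # (channels x time points).
    template = rng.standard_normal(n_channels * n_times)
    template /= np.linalg.norm(template)

    # Synthetic trials: template scaled by a per-trial amplitude, plus noise.
    true_amp = rng.normal(1.0, 0.5, size=n_trials)
    X = true_amp[:, None] * template[None, :] \
        + 0.5 * rng.standard_normal((n_trials, n_channels * n_times))

    # LCMV weights: w = C^{-1} h / (h^T C^{-1} h). A small ridge term
    # keeps the high-dimensional covariance invertible.
    C = np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1])
    Cinv_h = np.linalg.solve(C, template)
    w = Cinv_h / (template @ Cinv_h)

    # Per-trial amplitude estimates (unit gain in the template direction).
    amp_hat = X @ w
    print("correlation with true amplitudes:",
          np.corrcoef(amp_hat, true_amp)[0, 1].round(2))

The unit-gain constraint w^T h = 1 is what lets the filter pass the template component undistorted while minimizing the variance contributed by everything else, which is how structured and unstructured noise are suppressed.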
"Structure at every scale: A semantic network account of the similarities between unrelated concepts" "Simon De Deyne, Gert Storms" "Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between closely related items, and little is known about how people evaluate concepts that are only weakly related. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities, showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas simpler models based on the direct neighbors of word pairs, derived from the same network, cannot."

"Structure and organization of the mental lexicon: A network approach derived from syntactic dependency relations and word associations" "Simon De Deyne, Steven Verheyen, Gert Storms" "Semantic networks are often used to represent the meaning of a word in the mental lexicon. To construct a large-scale network for this lexicon, text corpora provide a convenient and rich resource. In this chapter, the network properties of a text-based approach are evaluated and compared with a more direct way of assessing the mental content of the lexicon through word associations. This comparison indicates that the two approaches highlight different properties specific to linguistic and mental representations. The two types of network differ qualitatively in their global network structure and in the content of their network communities. Moreover, behavioral data from relatedness judgments show that language networks do not capture these judgments as well as mental networks do."
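One common way to formalize the spreading-activation account from the "Structure at every scale" entry above is to let relatedness depend on walks of all lengths through the association graph rather than on direct neighbors alone, e.g. S = sum_k alpha^k P^k = (I - alpha*P)^{-1} for a transition matrix P and decay alpha < 1. The Python sketch below is a toy version of that idea; the graph, the decay value, and the cosine comparison are illustrative assumptions, not the authors' published model.

    # Toy sketch of spreading activation over a word-association graph:
    # relatedness is read off from walks of all lengths, not just edges.
    import numpy as np

    words = ["teacher", "school", "book", "library", "banana"]
    idx = {w: i for i, w in enumerate(words)}

    # Directed association strengths (cue -> response), row-normalized below.
    A = np.zeros((5, 5))
    for cue, resp, s in [("teacher", "school", 3), ("school", "book", 2),
                         ("book", "library", 3), ("library", "book", 2),
                         ("school", "teacher", 1)]:
        A[idx[cue], idx[resp]] = s
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)

    # Spreading activation as a decaying sum over all walk lengths:
    # S = sum_k alpha^k P^k = (I - alpha*P)^{-1}, convergent for alpha < 1.
    alpha = 0.75
    S = np.linalg.inv(np.eye(5) - alpha * P)

    def relatedness(w1, w2):
        # Cosine between the full activation profiles of the two words.
        u, v = S[idx[w1]], S[idx[w2]]
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    # Indirect paths give "teacher"/"library" a weak but nonzero score,
    # which a direct-neighbor measure would put at exactly zero.
    print(relatedness("teacher", "library"), relatedness("teacher", "banana"))

Because "banana" shares no paths with "teacher" its score stays at zero, while the multi-step path through "school" and "book" yields the kind of weak but systematic similarity the experiments above measure.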