How does the human brain work? How does mathematical thinking, and abstract thinking in general, happen? What is active and enactive perception, and how does it differ from the passive processing of sensory stimuli? What does all this have to do with machine learning and artificial intelligence? Or with mathematics education? These are the questions that intrigue me the most. My name is Vadim Kulikov; I am a mathematician and cognitive scientist currently working at Aalto University. My position is fixed-term and I am looking for new opportunities! Related links: my CV (PDF), my LinkedIn profile, my arXiv.org papers.

–  Vadim

Work in progress

My CV lists only work that has already been published, so it does not show what I am doing right now. Here are the projects that are currently under way.

Concept formation in mathematics

This is joint work with the philosopher and cognitive scientist Markus Pantsar. In this paper we evaluate, from a new perspective, the claim that set theory is a foundation of mathematics: does set theory actually help us form the concepts we use in mathematics? This leads to the question of how mathematical concepts are formed by a cogniser. We formulate a theory of abstract concept formation which incorporates ideas from several fields of cognitive science, from embodied cognition to cognitive linguistics and metaphor analysis. Our claim is that a new concept is formed when there is a robust body of different frameworks which are all coherent with, and predictive of, each other. Download the draft of our paper here.

A Hebbian Neural Network model of the Stroop effect

The Stroop effect occurs when an individual is asked to name the ink colour of a colour word: green, blue, red. Versions of this experiment are countless: instead of naming the colour out loud, press a button or point to a colour patch; instead of naming the ink colour, read the word; and so on. Certain robust patterns have been experimentally verified and replicated over the last 100 years. One of them is that an incongruent word interferes more with ink naming than an incongruent ink colour interferes with word reading. Another known pattern is the difference between so-called response and semantic incongruency: even if the responses to blue and red are the same (e.g. pressing the same button), the reaction time for red will be shorter than for blue. This phenomenon has also been linked to specific brain areas in an fMRI study. I propose that many properties of the Stroop effect can be explained with a simple neural network model which assumes only Hebbian-type learning. A draft of this paper can be downloaded here.
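To give a concrete flavour of what such a model can look like, here is a minimal, purely illustrative Python sketch; it is not the model from the paper. It assumes a hypothetical two-pathway architecture (word reading and ink-colour naming) whose connection strengths are learned with a plain Hebbian rule, with the word pathway simply receiving far more practice. That imbalance alone reproduces, qualitatively, the finding that an incongruent word disturbs ink naming more than an incongruent ink colour disturbs word reading.

```python
# Illustrative sketch only -- not the model from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_colours = 3                      # e.g. red, green, blue

# Weights from each input pathway to the colour-response units.
w_word = np.zeros((n_colours, n_colours))
w_ink = np.zeros((n_colours, n_colours))

def hebbian_update(w, pre, post, lr=0.1):
    """Plain Hebbian rule: strengthen weights between co-active units."""
    return w + lr * np.outer(post, pre)

# Training: word reading is practised far more often than colour naming.
for _ in range(1000):
    c = rng.integers(n_colours)
    x = np.eye(n_colours)[c]
    w_word = hebbian_update(w_word, x, x)
for _ in range(100):
    c = rng.integers(n_colours)
    x = np.eye(n_colours)[c]
    w_ink = hebbian_update(w_ink, x, x)

def response_advantage(word, ink, task="name_ink"):
    """Activation of the correct response relative to the strongest unit:
    0 means the correct response dominates; negative means interference."""
    act = w_word @ np.eye(n_colours)[word] + w_ink @ np.eye(n_colours)[ink]
    target = ink if task == "name_ink" else word
    return act[target] - act.max()

# An incongruent word hurts ink naming far more than the reverse.
print(response_advantage(word=0, ink=1, task="name_ink"))   # strongly negative
print(response_advantage(word=0, ink=1, task="read_word"))  # close to zero
```

The asymmetry falls straight out of the practice imbalance: whenever the two pathways disagree, the over-trained word pathway dominates the summed response activation.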

Comparing Russian and Finnish speakers in lateralised colour priming

This is joint work with Prof. Ulrich Ansorge. It is known that the Stroop effect can be lateralised: the effect can be stronger when the stimuli are presented in the right visual field than when they are presented in the left visual field. Our hypothesis is that this is related to the fact that language is processed in the left hemisphere, which is where a stimulus in the right visual field is first projected. We investigate this hypothesis by comparing native Russian speakers to native Finnish speakers on a set of colours that the two languages categorise differently (Russian, for instance, has distinct basic terms for light and dark blue). Apart from a lot of piloting data, we have so far collected data from 9 Russian speakers and 9 Finnish speakers; the analysis is currently in progress. This work started out as my master’s thesis in cognitive science. See my thesis here.

What is a representation in a deep neural network – if anything?

This philosophical effort to understand the nature of deep nets is joint work with Prof. Daniel Hutto. Drawing on enactivist philosophy of mind, we analyse the claim that deep neural networks learn abstract representations of the data. The topic currently attracts a lot of research, because our failure to fully comprehend what happens inside a deep neural network is the source of many challenges in modern-day AI.

Affordances, causality and dynamics

A central idea of enactive cognitive science is that perception is not captured by the processing of sensory stimuli in the brain, but by the active engagement of the agent with the world. This shift of perspective has a major influence on how perception should be modelled. The central notion of an affordance was introduced by James Gibson: an agent in a given situation has an affordance to do something if it is possible to do that thing in that situation (e.g. you can sit if there is a chair in front of you; you can walk through if there is a door in front of you). Perceiving affordances, according to Gibson, is the most important function of perception. Together with Tuomas Sahlsten, an expert in dynamical systems, we are setting out to model this and other enactivist ideas mathematically. Our starting point is that perceiving that “it is possible to do something” translates into knowledge of the causal relationships between the variables available to the agent (sensory, motor and more abstract ones). Doing what the agent wants then translates into falsifying faithfulness in the causal structure that encompasses both the agent and the environment, because the agent constantly “compensates” for effects in the environment. The aim of the project is to define mathematically what learning affordances means and how it happens, and to prove a convergence result.
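As an informal illustration of what “falsifying faithfulness by compensation” can mean, here is a toy Python simulation; it is my own sketch, not the project’s formal framework, and the variable names (disturbance, action, outcome) are hypothetical. An environmental disturbance genuinely causes the outcome, but because the agent’s action cancels it, disturbance and outcome come out statistically (almost) independent, which is exactly the kind of faithfulness violation a purely correlational learner would misread as “no effect”.

```python
# Illustrative toy simulation only -- not the formal model of the project.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

disturbance = rng.normal(size=n)                # environmental perturbation E
sensor_noise = 0.1 * rng.normal(size=n)
action = -(disturbance + sensor_noise)          # agent compensates: A ~= -E
outcome = disturbance + action + 0.1 * rng.normal(size=n)   # O = E + A + noise

# E -> O is a real causal link, yet compensation makes the E-O correlation
# vanish: the causal structure is unfaithful to the observed statistics.
print(np.corrcoef(disturbance, outcome)[0, 1])   # close to 0
print(np.corrcoef(disturbance, action)[0, 1])    # close to -1
```

The point of the toy example is only to show why an agent embedded in the loop cannot read affordances off raw correlations; the project’s aim is the mathematical treatment of this situation.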

My other project: www.batsandseahorses.com

Contact


Phone: +358503444045