Keynotes

Frances Egan (Rutgers University): The Role of Representation in Computational Models

Much of cognitive neuroscience construes cognitive capacities as representational capacities, or as involving representation in some way. Computational theories of vision, for example, typically posit structures that represent edges in the world. Neurons are often said to represent elements of their receptive fields. Despite the widespread use of representational talk in computational theorizing, there is surprisingly little consensus about how such claims are to be understood. Some argue that representational talk is to be taken literally; others claim that it is merely a useful fiction. In this talk I sketch an alternative account of the nature and function of representation in computational cognitive models that rejects both of these views.

Tessa Dekker (University College London): How do we develop an optimised sensorimotor system?

Humans are born with exceptionally poor visual and motor skills, but by the time they reach adulthood, most vision and action is highly proficient and near-automatic. My lab investigates the processes that support this development. I will present data showing that adults are highly adept at accounting for the noise in their sensory estimates and the imprecisions of their movements. This allows them to form judgments and choose actions with a high chance of success, even in highly complex environments. Our recent research shows that this is not a trivial ability to acquire: using a combination of model-driven neuroimaging and behavioural methods, we demonstrate that children as old as 10-11 years do not correctly account for the noise in their system during vision and visually guided action, and as a result are placed at unnecessary risk of failing at basic tasks. I will give some examples of tasks that are substantially affected by this development, and present modelling work aimed at disentangling which processes drive the shift from suboptimal sensorimotor processing in childhood to the highly optimised performance of adults.
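
The optimality benchmark at issue can be made concrete with a toy simulation of reliability-weighted cue combination (a minimal sketch for illustration only, not material from the talk; all noise values are invented). An observer who weights each noisy estimate by its reliability ends up with a lower-variance judgment than one who ignores the noise, which is the kind of noise-blind strategy the children in these studies appear to use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: two noisy internal estimates of the same target,
# e.g. a visual estimate and a proprioceptive one.
target = 0.0
sigma_visual, sigma_motor = 1.0, 2.0

n_trials = 100_000
visual = rng.normal(target, sigma_visual, n_trials)
motor = rng.normal(target, sigma_motor, n_trials)

# Reliability-weighted combination: each cue is weighted in proportion
# to its inverse variance (its precision).
w = (1 / sigma_visual**2) / (1 / sigma_visual**2 + 1 / sigma_motor**2)
optimal = w * visual + (1 - w) * motor

# Noise-blind combination: both cues weighted equally, regardless of noise.
naive = 0.5 * (visual + motor)

print(f"reliability-weighted variance: {optimal.var():.2f}")  # ~0.80
print(f"equal-weight variance:         {naive.var():.2f}")    # ~1.25
```

The precision-weighted estimator achieves the lowest variance of any linear combination of the two cues, so failing to track the noise costs accuracy on every trial.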

Noah Goodman (Stanford University): Some Thoughts on and Examples of How Language Works

Máté Lengyel (University of Cambridge & Central European University): A Bayesian approach to internal models

Our percepts rely on an internal model of the environment, relating physical processes of the world to inputs received by our senses, and thus their veracity critically hinges upon how well this internal model is adapted to the statistical properties of the environment. We use a combination of Bayesian inference-based theory and novel data analysis techniques applied to a range of human behavioural experiments, as well as electrophysiological recordings from V1, to reveal the principles by which complex internal models (1) are represented in neural activities, (2) are adapted to the environment, (3) can be shown to be task-independent, and (4) generalise across very different response modalities.
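
As a toy illustration of the Bayesian framework this programme builds on (a sketch of my own, not taken from the talk; all parameter values are invented), a conjugate Gaussian internal model makes the logic explicit: the percept is the posterior obtained by combining a prior over a latent stimulus feature with a noisy sensory observation, each weighted by its precision.

```python
import numpy as np

# Internal model: Gaussian prior over a latent stimulus feature
# (all parameter values here are illustrative).
mu_prior, sigma_prior = 0.0, 2.0

# Sensory likelihood: the senses deliver a noisy observation of the feature.
sigma_noise = 1.0
observation = 3.0

# Conjugate Gaussian update: the posterior precision is the sum of the
# precisions, and the posterior mean is their precision-weighted average.
precision_post = 1 / sigma_prior**2 + 1 / sigma_noise**2
mu_post = (mu_prior / sigma_prior**2 + observation / sigma_noise**2) / precision_post
sigma_post = np.sqrt(1 / precision_post)

print(f"posterior mean = {mu_post:.2f}, posterior sd = {sigma_post:.2f}")
# The percept (2.40) is pulled from the observation (3.0) toward the
# prior mean (0.0); how far depends on how well the internal model is
# adapted to the statistics of the environment.
```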

Iris van Rooij (Radboud University Nijmegen): Why cognitive scientists should care about computational complexity

Computational complexity theory studies the computational resources (e.g., time, space, or randomness) required for solving computational problems. Its analytical tools are not yet commonly taught in cognitive science, and many researchers still go about their business without much concern for the computational resources presupposed by their theories and models. Yet there are good reasons for cognitive scientists to care more about computational complexity. In this talk I will explain how computational complexity theory provides useful analytical tools to guide and constrain computational-level theorizing, and how it can help bring rigor and clarity to long-standing debates that center on the notion of computational intractability, such as rationality vs. irrationality, modularity vs. domain-generality, and cognitivism vs. enactivism.
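
To make intractability concrete (an illustrative sketch, not an example from the talk), consider a brute-force computation of the sort that complexity analysis rules out as a plausible cognitive mechanism. The search space doubles with every additional item, so no amount of neural hardware rescues the strategy at realistic input sizes:

```python
from itertools import combinations

def best_subset(values, target):
    """Exhaustively search all 2**n subsets for the one whose sum is
    closest to the target. The exponential search space is what makes
    theories that implicitly presuppose this kind of exhaustive
    computation intractable."""
    best, best_err = (), float("inf")
    for k in range(len(values) + 1):
        for subset in combinations(values, k):
            err = abs(sum(subset) - target)
            if err < best_err:
                best, best_err = subset, err
    return best

# Five items mean 2**5 = 32 candidate subsets; 20 items already mean
# 2**20 (about a million), and the count doubles with each item added.
print(best_subset([3, 7, 1, 8, 4], target=11))  # (3, 8)
```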

Christopher Summerfield (University of Oxford): Ingredients of intelligence in minds and machines

There is considerable current excitement about advances in machine learning. However, artificial systems fail on a number of problems that humans master. For example, humans efficiently generate temporally extended behaviours (planning), can generalise abstract information (far transfer) and can perform multiple tasks in quick succession (multitasking). I will describe recent experiments that examine the neural and computational mechanisms underpinning these abilities, and discuss how the resultant insights might be used to build stronger AI.