- Neurodynamics of Intentional Behavior Generation: Biological Basics and Control Applications
- Computational Resources in Neural Network Models
- Neural Computing in Computer Graphics
- Temporal Extensions of Self-Organizing Maps
- How do biological neurons learn? Insights from computational modelling of neurobiological experiments
Speaker: Robert Kozma, FedEx Institute of Technology, University of Memphis, TN, USA
This tutorial reviews computational intelligence methods of sensory perception and cognitive functions in animals, humans, and artificial devices. Top-down symbolic methods and bottom-up sub-symbolic approaches are described. In recent years, computational intelligence, cognitive science, and neuroscience have achieved a level of maturity that allows the integration of top-down and bottom-up approaches in modeling the brain. Continuous adaptation and learning are key components of computationally intelligent devices, achieved using dynamic models of cognition. Human cognition granulates the seemingly homogeneous temporal sequences of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas. These chunks are accessed during action selection and decision making as part of the intentional cognitive cycle. Intentional dynamics is a key component of biological intelligence, and it provides clues for developing robust intelligence in man-made devices. This tutorial gives an overview of intentional behaviors in biological and artificial systems.
Presenter: Jiri Sima, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, Czech Republic
(Artificial) neural network models enrich the traditional repertoire of formal computational models by introducing new sources of efficient computation. These characteristics establish a rich taxonomy of neural network models whose general-purpose computational capabilities vary from subregular to super-Turing power, and whose ability to implement particular important (logic, arithmetic, recognition) functions is now understood in a satisfactory manner. In our tutorial, we will survey the taxonomy of formal neural net models with respect to their computational capabilities. Along this line we will focus on the main complexity-theoretic results related to practical neurocomputing, which are not sufficiently known in the neural network community, and we will clarify their impact on the field in an accessible way. For example, it is a widely known fact that a single layer of perceptrons cannot compute the XOR (PARITY) function, which requires two layers; but a deeper theoretical result concerning the necessity of three layers for computing the Boolean inner product is far less widely known. In addition, we will also discuss non-traditional resources of neurocomputing, such as energy, temporal coding, and liquid state, which have been analyzed only recently.
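The two-layer claim above can be illustrated concretely. The following minimal sketch (with hand-chosen weights and thresholds that are illustrative, not taken from the tutorial) shows XOR computed by a hidden layer of two perceptrons (OR and NAND) feeding a single output perceptron (AND):

```python
# XOR is not linearly separable, so no single perceptron computes it;
# a hidden layer of two perceptrons plus one output perceptron suffices.

def step(x):
    """Heaviside threshold activation."""
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor(x1, x2):
    h1 = perceptron([x1, x2], [1, 1], -0.5)    # hidden unit 1: OR
    h2 = perceptron([x1, x2], [-1, -1], 1.5)   # hidden unit 2: NAND
    return perceptron([h1, h2], [1, 1], -1.5)  # output unit: AND

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor(a, b)}")
```

The construction works because XOR(a, b) = AND(OR(a, b), NAND(a, b)), and each of OR, NAND, and AND is linearly separable on its own.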
Presenter: Chi-Sing Leung, Department of Electronic Engineering,
City University of Hong Kong, Kowloon Tong, Hong Kong
This tutorial will discuss two issues of neural computing in computer graphics. In the first section, we will discuss how to use a neural network model for global illumination. In the second section, we will discuss several ways to speed up the simulation of neural networks on consumer-level graphics hardware. The first section introduces a computationally more efficient, mathematically simpler, and visually identical substitute for spherical harmonics (SH), called the spherical radial basis function (SRBF). It represents lighting including soft shadows, caustics, inter-reflections, HDR environment maps, and other global-illumination effects. The tutorial covers the underlying mathematics (demonstrating its simplicity), describes its implementation (exhibiting its computational efficiency), and presents the visual results (showing its comparability to SH). In the second section, we will discuss the efficient implementation of neural computing models on graphics hardware. We will first give an introduction to GPUs. We then present several GPU implementation examples of neural computing models, including cellular neural networks and vector quantization.
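To make the SRBF idea concrete, here is a minimal sketch of evaluating a lighting function represented as a sum of spherical Gaussian lobes, one common choice of spherical radial basis function. The kernel form `w * exp(lam * (v·c − 1))`, and all lobe centers, sharpnesses, and weights below, are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

def srbf_eval(v, centers, lambdas, weights):
    """Evaluate a sum of spherical Gaussians at unit direction v.

    Each lobe is w * exp(lam * (dot(v, c) - 1)): it peaks at its
    center direction c and falls off with angular distance, with
    lam controlling the sharpness of the lobe.
    """
    v = v / np.linalg.norm(v)  # ensure v is a unit direction
    out = 0.0
    for c, lam, w in zip(centers, lambdas, weights):
        out += w * np.exp(lam * (np.dot(v, c) - 1.0))
    return out

# Two hypothetical lobes: one overhead, one to the side.
centers = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
lambdas = [8.0, 4.0]
weights = [1.0, 0.5]

print(srbf_eval(np.array([0.0, 0.0, 1.0]), centers, lambdas, weights))
```

Because each lobe depends on the direction only through a dot product, evaluation maps directly onto GPU shader arithmetic, which is one reason such representations are attractive for real-time rendering.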
Presenter: Igor Farkaš, Department of Applied Informatics, Comenius University Bratislava, Slovakia
Summary: The self-organizing map (SOM), since its introduction in the early eighties, has become one of the most widespread neural network models, used mostly for clustering and topographic visualization of high-dimensional vectorial data. Recently there has been a surge of interest in extending topographic maps of vectorial data to more general data structures, such as sequences or trees. However, there is no general consensus as to how best to process sequences using topographic maps, and this topic remains an active focus of neurocomputational research. The representational capabilities and internal representations of the models are not yet well understood. The aim of the tutorial is to overview various models of temporally extended SOMs, as they have been introduced in the literature during the last decade. We also elaborate on their mutual differences and similarities, and their potential use in the field of neural computation.
How do biological neurons learn? Insights from computational modelling of neurobiological experiments
Presenter: Lubica Benuskova, Dept. of Computer Science, University of Otago, Dunedin, New Zealand
Summary: Artificial neural networks (ANNs) are inspired by the properties of biological neurons and biological neural networks with respect to their architecture, computation, and learning from examples. In ANNs, as in the brain, neurons learn by changing the weights of their synaptic connections. In the context of supervised and unsupervised modes of ANN training, and of the rate versus spike models of a neuron, we overview several recent neurobiological experimental results that have revealed new facts about the rules of synaptic modification in biological neurons. In particular, the new discoveries concern the existence of a moving threshold for synaptic strengthening, the dependence of the sign of synaptic change on the timing of pre- and postsynaptic spikes, and synaptic scaling. We will also present a proposal for a new formula that extends the learning rule with genetic influences on learning.
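The timing-dependent sign of synaptic change mentioned above is commonly modeled as spike-timing-dependent plasticity (STDP). The following sketch shows the standard pair-based exponential form; the amplitudes and time constants are illustrative textbook-style values, not parameters from the tutorial's proposed rule:

```python
import math

# Pair-based STDP: the sign of the weight change depends on whether the
# presynaptic spike precedes (potentiation) or follows (depression) the
# postsynaptic spike. Parameter values below are illustrative.
A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        # Pre before post: long-term potentiation, decaying with |dt|.
        return A_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        # Post before pre: long-term depression, decaying with |dt|.
        return -A_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(10.0), stdp_dw(-10.0))
```

Note that this captures only the pairwise timing dependence; the moving threshold and synaptic scaling discussed in the tutorial require additional state beyond a single spike-pair rule.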