Research Projects
Brain modeling and simulations:
Understanding the nonlinear dynamics of the olfactory bulb (OB) is essential to
modeling the brain and nervous system. On the basis of our study of OB
activities and our analysis of the conditions governing neural oscillations and
the nature of odor-receptor interactions, we proposed and developed models
and simulations addressing the questions of how the brain recognizes odors, how
it works in a noisy natural environment, and why synchronization is used for
decoding brain circuits, questions that remain unsolved.
We simulated the dynamic behavior of the olfactory system in order to
understand the way in which odors are represented and processed by the brain. Further
experiments with artificial olfactory systems (AOS), based on different sets of
parameters, will allow us to simulate and study how neuropathological changes
appear in the cortical areas that receive projections from the olfactory bulb.
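The model equations are not given on this page, but the role of synchronization among coupled neural oscillators can be illustrated with a minimal Kuramoto-style sketch; this is a generic textbook model, not the OB model above, and all parameter values are hypothetical:

```python
import math
import random

def order_parameter(phases):
    # magnitude of the mean phase vector: ~1 = synchronized, ~0 = incoherent
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(n=30, coupling=2.0, dt=0.02, steps=1500, seed=0):
    # Kuramoto model: d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    rng = random.Random(seed)
    omega = [1.0 + 0.1 * rng.gauss(0.0, 1.0) for _ in range(n)]  # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]  # incoherent start
    for _ in range(steps):
        theta = [t_i + dt * (omega[i] + coupling *
                             sum(math.sin(t_j - t_i) for t_j in theta) / n)
                 for i, t_i in enumerate(theta)]
    return order_parameter(theta)
```

With coupling well above the critical value, the order parameter approaches 1 (the population synchronizes); with zero coupling, the phases stay incoherent.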
Odor threshold, odor identification, detection, and recognition are basic
measures in medical studies for the detection and diagnosis of
neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease,
and schizophrenia.
Self-organizing maps:
Self-Organizing Maps (SOM) have become quite popular for tasks in
data visualization, pattern classification, and natural language processing, and
can be seen as one of the major concepts in artificial neural networks today.
Their general idea is to approximate a high-dimensional, previously
unknown input distribution with a lower-dimensional neural network structure,
with the goal of modeling the topology of the input space as well as possible.
Classical Self-Organizing Maps read the input values one by one, in random but
sequential order, and thus adjust the network structure over space: the network
is built while reading larger and larger parts of the input. In contrast to
this approach, we present a Self-Organizing Map that processes the whole input
in parallel and organizes itself over time. The main reason for parallel input
processing is that existing knowledge can be used to recognize parts of
patterns in the input space that have already been learned. This way, networks
can be developed that do not reorganize their structure from scratch every time
a new set of input vectors is presented, but rather adjust their internal
architecture in accordance with previous mappings. One basic application could
be modeling the whole-part relationship through layered architectures.
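The classical sequential scheme can be sketched as follows; the 1-D chain of units, the decay schedules, and all parameter values are hypothetical choices for illustration, not the parallel architecture proposed here:

```python
import math
import random

def train_som(data, units=5, epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    # Classical sequential SOM on a 1-D chain of units (toy setup).
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(units)]
    total = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.sample(data, len(data)):  # random but sequential presentation
            # best-matching unit: the weight vector closest to the input
            bmu = min(range(units),
                      key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(dim)))
            frac = t / total
            lr = lr0 * 0.1 ** frac             # learning rate decays over time
            sigma = sigma0 * 0.01 ** frac      # neighborhood shrinks over time
            for i in range(units):
                # Gaussian neighborhood pulls the BMU's neighbors along
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    w[i][d] += lr * h * (x[d] - w[i][d])
            t += 1
    return w
```

After training on clustered 1-D data, the units spread out to cover the input distribution, which is exactly the topology-preserving behavior described above.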
Radial basis function neural
networks:
The
learning strategy used in RBFNs consists of approximating an unknown function
with a linear combination of nonlinear functions called basis functions. The
latter have radial symmetry with respect to a center. We explore a new strategy
of shape-adaptive radial basis functions based on potential functions, together
with an optimization procedure for positioning the centers during the learning
process. We propose static and dynamic versions of our learning algorithm. The
approach is distinct from conventional approaches that use space segmentation
or neural network energy convergence.
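As background for the basis-function idea (not the shape-adaptive scheme itself, whose details are not given here), a plain fixed-width Gaussian RBF interpolation can be sketched as follows; the width and center placement are illustrative assumptions:

```python
import math

def gaussian_rbf(r, width=0.5):
    # radially symmetric basis function: depends only on distance r to a center
    return math.exp(-(r / width) ** 2)

def solve(A, b):
    # naive Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(xs, ys, width=0.5):
    # exact interpolation: one Gaussian basis function centered at each sample
    A = [[gaussian_rbf(abs(x - c), width) for c in xs] for x in xs]
    return solve(A, ys)

def rbf_predict(x, centers, weights, width=0.5):
    # linear combination of the radially symmetric basis functions
    return sum(w * gaussian_rbf(abs(x - c), width) for w, c in zip(weights, centers))
```

Fitting nine samples of sin(x) on [0, pi] this way reproduces the training points exactly and approximates the function well between them.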
Medical image processing:
The basic idea is the segmentation and classification of magnetic resonance
images (MRI) of the brain. The purpose is to segment the white matter in the
brain from the gray matter. The white matter contains information about the blood
flow in the brain, which is then used along with positron emission tomography
(PET) images as a possible diagnostic tool. The process of segmentation is a very
complex one, due to the non-uniformity of MR images. Human experts have long been
the most reliable segmenters. The aim is to create an intelligent aide in the
form of a neural network.
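The neural network aide itself is not specified on this page; as a minimal, purely illustrative baseline, tissue classes can be separated by clustering pixel intensities with 1-D k-means (all intensity values hypothetical):

```python
def kmeans_1d(values, k=2, iters=20):
    # simple 1-D k-means on pixel intensities (a baseline, not the neural approach)
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[nearest].append(v)
        centers = [sum(b) / len(b) if b else centers[j]
                   for j, b in enumerate(buckets)]
    return sorted(centers)

def segment(values, centers):
    # label each intensity with the index of its nearest cluster center
    return [min(range(len(centers)), key=lambda c: abs(v - centers[c]))
            for v in values]
```

On bimodal intensities, the two centers settle on the two tissue modes and every pixel is labeled by its nearer center; real MR segmentation must additionally cope with the intensity non-uniformity mentioned above.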
Created and maintained by Dr. Iren Valova.
Last revised May 30, 2003.