
Introduction to Neural Networks

An Artificial Neural Network (ANN) is a system loosely modeled on the human brain. The field goes by many names, such as connectionism, parallel distributed processing, neuro-computing, natural intelligent systems, machine learning, and artificial neural networks. The architecture is inherently parallel-friendly and, without much modification, scales beyond the one or two processors of the von Neumann architecture. A neural network can account for almost any functional dependency: the network discovers (learns, models) the nature of the dependency itself, with no need to postulate a model in advance or to amend it later.
Neural networks are a powerful technique for solving many real-world problems. They can learn from experience to improve their performance and to adapt to changes in the environment. In addition, they can cope with incomplete information or noisy data, and they are especially effective in situations where it is not possible to define the rules or steps that lead to the solution of a problem.

They typically consist of many simple processing units wired together in a complex communication network.
There is no central CPU following a logical sequence of rules; indeed, there is no fixed set of rules or program at all. This structure is closer to the physical workings of the brain and leads to a new type of computer that is rather good at a range of complex tasks.

Neural networks are a branch of the field known as "Artificial Intelligence".
In a nutshell, a neural network can be considered a black box that predicts an output pattern when it recognizes a given input pattern. Once trained, the neural network recognizes similarities when presented with a new input pattern and produces a predicted output pattern.

In principle, NNs can compute any computable function, i.e. they can do everything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by a neural network.

In practice, NNs are especially useful for mapping problems that tolerate some error and for which plenty of example data is available, but to which hard and fast rules cannot easily be applied.

Existing papers suggest different categorizations for Neural Networks. The following list represents my view on the subject and may differ from other publications.



Neural Network applications can be grouped into the following categories:

Clustering:

A clustering algorithm explores the similarity between patterns and places similar patterns in a cluster. Best-known applications include data compression and data mining.
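As a concrete illustration of the clustering idea, here is a minimal sketch of k-means, a classical (non-neural) clustering algorithm that likewise groups similar patterns around prototype centers. The data and function names are made up for the demo.

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Group 1-D points into k clusters by alternating assignment and averaging."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.3]
print(kmeans(data, 2))  # two centers, one near each group of points
```

Neural approaches to clustering (competitive learning, self-organizing maps) follow the same spirit: prototypes drift toward the data they represent.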
Classification/Pattern recognition:

The task of pattern recognition is to assign an input pattern (like handwritten symbol) to one of many classes. This category includes algorithmic implementations such as associative memory.

Function approximation:
The task of function approximation is to find an estimate of an unknown function f() subject to noise. Many engineering and scientific disciplines require function approximation.
Prediction/Dynamical Systems:
The task is to forecast future values of time-sequenced data. Prediction has a significant impact on decision-support systems. Prediction differs from function approximation in that it takes the time factor into account:
here the system is dynamic and may produce different results for the same input data depending on the system state (time).
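A toy sketch of prediction: a single linear neuron learns to forecast the next value of a sequence from its two most recent values (an AR(2)-style model). The sequence and its coefficients are invented for illustration.

```python
# Build a deterministic sequence where each value depends on the previous two.
series = [0.0, 1.0]
for _ in range(50):
    series.append(0.6 * series[-1] + 0.3 * series[-2])

# A linear neuron with two weights learns the dependency by error correction.
w1, w2, lr = 0.0, 0.0, 0.1
for _ in range(200):                      # repeated passes over the sequence
    for t in range(2, len(series)):
        pred = w1 * series[t - 1] + w2 * series[t - 2]
        err = series[t] - pred
        w1 += lr * err * series[t - 1]    # adjust weights toward lower error
        w2 += lr * err * series[t - 2]

print(round(w1, 2), round(w2, 2))  # approaches the true coefficients 0.6 and 0.3
```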

Neural Network Types:

Neural Network types can be classified based on the following attributes:

Connection Type
- Static (feedforward)
- Dynamic (feedback)
Topology
- Single layer
- Multilayer
- Recurrent
- Self-organized
- . . .
Learning Methods
- Supervised
- Unsupervised

Learning Process:
One of the most important aspects of a Neural Network is the learning process. To describe that process I am going to use a nice analogy from Thomas Lahore:

The learning process of a Neural Network can be viewed as reshaping a sheet of metal, which represents the output (range) of the function being mapped. The training set (domain) acts as energy required to bend the sheet of metal such that it passes through predefined points. However, the metal, by its nature, will resist such reshaping. So the network will attempt to find a low energy configuration (i.e. a flat/non-wrinkled shape) that satisfies the constraints (training data).
Learning can be done in supervised or unsupervised manner.

In supervised training, both the inputs and the outputs are provided.
The network then processes the inputs and compares its resulting outputs against the desired outputs. Errors are then calculated, causing the system to adjust the weights which control the network. This process occurs over and over as the weights are continually tweaked.
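The supervised loop described above can be sketched in miniature: a single linear neuron (output = w*x + b) repeatedly compares its output with the desired output and nudges its weights in proportion to the error (the delta rule). The target function here is made up for the demo.

```python
samples = [(x, 2.0 * x + 1.0) for x in [-2, -1, 0, 1, 2]]  # target: y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):                 # over and over, continually tweaking
    for x, desired in samples:
        actual = w * x + b
        error = desired - actual     # compare actual vs. desired output
        w += lr * error * x          # adjust weights proportionally to error
        b += lr * error

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```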

In unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features to use to group the input data. This is often referred to as self-organization or adaptation.

The following geometrical interpretations demonstrate the learning process within different neural models:

Perceptrons in two dimensional space

Perceptrons are the simplest form of Neural Nets. The learning process involves changing the weights by an amount proportional to the difference between the desired output and the actual output. This example demonstrates a Network of two Perceptrons learning to classify a set of data.
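The original example is an interactive demo; as a stand-alone sketch, here is a single perceptron (step activation) learning a two-dimensional decision boundary. The weights change in proportion to the difference between the desired and actual output, exactly as described above. The data points are invented for illustration.

```python
def step(z):
    return 1 if z >= 0 else 0

# Two linearly separable clusters in the plane, labeled 0 and 1.
data = [((0.0, 0.0), 0), ((1.0, 0.2), 0), ((0.2, 1.0), 0),
        ((3.0, 3.0), 1), ((2.5, 3.5), 1), ((3.5, 2.8), 1)]

w = [0.0, 0.0]
bias, lr = 0.0, 0.1
for _ in range(100):
    for (x1, x2), desired in data:
        actual = step(w[0] * x1 + w[1] * x2 + bias)
        delta = lr * (desired - actual)   # proportional to the output error
        w[0] += delta * x1
        w[1] += delta * x2
        bias += delta

predictions = [step(w[0] * x1 + w[1] * x2 + bias) for (x1, x2), _ in data]
print(predictions)  # matches the class labels once the boundary is found
```

For linearly separable data like this, the perceptron convergence theorem guarantees the rule finds a separating boundary in a finite number of updates.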

Perceptron as three dimensional Classifier

This example demonstrates how a single Perceptron can learn to create a decision boundary in a three-dimensional space.

Neural Network as linear filter

Linear Networks are well suited to filtering data and linear function approximation. This example shows how a single neuron learns to linearly separate two classes of data.

Neural Network as function Approximator

Feed-forward Networks are the best-known and most widely applied type of Neural Net. They are well suited to many different types of applications. This example demonstrates the learning process of a two-layer network performing non-linear function approximation.
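The two-layer idea can be sketched directly: a tiny feed-forward network with one tanh hidden layer, trained by backpropagation to approximate the non-linear function y = x squared. The target function, layer size, and learning rate are assumptions chosen for the demo.

```python
import math, random

rng = random.Random(1)
H = 8                                          # hidden units
w1 = [rng.uniform(-1, 1) for _ in range(H)]    # input -> hidden weights
b1 = [0.0] * H
w2 = [rng.uniform(-1, 1) for _ in range(H)]    # hidden -> output weights
b2 = 0.0
lr = 0.05

samples = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return h, y

for _ in range(3000):
    for x, target in samples:
        h, y = forward(x)
        err = y - target
        for i in range(H):                     # backpropagate the error
            grad_h = err * w2[i] * (1 - h[i] ** 2)
            w2[i] -= lr * err * h[i]
            b1[i] -= lr * grad_h
            w1[i] -= lr * grad_h * x
        b2 -= lr * err

_, y = forward(0.5)
print(round(y, 2))  # close to 0.25
```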

Unsupervised Neural Network & Self Organized Maps

One of the fascinating areas of Neural Networks is unsupervised learning. In this process a network tries to learn and group the presented data into clusters on its own.
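A minimal sketch of this self-organizing behavior is competitive ("winner-take-all") learning, the core mechanism behind self-organizing maps: each prototype unit drifts toward the inputs it wins, so the units discover the clusters on their own, with no desired outputs ever given. The data and initial positions are made up for the demo.

```python
import random

rng = random.Random(0)
# Two made-up clouds of 2-D points, one near (1, 1) and one near (5, 5).
cluster_a = [(rng.gauss(1.0, 0.1), rng.gauss(1.0, 0.1)) for _ in range(30)]
cluster_b = [(rng.gauss(5.0, 0.1), rng.gauss(5.0, 0.1)) for _ in range(30)]
points = cluster_a + cluster_b
rng.shuffle(points)

units = [[0.0, 0.0], [6.0, 6.0]]   # two prototype vectors
lr = 0.2
for _ in range(20):
    for x, y in points:
        # The unit nearest the input wins and moves toward it.
        win = min(units, key=lambda u: (u[0] - x) ** 2 + (u[1] - y) ** 2)
        win[0] += lr * (x - win[0])
        win[1] += lr * (y - win[1])

print([[round(c, 1) for c in u] for u in units])  # near the two cluster centers
```

A full self-organizing map adds a neighborhood function so that nearby units move together, preserving the topology of the input space.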

Decision Boundaries

Different learning methods and respective decision boundaries


Last Update: 01/12/2004