
Artificial Neural Networks (ANNs)

Types of ANN
[Types of Artificial Neural Networks - Great Learning]
 

 

- Overview

With the development of neural networks, various tasks once considered unimaginable can now be accomplished with ease. Tasks like image recognition, speech recognition, and finding deeper relationships in datasets have become easier. Sincere thanks are due to the outstanding researchers in the field whose research and discoveries have helped us harness the true power of neural networks.

Neural networks are the cornerstone of today's technological breakthroughs in deep learning. A neural network can be thought of as a massively parallel collection of simple processing units capable of storing knowledge and applying that knowledge to make predictions.

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning (ML) and are at the heart of deep learning (DL) algorithms. Their name and structure are inspired by the human brain, mimicking the way biological neurons signal one another. 

 

- Artificial Neural Networks

Artificial neural networks (ANNs) are a branch of machine learning (ML) models inspired by the neuronal organization found in the biological neural networks in animal brains. 

An ANN is made of connected units or nodes called artificial neurons, which loosely model the neurons in a brain. These are connected by edges, which model the synapses in a brain. 

An artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. 

Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least 2 hidden layers. 

ANNs are used for predictive modeling, adaptive control, and other applications where they can be trained via a dataset. They are also used to solve problems in artificial intelligence. Networks can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information. 

 

- How an ANN Works

An artificial neural network (ANN) consists of layers of nodes: an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, is connected to other nodes and has an associated weight and threshold. If the output of an individual node is above its threshold, that node is activated and sends data to the next layer of the network. Otherwise, no data is passed on to the next layer.
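
As a rough illustration, the following Python sketch (NumPy only; the layer sizes, weights, and threshold value are invented purely for the example) shows this flow: each neuron forms a weighted sum of its inputs plus a bias and forwards a signal to the next layer only if that sum exceeds its threshold.

import numpy as np

def threshold_neuron(inputs, weights, bias, threshold=0.0):
    # One artificial neuron: weighted sum of its inputs plus a bias, fires only above the threshold.
    weighted_sum = np.dot(inputs, weights) + bias
    return weighted_sum if weighted_sum > threshold else 0.0

def layer_forward(inputs, weight_matrix, biases, threshold=0.0):
    # Feed the same inputs to every neuron in a layer and collect their outputs.
    return np.array([threshold_neuron(inputs, w, b, threshold)
                     for w, b in zip(weight_matrix, biases)])

# Hypothetical sizes purely for illustration: 3 inputs -> 4 hidden neurons -> 1 output neuron.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])                                   # input layer values
hidden = layer_forward(x, rng.normal(size=(4, 3)), rng.normal(size=4))
output = layer_forward(hidden, rng.normal(size=(1, 4)), rng.normal(size=1))
print("hidden activations:", hidden)
print("network output:", output)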

Neural networks rely on training data to learn and improve their accuracy over time. But once these learning algorithms are fine-tuned to improve accuracy, they become powerful tools in computer science and artificial intelligence, allowing us to classify and cluster data at high speed. Speech recognition or image recognition tasks can take minutes instead of hours compared to manual recognition by human experts. One of the most famous neural networks is Google's search algorithm.

 

- Neural Networks in Deep Learning (DL)

Neural networks are the core machinery that make DL so powerful. Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated. 

A neural network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to the elements they contain. The development of neural networks has been key to teaching computers to think and understand the world the way we do, while retaining the innate advantages they hold over us, such as speed, accuracy, and lack of bias. 

Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. 

Neural networks can also extract features that are fed to other algorithms for clustering and classification; so you can think of deep neural networks as components of larger machine-learning applications involving algorithms for reinforcement learning, classification and regression.

 

- Deep Learning Algorithms and ANNs

Deep learning algorithms such as ANNs are able to process technical and fundamental information from news in order to detect features and patterns that can affect pricing behavior in financial markets. The power of deep learning lies in the fact that it can manage large amounts of data; at the same time, ANNs can learn from financial data and understand price movements in stock markets in a matter of seconds. 

An Artificial Intelligence System (AIS) can suggest optimal solutions and, by minimizing the risk in trading strategies, produce predictions calculated from the information gathered; in combination with other intelligent methods such as natural language processing (NLP), much more accurate results can be produced. 

Hybrid deep learning systems have been widely used in financial services, detecting new market trends and offering significant profits to investors by forecasting prices. All this is possible without applying a new investment theory, because deep learning algorithms can discover structures in the data that have never been studied or explained before. 

 

- Perceptrons - How Neural Networks Work

A normal neural network consists of multiple layers called input, output, and hidden layers. In each layer, each node (neuron) is connected to all nodes (neurons) in the next layer through parameters called "weights". 

A neural network consists of nodes called "perceptrons" that perform the necessary computations and detect features in the data. These perceptrons try to reduce the final cost error by adjusting their weight parameters. A single perceptron can be thought of as a single-layer neural network. 

A perceptron's operation can be described as follows: when data is fed into the model with (initially random) weights, the perceptron computes a weighted sum of the inputs. Based on this value, the activation function determines the activation state of the neuron. The output of the perceptron can then be used as input to the next layer of neurons. 
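
A minimal sketch of that computation in Python is shown below (the step activation and the randomly drawn weights are illustrative assumptions, not the only possible choices):

import numpy as np

def step_activation(weighted_sum):
    # Simple activation function: the perceptron either fires (1) or stays inactive (0).
    return 1.0 if weighted_sum >= 0.0 else 0.0

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the activation function.
    return step_activation(np.dot(inputs, weights) + bias)

rng = np.random.default_rng(42)
x = np.array([1.0, 0.0, 1.0])      # some input data
w = rng.normal(size=3)             # random initial weights
b = rng.normal()                   # random initial bias
print("perceptron output:", perceptron(x, w, b))   # could serve as input to the next layer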

 

- Multilayer Perceptrons - Deep Neural Networks

Multilayer perceptrons, on the other hand, are called deep neural networks. A perceptron is activated only when its input satisfies the activation condition. Training proceeds as follows: 

  • Initially, the dataset should be fed into the input layer and then flow to the hidden layer.
  • The connections that exist between the two layers randomly assign weights to the inputs.
  • Add a bias to each input. Bias is the constant used in the model to best fit the given data.
  • The weighted sum of all inputs, plus the bias, is sent to a function that decides the neuron's activation state. This function is called the activation function.
  • The node that needs to be triggered for feature extraction is determined according to the output value of the activation function.
  • The final output of the network is then compared to the labeled data required for our dataset to calculate the final cost error. The cost error actually tells us how "bad" our network is. Therefore, we want the error to be as small as possible.
  • Adjust the weights by backpropagation, thereby reducing the error. This back-propagation process can be thought of as the central mechanism of neural network learning. It basically fine-tunes the weights of the deep neural network to reduce the cost value.

 

In simple terms, what we usually do when training a neural network is to calculate the loss (error value) of the model and check if it decreases. If the error is higher than expected, we have to update model parameters such as weights and bias values. Once the loss is below the expected error bound, we can use the model.  
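
The sketch below walks through that loop end to end in Python (NumPy only; the XOR dataset, the network size, the sigmoid activation, the squared-error loss, the learning rate, and the stopping bound are all arbitrary choices made for illustration):

import numpy as np

rng = np.random.default_rng(0)

# Tiny labeled dataset (XOR): 4 examples with 2 features each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases: 2 inputs -> 8 hidden neurons -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, loss_bound = 0.5, 0.01

for epoch in range(20000):
    # Forward pass: weighted sums plus biases, squashed by the activation function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # The loss (mean squared error) tells us how "bad" the network currently is.
    loss = np.mean((out - y) ** 2)
    if loss < loss_bound:            # stop once the error is below the expected bound
        break

    # Backpropagation: push the error back through the layers to get update directions.
    d_out = (out - y) * out * (1 - out)        # error signal times sigmoid derivative
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent step: fine-tune weights and biases to reduce the cost value.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(f"stopped at epoch {epoch} with loss {loss:.4f}")
print("predictions:", out.round(3).ravel())

If the loop exhausts its epochs before the loss drops below the bound, the final loss value simply tells us the model is not yet good enough to use.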

 

- The Different Types of Artificial Neural Networks

Understanding the different types of Artificial Neural Networks (ANNs) not only helps in improving existing AI technology but also helps us to know more about the functioning of our own neural networks, upon which they are based. 

The nine types of neural networks are listed below: 

  • Perceptron
  • Feed Forward Neural Network
  • Multilayer Perceptron
  • Convolutional Neural Network
  • Radial Basis Functional Neural Network
  • Recurrent Neural Network
  • LSTM – Long Short-Term Memory
  • Sequence to Sequence Models
  • Modular Neural Network

 

 

[More to come ...]
 