Hardware Emulation Study of Neuronal Processing in Cortex for Pattern Recognition
Abstract
An Artificial Neural Network (ANN) is a computing model patterned after the neural network of the biological brain. Over the last few decades, ANNs have seen wide success in areas such as business, medicine, industry, automotive systems, astronomy, and finance.
Since neural networks are inherently parallel architectures, several earlier research efforts built custom ASIC-based systems containing multiple parallel processing units. However, these ASIC-based systems suffered from limitations such as being able to run only specific algorithms and restrictions on the size of the network. More recently, much work has focused on implementing artificial neural networks on reconfigurable computing platforms. Reconfigurable computing makes it possible to increase processing density beyond that of general-purpose computing systems. Field Programmable Gate Arrays (FPGAs) support reconfigurable computing and offer design flexibility with performance approaching that of Application Specific Integrated Circuits (ASICs).
This thesis presents a study of an FPGA-based acceleration solution and a performance exploration of a Feedforward Artificial Neural Network (FFANN). The architecture is described in the Very High Speed Integrated Circuit Hardware Description Language (VHDL) and implemented and demonstrated on an FPGA board. Synthesis and simulation are performed with the Quartus II tool and ModelSim, respectively. The system was trained and evaluated in hardware on a digit-recognition application.
Table of Contents
CERTIFICATION
ABSTRACT
ACKNOWLEDGEMENT
DEDICATION
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1
1.1 INTRODUCTION
1.2 STATEMENT OF PROBLEM
1.3 BIOLOGICAL NEURON
1.4 ARTIFICIAL NEURON
1.5 ARTIFICIAL NEURAL NETWORK (ANN)
1.6 ANN ARCHITECTURES
1.7 LEARNING ALGORITHMS
1.7.1 Supervised learning
1.7.2 Unsupervised learning
1.7.3 Reinforced learning
1.8 RESEARCH OBJECTIVES
1.9 ORGANIZATION OF WORK
CHAPTER 2
2.1 BRIEF HISTORY
2.2 APPLICATIONS OF NEURAL NETWORKS
2.3 OTHER RELATED WORKS
CHAPTER 3
3.1 SYSTEM EMULATION
3.2 SYSTEM ARCHITECTURE
3.2.1 SRAM
3.2.2 The Pattern Recognition Artificial Neural Network
3.3 LEARNING PROCESS
3.4 SCHEMATIC DESIGN OF THE SYSTEM
3.5 SCHEMATIC DESIGN OF INDIVIDUAL COMPONENTS
CHAPTER 4
4.1 SYSTEM TESTING
4.2 SYSTEM EVALUATION
CHAPTER 5
5.1 CONCLUSION
5.2 FUTURE WORK
REFERENCES
CHAPTER ONE
1.1 Introduction
Moore's law predicted that the number of transistors on a dense integrated circuit doubles approximately every two years (Moore, 1975). So far this has held true, but it is only a matter of time before such scaling reaches its limits: packing ever more transistors onto a chip increases its power consumption and heat output to the point where the chip becomes impossible to cool. Moreover, it is difficult to get a conventional computer with the Von Neumann architecture to perform tasks such as understanding human language, recognizing objects, or learning to dance, activities the human brain handles with ease. The human brain is not good at arithmetic, but it excels at processing continuous streams of data from the environment, and it does so very quickly. To build a computer capable of such activities, a computing paradigm called the artificial neural network, which mimics the biological brain, was adopted (Abdallah, 2017).
An Artificial Neural Network is a computing paradigm modeled after the neural network of the biological brain. The biological brain is made up of billions of interconnected neurons forming a network; it is fault tolerant, consumes extremely little power, and carries out massively parallel computation (Indiveri, Linares-Barranco, Legenstein, Deligeorgis, & Prodromakis, 2013). This computing paradigm dates back to as early as 1943 (Macukow, 2016) and has continued to improve, finding application in pattern recognition, a discipline aimed at classifying objects (text, images, speech, etc.), as well as in image recognition, object classification, and much more.
1.2 Statement of Problem
There has been massive success in implementing neural networks in software, one reason being the flexibility software allows. However, real-time applications such as autonomous vehicles, real-time surveillance cameras, and air traffic control systems demand high processing speed (Abdallah, 2016). This speed is better delivered by neural networks implemented in hardware rather than in software. Although less flexible, a hardware implementation gives the neural network greater speed, more parallelism, and cost effectiveness through a reduced component count (Misra & Saha, 2010).
1.3 Biological Neuron
The biological neuron has four features which the artificial neuron models (Hagan & Beale, n.d.):
• the dendrites, which receive input signals from other neurons into the cell body;
• the cell body (soma), which processes the input signals;
• the axon, which carries the result of the processed signal out of the cell body; and
• the synapse, which serves as the point of connection between two neurons and takes part in transferring output from one neuron to the next.
Figure 1.1: Structure of a Biological Neuron
1.4 Artificial Neuron
An artificial neuron takes one or more numerical inputs, and each input is multiplied by the synaptic weight (which represents the strength of the connection between two neurons) of its connection. The neuron then sums these weighted inputs, adds a bias to the sum, and finally applies an activation function to the result to determine the output (Hagan & Beale, n.d.).
Several activation functions can be employed when designing a neural network; the choice depends on the specification of the problem the designer wants the neuron to solve. These activation functions may be linear (Linear, Saturating Linear, Symmetric Saturating Linear, etc.) or non-linear (Log-Sigmoid, Hyperbolic Tangent Sigmoid, etc.) (Hagan & Beale, n.d.).
Figure 1.2: Mathematical Model of an Artificial Neuron

The computation depicted in the figure is

y = f(wx + b)

where:
• x, a column vector, is the input;
• w, a matrix with a single row, holds the synaptic weights;
• b is the bias;
• Σ denotes the summation (weighted-sum) operation; and
• f is the activation function.
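To make the computation concrete, here is a minimal Python sketch of this model. It is illustrative only, not the thesis's VHDL implementation, and the helper names log_sigmoid and neuron_output are ours:

import math

def log_sigmoid(n):
    # Log-Sigmoid activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-n))

def neuron_output(x, w, b):
    # Weighted sum of the inputs plus the bias, then the activation.
    n = sum(wi * xi for wi, xi in zip(w, x)) + b
    return log_sigmoid(n)

# Two inputs, two synaptic weights, one bias.
print(neuron_output(x=[0.5, -1.2], w=[0.8, 0.3], b=0.1))  # about 0.535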
1.5 Artificial Neural Network (ANN)
A neuron alone can solve little or nothing; to solve bigger problems, an interconnection of multiple neurons arranged in layers, working together, is required. This interconnection forms what is called a neural network. The way the neurons are arranged relative to one another is called the network's architecture, and the arrangement is organized chiefly by controlling the direction of the synaptic connections between neurons. Artificial neural networks are typically arranged in three kinds of layers: input, hidden, and output. The input layer receives input from the environment, the hidden layer processes the input to identify patterns, and the output layer presents the result of the work done in the hidden layer (da Silva, Hernane Spatti, Andrade Flauzino, Liboni, & dos Reis Alves, 2017).
Figure 1.3: Artificial Neural Network (ANN)
1.6 ANN Architectures
The neurons in a neural network can be connected in different ways, and these patterns of connection are what is referred to as architectures. The most common among them is the feedforward neural network. This architecture has an input layer, one or more hidden layers, and an output layer. Data enters at the input layer and flows in one direction through the hidden layer(s) until it reaches the output layer.
Other neural network architectures include the recurrent neural network, which allows data to flow around in cycles and thus to remember information over long periods, and symmetrically connected neural networks (Hinton, Srivastava, & Swersky, 2012). A minimal sketch of the feedforward data flow follows.
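The following Python sketch illustrates one forward pass through a feedforward network with a single hidden layer. The layer sizes, weights, and function names here are our own illustrative assumptions; the thesis hardware expresses this datapath in VHDL instead:

import math

def log_sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def layer_forward(x, weights, biases):
    # Each row of `weights` is one neuron: its output is f(w . x + b).
    return [log_sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# A 2-input, 3-hidden-neuron, 1-output feedforward network.
W_hidden = [[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]]
b_hidden = [0.1, -0.2, 0.05]
W_out    = [[0.6, -0.4, 0.9]]
b_out    = [0.0]

hidden = layer_forward([1.0, 0.5], W_hidden, b_hidden)  # input -> hidden
output = layer_forward(hidden, W_out, b_out)            # hidden -> output
print(output)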
1.7 Learning Algorithms
For a neural network to solve problems and produce accurate results, it must be trained; that is, it must learn how to do so. Neural network learning algorithms are classified into three groups.
1.7.1 Supervised learning
In supervised learning, the network is given data in pairs: an input and a target result. The aim is for the network to extract information from the labeled dataset given to it so that it can label new data. Supervised learning is also called function approximation. A small worked sketch follows the figure below.
Figure 1.4: Supervised Learning
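As a worked illustration (a simple delta-rule sketch of our own, not the training scheme used in the thesis hardware), the loop below adjusts a single neuron's weights and bias from labeled input/target pairs until the neuron approximately reproduces the labels:

import math

def log_sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def train_step(x, target, w, b, lr=0.5):
    # Forward pass, then nudge the weights and bias against the error.
    y = log_sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    error = target - y
    w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    return w, b + lr * error

# Labeled pairs (input, target) for a logical-OR-like task.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(500):                  # repeat passes over the dataset
    for x, t in data:
        w, b = train_step(x, t, w, b)
# The trained neuron now labels the inputs approximately correctly.
for x, t in data:
    y = log_sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    print(x, t, round(y, 2))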
1.7.2 Unsupervised learning
Here, the network is given only input data and is expected to derive some structure from the relationships that exist within that data. This kind of learning deals more with description; one way of deriving such structure is sketched after the figure below.
Figure 1.5: Unsupervised Learning
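One illustrative example of deriving structure from unlabeled inputs (a competitive-learning sketch of our own, not from the thesis) lets two prototype values compete for the inputs; each input pulls its nearest prototype toward itself, so the prototypes end up summarizing the clusters in the data:

# Unlabeled one-dimensional inputs drawn from two loose clusters.
inputs = [0.1, 0.15, 0.2, 0.9, 0.95, 1.0]
prototypes = [0.4, 0.6]   # arbitrary starting guesses
lr = 0.3                  # learning rate
for _ in range(50):
    for x in inputs:
        # The nearest prototype "wins" and moves toward the input.
        i = min(range(len(prototypes)), key=lambda k: abs(x - prototypes[k]))
        prototypes[i] += lr * (x - prototypes[i])
print(prototypes)  # one prototype settles in each cluster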
1.7.3 Reinforced learning
Reinforced learning is based on the concept of reward: the network makes the decisions that it expects will earn it the maximum reward. A toy sketch follows the figure below.
Figure 1.6: Reinforced Learning
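A toy illustration of reward-driven decision making (a two-armed bandit example of our own devising, unrelated to the thesis implementation): the agent keeps a running reward estimate per action and increasingly chooses the action with the higher estimate:

import random

true_rewards = {"A": 0.3, "B": 0.8}   # hidden mean reward of each action
estimates = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}
for _ in range(1000):
    # Explore 10% of the time; otherwise exploit the best estimate.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = true_rewards[action] + random.gauss(0, 0.1)  # noisy reward
    counts[action] += 1
    # Incremental average of the rewards observed for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]
print(estimates)  # close to the hidden means; "B" is chosen most often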
1.8 Research Objectives
The goal of this thesis is to acquire a deep understanding of neuro-inspired computing and its applications, and to emulate in hardware the neuronal processing of the cortex for pattern recognition. This is carried out on a Field Programmable Gate Array (FPGA), a configurable integrated circuit that is programmed using a Hardware Description Language (HDL).
1.9 Organization of Work
This work is organized as follows: Chapter 2 begins with a brief history of neural networks and goes on to give an insight into related works. Chapter 3 presents the overall system architecture, a description of the individual components that make up the system, and the implementation process. Chapter 4 analyzes the implementation and the results achieved, evaluating power consumption and complexity. Chapter 5 concludes the research and gives insight into possible future work.