ABSTRACT

Biometric fingerprint images incur substantial storage, transmission and computation costs;
their compression is therefore advantageous in reducing these requirements. This research work
presents a novel approach to biometric fingerprint image compression by the innovative
application of non-uniform quantization scheme in combination with level-dependent
threshold strategy applied to wavelet transformation as opposed to the widely used uniform
quantization scheme. Comparative analysis of Coiflet wavelets implemented with level-dependent
thresholds and Daubechies wavelets was conducted on the basis of percentage
retained energy, RE (%). The RE (%) values for the Coiflet wavelet ranged from 99.32% to
99.69%, as opposed to the values for the Daubechies wavelet, which ranged from 98.45% to
99.15%. These results revealed that the Coiflet wavelet bases performed better than the
Daubechies wavelet. Hence, the choice of Coiflet wavelet for image transformation in the
proposed compression algorithm was justified. A performance analysis of uniform and non-uniform
scalar quantization schemes for biometric fingerprint image compression was
conducted. The non-uniform quantization method based on the Lloyd-Max approach performed
better than the uniform quantization method used in the existing fingerprint compression
standards. The Signal-to-Quantization Noise Ratio (SQNR) values for non-uniform
quantization increased from 19.2977 dB at 3 bits per pixel (bpp) to 44.6083 dB at 7 bpp,
whereas for the same range (3 bpp to 7 bpp) for uniform quantization, SQNR values
increased from 17.0903 dB to 40.1349 dB. Therefore, non-uniform quantization based on
the Lloyd-Max approach was employed for this compression algorithm. The implementation of
the proposed biometric fingerprint image compression algorithm involved three stages,
namely: transformation of the biometric fingerprint image; non-uniform quantization of the
transformed image; and, finally, entropy coding. In order to determine the
overall performance of the algorithm, Peak Signal-to-Noise Ratio (PSNR) and Compression
Ratio (CR) were used as performance metrics. PSNR was used as a measure of the resultant
image quality after compression and the Compression Ratio was used as a measure of the
degree of compression achievable. A trade-off was made between the achievable
compression ratio and the realizable image quality which is a function of the achievable
PSNR in the overall compression process. The overall performance of the proposed
compression algorithm achieved an improved compression ratio of 20:1 over
the existing compression standard for biometric applications, which has a compression ratio
limit of 15:1. The improvement was largely due to the novel approach employed in this
research work as stated above.

 

 

TABLE OF CONTENTS

Title page i
Declaration ii
Certification iii
Dedication iv
Acknowledgement v
Abstract vi
Table of Contents vii
List of Tables x
List of Figures xi
Abbreviations, Definitions, Glossary and Symbols xiii
CHAPTER ONE: INTRODUCTION
1.1 Background of Study 1
1.2 Types of Digital Image 2
1.2.1 Binary Image 2
1.2.2 Grayscale Image 2
1.2.3 True Colour Image 2
1.2.4 Indexed Image 3
1.3 Data Redundancy 3
1.3.1 Coding Redundancy 4
1.3.2 Interpixel Redundancy 4
1.3.3 Psychovisual Redundancy 5
1.4 Characteristics of Fingerprint Image 6
1.5 Fingerprint Image Compression Techniques and Standards 7
1.6 Motivation 9
1.7 Aim and Objectives 9
1.8 Statement of Problem 10
1.9 Justification of Study 10
1.10 Methodology 10
1.11 Significance of Study 11
1.12 Scope and Limitation of Work 12
1.13 Thesis Outline 12
CHAPTER TWO: LITERATURE REVIEW
2.1 Introduction 13
2.2 Review of Fundamental Concepts 13
2.2.1 Transformation 14
2.2.2 Quantization 31
2.2.3 Entropy Coding 41
2.2.4 Validation and Performance Measures 57
2.3 Review of Similar Works 59
CHAPTER THREE: MATERIALS AND METHODS
3.1 Introduction 69
3.2 Materials 69
3.3 Methods 69
3.3.1 2-D Discrete Wavelet Transform (DWT) of Source Fingerprint Image 70
3.3.2 Non-Uniform Scalar Quantization using Lloyd-Max Procedure 78
3.3.3 Arithmetic Entropy Coding to Generate Image Bit-Streams 85
3.3.4 Reconstruction of Source Fingerprint Image from Compressed Bit-Stream 88
3.3.5 Validation and Evaluation of the Performance of Compression Algorithm 90
CHAPTER FOUR: RESULTS AND DISCUSSIONS
4.1 Introduction 95
4.2 Results of Performance Analysis of Wavelet Bases 95
4.3 Results of the Quantization Schemes 96
4.4 Results of Performance Evaluation of the Compression Algorithm 97
4.5 Discussion of Results 99
4.6 Performance Analysis of Wavelet Bases 99
4.7 Comparative Analysis of Global Threshold and Level-Dependent Threshold Strategies 101
4.8 Comparison of the Non-uniform Quantization and Uniform Quantization Methods 102
4.9 Performance Evaluation of the Compression Algorithm 106
4.10 Contribution to Knowledge 109
CHAPTER FIVE: CONCLUSION AND RECOMMENDATIONS
5.1 Conclusion 111
5.2 Recommendations 113
References 114
Appendix I 120
Appendix II 121

 

 

CHAPTER ONE

INTRODUCTION
1.1 Background of Study
Images contain a large amount of information that requires huge storage space and large
transmission bandwidth. Image data processing and storage attract costs that are directly
proportional to the size of the data. In spite of advancements in mass storage and processing
capacities, these have continued to fall below the capacity requirements of application
systems (Ashok et al., 2010). Therefore, it is advantageous to compress an image
by storing only the essential information needed to reconstruct the image. An image can be
thought of as a matrix of pixel (or intensity) values and in order to compress it, redundancies
must be exploited. Image compression is the general term for the various algorithms that
have been developed to address these problems.
Data compression algorithms are categorized into two, namely: lossless and lossy
compression techniques. A lossless technique guarantees that the compressed data is
identical to the original data, whereas in a lossy compression technique images are compressed
with some degree of data loss or degradation while still retaining their essential features. This
distinction is important because lossy techniques are much more effective at compression
than lossless methods. The lossy technique is the preferred choice for fingerprint image
compression to reduce computation, storage and transmission costs. Huge volumes of
fingerprint images that need to be stored and transmitted over a network of biometric
databases are an excellent example of why data compression is important. The cardinal goal
of image compression is to obtain the best possible image quality at reduced storage,
transmission and computation costs (Mallat, 2009).
1.2 Types of Digital Image
A digital image can be considered as a large array of discrete dots, each of which has a
brightness associated with it. These dots are called picture elements or simply pixels
(McAndrew, 2004). Spatial resolution is the density of pixels over the image. High spatial
resolution simply means more pixels are used to display the image. Spatial frequencies define
the amount by which image pixel values change with distance. High frequency components
are characterized by large changes in pixel values over small distances, and these are attributed
to edges and noise. Low frequency components are characterized by little change in the
pixel values and these are attributed to background and textures of images (McAndrew,
2004). There are four basic types of digital image, namely (McAndrew, 2004):
i) Binary image;
ii) Grayscale image;
iii) True colour image; and
iv) Indexed image
1.2.1 Binary Image
In this image type, each pixel is either black or white. There are only two possible values for
each pixel (0, 1). Only one bit is needed to represent a pixel. Data for which a binary
representation may be suitable include text, fingerprints and architectural plans (McAndrew,
2004).
1.2.2 Grayscale Image
This is an image in which each pixel is a shade of gray, that is, a value from 0 (black) to 255 (white).
This range means that each pixel can be represented by 8 bits, or exactly one byte.
Grayscale levels are suitable for representing most types of natural image. Other grayscale
ranges exist (for example, -255 to 255); such representations are often used in medical imaging
applications (McAndrew, 2004).
1.2.3 True Colour Image
In this format, each pixel has a particular colour which is described by the amounts of Red,
Green, and Blue (RGB) in it. With each component ranging from 0 to 255, this gives a total of
256³ = 16,777,216 different possible colours in the image. Since 24 bits are required
to represent each pixel, such images are referred to as 24-bit colour images
(McAndrew, 2004).
1.2.4 Indexed Image
In this colour image type, the images only have a small subset of the 256³ possible colours.
For ease of storage and data handling, the image has an associated colour-map, or colour
palette which is simply a list of all the colour components in the image. As opposed to the
RGB image, each pixel of the indexed image has a value which does not give its colour but
an index to the colour in the map (McAndrew, 2004).
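To make the storage implications of these four representations concrete, the following sketch (an illustrative example, not part of the thesis) computes the raw storage needed for an assumed 512 x 512 image under each representation, with an assumed 256-entry palette for the indexed case.

    width, height = 512, 512              # assumed example dimensions
    pixels = width * height

    binary_bits = pixels * 1              # 1 bit per pixel
    grayscale_bits = pixels * 8           # 8 bits (one byte) per pixel
    truecolour_bits = pixels * 24         # 24 bits (RGB) per pixel
    indexed_bits = pixels * 8 + 256 * 24  # 8-bit indices plus a 256-entry RGB palette

    for name, bits in [("binary", binary_bits), ("grayscale", grayscale_bits),
                       ("true colour", truecolour_bits), ("indexed", indexed_bits)]:
        print(f"{name:12s}: {bits / 8 / 1024:7.1f} KB")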
In the compression of a digital gray-scale fingerprint image, the source image has a lot of
redundancies that need to be removed to achieve compression. These redundancies are not
obvious in the spatial domain. Therefore, some kind of transformation method is needed to
convert the image into another analysis domain that makes the redundancies more obvious.
This process will lead to image decomposition and pixel energy de-correlation.
1.3 Data Redundancy
Since data compression refers to the process of reducing the amount of data required to
represent a given quantity of information, a clear distinction must be made between data and
information. Data are the means by which information is conveyed (Gonzalez and Woods,
2002). Various amounts of data may be used to represent a specified amount of information
depending on the data source. More often than not, a large amount of data only contains a
small amount of relevant information (Gonzalez and Woods, 2002). This is a case of data
redundancy which is a critical issue in digital image compression.
In digital images, the neighbouring pixels are correlated and, therefore, contain redundant
information. The fundamental objectives of any compression process are to reduce
redundancy and eliminate irrelevancy. Redundancy refers to the duplication of pixels, while
irrelevancy refers to insignificant pixels that are not noticeable by the
Human Visual System (HVS) (Iqbal, 2004). In digital image compression, three basic data
redundancies can be identified and exploited, namely (Gonzalez and Woods, 2002; Singla
and Sharma, 2013):
i) Coding redundancy
ii) Interpixel redundancy
iii) Psychovisual redundancy
Image compression is achieved when one or more of these redundancies are reduced or
eliminated. These redundancies are presented in greater detail in the next sections.
1.3.1 Coding Redundancy
This occurs when the gray level pixels representing an image are coded in a way that uses
more code symbols than absolutely necessary to represent each gray level pixel. Put
differently, coding redundancy results when the codes assigned to pixel values of an image
signal have not been selected to take full advantage of the probabilities of occurrence of the
values (Gonzalez and Woods, 2002; Singla and Sharma, 2013).
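As an illustration of this point (a hedged sketch, not drawn from the thesis), the first-order entropy of an image's gray-level histogram gives a lower bound on the bits per pixel needed; the gap between a fixed 8 bits per pixel and this bound is the coding redundancy that a variable-length code could remove.

    import numpy as np

    rng = np.random.default_rng(0)
    # Assumed stand-in for an image whose gray levels cluster around a few values.
    image = rng.choice([40, 50, 200, 210], size=(256, 256), p=[0.4, 0.3, 0.2, 0.1])

    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()        # lower bound, in bits per pixel

    print("fixed-length code : 8.00 bits/pixel")
    print(f"entropy estimate  : {entropy:.2f} bits/pixel")
    print(f"coding redundancy : {8 - entropy:.2f} bits/pixel")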
1.3.2 Interpixel Redundancy
This is an important form of data redundancy. It relates directly to the interpixel correlation
within an image. The value of any given pixel can be reasonably predicted from the values of
its neighbours. As a result, the information carried by an individual pixel is relatively small. In
other words, much of the visual contribution of a single pixel to an image is redundant and it
can as well be derived from the values of its neighbours (Gonzalez and Woods, 2002; Singla
and Sharma, 2013). The level of correlation between image pixel values is responsible for
interpixel redundancy. This correlation results from the structural or geometric relationships
between objects in the image. Consequently, interpixel redundancy is also referred to as
spatial redundancy or geometric redundancy (Gonzalez and Woods, 2002). In order to
reduce the interpixel redundancy in an image, its array of pixel values must be transformed
into a mathematical domain where the image energy correlation of the constituent pixels can
be lowered. This is achievable by the 2-D discrete wavelet transform (Gonzalez and Woods,
2002; Singla and Sharma, 2013).
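A minimal sketch of this idea, assuming the PyWavelets package and a smooth synthetic ridge pattern as a stand-in for a fingerprint image (it is not the implementation developed in this thesis), shows how a single-level 2-D DWT concentrates most of the image energy in the approximation sub-band and leaves sparse, largely decorrelated detail sub-bands:

    import numpy as np
    import pywt

    # Smooth synthetic ridge-like pattern used here purely as a stand-in image.
    y, x = np.mgrid[0:256, 0:256]
    img = 128 + 100 * np.sin(0.2 * x + 0.1 * y)

    # Single-level 2-D DWT; 'coif5' is one of the Coiflet bases considered later.
    cA, (cH, cV, cD) = pywt.dwt2(img, "coif5")

    total = sum(np.sum(c ** 2) for c in (cA, cH, cV, cD))
    print(f"energy in approximation sub-band: {100 * np.sum(cA ** 2) / total:.2f}%")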
1.3.3 Psychovisual Redundancy
The human eye does not respond with equal sensitivity to all visual information. Certain
information simply has less relative importance than other information in normal visual
processes. Such information with less relative importance is said to be psychovisually
redundant (Gonzalez and Woods, 2002). This redundancy can be eliminated using suitable
technique without any visually perceptible loss in image quality. Since the elimination of
psychovisually redundant data results in a loss of quantitative information, the compression
process is said to be lossy at this stage (Gonzalez and Woods, 2002; Singla and Sharma,
2013).
Image compression addresses the problem of reducing the amount of data required to
represent a digital image. The basis for this reduction process is the removal of redundancies
in the digital image data (Gonzalez and Woods, 2002; Iqbal, 2004).
1.4 Characteristics of Fingerprint Image
A fingerprint comprises the biometric features of a finger. The development of these features is
congenital, and the features remain unique across the population (Kazi et al., 2011). A fingerprint
usually appears as a series of dark lines that represent the high, peaking portions of the friction
ridge skin, while the valleys between these ridges appear as white space and are the low,
shallow portions of the friction ridge skin. Minutiae describe the locations and directions of
the ridge endings and bifurcations along a ridge path. The uppermost point on the innermost
ridge of the fingerprint image is known as the core (George et al., 2012). The
fingerprint has been used for personal identity verification for more than a century, and is the
most widely used biometric today (Rajkumar et al., 2012). Fingerprint images are
digitized at a resolution of 500 pixels per inch (ppi) with 256 levels of gray-scale information
per pixel. A single fingerprint is about 700,000 pixels and needs about 0.6 Mbytes for
storage. A pair of hands, then, requires about 6 Mbytes of storage (CJIS, 2000; Kumar et al.,
2010). This huge storage requirement by fingerprint images impacts adversely on the
efficiency of biometric application systems. The only way to improve on these resource
requirements is to compress these images, such that they can be cost-effectively stored,
transmitted and then reconstructed without compromising the essential biometric features
such as the fingerprint’s core, ridge endings and bifurcations. These features are shown in
Figure 1.1.
Figure 1.1: Biometric Features of a Fingerprint Image
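A quick order-of-magnitude check of these storage figures, and of the effect of compressing at the WSQ limit (15:1) and at the 20:1 ratio reported for the proposed algorithm, can be sketched as follows (illustrative only; sizes are approximate):

    pixels_per_print = 700_000            # approximate size of a single print
    raw_bytes = pixels_per_print * 1      # 256 gray levels = 8 bits = 1 byte per pixel

    for label, cr in [("uncompressed", 1), ("WSQ limit 15:1", 15), ("proposed 20:1", 20)]:
        per_print_kb = raw_bytes / cr / 1024
        ten_prints_mb = 10 * per_print_kb / 1024
        print(f"{label:15s}: {per_print_kb:7.1f} KB per print, "
              f"{ten_prints_mb:5.2f} MB for ten prints")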
1.5 Fingerprint Image Compression Techniques and Standards
The efficiency of the application of the wavelet transform to image coding was significantly
boosted by the embedded zero-tree wavelet (EZW) algorithm introduced by
Shapiro (1993). The algorithm has since undergone significant improvement in the set
partitioning in hierarchical trees (SPIHT) algorithm introduced by Said and Pearlman (1996). The
EZW and SPIHT algorithms performed better than JPEG on most images. However, they both
produced a blurring effect on the feature patterns of fingerprint images, which renders the data
useless for biometric applications. Wavelet Scalar Quantization (WSQ) is a compression
standard developed specifically for the compression of fingerprint images to improve the
capability of preserving the fingerprint features for biometric pattern recognition. A
compression ratio limit of 15:1 is specified for the WSQ fingerprint compression standard (NIST,
2011). In other words, its performance becomes unsatisfactory at compression ratios higher
than 15:1 (Onyshczak and Youssef, 2000; CJIS, 2000). The embedded block coding with
optimized truncation of embedded bit-streams (EBCOT) by Taubman (2000) has resulted in
modern wavelet image compression and coding techniques. As a matter of fact, the latest
Joint Photographic Experts Group 2000 (JPEG 2000) image coding standard was developed
based on the EBCOT algorithm (Mallat, 2009; Usevitch, 2001). The EBCOT-based
JPEG2000 as a robust general-purpose compression standard has the limitation of not being
able to adequately preserve the crucial biometric features of fingerprint images at high
compression ratios, and it has the problem of complex algorithm implementation. JPEG
was the earlier version of the JPEG 2000 standard and was based on the discrete cosine transform
technique, while JPEG 2000 is based on the wavelet transform technique (Strang and
Nguyen, 1996; Usevitch, 2001; Mallat, 2009).
The differences between JPEG2000 and WSQ standards are in their wavelet transform
decomposition structures and the entropy coding method used. In wavelet decomposition,
JPEG 2000 applies Mallat's algorithm, or the pyramidal approach, with the Cohen-Daubechies-Feauveau
(CDF) filter, a variant of the Daubechies wavelet filters, while WSQ uses a fixed wavelet
packet basis with the same CDF wavelet filter. WSQ uses raster scanning order, while
JPEG2000 uses vertical bitplane scanning order (Strang and Nguyen, 1996). More
significantly, both standards are based on Daubechies wavelets and have been adopted as
fingerprint compression standards. Arithmetic entropy coding method was used in
JPEG2000 while Huffman coding was used in WSQ. However, for the quantization process,
uniform scalar quantization was used in both compression standards (Libert et al., 2012). It is
noteworthy that JPEG 2000 is designed for general-purpose compression with significant
flexibility. It has the disadvantage of complex algorithm implementation and high
computation cost (Libert et al., 2012).
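The structural difference described above can be illustrated with a short sketch (an assumption-laden example, not taken from either standard's reference implementation). PyWavelets' 'bior4.4' filter is used as a stand-in for the CDF 9/7 filter; the pyramidal (Mallat) decomposition re-splits only the approximation band, while a full wavelet-packet decomposition, used here as a simplified stand-in for WSQ's fixed packet basis, re-splits every sub-band.

    import numpy as np
    import pywt

    img = np.random.rand(256, 256)        # placeholder for a fingerprint image

    # Pyramidal (Mallat) decomposition: only the approximation band is re-split.
    pyramid = pywt.wavedec2(img, "bior4.4", level=3)
    print("pyramidal sub-bands:", 1 + 3 * (len(pyramid) - 1))      # 10 sub-bands

    # Wavelet-packet decomposition: every sub-band is re-split at each level.
    wp = pywt.WaveletPacket2D(img, "bior4.4", maxlevel=3)
    print("packet sub-bands at level 3:", len(wp.get_level(3)))    # 64 sub-bands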
1.6 Motivation
Computer systems have provided the capability for the acquisition of massive volumes of
biometric fingerprint data, and it is reasonable to develop computational techniques to help
extract, store in compact form and derive meaningful patterns and structures from these data
in order to address the problem of data overload. Additionally, data mining and identity
verification processes for law enforcement, immigration services, and forensic applications
can immensely benefit from an efficient lossy compression method. This is the motivation
behind this research work.
1.7 Aim and Objectives
The aim of this research work is to implement an efficient and reliable compression method
that can significantly reduce the size of fingerprint images while retaining their essential
biometric features, resulting in a reduction of storage, transmission and computation costs. The
objectives are to:
(i) achieve a 2-D discrete wavelet transform of the fingerprint image into an array of
transformed coefficients using an efficient wavelet basis implemented with a robust
threshold strategy;
(ii) achieve an efficient quantization of transformed image coefficients to realize further
compression with minimum distortion or degradation in the process;
(iii) achieve an efficient entropy coding using the most effective symbolic representation
scheme. An efficient entropy coding provides the process for coding strings of
symbols in a source image for cost-effective storage, and it requires no training or
pre-stored tables or codebook generation;
(iv) achieve an efficient compression with good quality that retains essential fingerprint
image features with significant reduction in storage and computation costs. The
proposed algorithm is expected to achieve a compression ratio higher than the WSQ
compression limit of 15:1 and still retain the essential biometric features of the
compressed images;
(v) validate and evaluate the performance of the proposed algorithm using the compression
ratio (CR) and peak signal-to-noise ratio (PSNR) as performance metrics.
1.8 Statement of Problem
The problems that have been identified in the existing fingerprint compression standard
include: the limitation of a compression ratio of 15:1, which could be improved with a better
algorithm, and the high complexity of the image encoding process in existing techniques.
Most of the existing methods require the generation of codebooks or lookup tables
which require additional computational cost for implementation. In addition, the significant
degradation in the biometric features of fingerprints at compression ratios higher than 15:1, as a
consequence of the uniform quantization method used in the existing compression scheme
remains a major challenge.
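To illustrate the last point (a simplified sketch under assumed conditions, not the thesis implementation): wavelet detail coefficients are roughly Laplacian-distributed, and for such heavy-tailed data an iteratively trained Lloyd-Max quantizer gives a noticeably higher Signal-to-Quantization Noise Ratio (SQNR) than a uniform quantizer at the same bit budget.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.laplace(scale=10.0, size=50_000)   # stand-in for wavelet detail coefficients
    levels = 2 ** 4                            # 4 bits per sample

    # Uniform quantizer spanning the sample range.
    edges = np.linspace(x.min(), x.max(), levels + 1)
    centres_u = 0.5 * (edges[:-1] + edges[1:])
    xq_u = centres_u[np.digitize(x, edges[1:-1])]

    # Lloyd-Max: alternate nearest-level assignment and conditional-mean updates.
    centres = np.quantile(x, (np.arange(levels) + 0.5) / levels)   # initialisation
    for _ in range(50):
        idx = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for k in range(levels):
            if np.any(idx == k):
                centres[k] = x[idx == k].mean()
    idx = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
    xq_l = centres[idx]

    def sqnr_db(signal, quantized):
        return 10 * np.log10(np.sum(signal ** 2) / np.sum((signal - quantized) ** 2))

    print(f"uniform quantizer SQNR  : {sqnr_db(x, xq_u):.2f} dB")
    print(f"Lloyd-Max quantizer SQNR: {sqnr_db(x, xq_l):.2f} dB")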
1.9 Justification of Study
Fingerprint images contain very large amounts of data, which incur massive storage, computation
and transmission bandwidth costs. The preservation of the essential features of fingerprint
images for effective biometric application is very crucial. Therefore, the investigation of an
efficient compression method that can significantly reduce fingerprint image size while
preserving its biometric properties (the core, ridge endings and bifurcations) with
a compression ratio higher than 15:1 is justified.
1.10 Methodology
The research methodologies employed in this study covered the transformation, quantization
and entropy coding stages of a standard lossy compression scheme, and these include:
i) 2-Dimensional Discrete Wavelet Transform (DWT) of source fingerprint image;
ii) Non-uniform scalar quantization using Lloyd-Max procedure for transformed
images;
iii) Arithmetic entropy coding of transformed and quantized coefficients to generate
image bit-streams;
iv) Reconstruction of Fingerprint image from bit-stream; and
v) Validation and evaluation of the compression algorithm using Compression Ratio
(CR) and Peak Signal-to-Noise Ratio (PSNR) as performance metrics.
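The two metrics in item v) can be computed as sketched below (a minimal illustration assuming 8-bit grayscale images held as NumPy arrays; the compressed size would come from the length of the encoded bit-stream produced in item iii):

    import numpy as np

    def psnr_db(original, reconstructed, peak=255.0):
        """Peak Signal-to-Noise Ratio between two grayscale images, in dB."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    def compression_ratio(original_bytes, compressed_bytes):
        """Ratio of uncompressed image size to compressed bit-stream size."""
        return original_bytes / compressed_bytes

    # Hypothetical usage once an image has been compressed and reconstructed:
    #   cr = compression_ratio(image.size, len(bitstream))   # e.g. 20.0 means 20:1
    #   quality = psnr_db(image, reconstructed_image)         # in dB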
1.11 Significance of Study
A biometric system is an emerging technology that deals with the automatic identification of
individuals based on their physiological and behavioural characteristics. The biometric traits
must satisfy universality, uniqueness, permanence, accessibility and collectability (Kazi et al.,
2011). The fingerprint is one of the physiological biometrics used for human identity verification
(George et al., 2012; Villegas et al., 2006). Methods for human identity authentication based on
the physiological and behavioural characteristics of a person have been evolving continuously
and have seen significant improvement in performance and robustness over the last few years.
However, the performance of most existing systems is far from satisfactory in terms of
robustness and accuracy (George et al., 2012). To address these challenges and satisfy the
requirements of new and emerging biometric applications, there is a need for an improved
method of cost-effectively storing and transmitting the huge volumes of fingerprint images
collected for biometric applications.
1.12 Scope and Limitation of Work
This research work covers source coding of biometric fingerprint images using the combined
techniques of wavelet transform, non-uniform quantization, level-dependent thresholding and
entropy coding for efficient storage. It does not include channel coding for transmission, or the
estimation and management of bandwidth and computation costs.
1.13 Thesis Outline
This thesis is divided into five chapters. Chapter one presents the background of study,
research motivation, problem statement, significance of study, as well as aim and objectives
of the research work. The review of fundamental concepts which are pertinent to this
research work and a critical review of similar research works are presented in chapter two.
Chapter three outlines the various research methodologies adopted in the work. Chapter four
discusses the results of the experimental analyses carried out in the work, as well as the
contribution to knowledge. Lastly, the research summary, conclusion and recommendations
for future research work are provided in Chapter five.