Guide to Biometric Reference Systems and Performance Evaluation

by ; ; ;
Format: Hardcover
Pub. Date: 2009-12-12
Publisher(s): Springer-Verlag New York Inc

    This Item Qualifies for Free Shipping!*

    *Excludes marketplace orders.

List Price: $166.95

Rent Textbook


Rent Digital

Rent Digital Options
  • Online: 30 days access; Downloadable: 30 days — wait

New Textbook

We're Sorry
Sold Out

Used Textbook

We're Sorry
Sold Out

How Marketplace Works:

  • This item is offered by an independent seller and is not shipped from our warehouse.
  • Item details such as edition and cover design may differ from our description; see the seller's comments before ordering.
  • Sellers must confirm and ship within two business days; otherwise, the order will be cancelled and refunded.
  • Marketplace purchases cannot be returned to eCampus.com. Contact the seller directly for inquiries; if there is no response within two days, contact customer service.
  • Additional shipping costs apply to Marketplace purchases. Review shipping costs at checkout.

Summary

Our daily interactions are based on recognizing people. As information technology advances, our interactions will increasingly take place through newly designed interfaces, in which a vital element is the automated recognition of a person's real identity: biometrics. In recent years, biometrics has moved beyond fingerprints to many different methods of assessing human physical and behavioural traits.

This useful guide introduces a new performance evaluation framework designed to provide complete coverage of performance evaluation of the major biometric systems. Two key areas are presented: the first gives a unique snapshot of the performance evaluation of various biometrics within a common evaluation framework (databases and protocols); the second presents a state-of-the-art benchmarking evaluation framework for use in future performance evaluations, comprised of open-source reference systems.

Features and benefits:

  • Includes a Foreword by renowned biometric expert Prof. Anil K. Jain, Michigan State University, USA
  • Introduces the concept of reproducible results and comparative evaluations
  • Presents an extensive benchmarking methodology
  • Describes reference algorithms in detail, with a link to source code
  • Provides the Biometrics Reference and Evaluation Toolkit via http://share.int-evry.fr/svnview-eph/
  • Implements this methodology throughout all chapters
  • Offers readers a tool for better evaluation of their algorithms by giving them the instruments to fully reproduce the benchmarking (reference) experiments (software, database, experimental protocols)
  • Provides a global perspective with a common evaluation scheme and methodology
  • Examines current research results with this framework
  • Consolidates results from publicly available databases, using well-defined protocols

This unique text/reference combines state-of-the-art research with practical tools for researchers, practitioners and graduates in the fields of biometrics and pattern recognition, and with its comprehensive coverage it will prove an indispensable resource and frequently used tool.

Key topics:

  • BioSecure Benchmarking Methodology
  • Biometric Performance Evaluation
  • Iris Recognition
  • Fingerprint Recognition
  • Hand Recognition
  • Online Handwritten Signature Verification
  • Text-independent Speaker Verification
  • 2D and 3D Face Recognition
  • Talking Face Verification

Table of Contents

Preface, p. vii
Contributors, p. xix
Acronyms, p. xxv
Symbols, p. xxix
Introduction: About the Need of an Evaluation Framework in Biometrics, p. 1
Reference Software, p. 1
Biometrics, p. 3
Databases, p. 4
Risks, p. 5
Biometric "Menagerie", p. 6
Spoofing, p. 6
Evaluation, p. 6
Evaluation Campaigns, p. 7
Outline of This Book, p. 8
References, p. 9
The BioSecure Benchmarking Methodology for Biometric Performance Evaluation, p. 11
Introduction, p. 11
Terminology, p. 12
When Experimental Results Cannot be Compared, p. 13
Reporting Results on a Common Evaluation Database and Protocol(s), p. 15
Reporting Results with a Benchmarking Framework, p. 17
Description of the Proposed Evaluation Framework, p. 19
Use of the Benchmarking Packages, p. 22
Conclusions, p. 23
References, p. 23
Iris Recognition, p. 25
Introduction, p. 25
State of the Art in Iris Recognition, p. 26
The Iris Code, p. 26
Correlation-based Methods, p. 28
Texture-based Methods, p. 28
Minutiae-based Methods, p. 28
Current Issues and Challenges, p. 29
Existing Evaluation Databases, Campaigns and Open-source Software, p. 30
Database, p. 30
Evaluation Campaigns, p. 31
Masek's Open-source System, p. 33
The BioSecure Evaluation Framework for Iris, p. 34
OSIRIS v1.0 Open-source Reference System, p. 34
Benchmarking Databases, p. 35
Benchmarking Protocols, p. 36
Benchmarking Results, p. 36
Experimental Results with OSIRIS v1.0 on ICE'2005 Database, p. 37
Validation of the Benchmarking Protocol, p. 37
Study of the Interclass Distribution, p. 39
Research Systems Evaluated within the Benchmarking Framework, p. 40
Correlation System [TELECOM SudParis], p. 40
Ordinal Measure [CASIA], p. 43
Experimental Results, p. 45
Fusion Experiments, p. 46
Conclusion, p. 48
References, p. 48
Fingerprint Recognition, p. 51
Introduction, p. 51
State of the Art in Fingerprint Recognition, p. 53
Fingerprint Sensing, p. 53
Preprocessing and Feature Extraction, p. 54
Fingerprint Matching, p. 59
Current Issues and Challenges, p. 60
Fingerprint Databases, p. 62
FVC Databases, p. 62
MCYT Bimodal Database, p. 63
BIOMET Multimodal Database, p. 64
Michigan State University (MSU) Database, p. 64
BioSec Multimodal Database, p. 64
BiosecurID Multimodal Database, p. 65
BioSecure Multimodal Database, p. 65
Fingerprint Evaluation Campaigns, p. 65
Fingerprint Verification Competitions, p. 65
NIST Fingerprint Vendor Technology Evaluation, p. 68
Minutiae Interoperability NIST Exchange Test, p. 69
The BioSecure Benchmarking Framework, p. 69
Reference System: NFIS2, p. 70
Benchmarking Database: MCYT-100, p. 72
Benchmarking Protocols, p. 73
Benchmarking Results, p. 74
Research Algorithms Evaluated within the Benchmarking Framework, p. 75
Halmstad University Minutiae-based Fingerprint Verification System [HH], p. 75
UPM Ridge-based Fingerprint Verification System [UPM], p. 79
Experimental Results within the Benchmarking Framework, p. 80
Evaluation of the Individual Systems, p. 80
Multialgorithmic Fusion Experiments, p. 82
Conclusions, p. 84
References, p. 85
Hand Recognition, p. 89
Introduction, p. 89
State of the Art in Hand Recognition, p. 90
Hand Geometry Features, p. 90
Hand Silhouette Features, p. 92
Finger Biometric Features, p. 92
Palmprint Biometric Features, p. 92
Palmprint and Hand Geometry Features, p. 93
The BioSecure Evaluation Framework for Hand Recognition, p. 94
The BioSecure Hand Reference System v1.0, p. 94
The Benchmarking Databases, p. 97
The Benchmarking Protocols, p. 98
The Benchmarking Results, p. 100
More Experimental Results with the Reference System, p. 101
Influence of the Number of Enrollment Images for the Benchmarking Protocol, p. 103
Performance with Respect to Population Size, p. 103
Performance with Respect to Enrollment, p. 104
Performance with Respect to Hand Type, p. 105
Performance Versus Image Resolution, p. 107
Performance with Respect to Elapsed Time, p. 108
Appearance-Based Hand Recognition System [BU], p. 109
Nonrigid Registration of Hands, p. 110
Features from Appearance Images of Hands, p. 112
Results with the Appearance-based System, p. 115
Conclusions, p. 121
References, p. 122
Online Handwritten Signature Verification, p. 125
Introduction, p. 125
State of the Art in Signature Verification, p. 128
Existing Main Approaches, p. 128
Current Issues and Challenges, p. 133
Databases, p. 133
PHILIPS, p. 133
BIOMET Signature Subcorpus, p. 134
SVC'2004 Development Set, p. 135
MCYT Signature Subcorpus, p. 136
BioSecure Signature Subcorpus DS2, p. 137
BioSecure Signature Subcorpus DS3, p. 138
Evaluation Campaigns, p. 139
The BioSecure Benchmarking Framework for Signature Verification, p. 139
Design of the Open Source Reference Systems, p. 140
Reference System 1 (Ref1 v1.0), p. 141
Reference System 2 (Ref2 v1.0), p. 143
Benchmarking Databases and Protocols, p. 145
Results with the Benchmarking Framework, p. 147
Research Algorithms Evaluated within the Benchmarking Framework, p. 148
HMM-based System from Universidad Autonoma de Madrid (UAM), p. 149
GMM-based System, p. 150
Standard DTW-based System, p. 150
DTW-based System with Score Normalization, p. 150
System Based on a Global Approach, p. 151
Experimental Results, p. 151
Conclusions, p. 161
References, p. 164
Text-independent Speaker Verification, p. 167
Introduction, p. 167
Review of Text-independent Speaker Verification, p. 169
Front-end Processing, p. 171
Speaker Modeling Techniques, p. 175
High-level Information and its Fusion, p. 179
Decision Making, p. 181
Performance Evaluation Metrics, p. 184
Current Issues and Challenges, p. 185
Speaker Verification Evaluation Campaigns and Databases, p. 186
National Institute of Standards and Technology Speaker Recognition Evaluations (NIST-SRE), p. 186
Speaker Recognition Databases, p. 187
The BioSecure Speaker Verification Benchmarking Framework, p. 188
Description of the Open Source Software, p. 188
The Benchmarking Framework for the BANCA Database, p. 189
The Benchmarking Experiments with the NIST'2005 Speaker Recognition Evaluation Database, p. 191
How to Reach State-of-the-art Speaker Verification Performance Using Open Source Software, p. 193
Fine Tuning of GMM-based Systems, p. 194
Choice of Speaker Modeling Methods and Session Variability Modeling, p. 198
Using High-level Features as Complementary Sources of Information, p. 201
Conclusions and Perspectives, p. 203
References, p. 206
2D Face Recognition, p. 213
State of the Art in Face Recognition: Selected Topics, p. 213
Subspace Methods, p. 214
Elastic Graph Matching (EGM), p. 216
Robustness to Variations in Facial Geometry and Illumination, p. 217
2D Facial Landmarking, p. 221
Dynamic Face Recognition and Use of Video Streams, p. 226
Compensating Facial Expressions, p. 228
Gabor Filtering and Space Reduction Based Methods, p. 231
2D Face Databases and Evaluation Campaigns, p. 232
Selected 2D Face Databases, p. 232
Evaluation Campaigns, p. 233
The BioSecure Benchmarking Framework for 2D Face, p. 233
The BioSecure 2D Face Reference System v1.0, p. 234
Reference 2D Face Database: BANCA, p. 234
Reference Protocols, p. 235
Benchmarking Results, p. 235
Method 1: Combining Gabor Magnitude and Phase Information [TELECOM SudParis], p. 237
The Gabor Multiscale/Multiorientation Analysis, p. 237
Extraction of Gabor Face Features, p. 238
Linear Discriminant Analysis (LDA) Applied to Gabor Features, p. 240
Experimental Results with Combined Magnitude and Phase Gabor Features with Linear Discriminant Classifiers, p. 241
Method 2: Subject-Specific Face Verification via Shape-Driven Gabor Jets (SDGJ) [University of Vigo], p. 247
Extracting Textural Information, p. 248
Mapping Corresponding Features, p. 249
Distance Between Faces, p. 250
Results on the BANCA Database, p. 250
Method 3: SIFT-based Face Recognition with Graph Matching [UNISS], p. 251
Invariant and Robust SIFT Features, p. 251
Representation of Face Images, p. 251
Graph Matching Methodologies, p. 252
Results on the BANCA Database, p. 253
Comparison of the Presented Approaches, p. 253
Conclusions, p. 255
References, p. 255
3D Face Recognition, p. 263
Introduction, p. 263
State of the Art in 3D Face Recognition, p. 264
3D Acquisition and Preprocessing, p. 264
Registration, p. 267
3D Recognition Algorithms, p. 269
3D Face Databases and Evaluation Campaigns, p. 279
3D Face Databases, p. 279
3D Evaluation Campaigns, p. 281
Benchmarking Framework for 3D Face Recognition, p. 282
3D Face Recognition Reference System v1.0 (3D-FRRS), p. 282
Benchmarking Database, p. 287
Benchmarking Protocols, p. 287
Benchmarking Verification and Identification Results, p. 287
More Experimental Results with the 3D Reference System, p. 289
Conclusions, p. 290
References, p. 291
Talking-face Verification, p. 297
Introduction, p. 297
State of the Art in Talking-face Verification, p. 298
Face Verification from a Video Sequence, p. 298
Liveness Detection, p. 300
Audiovisual Synchrony, p. 301
Audiovisual Speech, p. 307
Evaluation Framework, p. 308
Reference System, p. 309
Evaluation Protocols, p. 310
Detection Cost Function, p. 313
Evaluation, p. 314
Research Systems, p. 315
Face Recognition, p. 315
Speaker Verification, p. 316
Client-dependent Synchrony Measure, p. 317
Two Fusion Strategies, p. 318
Evaluation, p. 320
Conclusion, p. 321
References, p. 322
BioSecure Multimodal Evaluation Campaign 2007 (BMEC'2007), p. 327
Introduction, p. 327
Scientific Objectives, p. 328
Monomodal Evaluation, p. 328
Multimodal Evaluation, p. 329
Existing Multimodal Databases, p. 329
BMEC Database, p. 331
Data, p. 332
Statistics, p. 336
Performance Evaluation, p. 337
Evaluation Platform, p. 337
Criteria, p. 338
Confidence Intervals, p. 340
Experimental Results, p. 341
Monomodal Evaluations, p. 341
Multimodal Evaluation, p. 360
Conclusion, p. 365
Appendix, p. 366
Equal Error Rate, p. 366
Parametric Confidence Intervals, p. 367
Participants, p. 368
References, p. 369
Index, p. 373
Table of Contents provided by Ingram. All Rights Reserved.

An electronic version of this book is available through VitalSource.

This book is viewable on PC, Mac, iPhone, iPad, iPod Touch, and most smartphones.

By purchasing, you will be able to view this book online, as well as download it, for the chosen number of days.

Digital License

You are licensing a digital product for a set duration. Durations are set forth in the product description, with "Lifetime" typically meaning five (5) years of online access and permanent download to a supported device. All licenses are non-transferable.


A downloadable version of this book is available through the eCampus Reader or compatible Adobe readers.

Applications are available on iOS, Android, PC, Mac, and Windows Mobile platforms.

Please view the compatibility matrix prior to purchase.