
Guide to Biometric Reference Systems and Performance Evaluation
by Petrovska-Delacrétaz, Dijana; Chollet, Gérard; Dorizzi, Bernadette; Jain, Anil K.
This Item Qualifies for Free Shipping!*
*Excludes marketplace orders.
Rent Textbook
Rent Digital
New Textbook: Sold Out
Used Textbook: Sold Out
How Marketplace Works:
- This item is offered by an independent seller and is not shipped from our warehouse.
- Item details such as edition and cover design may differ from our description; see the seller's comments before ordering.
- Sellers must confirm and ship within two business days; otherwise, the order will be cancelled and refunded.
- Marketplace purchases cannot be returned to eCampus.com. Contact the seller directly with inquiries; if the seller does not respond within two days, contact customer service.
- Additional shipping costs apply to Marketplace purchases. Review shipping costs at checkout.
Table of Contents
Preface | p. vii |
Contributors | p. xix |
Acronyms | p. xxv |
Symbols | p. xxix |
Introduction - About the Need of an Evaluation Framework in Biometrics | p. 1 |
Reference Software | p. 1 |
Biometrics | p. 3 |
Databases | p. 4 |
Risks | p. 5 |
Biometric "Menagerie" | p. 6 |
Spoofing | p. 6 |
Evaluation | p. 6 |
Evaluation Campaigns | p. 7 |
Outline of This Book | p. 8 |
References | p. 9 |
The BioSecure Benchmarking Methodology for Biometric Performance Evaluation | p. 11 |
Introduction | p. 11 |
Terminology | p. 12 |
When Experimental Results Cannot be Compared | p. 13 |
Reporting Results on a Common Evaluation Database and Protocol(s) | p. 15 |
Reporting Results with a Benchmarking Framework | p. 17 |
Description of the Proposed Evaluation Framework | p. 19 |
Use of the Benchmarking Packages | p. 22 |
Conclusions | p. 23 |
References | p. 23 |
Iris Recognition | p. 25 |
Introduction | p. 25 |
State of the Art in Iris Recognition | p. 26 |
The Iris Code | p. 26 |
Correlation-based Methods | p. 28 |
Texture-based Methods | p. 28 |
Minutiae-based Methods | p. 28 |
Current Issues and Challenges | p. 29 |
Existing Evaluation Databases, Campaigns and Open-source Software | p. 30 |
Database | p. 30 |
Evaluation Campaigns | p. 31 |
Masek's Open-source System | p. 33 |
The BioSecure Evaluation Framework for Iris | p. 34 |
OSIRIS v1.0 Open-source Reference System | p. 34 |
Benchmarking Databases | p. 35 |
Benchmarking Protocols | p. 36 |
Benchmarking Results | p. 36 |
Experimental Results with OSIRIS v1.0 on ICE'2005 Database | p. 37 |
Validation of the Benchmarking Protocol | p. 37 |
Study of the Interclass Distribution | p. 39 |
Research Systems Evaluated within the Benchmarking Framework | p. 40 |
Correlation System [TELECOM SudParis] | p. 40 |
Ordinal Measure [CASIA] | p. 43 |
Experimental Results | p. 45 |
Fusion Experiments | p. 46 |
Conclusion | p. 48 |
References | p. 48 |
Fingerprint Recognition | p. 51 |
Introduction | p. 51 |
State of the Art in Fingerprint Recognition | p. 53 |
Fingerprint Sensing | p. 53 |
Preprocessing and Feature Extraction | p. 54 |
Fingerprint Matching | p. 59 |
Current Issues and Challenges | p. 60 |
Fingerprint Databases | p. 62 |
FVC Databases | p. 62 |
MCYT Bimodal Database | p. 63 |
BIOMET Multimodal Database | p. 64 |
Michigan State University (MSU) Database | p. 64 |
BioSec Multimodal Database | p. 64 |
BiosecurID Multimodal Database | p. 65 |
BioSecure Multimodal Database | p. 65 |
Fingerprint Evaluation Campaigns | p. 65 |
Fingerprint Verification Competitions | p. 65 |
NIST Fingerprint Vendor Technology Evaluation | p. 68 |
Minutiae Interoperability NIST Exchange Test | p. 69 |
The BioSecure Benchmarking Framework | p. 69 |
Reference System: NFIS2 | p. 70 |
Benchmarking Database: MCYT-100 | p. 72 |
Benchmarking Protocols | p. 73 |
Benchmarking Results | p. 74 |
Research Algorithms Evaluated within the Benchmarking Framework | p. 75 |
Halmstad University Minutiae-based Fingerprint Verification System [HH] | p. 75 |
UPM Ridge-based Fingerprint Verification System [UPM] | p. 79 |
Experimental Results within the Benchmarking Framework | p. 80 |
Evaluation of the Individual Systems | p. 80 |
Multialgorithmic Fusion Experiments | p. 82 |
Conclusions | p. 84 |
References | p. 85 |
Hand Recognition | p. 89 |
Introduction | p. 89 |
State of the Art in Hand Recognition | p. 90 |
Hand Geometry Features | p. 90 |
Hand Silhouette Features | p. 92 |
Finger Biometric Features | p. 92 |
Palmprint Biometric Features | p. 92 |
Palmprint and Hand Geometry Features | p. 93 |
The BioSecure Evaluation Framework for Hand Recognition | p. 94 |
The BioSecure Hand Reference System v1.0 | p. 94 |
The Benchmarking Databases | p. 97 |
The Benchmarking Protocols | p. 98 |
The Benchmarking Results | p. 100 |
More Experimental Results with the Reference System | p. 101 |
Influence of the Number of Enrollment Images for the Benchmarking Protocol | p. 103 |
Performance with Respect to Population Size | p. 103 |
Performance with Respect to Enrollment | p. 104 |
Performance with Respect to Hand Type | p. 105 |
Performance Versus Image Resolution | p. 107 |
Performance with Respect to Elapsed Time | p. 108 |
Appearance-Based Hand Recognition System [BU] | p. 109 |
Nonrigid Registration of Hands | p. 110 |
Features from Appearance Images of Hands | p. 112 |
Results with the Appearance-based System | p. 115 |
Conclusions | p. 121 |
References | p. 122 |
Online Handwritten Signature Verification | p. 125 |
Introduction | p. 125 |
State of the Art in Signature Verification | p. 128 |
Existing Main Approaches | p. 128 |
Current Issues and Challenges | p. 133 |
Databases | p. 133 |
PHILIPS | p. 133 |
BIOMET Signature Subcorpus | p. 134 |
SVC'2004 Development Set | p. 135 |
MCYT Signature Subcorpus | p. 136 |
BioSecure Signature Subcorpus DS2 | p. 137 |
BioSecure Signature Subcorpus DS3 | p. 138 |
Evaluation Campaigns | p. 139 |
The BioSecure Benchmarking Framework for Signature Verification | p. 139 |
Design of the Open Source Reference Systems | p. 140 |
Reference System 1 (Ref1-v1.0) | p. 141 |
Reference System 2 (Ref2-v1.0) | p. 143 |
Benchmarking Databases and Protocols | p. 145 |
Results with the Benchmarking Framework | p. 147 |
Research Algorithms Evaluated within the Benchmarking Framework | p. 148 |
HMM-based System from Universidad Autonoma de Madrid (UAM) | p. 149 |
GMM-based System | p. 150 |
Standard DTW-based System | p. 150 |
DTW-based System with Score Normalization | p. 150 |
System Based on a Global Approach | p. 151 |
Experimental Results | p. 151 |
Conclusions | p. 161 |
References | p. 164 |
Text-independent Speaker Verification | p. 167 |
Introduction | p. 167 |
Review of Text-independent Speaker Verification | p. 169 |
Front-end Processing | p. 171 |
Speaker Modeling Techniques | p. 175 |
High-level Information and its Fusion | p. 179 |
Decision Making | p. 181 |
Performance Evaluation Metrics | p. 184 |
Current Issues and Challenges | p. 185 |
Speaker Verification Evaluation Campaigns and Databases | p. 186 |
National Institute of Standards and Technology Speaker Recognition Evaluations (NIST-SRE) | p. 186 |
Speaker Recognition Databases | p. 187 |
The BioSecure Speaker Verification Benchmarking Framework | p. 188 |
Description of the Open Source Software | p. 188 |
The Benchmarking Framework for the BANCA Database | p. 189 |
The Benchmarking Experiments with the NIST'2005 Speaker Recognition Evaluation Database | p. 191 |
How to Reach State-of-the-art Speaker Verification Performance Using Open Source Software | p. 193 |
Fine Tuning of GMM-based Systems | p. 194 |
Choice of Speaker Modeling Methods and Session Variability Modeling | p. 198 |
Using High-level Features as Complementary Sources of Information | p. 201 |
Conclusions and Perspectives | p. 203 |
References | p. 206 |
2D Face Recognition | p. 213 |
State of the Art in Face Recognition: Selected Topics | p. 213 |
Subspace Methods | p. 214 |
Elastic Graph Matching (EGM) | p. 216 |
Robustness to Variations in Facial Geometry and Illumination | p. 217 |
2D Facial Landmarking | p. 221 |
Dynamic Face Recognition and Use of Video Streams | p. 226 |
Compensating Facial Expressions | p. 228 |
Gabor Filtering and Space Reduction Based Methods | p. 231 |
2D Face Databases and Evaluation Campaigns | p. 232 |
Selected 2D Face Databases | p. 232 |
Evaluation Campaigns | p. 233 |
The BioSecure Benchmarking Framework for 2D Face | p. 233 |
The BioSecure 2D Face Reference System v1.0 | p. 234 |
Reference 2D Face Database: BANCA | p. 234 |
Reference Protocols | p. 235 |
Benchmarking Results | p. 235 |
Method 1: Combining Gabor Magnitude and Phase Information [TELECOM SudParis] | p. 237 |
The Gabor Multiscale/Multiorientation Analysis | p. 237 |
Extraction of Gabor Face Features | p. 238 |
Linear Discriminant Analysis (LDA) Applied to Gabor Features | p. 240 |
Experimental Results with Combined Magnitude and Phase Gabor Features with Linear Discriminant Classifiers | p. 241 |
Method 2: Subject-Specific Face Verification via Shape-Driven Gabor Jets (SDGJ) [University of Vigo] | p. 247 |
Extracting Textural Information | p. 248 |
Mapping Corresponding Features | p. 249 |
Distance Between Faces | p. 250 |
Results on the BANCA Database | p. 250 |
Method 3: SIFT-based Face Recognition with Graph Matching [UNISS] | p. 251 |
Invariant and Robust SIFT Features | p. 251 |
Representation of Face Images | p. 251 |
Graph Matching Methodologies | p. 252 |
Results on the BANCA Database | p. 253 |
Comparison of the Presented Approaches | p. 253 |
Conclusions | p. 255 |
References | p. 255 |
3D Face Recognition | p. 263 |
Introduction | p. 263 |
State of the Art in 3D Face Recognition | p. 264 |
3D Acquisition and Preprocessing | p. 264 |
Registration | p. 267 |
3D Recognition Algorithms | p. 269 |
3D Face Databases and Evaluation Campaigns | p. 279 |
3D Face Databases | p. 279 |
3D Evaluation Campaigns | p. 281 |
Benchmarking Framework for 3D Face Recognition | p. 282 |
3D Face Recognition Reference System v1.0 (3D-FRRS) | p. 282 |
Benchmarking Database | p. 287 |
Benchmarking Protocols | p. 287 |
Benchmarking Verification and Identification Results | p. 287 |
More Experimental Results with the 3D Reference System | p. 289 |
Conclusions | p. 290 |
References | p. 291 |
Talking-face Verification | p. 297 |
Introduction | p. 297 |
State of the Art in Talking-face Verification | p. 298 |
Face Verification from a Video Sequence | p. 298 |
Liveness Detection | p. 300 |
Audiovisual Synchrony | p. 301 |
Audiovisual Speech | p. 307 |
Evaluation Framework | p. 308 |
Reference System | p. 309 |
Evaluation Protocols | p. 310 |
Detection Cost Function | p. 313 |
Evaluation | p. 314 |
Research Systems | p. 315 |
Face Recognition | p. 315 |
Speaker Verification | p. 316 |
Client-dependent Synchrony Measure | p. 317 |
Two Fusion Strategies | p. 318 |
Evaluation | p. 320 |
Conclusion | p. 321 |
References | p. 322 |
BioSecure Multimodal Evaluation Campaign 2007 (BMEC'2007) | p. 327 |
Introduction | p. 327 |
Scientific Objectives | p. 328 |
Monomodal Evaluation | p. 328 |
Multimodal Evaluation | p. 329 |
Existing Multimodal Databases | p. 329 |
BMEC Database | p. 331 |
Data | p. 332 |
Statistics | p. 336 |
Performance Evaluation | p. 337 |
Evaluation Platform | p. 337 |
Criteria | p. 338 |
Confidence Intervals | p. 340 |
Experimental Results | p. 341 |
Monomodal Evaluations | p. 341 |
Multimodal Evaluation | p. 360 |
Conclusion | p. 365 |
Appendix | p. 366 |
Equal Error Rate | p. 366 |
Parametric Confidence Intervals | p. 367 |
Participants | p. 368 |
References | p. 369 |
Index | p. 373 |
Table of Contents provided by Ingram. All Rights Reserved.
An electronic version of this book is available through VitalSource.
This book is viewable on PC, Mac, iPhone, iPad, iPod Touch, and most smartphones.
By purchasing, you will be able to view this book online and download it for the chosen number of days.
Digital License
You are licensing a digital product for a set duration. Durations are set forth in the product description, with "Lifetime" typically meaning five (5) years of online access and permanent download to a supported device. All licenses are non-transferable.
A downloadable version of this book is available through the eCampus Reader or compatible Adobe readers.
Applications are available on iOS, Android, PC, Mac, and Windows Mobile platforms.
Please view the compatibility matrix prior to purchase.