Research Article
Vol. 59, No. 28 / 1 October 2020 / Applied Optics
3D finger vein biometric authentication with
photoacoustic tomography
Ye Zhan,1 Aditya Singh Rathore,2 Giovanni Milione,3 Yuehang Wang,1 Wenhan Zheng,1 Wenyao Xu,2 and Jun Xia1,*

1Optical & Ultrasonic Imaging Laboratory, Department of Biomedical Engineering, University at Buffalo, State University of New York, Buffalo, New York 14260, USA
2Embedded Sensing and Computing (ESC) Group, Department of Computer Science and Engineering, University at Buffalo, State University of New York, Buffalo, New York 14260, USA
3Optical Networking and Sensing Department, NEC Laboratories America, Inc., Princeton, New Jersey 08540, USA
*Corresponding author: [email protected]
Received 17 June 2020; revised 24 August 2020; accepted 24 August 2020; posted 8 September 2020 (Doc. ID 400550);
published 28 September 2020
Biometric authentication is the recognition of human identity via unique anatomical features. The development
of novel methods parallels widespread application by consumer devices, law enforcement, and access control. In
particular, methods based on finger veins, as compared to face and fingerprints, obviate privacy concerns and degradation due to wear, age, and obscuration. However, they are two-dimensional (2D) and are fundamentally limited
by conventional imaging and tissue-light scattering. In this work, for the first time, to the best of our knowledge,
we demonstrate a method of three-dimensional (3D) finger vein biometric authentication based on photoacoustic
tomography. Using a compact photoacoustic tomography setup and a novel recognition algorithm, the advantages
of 3D are demonstrated via biometric authentication of index finger vessels with false acceptance, false rejection,
and equal error rates <1.23%, <9.27%, and <0.13%, respectively, when comparing one finger, a false acceptance
rate improvement >10× when comparing multiple fingers, and a false acceptance rate <0.7% when rotating fingers ±30°. © 2020
Optical Society of America
https://doi.org/10.1364/AO.400550
1. INTRODUCTION
Biometrics are human anatomical features, whose uniqueness is exploited to authenticate human identity. Biometric
authentication has shown immense capability in the “Internet
of Things” and smart devices because of its high accuracy, user
convenience, and security. However, the external visibility of
fingerprint and face ID allows an adversary to remotely capture
the user’s biometric information and leverage it for spoofing
attacks [1,2]. In recent years, finger vessel authentication has
gained immense attention as a promising biometric approach in
physical access systems, e.g., ATM transactions [3]. The vessel
pattern within a user’s finger is unique and invisible from the
outside; thus, it can hardly be forged.
The current vessel-based authentication systems capture
the vessel structure primarily through either ultrasound or
optical sensing [4–6]. Doppler ultrasound has been leveraged
to recognize the main vessels based on the blood flow in tissues.
However, the system is not adequate for sensing small vascular
structures [6]. Lin et al. [7] proposed a novel approach using
thermal images of the palm-dorsal vessel patterns for personal
verification. However, their system had limited robustness
against ambient temperature. Other studies in this domain
include the development of a low-cost hand vessel scanner
[5]. The integrated device processes the captured infrared (IR) hand images [5], and a graph-matching algorithm is then used to verify dorsal vessels. However, existing
IR sensors can only detect vessel patterns in a two-dimensional
(2D) space. These sensors suffer from (1) limited domain information (i.e., vessel depth) and poor image quality due to the
diffusive nature of light, and (2) insufficient robustness against
human artifacts (e.g., variations in measuring position), thereby
limiting the overall system performance.
To address these issues, in 2019, we proposed a palm-vessel
sensing approach using the photoacoustic (PA) effect [8].
Photoacoustic tomography (PAT), in contrast to pure optical
modalities, can acquire highly detailed images of the vasculature with sufficient depth information. In PAT, pulsed laser
light illuminates the skin surface, and light absorption causes
thermoelastic expansion, which is converted into ultrasonic
waves. After detecting the pressure waves, a PAT image can be
reconstructed based on the acoustic time of arrival. Compared
with pure optical imaging technologies, PAT can detect deeper
vascular structures at a higher spatial resolution, because acoustic wave scattering is three orders of magnitude less than optical
scattering in tissue [9]. Moreover, optical imaging suffers from
light diffusion, and ultrasound imaging suffers from low sensitivity to small blood vessels. PAT provides images with a higher
signal-to-noise ratio (SNR) because non-absorbing tissue components will not generate any PA signals. In our previous study,
the system utilized side optical illumination and the participant’s hand was placed underneath the imaging platform. The
system was bulky and inconvenient for both the participant and
the operator. For instance, a small error in the subject’s hand
placement would lead to misalignment in the light illumination and might significantly deteriorate the image quality. In
addition, scanning the whole palm area is neither common nor
convenient in portable biometrics devices.
To address these issues, we developed a new system, 3D
Finger, for robust imaging of the finger vasculature. Compared
to the previous system, we redesigned the light delivery scheme
and adjusted the system’s scanning geometry. With the new
design, subjects place their finger directly on top of the imaging
window. This imaging pose significantly improves the user
experience and reduces the experimental preparation time. In
addition, by utilizing a high-performance cold mirror as an
acoustic-optical combiner, we achieved co-planar light illumination and acoustic recording, thus significantly improving the
system’s robustness and imaging depth [10]. Regardless of the
finger placement and curvature, the acoustic and light beams are
always coaxial to each other, eliminating alignment errors. In
this study, we use fingers instead of palms as the scanning target,
which is more convenient for future implementation in portable
devices. To cover the four fingers (index to little) of a subject, we
used a 2.25 MHz center frequency ultrasound transducer with
8.6-cm lateral coverage. The field of view is over two times larger
than that in our previous study [8]. We also developed a new
vascular matching algorithm, which leverages the fact that users
have distinct vessel patterns whose uniqueness is dependent
not only on the overall vessel structure but also on its depth
inside the finger. The algorithm employs multiple key features
to build a robust 3D vascular model that can classify the input as
the legitimate or illegitimate user. After testing the system and
the algorithm in 36 subjects, we obtained high authentication
accuracy and robustness.
2. METHODS
A. Photoacoustic Effect
The sensing of vasculature in our system is based on the PA
effect. Upon light illumination, the initial PA pressure $p_0$ can be described as follows [11]:

$$p_0(\vec{r}) = \Gamma \mu_a F(\vec{r}), \qquad (1)$$

where $\mu_a$ is the absorption coefficient and $F(\vec{r})$ is the local optical fluence. $\Gamma$ represents the Grüneisen parameter, which is determined by the material's thermal and mechanical properties. Equation (1) indicates that the PA amplitude is proportional to the optical absorption. Because the major absorber in the near-infrared region is hemoglobin, PAT allows for direct sensing of the hemoglobin distribution (i.e., the blood vasculature [12]).
As for image formation, based on the speed of sound in tissue
and the time of arrival of PA signals, the reconstruction algorithm back-projects data to possible locations of the acoustic
source. After performing the projection for all transducer elements, a 2D or 3D representation of the optical absorber can be
formed.
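To make the reconstruction step concrete, the following minimal Python sketch implements delay-and-sum back-projection for a linear array. It illustrates the principle under assumed geometry and variable names; it is not the authors' implementation, which uses the universal back-projection algorithm of [16].

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, xs, zs):
    """Minimal 2D delay-and-sum back-projection sketch.

    rf     : (n_samples, n_elements) PA signals, t = 0 at the laser pulse
    elem_x : (n_elements,) lateral element positions [m]
    fs     : sampling rate [Hz]; c: speed of sound [m/s]
    xs, zs : 1D image-grid coordinates [m]
    """
    img = np.zeros((zs.size, xs.size))
    n_samples = rf.shape[0]
    for k in range(rf.shape[1]):
        # Pixel-to-element distance -> time of flight -> sample index
        dist = np.hypot(xs[None, :] - elem_x[k], zs[:, None])
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        img[valid] += rf[idx[valid], k]  # project each sample back onto the pixel
    return img
```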
B. 3D Finger System
An end-to-end overview of our 3D Finger system is illustrated in
Fig. 1. First, we used the newly designed PAT system to collect
the raw PA signals from the fingers. A 3D image of the finger
blood vessels was then obtained through reconstruction. Next,
the 3D vessel structures were fed to the biometric framework,
which consists of three major steps: (1) image preprocessing,
(2) vessel segmentation, and (3) pattern analysis.
1. 3D Finger Hardware
Figure 2 shows a schematic drawing of the imaging platform.
The light source is an Nd:YAG laser (Continuum, SL III) with
1064 nm wavelength output, 10-Hz pulse repetition frequency,
and 10-ns pulse width. The laser pulses were coupled into a fiber
bundle with 1.1 cm-diameter circular input and 5.1 cm-length
line outputs (Schott Fostec). The subject’s fingers are placed
on top of a water tank, which is made with transparent acrylic
sheets and has an opening at the top sealed with a transparent
50 µm thick plastic film. Inside the water tank are the optical
fiber output and ultrasound transducer. The light delivery and
acoustic detection paths are combined by a dichroic cold mirror
(TECHSPEC, Edmund Optics Inc.), which allows for 97%
of the 1064 nm light to pass through. The energy irradiated
on the finger surface is approximately 25 mJ/cm2 , which is
well below the ANSI safety limit of 100 mJ/cm2 for 1064 nm
light [13]. The PA signals generated by the finger vessels were
reflected by the cold mirror and received by a 2.25 MHz linear
transducer with 128 elements (IMASONIC SAS, 0.67 mm
elementary pitch, 15 mm element height, and 40 mm elevation
focus). Due to an impedance mismatch, the acoustic reflection
by the cold mirror is above 90% [14]. A 3D-printed holder
combined the ultrasound transducer, fiber output, and cold
mirror. To scan the entire finger area, we used a 20-cm stroke
translation stage (McMaster-Carr, 6734K2) mounted on the
optical table. The laser synchronizes the scanning and data
acquisition systems. The stepper motor moves at a speed of
1 mm/s, and the transducer collects data immediately after each
laser pulse. The raw data of the experiment was collected using
a Vantage 256 system (Verasonics) and reconstructed using
the back-projection algorithm [15,16]. A similar system has
been utilized for breast imaging [17], and we have quantified
the spatial resolution to be around 1 mm in both lateral and
elevation (the linear scanning direction of the array) directions near the acoustic focus.
2. Experimental Procedure
Fig. 1. Flow diagram of the 3D Finger sensing system.

Fig. 2. Schematic diagram of the PA sensing hardware.

Before beginning the experiment, a small amount of ultrasound gel was placed on the imaging window to improve acoustic impedance matching. Next, we asked the subjects to rest their fingers on the imaging window and started the data acquisition.
The whole experiment took approximately 35 seconds, covering a motor scanning distance of 3.5 cm. The raw data matrix
size was 2048 (A-line length) by 128 (128 elements) by 350
(scanning steps). To form a 3D image, we first reconstructed
2D data acquired at each laser pulse and then stacked all the 2D
data based on their acquisition position. The final 3D data was
depth-encoded and maximum intensity projected (MIP).
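As a sketch of how the stacked volume can be depth-encoded, the snippet below takes the maximum intensity along depth and records where it occurs; the variable names and the color mapping are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def depth_encoded_mip(volume, z_mm):
    """volume[z, x, y]: stacked 2D reconstructions (y = scan direction).
    Returns the MIP along depth and the depth (in mm) of each maximum."""
    mip = volume.max(axis=0)                  # strongest absorber per (x, y)
    depth_map = z_mm[volume.argmax(axis=0)]   # depth at which it occurred
    return mip, depth_map
```

The depth map can then be mapped to color (e.g., hue) and the MIP to brightness to form the depth-encoded image.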
3. 3D Finger Imaging Processing Scheme
Figure 3 illustrates the imaging processing steps. It is critical
to remove nonvascular features, such as background tissue
signals and electronic noise, to obtain detailed finger vessel patterns. Therefore, we first denoised and smoothed the original
reconstructed image using Gaussian and bilateral filters [16].
Given the high-frequency nature of this noise, the Gaussian
filter was a low-pass blur filter (σ = 0.2) for attenuating the
high-frequency noise. The bilateral filter also has a Gaussian kernel (spatial sigma = 4) for preserving the vessel information.
Specifically, the output image can be described as follows [18]:

$$I^{(t+1)}(\vec{x}) = \frac{\sum_{i=-S}^{+S}\sum_{j=-S}^{+S} I^{(t)}(x_1+i,\, x_2+j)\, w^{(t)}}{\sum_{i=-S}^{+S}\sum_{j=-S}^{+S} w^{(t)}}, \qquad (2)$$

with the weights given as follows:

$$w^{(t)}(\vec{x}, \vec{\xi}) = \exp\left(-\frac{\left\|\vec{\xi} - \vec{x}\right\|^2}{2\sigma_D^2}\right) \exp\left(-\frac{\left\|I(\vec{\xi}) - I(\vec{x})\right\|^2}{2\sigma_R^2}\right). \qquad (3)$$

Here, $\vec{x} = (x_1, x_2)$ and $\vec{\xi} = (\xi_1, \xi_2)$ are space variables, and $S$ sets the window size of the filter.
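A minimal sketch of this smoothing stage is given below, with scipy's Gaussian filter and OpenCV's bilateralFilter standing in for Eqs. (2) and (3). The Gaussian sigma = 0.2 and spatial sigma = 4 follow the text, while the range (intensity) sigma is an assumed value.

```python
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter

def denoise_slice(img, sigma_gauss=0.2, sigma_space=4, sigma_range=25):
    """Gaussian low-pass followed by edge-preserving bilateral filtering."""
    img = gaussian_filter(img.astype(np.float32), sigma=sigma_gauss)
    # d=-1 lets OpenCV derive the window size from sigmaSpace
    return cv2.bilateralFilter(img, d=-1,
                               sigmaColor=sigma_range, sigmaSpace=sigma_space)
```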
After smoothing, we used a 3D vessel pattern segmentation
technique (binarization) to remove the noise and identify vessel
structures, where we create a binary image from the 3D PA image by replacing all values above a globally determined threshold with 1 and setting all other values to 0 [19].

Fig. 3. Flow diagram of the 3D Finger image processing steps.

Traditional optical
methods [20] cannot offer a detailed perception of blood flow in
the vessels due to inferior imaging resolution, light scattering,
and optical blurring. By contrast, the 3D Finger can reveal
high-quality vascular patterns through image binarization,
which is computationally efficient. To improve continuity in
vessel patterns, we leveraged a nearest-neighbor searching and
vascular structure fine-tuning module (skeleton) to track the
vessels; this step links disjoint vessels depending on the number of neighboring background points and supports the subsequent speeded up robust features (SURF)-based feature extraction. We also readjusted the
vascular topology based on the vessel distinctions, using a multiscale vessel enhancement algorithm. This algorithm searches
for geometrical structures in the binarized PA image that can be
considered as tubular [21]. The image $L(x)$ is convolved with a normalized second-order Gaussian derivative as follows:

$$L(x, \sigma) = \sigma^2\, \frac{\partial^2 G(x, \sigma)}{\partial u\, \partial v} * L(x), \qquad (4)$$
where σ represents the scales at which the response is computed.
An ideal tubular structure would have eigenvalues λ1 (along the
direction of the vessel) and λ2 (orthogonal to the vessel) of the
Hessian as |λ1 | ≈ 0 and |λ1 | ≪ |λ2 |. Finally, a vesselness factor
is defined to exclusively enhance the vascular patterns in the PA
image using the following equation:
$$\mathrm{LR}(\sigma) = \begin{cases} 0 & \text{if } \lambda_2 > 0, \\[4pt] \exp\left(-\dfrac{(\lambda_1/\lambda_2)^2}{2\beta^2}\right)\left(1 - \exp\left(-\dfrac{\lambda_1^2 + \lambda_2^2}{2c^2}\right)\right) & \text{otherwise}, \end{cases} \qquad (5)$$
where β and c are weights to regulate the filter’s sensitivity. It
should be noted that even though vessel discontinuities are still
present, they only limit the number of features detected from
deep regions. As will be shown in the results section, the current
features are sufficient to ensure a high matching accuracy.
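The segmentation and enhancement chain described above can be sketched with scikit-image, whose frangi() filter implements the Hessian-based vesselness of Eq. (5); the scale range here is an assumed value, and skeletonization is shown slice-wise for simplicity.

```python
from skimage.filters import threshold_otsu, frangi
from skimage.morphology import skeletonize

def segment_vessels(img):
    """Binarization -> skeleton -> Frangi vesselness for one 2D slice."""
    binary = img > threshold_otsu(img)    # global threshold: 1s and 0s
    skeleton = skeletonize(binary)        # thin vessels to centerlines
    # black_ridges=False because vessels appear bright in PA images
    vesselness = frangi(img, sigmas=range(1, 4), beta=0.5, black_ridges=False)
    return binary, skeleton, vesselness
```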
After vessel enhancement, we need to select appropriate
features that can highlight the distinct characteristics of finger
vessel patterns for user authentication. To this end, we employed
SURF [22,23], whose features are invariant to rotation, scale, blur,
and other noise interferences. While the vessel structure lies in
the 3D domain, the SURF features are primarily intended for
a 2D space. More specifically, the algorithm recovers key points
such as bifurcations and endpoints. The feature number varies
among users due to their finger position, muscle activity, or
body metabolism during the sensing process. To account for
these variations, we designed an M-to-N matching algorithm
to compute the correlation between SURF obtained from the
vessel patterns of two users, while considering the vessel depth
attribute. Specifically, the sum of the squared differences of
compared subjects was used to calculate the spatial correlation
between the SURF of two users. The identified feature pairs
with high correlation were further tested to examine their depth
similarity (represented by the color of vein patterns). As the
finger is weakly compressible, we expect the vessel depth information of the same subject to not vary much under different
pressures. A cross-correlation score (CrossCorr) was defined
based on the number of similar feature pairs between two users.
The score was further used to determine four metrics: accuracy,
false acceptance rate (FAR) [24], false rejection rate (FRR), and
equal error rate (EER). These metrics are commonly employed
in biometric studies to examine the uniqueness of the finger vein
pattern. Table 1 describes the detailed process of the vascular matching algorithm.

Table 1. Vascular Matching Algorithm

1. procedure Matching(a, b)                ▹ a and b are enhanced PA images of different subjects
2. [f_a, v_a] ← SURF(a)                    ▹ extract SURF features and their locations
3. [f_b, v_b] ← SURF(b)                    ▹ extract SURF features and their locations
4. indexPairs ← matchFeatures(f_a, f_b)    ▹ locate common features between the two PA images
5. count ← length(indexPairs)              ▹ total number of identified feature pairs
6. if count > 0 then
7.   for 1 to count do                     ▹ perform the following for every feature pair
8.     if indexPairs ∈ threshold_proximity then
9.       Δdepth ← compare(v_a, v_b)        ▹ depth similarity (represented by RGB colors in the PA image)
10.      if Δdepth < threshold_color then
11.        CrossCorr ← CrossCorr + 1       ▹ increment the correlation score
12. CrossCorr ← CrossCorr/length(f_a)      ▹ average by the number of features initially detected
13. metric3DVein ← CrossCorr               ▹ used to determine FAR, FRR, and EER
14. return metric3DVein                    ▹ metric for finger vein uniqueness
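A hedged Python sketch of the Table 1 procedure is given below. ORB is used as a freely available stand-in for SURF (which lives in OpenCV's non-free contrib module); the depth maps and both thresholds are illustrative assumptions, not the authors' values.

```python
import cv2
import numpy as np

def matching_score(img_a, img_b, depth_a, depth_b,
                   prox_thresh=20.0, depth_thresh=0.5):
    """CrossCorr score between two enhanced (uint8 grayscale) PA images."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)   # steps 2-3: features
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)             # step 4: common features
    cross_corr = 0
    for m in matches:                                 # steps 7-11
        xa, ya = kp_a[m.queryIdx].pt
        xb, yb = kp_b[m.trainIdx].pt
        if np.hypot(xa - xb, ya - yb) < prox_thresh:  # spatial proximity test
            d_depth = abs(depth_a[int(ya), int(xa)] - depth_b[int(yb), int(xb)])
            if d_depth < depth_thresh:                # depth-similarity test
                cross_corr += 1
    return cross_corr / max(len(kp_a), 1)             # step 12: normalize
```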
To test the potential of our system for biometric applications,
we analyzed the uniqueness of 3D finger veins (recorded within
a PA image) in 36 subjects (eight fingers imaged per subject).
Each individual subject’s PA image was compared against the
other 35 subjects. The matching results were used to quantify
accuracy, FAR, FRR, and EER scores.
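For reference, the FAR, FRR, and EER can be derived from the genuine and impostor CrossCorr scores as in the generic sketch below (not the authors' code).

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """genuine/impostor: 1D arrays of same-subject / cross-subject scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuines rejected
    i = np.argmin(np.abs(far - frr))        # operating point where FAR ~= FRR
    return far, frr, (far[i] + frr[i]) / 2  # the EER
```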
3. RESULTS
The post-processed results are shown in Fig. 4. The SNR of
the reconstructed image [Fig. 4(a)] is 9.8, whereas that of the smoothed and denoised image [Fig. 4(b)] is 16.2. The vessel structure after binarization is illustrated in Fig. 4(c). We can see
that the vascular image contains intricate depth information;
however, the patches of the blood flow in between the vessels
may still affect the system performance for biometric authentication. The vessel continuity was improved after skeleton
extraction, as shown in Fig. 4(d). Finally, the vesselness filter-enhanced image is shown in Fig. 4(e). It can be seen that the
vessel background has disappeared in the final image, and the
vascular patterns are more prominent. The feature extraction of
Fig. 4(e) is shown in Fig. 4(f), where the extracted features are
marked by green circles.
Fig. 4. 3D Finger images after each processing step. (a) Original (reconstructed) image without enhancement. (b) Gaussian and bilateral filter processed image of (a). (c) Binarization of (b). (d) Skeleton-extracted image of (c). (e) Final enhanced image. (f) Biometric features (marked with green circles) extracted by SURF.

Fig. 5. CrossCorr scores among vessel patterns from 36 subjects using their (a) left index fingers and (b) right index fingers. The results demonstrate the high uniqueness of 3D finger vessels acquired using our 3D Finger system.

Figure 5 shows the results of using only one finger (the index finger) for biometric identification. Here, different colors represent different cross-correlation scores. Yellow indicates high cross-correlation between the vascular characteristics of the legitimate
subject and that of the predicted subject, while blue indicates
low cross-correlation. Results from left index fingers [Fig. 5(a)]
and right index fingers [Fig. 5(b)] indicate that all nonmatching
pairs exhibit very low cross-correlation (except the left index
finger of Subject 15, which will be discussed later). These results
prove that vascular structures can be used to accurately match
the legitimate subject and the predicted subject. The accuracy,
FAR, FRR, and EER scores are presented in Table 2. The current
system has a low FAR (<1.23%). The FRR also remained reasonably low, except for the left index finger, which could be attributed to the outlier results from Subject 15. The EER was 0.13% for left index
fingers and 0% for right index fingers, showing replicability
across trials.
To verify whether the matching accuracy increases with more
fingers, we also calculated the FAR using different numbers of fingers of the same subject. As expected, the more fingers
involved, the smaller the FAR (Fig. 6). This result confirms that
subjects can be distinguished more accurately if the number
of fingers used for matching is increased. At the same time, the
error range of the FAR is also reduced, indicating better stability.
Table 2. Overall System Performance (All 36 Subjects)

Evaluation Metrics              Left Index Finger    Right Index Finger
Accuracy                        99.38%               99.84%
False Acceptance Rate (FAR)     1.23%                0.31%
False Rejection Rate (FRR)      9.27%                0.51%
Equal Error Rate (EER)          0.13%                0%
Sensitivity (1 − FAR)           98.77%               99.69%
Specificity (1 − FRR)           90.73%               99.49%
To verify how our system handles rotation invariance, we also
applied clockwise (30 deg) and counterclockwise (−30 deg)
rotation to the index finger images. The results (Fig. 7) indicate
that most of the FARs in both left [Fig. 7(a)] and right [Fig. 7(b)]
index fingers are lower than 0.2%. There are only two discrete
points in the right index finger that are higher than 0.5%; this
is probably caused by human artifacts, such as body motion or
the finger not being in contact with the surface during the sensing
process. These artifacts may induce distortion in the PA image,
resulting in incorrect vessel depth information. Nevertheless,
the overall performance of our system is close to or slightly better than that of the palm vein-sensing approach [8]. Based on these results, we conclude that our system is robust against finger rotation.

Fig. 6. False acceptance rate using different numbers of fingers (1–4). (a) Left hand. (b) Right hand.

Fig. 7. FAR among 36 subjects with vascular images at 0 deg, rotated at 30 deg and −30 deg. (a) Left index finger. (b) Right index finger.
4. DISCUSSION
Overall, we obtained a high matching accuracy in most subjects,
except for one outlier. For instance, from Fig. 5, we can see that
the left index finger of Subject 15 has a high false acceptance
value. This might be caused by limited information obtained
from the blood vessel image. In Fig. 8, we compare the unenhanced reconstructed PA image of Subject 15 [Fig. 8(a)] and
Subject 1 [Fig. 8(b)]. It can be seen that the vessel contrast in
Fig. 8(b) is much higher than that in Fig. 8(a). We also confirmed that the SNR of Fig. 8(a) is 4.2, while that of Fig. 8(b)
is 9.2. We speculate that the low SNR in Subject 15 may be
caused by finger movement during the scanning. After removing this outlier, the left finger results (Table 3) are significantly
improved. The left FAR and the FRR are both reduced by over
50%. The EER is observed to be 0% in both the left and right
index fingers. To further improve our model, we can design a
threshold to screen samples with low-quality images, and our
imaging protocol can be enhanced to ensure high-quality image acquisition. We also aim to explore enhancements to the PA
imaging setup for resolving vessel discontinuities in the future.
Regarding the comparison of our technique with infrared palm vein scanners, our previous study [25] compared the two setups in terms of SNR and showed that infrared scanners yield blurrier and less dense vessel structures. The current performance (EER < 1.23%) of our system is superior to that of several widely used biometrics (e.g., EER of 5.6% for 2D infrared vein scanners [26], 7% for gait [27], and 2.5% for capacitive hand touchscreens [28]), making it practical for real-world use. In addition, the average preprocessing time was around 0.4588 s, well within the real-time range.
In the future, it is possible for us to develop the system to include
a real-time display function.
In Fig. 7, we also noticed an increase in the FAR values of
some rotated images. In principle, the rotated images should
have the same matching accuracy as the original images. We
believe that the variation was caused by zero-padding in our
algorithm to accommodate the expanded area. The padded
boundary regions might slightly affect our image enhancement,
feature extraction, and matching algorithms, leading to a small
change in matching accuracy. However, the variation in the FAR
is fairly small (<0.7%), imposing no meaningful effect on the evaluation metrics (i.e., the accuracy remains above 99%).

Table 3. Overall System Performance (35 Subjects; Subject 15 Excluded)

Evaluation Metrics              Left Index Finger    Right Index Finger
Accuracy                        99.45%               99.84%
False Acceptance Rate (FAR)     0.27%                0%
False Rejection Rate (FRR)      4.8%                 0.26%
Equal Error Rate (EER)          0%                   0%
Sensitivity (1 − FAR)           99.73%               100%
Specificity (1 − FRR)           95.2%                99.74%

Fig. 8. Unenhanced reconstructed PA images of (a) Subject 15 and (b) Subject 1.
Further efforts can be made to reduce the system size. While the whole imaging setup is still relatively large due to the laser and ultrasound systems, our scanning platform is quite compact. To further downsize the setup, we can use high-energy light-emitting diodes (LEDs) or laser diode arrays, which are smaller and more energy efficient than flashlamp-pumped lasers [29,30]. Also, by using LEDs or laser diodes with
a high repetition frequency, we can reduce the imaging time
to less than 1 second. As for the ultrasound system, we can use
multi-channel data acquisition cards or compact ultrasound
systems [31] for PA data acquisition. As ultrasound systems
have already been realized in smartphones, we believe that PA
imaging can be implemented in smart devices as well [32]. Last,
the linear array can be replaced by a 2D matrix array, which does
not need mechanical scanning [33]. With these improvements,
the system has great potential for portable or wearable biometric
authentication in real time.
5. CONCLUSION
In summary, we have successfully developed a reliable and
robust 3D Finger biometric sensing system. It enables a fast and steady scan with the fingers simply placed on the scanning
window. Compared with existing IR-based palm-vessel imaging
techniques, our system provides depth information from 3D
vascular structure imaging. Compared with the existing 3D
PAT palm vessel sensing system, the 3D Finger system offers a
more user-friendly approach, and it distinguishes subjects based
on the vascular structure of the fingers. After testing 36 subjects’
left and right fingers, we obtain a low average FAR (1.23% for
the left index fingers and 0.31% for the right index fingers)
and EER (0.13% for the left index fingers and 0% for the right
index fingers). Because of the increasing demand for more secure
identification and authentication devices, we believe that our
system will have a broad application in various areas.
Funding. Center for Identification Technology Research
and the National Science Foundation (1822190).
Disclosures. Dr. Jun Xia is the founder of Sonioptix, LLC,
which, however, did not support this work. All other authors
declare no conflicts of interest.
REFERENCES
1. S. Marcel, M. S. Nixon, and S. Z. Li, Handbook of Biometric
Anti-spoofing (Springer, 2014), Vol. 1.
2. D. White, A. M. Burton, R. Jenkins, and R. I. Kemp, “Redesigning
photo-ID to improve unfamiliar face matching performance,” J. Exp.
Psychol. Appl. 20, 166–173 (2014).
3. O. E. Aru and I. Gozie, “Facial verification technology for use in ATM
transactions,” Am. J. Eng. Res. 2, 188–193 (2013).
4. J.-C. Lee, “A novel biometric system based on palm vein image,”
Pattern Recogn. Lett. 33, 1520–1528 (2012).
5. R. R. Fletcher, V. Raghavan, R. Zha, M. Haverkamp, and P. L.
Hibberd, “Development of mobile-based hand vein biometrics for
global health patient identification,” in IEEE Global Humanitarian
Technology Conference (GHTC) (2014).
6. A. Iula, A. Savoia, and G. Caliano, “3D ultrasound palm vein pattern for
biometric recognition,” in IEEE International Ultrasonics Symposium
(2012).
7. C.-L. Lin and K.-C. Fan, “Biometric verification using thermal images
of palm-dorsa vein patterns,” IEEE Trans. Circuits Syst. Video
Technol. 14, 199–213 (2004).
8. Y. Wang, Z. Li, T. Vu, N. Nyayapathi, K. W. Oh, W. Xu, and J. Xia, “A
robust and secure palm vessel biometric sensing system based on
photoacoustics,” IEEE Sens. J. 18, 5993–6000 (2018).
9. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging
from organelles to organs,” Science 335, 1458–1462 (2012).
10. Y. Wang, R. S. A. Lim, H. Zhang, N. Nyayapathi, K. W. Oh, and J. Xia,
“Optimizing the light delivery of linear-array-based photoacoustic
systems by double acoustic reflectors,” Sci. Rep. 8, 13004 (2018).
11. M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev.
Sci. Instrum. 77, 041101 (2006).
12. J. Xia, J. Yao, and L. V. Wang, “Photoacoustic tomography: principles
and advances,” Electromagn. Waves 147, 1–22 (2014).
13. American National Standards Institute, American National Standard
for Safe Use of Lasers (2007).
14. Y. Wang, “Optimizing the light delivery of linear-array-based photoacoustic systems by double acoustic reflectors,” Sci. Rep. 8, 13004
(2018).
15. L. V. Wang, “Tutorial on photoacoustic microscopy and computed
tomography,” IEEE J. Sel. Top. Quantum Electron. 14, 171–179
(2008).
16. M. Xu and L. V. Wang, “Universal back-projection algorithm for
photoacoustic computed tomography,” Phys. Rev. E 71, 016706
(2005).
17. N. Nyayapathi, R. Lim, H. Zhang, W. Zheng, Y. Wang, M. Tiao, K. W.
Oh, X. C. Fan, E. Bonaccio, K. Takabe, and J. Xia, “Dual scan mammoscope (DSM)—a new portable photoacoustic breast imaging system with scanning in craniocaudal plane,” IEEE Trans. Biomed. Eng.
67, 1321–1327 (2019).
18. D. Barash, “Fundamental relationship between bilateral filtering,
adaptive smoothing, and the nonlinear diffusion equation,” IEEE
Trans. Pattern Anal. Mach. Intell. 24, 844–847 (2002).
19. “imbinarize,” 2020, https://www.mathworks.com/help/images/ref/
imbinarize.html#d120e113233.
20. Y. Zhou and A. Kumar, “Human identification using palm-vein
images,” IEEE Trans. Inf. Forensics Security 6, 1259–1274 (2011).
21. A. F. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever,
“Multiscale vessel enhancement filtering,” in International
Conference on Medical Image Computing and Computer-Assisted
Intervention (Springer, 1998).
22. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust
features,” in European Conference on Computer Vision (Springer,
2006).
23. D. Mistry and A. Banerjee, “Comparison of feature detection and
matching approaches: SIFT and SURF,” GRD J. 2, 7–13 (2017).
24. M. Sabir, “Sensitivity and specificity analysis of fingerprints based
algorithm,” in International Conference on Applied and Engineering
Mathematics (ICAEM) (IEEE, 2018).
25. Z. Li, Y. Wang, A. S. Rathore, C. Song, N. Nyayapathi, T. Vu, J. Xia,
and W. Xu, “PAvessel: practical 3D vessel structure sensing through
photoacoustic effects with its applications in palm biometrics,” in
ACM on Interactive, Mobile, Wearable Ubiquitous Technologies
(2018), Vol. 2, pp. 1–24.
26. K.-Q. Wang, A. S. Khisa, X.-Q. Wu, and Q.-S. Zhao, “Finger
vein recognition using LBP variance with global matching,”
in International Conference on Wavelet Analysis and Pattern
Recognition (IEEE, 2012).
27. J. Mantyjarvi, M. Lindholm, E. Vildjiounaite, S.-M. Makela, and H. A.
Ailisto, “Identifying users of portable devices from gait pattern with
accelerometers,” in IEEE International Conference on Acoustics,
Speech, and Signal Processing (ICASSP) (IEEE, 2005).
28. R. Tartz and T. Gooding, “Hand biometrics using capacitive
touchscreens,” in Adjunct Proceedings of the 28th Annual ACM
Symposium on User Interface Software & Technology (2015).
29. Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N.
Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8,
9885 (2018).
30. P. K. Upputuri and M. Pramanik, “Performance characterization
of low-cost, high-speed, portable pulsed laser diode photoacoustic
tomography (PLD-PAT) system,” Biomed. Opt. Express 6, 4118–4129
(2015).
31. A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang,
N. Dadashzadeh, J. Xia, and K. Avanaki, “Review of cost reduction
methods in photoacoustic computed tomography,” Photoacoustics
15, 100137 (2019).
32. S. Gummadi, J. Eisenbrey, and J. Li, “Advances in modern clinical
ultrasound,” Adv. Ultrasound Diagn. Ther. 2, 51–63 (2018).
33. T. L. Szabo and P. A. Lewin, “Ultrasound transducer selection in clinical imaging practice,” J. Ultrasound Med. 32, 573–582 (2013).