Efficient Segmentation and Registration of Retinal Image Using Gumble Probability Distribution and BRISK Feature

Nagendra Pratap Singh* | Vibhav Prakash Singh

Department of Computer Science and Engineering, National Institute of Technology, Hamirpur 177005, H.P., India

Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, Allahabad 211004, Prayagraj, India

Page: 855-864 | DOI: https://doi.org/10.18280/ts.370519

Received: 30 July 2020 | Revised: 6 October 2020 | Accepted: 15 October 2020 | Available online: 25 November 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

The registration of segmented retinal images is mainly used for the diagnosis of various diseases such as glaucoma, diabetes, and hypertension. These diseases manifest as changes in the retinal vessel structure, so fast and accurate registration of segmented retinal images helps to identify vessel changes and diagnose the diseases. This paper presents a novel binary robust invariant scalable keypoint (BRISK) feature-based segmented retinal image registration approach. The BRISK framework is an efficient keypoint detection, description, and matching approach. The proposed approach contains three steps, namely, pre-processing, segmentation using a Gumbel PDF based matched filter, and the BRISK framework for registration of the segmented source and target retinal images. The effectiveness of the proposed approach is demonstrated by evaluating the normalized cross-correlation of image pairs. Based on the experimental analysis, the proposed approach performs better in both aspects, registration performance and computation time, than SURF and Harris partial intensity invariant feature descriptor based registration.

Keywords: 

retinal image, feature descriptor, segmentation, registration, probability distribution functions

1. Introduction

The objective of segmented retinal image registration is to identify disease progression by aligning two retinal images, namely the source and target image. These images are taken at different time intervals or from different viewpoints of the same retina. Various retinal diseases such as diabetes [1, 2], glaucoma [3], and hypertension [4] are easily detected and diagnosed through retinal image registration. The variation of intensities over time and the poor quality of retinal images are the major challenges of retinal image registration. For example, registration of a retinal image pair is difficult when the two images are captured some years apart, acquired from different cameras with different sensitivities, or even in different modalities. Nowadays, researchers are working in the area of retinal images for disease detection and retrieval using low-level features [5, 6] and machine learning approaches [7-11], but they do not exploit the relevance of retinal image segmentation and registration. According to Brown [12], Maintz and Viergever [13], and Lester and Arridge [14], registration techniques are classified into two categories, namely area-based and feature-based techniques. Area-based techniques [15] compare the intensity differences of retinal image pairs by using mutual information [15] and normalized cross-correlation [16] as similarity metrics, and an optimization technique [17] is used to achieve the optimum similarity metric, which indicates the better registration. According to Chanwimaluang et al. [18], area-based techniques are not suitable when the overlapping area available for registration is small. To overcome this problem, generally a region of interest within the image pair is selected for evaluating the similarity metric [17].

Changes in illumination and initial misalignment also affect the performance of area-based techniques [19]. Area-based techniques are therefore susceptible to background changes caused by pathologies and by changes in the camera viewpoint [19]. On the basis of an exhaustive literature survey, it is found that feature-based techniques [20, 21] are more suitable for segmented retinal image registration than area-based techniques. The main characteristic of feature-based techniques is their robustness against illumination changes.

Feature-based techniques generally extract salient and distinct features for searching the appropriate transformation, such as scaling, rotation, and translation [22], between the image pair that optimizes the correspondence between the selected features. However, it is a tedious task to extract features from poor quality images. To overcome this problem in retinal image registration, the blood vessel structure of the respective segmented retinal images is generally used to identify the matched feature points [23]. For extracting distinct matched feature points, the scale invariant feature transform (SIFT) has been used by various authors [24, 25]. SIFT features are scale invariant as well as rotation invariant and provide robust matching across changes in viewpoint, changes in illumination, and a substantial range of affine distortion [25, 26]. SIFT features are highly efficient at matching a single feature exactly against a large set of features of other images, and they are specially designed for mono-modal image registration [24]. The main disadvantage of SIFT features is their scale invariance strategy, which is not able to provide sufficient control points in case of high order transformations [27]. Therefore, Bay et al. [27] proposed speeded up robust features (SURF), which are faster and more robust than SIFT features. Cattin et al. [28] proposed a SURF based retinal image registration method, which is based on the Haar wavelet and does not depend on the vasculature; however, this method is applicable only to mono-modal retinal image registration.

Stewart et al. [19] and Tsai et al. [29] proposed a general dual bootstrap iterative closest point algorithm (GDB-ICP) for registration of poor quality retinal images. Two methods are available in the literature to provide the initial matches before applying GDB-ICP. In the first method, various authors [30, 31] used Lowe's multi-scale keypoint detector and the SIFT descriptor to identify the initial matches. In the second method, the central line extraction algorithm [25] was used to extract the bifurcation points of the retinal blood vessels and generate the initial matches. After identifying the first matches, the GDB-ICP algorithm iteratively expands the area around them by mapping the corner points. According to Stewart et al. [19] and Tsai et al. [29], only one initial match is sufficient for the iterative registration process. Further, Chen et al. [32] state that on very poor quality images the GDB-ICP algorithm may fail even with a correct initial match, because the corner points extracted from such images are unreliable.

Further, Taha et al. [16] and Chen et al. [33] proposed feature-based retinal image registration schemes using the SURF feature to improve the quality of retinal image registration. Ozdemir et al. [34] proposed a GA-based optimization of the SURF algorithm, and Wang et al. [35] proposed a robust point matching method for multimodal retinal image registration based on the speeded up robust feature (SURF) detector. Wang et al. [35], Chen et al. [33], and Taha et al. [16] claimed that the performance of their approaches is good, but some improvement is still required in the thresholding techniques and the vascular network detection phase, which gives an idea for our proposed approach.

On the basis of an exhaustive literature survey, it was found that the main challenges are accurate and fast detection of changes in the vascular structure. This research gap motivates us to design and implement an efficient framework for segmentation and registration of the retinal image. Leutenegger et al. [36] set a new milestone by proposing the binary robust invariant scalable keypoint (BRISK) methodology, which handles both challenges: BRISK achieves comparable quality of feature matching at lower computational cost [36]. Therefore, in this paper, we propose a BRISK feature-based registration approach for segmented retinal images to achieve better registration accuracy. The proposed approach handles two major challenges of segmented retinal image registration, which are as follows:

  • Selection of a suitable retinal vessel segmentation approach, which supports better segmented retinal image registration.
  • Use of the BRISK framework for feature detection and matching, because it is a high quality descriptor and requires less computational time.

The organization of the paper is as follows: the methods and model, the experimental results and performance analysis, and the conclusions of the proposed approach are discussed in Sections 2, 3, and 4, respectively.

2. Methods and Model

Our proposed retinal image registration approach contains three steps, namely, pre-processing, Gumbel PDF based retinal image segmentation, and BRISK feature based registration between the pairs of segmented retinal images. The block diagram of the proposed registration approach is shown in Figure 1.

2.1 Pre-processing

Pre-processing is required because of the low contrast between the vessels and their background and the continuous decrease in vessel width as one moves slightly away from the optic disc of the retinal image. The pre-processing module contains two sub-modules: first, the RGB retinal image is converted into gray scale by using a principal component analysis (PCA) based technique [37]; then the contrast of the gray scale image is improved by applying contrast limited adaptive histogram equalization (CLAHE). The main steps used in the PCA based gray image conversion method [37] are mentioned in Figure 1. The resulting image after pre-processing is shown in Figure 2.
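
As a rough illustration of this stage, the sketch below converts an RGB fundus image to gray scale by projecting the pixels onto their first principal component and then applies CLAHE with OpenCV. The projection is a generic PCA reduction, not necessarily the exact procedure of [37], and the CLAHE parameters and file name are assumptions:

```python
import cv2
import numpy as np

def pca_gray(rgb):
    """Project the RGB pixels onto their first principal component."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)              # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    gray = pixels @ eigvecs[:, np.argmax(eigvals)]  # first principal component
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12)  # rescale to [0, 1]
    return (gray.reshape(rgb.shape[:2]) * 255).astype(np.uint8)

bgr = cv2.imread("20_test.tif")                     # image from the DRIVE dataset
gray = pca_gray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed settings
enhanced = clahe.apply(gray)
```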

2.2 Gumbel PDF based MF for segmentation

In matched filter (MF) based retinal blood vessel segmentation, the gray scale cross-section profile of the retinal blood vessels is compared with a predefined kernel. Chaudhuri et al. [38] proposed the first matched filter approach, based on the Gaussian function, and claimed that the vessel cross-section profile has an approximately Gaussian shape. Further, Zolfagharnasab and Naghsh-Nilchi [39] proposed a Cauchy PDF based matched filter approach and claimed that the vessel cross-section profile matches the Cauchy PDF curve better. Although both the Gaussian and the Cauchy PDF diminish towards the truncated values on both sides of the peak, the diminishing rate of the Gaussian curve is faster than that of the Cauchy PDF curve. The vessel cross-section intensity profiles of the two regions marked with different colored lines in Figure 3(a) are shown in Figures 3(b) and 3(c), obtained by selecting a total of thirty pixels between two points. From Figures 3(b) and 3(c), it is observed that the cross-section profile is moderately skewed, that is, it does not diminish equally towards the respective truncated values on both sides of its peak.

Therefore, in this paper, we use the concepts of our recently proposed Gumbel PDF based approach [40-42] for efficient retinal blood vessel segmentation. In papers [40-42], through statistical analysis, we justified that the gray scale intensity profile of the retinal image is slightly skewed rather than of approximately Gaussian shape.

The Gumbel PDF having skewed characteristics is defined by the following Eq. (1):

$f(x, y)=\frac{1}{\beta} e^{\frac{x-\mu}{\beta}} e^{-e^{\frac{x-\mu}{\beta}}} \quad \text{for } |y| \leq L / 2$     (1)

where x is the perpendicular distance between the point (x, y) and the straight line passing through the center of the retinal blood vessel, µ and β represent the location parameter and the scale parameter respectively, and L represents a piece-wise line segment in the kernel.
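
A minimal sketch of how such a kernel could be built and applied is given below. The parameter values (µ, β, kernel size, number of orientations, final threshold) are illustrative assumptions, not the tuned values of [40]:

```python
import cv2
import numpy as np

def gumbel_kernel(mu=0.0, beta=2.0, half_width=7, L=9, angle_deg=0.0):
    """Zero-mean matched-filter kernel built from the Gumbel PDF of Eq. (1)."""
    ys, xs = np.mgrid[-(L // 2):L // 2 + 1, -half_width:half_width + 1]
    theta = np.deg2rad(angle_deg)
    d = xs * np.cos(theta) + ys * np.sin(theta)     # perpendicular distance x
    z = (d - mu) / beta
    kernel = (1.0 / beta) * np.exp(z) * np.exp(-np.exp(z))
    return (kernel - kernel.mean()).astype(np.float32)

def gumbel_mf_response(gray, n_angles=12):
    """Maximum response over a bank of rotated Gumbel kernels."""
    img = gray.astype(np.float32)
    responses = [cv2.filter2D(img, -1, gumbel_kernel(angle_deg=a))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(responses, axis=0)

resp = gumbel_mf_response(enhanced)                 # 'enhanced' from pre-processing
vessels = ((resp > resp.mean() + resp.std()) * 255).astype(np.uint8)  # crude threshold
```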

2.3 BRISK feature based registration

This sub-section describes the main steps of the BRISK framework, namely feature point detection and construction of the feature point descriptor.

2.3.1 Feature points detection

Efficient feature point detection is a prominent step in many computer vision tasks. Corner detection plays an important role in the detection of interesting feature points, because corners provide an important clue due to their two-dimensional (2D) constraint and fast algorithms exist to detect them [43]. Rosten and Drummond [44] proposed the features from accelerated segment test (FAST) detector, which outperforms the previously existing algorithms in the literature in both computational performance and repeatability [43]. After that, Mair et al. [43] accelerated the popular FAST feature detector and proposed the adaptive and generic accelerated segment test (AGAST) feature detector.

Figure 1. Block diagram of proposed registration model

Figure 2. (a) Color retina image (b) Gray scale image and (c) Contrast enhanced gray scale image of 20_test.tif selected from DRIVE dataset

Figure 3. Retina image selected from the test set of DRIVE database, (a) Gray scale retina image of 11_test.tif, (b) Cross-section intensity profile of the region marked by claret color in (a) by selecting 30 pixels between two points, (c) Cross-section intensity profile of the region marked by blue color in (a) by selecting 30 pixels between two points

Figure 4. 40 interest points in segmented retinal image by using BRISK feature detection

Leutenegger et al. [36], inspired by the work of Mair et al. [43], proposed the novel BRISK detector. The main objective of the BRISK detector is to achieve scale invariance, which is essential for high-quality keypoints, by searching for maxima in the image plane as well as in scale-space [36]. The detector also estimates the true scale of each keypoint in the continuous scale-space [36]. For interest keypoint detection, the BRISK framework uses a scale-space pyramid of n octaves Oi and n intra-octaves Ioi for i = 0, 1, 2, ..., n−1, with n typically set to 4. The octaves are created by progressively half-sampling the original image (O0). The intra-octave Ioi lies between the two octave layers Oi and Oi+1. The first intra-octave Io0 is obtained by down-sampling the original image O0 by a factor of 1.5, and the remaining intra-octave layers are obtained by progressive half-sampling. In the BRISK framework, initially the FAST 9-16 detector is applied on all octave and intra-octave layers using the same threshold (Th) value to identify the suitable regions of interest (ROI). Here 9-16 means that at least nine consecutive pixels in the circle of sixteen pixels must be either sufficiently darker or sufficiently brighter than the central pixel. The points that lie in the ROI are then subjected to non-maximal suppression in scale-space.
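
The layer construction just described can be summarized in a minimal sketch (FAST detection and the non-maximal suppression discussed next run on every layer; both are omitted here):

```python
import cv2

def brisk_layers(img, n=4):
    """Octaves O_i by successive half-sampling; intra-octaves Io_i starting
    from a 1.5x down-sampled copy of the original image O_0."""
    octaves, intra = [img], []
    h, w = img.shape[:2]
    intra.append(cv2.resize(img, (round(w / 1.5), round(h / 1.5))))
    for _ in range(1, n):
        octaves.append(cv2.resize(octaves[-1], None, fx=0.5, fy=0.5))
        intra.append(cv2.resize(intra[-1], None, fx=0.5, fy=0.5))
    return octaves, intra
```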

The non-maximal suppression in scale-space is required to remove multiple interest points adjacent to one another. The algorithm for the non-maximal suppression is as follows:

1. Calculate the SAD value, i.e., the sum of absolute differences between the pixels in the contiguous arc and the center pixel (call it D).

2. Compare the D values of two adjacent interest points and discard the interest point with the lower value of D.

The above steps are mathematically summarized as follows:

$D=\operatorname{Max}\left\{\begin{array}{ll}\sum\left(P_{\text {arc }}-P_{\text {center }}\right), & \text { if }\left(P_{\text {arc }}-P_{\text {center }}\right)>T_{h} \\ \sum\left(P_{\text {center }}-P_{\text {arc }}\right), & \text { if }\left(P_{\text {center }}-P_{\text {arc }}\right)>T_{h}\end{array}\right.$

where $P_{\text {center }}$ represents the value of the center pixel, $P_{\text {arc }}$ represents the values of the contiguous pixels in the circle, and Th is the detection threshold, which is determined using the value of the center pixel. The non-maximal suppression in scale-space is applied to identify the interest keypoints at an octave layer and its immediately neighboring intra-octave layers above and below. Further, image saliency can be considered a continuous quantity across the image and the scale dimension, so a sub-pixel and continuous scale refinement is performed for each detected maximum. To determine the true scale of the keypoint, the local saliency maximum is first refined at sub-pixel accuracy in all three layers of interest, and then a 1D parabola is fitted along the scale axis. Finally, the keypoint location is re-interpolated between the patch maxima closest to the determined scale. The BRISK feature point detector is able to detect both the bifurcation and corner points present in the segmented retinal blood vessel structure, as shown in Figure 4.
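
OpenCV ships a ready-made BRISK implementation, so the whole detection stage can be exercised directly; the FAST threshold, the file name, and the restriction to the 40 strongest keypoints (as in Figure 4) are assumed settings:

```python
import cv2

segmented = cv2.imread("segmented_source.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
detector = cv2.BRISK_create(thresh=30, octaves=4)   # FAST threshold Th, n = 4 octaves
keypoints = detector.detect(segmented, None)
# Keep, e.g., the 40 strongest responses, as in Figure 4
keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)[:40]
vis = cv2.drawKeypoints(segmented, keypoints, None, color=(0, 255, 0))
cv2.imwrite("brisk_keypoints.png", vis)
```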

2.3.2 Feature points descriptor

The set of interest points contains the refined sub-pixel image locations and the corresponding floating-point scale values. After identifying the set of interest points, the BRISK descriptor is composed as a binary string. The binary string is generated by concatenating the results of brightness comparison tests, as described by Chli and Davison [45]. In the BRISK framework, the characteristic direction of every interest point is identified so that rotation- and scale-normalized descriptors can be used.

Sampling. The basic idea behind the BRISK descriptor is that it uses a pattern for sampling the neighborhood of the interest point. The BRISK sampling pattern contains N locations equally spaced on concentric circles around the interest point. It was specifically built for dense matching and to capture more information [28]. To prevent aliasing effects when sampling the intensity of a particular point Pi in the pattern, Gaussian smoothing is applied with standard deviation σi proportional to the distance between the points on the respective circle. For positioning and scaling the pattern to a particular point A in the image, consider one sampling-point pair (Pi, Pj) out of the total N·(N−1)/2 sampling-point pairs. The smoothed intensity values of the selected points, I(Pi, σi) and I(Pj, σj), are used to estimate the local gradient Lg(Pi, Pj) by the following Eq. (2):

$L_{g}\left(P_{i}, P_{j}\right)=\left(P_{j}-P_{i}\right) \cdot \frac{I\left(P_{j}, \sigma_{j}\right)-I\left(P_{i}, \sigma_{i}\right)}{\left\|P_{j}-P_{i}\right\|^{2}}$      (2)

Let the set Sa contain all sampling-point pairs; then two subsets of Sa are defined, containing the short-distance pairs and the long-distance pairs, represented by Ss and Sl respectively. The sets Sa, Ss, and Sl are mathematically defined as follows:

$S^{a}=\left\{\left(P_{i}, P_{j}\right) \in \mathbb{R}^{2} \times \mathbb{R}^{2} \mid i, j \in \mathbb{N} \wedge i<N \wedge j<i\right\}$      (3)

$S^{s}=\left\{\left(P_{i}, P_{j}\right) \in S^{a} \mid\left\|P_{i}-P_{j}\right\|<\delta_{\max }\right\}$     (4)

$S^{l}=\left\{\left(P_{i}, P_{j}\right) \in S^{a} \mid\left\|P_{i}-P_{j}\right\|>\delta_{\min }\right\}$     (5)

where the symbol ‖·‖ denotes the Euclidean distance and | means "such that". The symbols δmax and δmin are the maximum and minimum threshold distances, set to δmax = 9.75t and δmin = 13.67t, where t is the scale of the particular point A.

The overall characteristic pattern direction vector ∇ of a particular point A is estimated by summing the gradients of all point pairs in the set Sl, using the following Eq. (6):

$\nabla=\left(\begin{array}{l}\nabla_{x} \\ \nabla_{y}\end{array}\right)=\frac{1}{L} \cdot \sum_{\left(P_{i}, P_{j}\right) \in S^{l}} L_{g}\left(P_{i}, P_{j}\right)$      (6)

The long-distance pairs Sl are used for the computation of the overall characteristic pattern direction vector ∇ under the assumption that the local gradients of the short-distance pairs annihilate each other, so they are not necessary for the global gradient determination [36].
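
The pair bookkeeping of Eqs. (2)-(6) fits in a short NumPy sketch. Here pts holds the N pattern locations and smoothed the Gaussian-smoothed intensities I(Pi, σi), both assumed to come from the sampling pattern of one keypoint:

```python
import numpy as np

def split_pairs_and_direction(pts, smoothed, t=1.0,
                              delta_max=9.75, delta_min=13.67):
    grad = np.zeros(2)
    long_count = 0
    short_pairs = []
    for i in range(len(pts)):
        for j in range(i):                     # all pairs with j < i (Eq. 3)
            diff = pts[i] - pts[j]
            dist = np.linalg.norm(diff)
            if dist < delta_max * t:           # short-distance subset Ss (Eq. 4)
                short_pairs.append((i, j))
            if dist > delta_min * t:           # long-distance subset Sl (Eq. 5)
                grad += diff * (smoothed[i] - smoothed[j]) / dist ** 2  # Eq. (2)
                long_count += 1
    direction = grad / max(long_count, 1)      # Eq. (6)
    return short_pairs, direction
```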

 

Descriptor. To construct the rotation- and scale-normalized descriptor in the BRISK framework, the sampling pattern is rotated by θ degrees around the particular point A, where θ is defined by the following Eq. (7):

$\theta=\arctan 2\left(\nabla_{y}, \nabla_{x}\right)$      (7)

Then the bit-vector descriptor Db is assembled by performing all the short-distance intensity comparisons of the rotated point pairs $\left(P_{i}^{\theta}, P_{j}^{\theta}\right) \in S^{s}$, such that each bit b corresponds to:

$b=\left\{\begin{array}{ll}1, & \text { if } I\left(P_{j}^{\theta}, \sigma_{j}\right)>I\left(P_{i}^{\theta}, \sigma_{i}\right) \\ 0, & \text { otherwise }\end{array}\right.$     For $\forall\left(P_{i}^{\theta}, P_{j}^{\theta}\right) \in S^{s}$     (8)
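
In practice, descriptor construction and the Hamming-distance comparison of the resulting binary strings (used for the matching discussed in the next subsection) are available directly in OpenCV; the image variables are placeholders for the two segmented images:

```python
import cv2

brisk = cv2.BRISK_create()
kp_s, des_s = brisk.detectAndCompute(source_seg, None)  # segmented source image
kp_t, des_t = brisk.detectAndCompute(target_seg, None)  # segmented target image

# Binary BRISK descriptors are compared with the Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_s, des_t), key=lambda m: m.distance)
```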

2.3.3 Feature matching and transformation model estimation

After identifying the interest/feature points of the two segmented retinal images, the registration process starts. Registering two retinal images (the source and target images) from their segmented versions is a tedious task; feature matching and the estimation of the transformation model are the main tasks of the feature based retinal image registration process. Chen et al. [33] and Taha et al. [16] improved the performance of registration by adopting more robust feature points. To adopt robust feature points, the BRISK framework is used in this paper because it provides robust and fast feature point detection and description. The BRISK feature detector is able to detect the bifurcation points, as shown in Figure 4. A bifurcation point has three surrounding branches and angles, and each branch is connected to a neighboring bifurcation point or terminates at an end point; most of the end points are detected as corner points. Each bifurcation structure is represented by a characteristic vector Vc. A bifurcation structure has one of the following forms:

• Bifurcation structure contains three branches and three angles.

• Bifurcation structure contains three branches and six angles.

• Bifurcation structure contains three branches and nine angles.

• Bifurcation structure contains three branches and twelve angles.

Therefore, to handle all possibilities of the bifurcation structure, the characteristic vector Vc contains three branches of various lengths and twelve angles, and it is invariant to scaling and translation. The feature matching process between two images searches for the best similarity among all structure pairs. Let Sf and Tf be the feature groups of the source and target images, containing Ns and Nt bifurcation structures respectively. The similarity S is the distance between the characteristic vectors of a bifurcation structure pair, evaluated by the following Eq. (9):

$S_{(i, j)}=D\left(V_{c i}, V_{c j}\right)$     (9)

where $V_{c i}$ and $V_{c j}$ are the characteristic vectors of the i-th bifurcation structure of the source image and the j-th bifurcation structure of the target image.

After identifying the associated matched features between the target and source images (as shown in Figure 5(c) and (g)), the next critical issue for feature based registration is the estimation of the transformation model. On the basis of the literature survey, it was found that the minimum numbers of point pairs required for the linear and affine transformations are two and three, respectively. Hence, one matched bifurcation pair is sufficient to identify the transformation model. The estimation of the transformation model is done by using the following Eq. (10):

$E_{(\mathrm{pq}, \mathrm{mn})}=D\left(M_{1}\left(V_{p}, V_{q}\right), M_{2}\left(V_{m}, V_{n}\right)\right)$      (10)

where $M_{1}\left(V_{p}, V_{q}\right)$ and $M_{2}\left(V_{m}, V_{n}\right)$ are the transformation models estimated from the matched pairs $\left(V_{p}, V_{q}\right)$ and $\left(V_{m}, V_{n}\right)$ respectively, both having good similarity according to Eq. (9). The correspondence pairs are used together to estimate the linear, affine, and quadratic transformations. To select the appropriate transformation, we experimentally evaluated the performance of the proposed registration approach with the linear, affine, and quadratic transformations and observed that the affine transformation performs better than the linear and quadratic ones. Therefore, in the proposed approach, an affine transformation is used for the registration of segmented retinal images. The registered retinal images of two different source and target image pairs are shown in Figure 5(d) and (h), where the changes in the vascular structures of the image pairs are represented in gray color.
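
Continuing the earlier matching sketch, an affine model can be fitted to the matched keypoints and applied to the source image; the RANSAC-based robust fit and its threshold are assumed implementation choices:

```python
import cv2
import numpy as np

src_pts = np.float32([kp_s[m.queryIdx].pt for m in matches])
dst_pts = np.float32([kp_t[m.trainIdx].pt for m in matches])

# 2x3 affine matrix (needs at least three correspondences)
A, inliers = cv2.estimateAffine2D(src_pts, dst_pts, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
h, w = target_seg.shape[:2]
registered = cv2.warpAffine(source_seg, A, (w, h))
```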

The performance of the proposed registration approach is evaluated by using the normalized cross-correlation (NCC), a commonly used similarity measure between two registered images. The NCC between the source and target images is evaluated by using the following Eq. (11):

$\operatorname{NCC}(\mathrm{S}, \mathrm{T})=\frac{\sum_{X} S(X) \cdot T(X)}{\sqrt{\sum_{X} S(X)^{2} \cdot \sum_{X} T(X)^{2}}}$      (11)

where S and T represent the source and target images and X ranges over the matched points between S and T. According to Tsai and Lin [46], an NCC value close to one indicates a better degree of similarity.
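
Eq. (11) translates directly into a few lines of NumPy; applying it to the registered image and the target image from the sketches above yields the kind of similarity score reported in Table 1:

```python
import numpy as np

def ncc(s_img, t_img):
    """Normalized cross-correlation of Eq. (11); values near 1 mean high similarity."""
    s = s_img.astype(np.float64).ravel()
    t = t_img.astype(np.float64).ravel()
    return np.sum(s * t) / np.sqrt(np.sum(s ** 2) * np.sum(t ** 2))

score = ncc(registered, target_seg)   # from the registration sketch above
```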

Figure 5. (a) and (e) show 40 feature points of two segmented source retinal images; (b) and (f) show 40 feature points of the respective segmented target retinal images; (c) and (g) show the respective matched feature points between the source and target image pairs; (d) and (h) show the registered retinal images of the respective source and target image pairs

3. Experimental Results and Performance Analysis

The DRIVE retinal database [4] is used to evaluate the performance of the proposed approach. All 20 retinal images of the DRIVE database are used as source images. The target image for each source image is produced randomly by rotating, scaling, and translating the source image with different angles, scaling factors, and translation factors, respectively. Some target images are also produced by flipping (horizontally/vertically) the source image. The set of all 20 pairs of source and target images is used for the experimental analysis.
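
A sketch of this target-generation step is shown below; the ranges for the rotation angle, scale, translation, and flip probability are assumptions, since the exact factors are not listed:

```python
import cv2
import numpy as np

def random_target(src, rng=np.random.default_rng(0)):
    """Synthesize a target image by randomly transforming a source image."""
    h, w = src.shape[:2]
    angle = rng.uniform(-30, 30)                  # rotation in degrees
    scale = rng.uniform(0.9, 1.1)
    tx, ty = rng.uniform(-20, 20, size=2)         # translation in pixels
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)
    target = cv2.warpAffine(src, M, (w, h))
    if rng.random() < 0.25:                       # occasional flip
        target = cv2.flip(target, int(rng.integers(0, 2)))  # 0: vertical, 1: horizontal
    return target
```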

Table 1. Comparative analysis with respect to NCC similarity measures and computation time

| Source Image | Target Image | NCC (Gumbel PDF based MF + Harris-PIIFD) | NCC (Gumbel PDF based MF + SURF) | NCC (Proposed Approach) | Harris-PIIFD Time (Sec.) | SURF Time (Sec.) | Proposed Approach Time (Sec.) |
|---|---|---|---|---|---|---|---|
| S1 | T1 | 0.8101 | 0.8033 | 0.8098 | 29.37 | 8.41 | 6.97 |
| S2 | T2 | 0.8331 | 0.8410 | 0.8418 | 32.18 | 9.78 | 3.84 |
| S3 | T3 | 0.1155 | 0.0021 | 0.8005 | 19.79 | 8.16 | 3.64 |
| S4 | T4 | 0.6585 | 0.6836 | 0.7263 | 19.78 | 8.74 | 3.96 |
| S5 | T5 | 0.7400 | 0.7351 | 0.7715 | 26.20 | 8.31 | 3.65 |
| S6 | T6 | 0.1104 | 0.0015 | 0.7685 | 19.24 | 10.81 | 3.58 |
| S7 | T7 | 0.1160 | 0.1885 | 0.8197 | 19.16 | 5.76 | 3.63 |
| S8 | T8 | 0.6770 | 0.6974 | 0.7661 | 23.29 | 5.96 | 3.83 |
| S9 | T9 | 0.7285 | 0.7029 | 0.7354 | 23.77 | 6.04 | 3.50 |
| S10 | T10 | 0.9454 | 0.9437 | 0.9870 | 29.92 | 5.59 | 4.23 |
| S11 | T11 | 0.7747 | 0.7918 | 0.8235 | 33.02 | 8.23 | 3.86 |
| S12 | T12 | 0.1169 | 0.1230 | 0.8376 | 21.54 | 7.95 | 3.98 |
| S13 | T13 | 0.7507 | 0.7550 | 0.7792 | 25.12 | 6.31 | 3.97 |
| S14 | T14 | 0.1083 | 0.2130 | 0.7445 | 21.52 | 7.33 | 4.34 |
| S15 | T15 | 0.8054 | 0.8103 | 0.8194 | 28.65 | 9.56 | 4.15 |
| S16 | T16 | 0.7756 | 0.7890 | 0.8297 | 27.29 | 9.64 | 4.35 |
| S17 | T17 | 0.7022 | 0.7333 | 0.8098 | 24.37 | 10.29 | 4.64 |
| S18 | T18 | 0.8340 | 0.8385 | 0.8455 | 33.21 | 9.16 | 4.06 |
| S19 | T19 | 0.8021 | 0.8199 | 0.8319 | 29.49 | 7.92 | 4.55 |
| S20 | T20 | 0.7705 | 0.7624 | 0.7996 | 26.76 | 6.46 | 4.41 |

Figure 6. (a) Comparative analysis of average performance measure (Average NCC) (b) Comparative analysis of average computation time (Average Time in Sec.)

Figure 7. Comparative analysis of Gumbel Probability distribution function based retinal image segmentation approach

The performance (NCC) and the computation time of the proposed approach are evaluated for all 20 image pairs and reported in Table 1. According to the experimental results presented in Table 1, the NCC values of the proposed approach are close to 1, which indicates that the registration accuracy is acceptable, with less computation time.

Furthermore, we evaluated the performance and computation time of the SURF feature based registration approach proposed by Taha et al. [16] and the Harris partial intensity invariant feature descriptor based registration approach proposed by Chen et al. [32] on the same set of 20 image pairs; these results are also given in Table 1. In both cases, all image pairs were segmented by the Gumbel PDF based matched filter approach proposed by Singh and Srivastava [40].

On the basis of a comparative analysis of the proposed BRISK feature based segmented retinal image registration approach with the SURF feature based [16] and Harris partial intensity invariant feature based [32] approaches, it is observed that both the performance and the computation time of the proposed approach are better than those of the other two approaches. The average performance measure (average NCC) and average computation time of all three approaches were also evaluated, and their comparative analysis is shown in Figure 6.

On this basis, we again find that the overall average performance and computation time of the proposed registration approach are better than those of the other two approaches. The reasons for the better performance are as follows:

• On the basis of the comparative analysis shown in Figure 7, it has been observed that the Gumbel probability distribution function based retinal image segmentation approach performs significantly better than the other matched filter based approaches [38, 39, 47-53]. According to the literature, the detection of accurate bifurcation structures, the main requirement of feature-based retinal image registration, depends on good segmentation [40, 42]. Therefore, the Gumbel PDF based retinal image segmentation approach is used to overcome this limitation of feature based retinal image registration.

• The BRISK framework [36] handles the main challenges of detecting suitable feature points from an image: high quality description at low computational cost. The BRISK framework also handles scaling, translation, and rotation of the vascular tree structures of the source and target image pairs. Therefore, in this paper, the BRISK framework is used for feature detection and matching, which improves the registration performance and reduces the computation time.

• Retinal diseases such as diabetes, glaucoma, and age-related macular degeneration (AMD) are major causes of human blindness, and living without vision is very painful for an aging society. The proposed approach may therefore be useful for designing an intelligent system to detect retinal diseases at an early stage and limit vision loss.

4. Conclusions

Retinal diseases such as glaucoma, hypertension, and diabetes are diagnosed by identifying the changes in the retinal vessel structure; therefore, accurate and less time-consuming registration of segmented retinal images is a prominent task. In this paper, we proposed an efficient BRISK feature-based segmented retinal image registration approach, because the BRISK framework is an efficient keypoint detection, description, and matching approach. The Gumbel PDF based segmentation approach is used to segment the source and target images because it provides better segmentation results than other existing blood vessel segmentation approaches on both normal and abnormal retinal images. The performance of the proposed registration approach was demonstrated by evaluating the normalized cross-correlation similarity index for the image pairs. On the basis of a comparative analysis of the proposed approach with the SURF and Harris partial intensity invariant feature descriptor based segmented retinal blood vessel registration approaches, it was observed that the proposed registration approach is better in both aspects, registration performance as well as computation time. In the future, the proposed approach may be used to develop an intelligent system for the aging society.

References

[1] Martinez-Perez, M.E., Hughes, A.D., Thom, S.A., Bharath, A.A., Parker, K.H. (2007). Segmentation of blood vessels from red-free and fluorescein retinal images. Medical Image Analysis, 11(1): 47-61. https://doi.org/10.1016/j.media.2006.11.004 

[2] Niemeijer, M., Staal, J., van Ginneken, B., Loog, M., Abramoff, M.D. (2004). Comparative study of retinal vessel segmentation methods on a new publicly available database. In Medical Imaging 2004: Image Processing, 5370: 648-656. https://doi.org/10.1117/12.535349 

[3] Leung, H., Wang, J.J., Rochtchina, E., Wong, T.Y., Klein, R., Mitchell, P. (2004). Impact of current and past blood pressure on retinal arteriolar diameter in an older population. Journal of Hypertension, 22(8): 1543-1549. https://doi.org/10.1097/01.hjh.0000125455.28861.3f 

[4] Mitchell, P., Leung, H., Wang, J.J., Rochtchina, E., Lee, A.J., Wong, T.Y., Klein, R. (2005). Retinal vessel diameter and open-angle glaucoma: The Blue Mountains Eye study. Ophthalmology, 112(2): 245-250. https://doi.org/10.1016/j.ophtha.2004.08.015

[5] Das, S., Malathy, C. (2018). Survey on diagnosis of diseases from retinal images. Journal of Physics: Conference Series, 1000(1): 012053. https://doi.org/10.1088/1742-6596/1000/1/012053

[6] Qureshi, I., Ma, J., Abbas, Q. (2019). Recent development on detection methods for the diagnosis of diabetic retinopathy. Symmetry, 11(6): 749. https://doi.org/10.3390/sym11060749 

[7] Sukhia, K.N., Riaz, M.M., Ghafoor, A. (2019). Content-based retinal image retrieval. IET Image Processing, 13(9): 1525-1534. https://doi.org/10.1049/iet-ipr.2018.6371 

[8] Pawar, P.M., Agrawal, A.J. (2018). Retinal disease detection using machine learning techniques. HELIX, 8(5): 3932-3937. https://doi.org/10.29042/2018-3932-3937

[9] Akyol, K., Şen, B., Bayır, Ş. (2016). Automatic detection of optic disc in retinal image by using keypoint detection, texture analysis, and visual dictionary techniques. Computational and Mathematical Methods in Medicine, 2016: 1-10. https://doi.org/10.1155/2016/6814791 

[10] Carrera, E.V., González, A., Carrera, R. (2017). Automated detection of diabetic retinopathy using SVM. 2017 IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Cusco, pp. 1-4. https://doi.org/10.1109/INTERCON.2017.8079692

[11] Santhakumar, R., Tandur, M., Rajkumar, E.R., Geetha, K.S., Haritz, G., Rajamani, K.T. (2016). Machine learning algorithm for retinal image analysis. 2016 IEEE Region 10 Conference (TENCON), Singapore, pp. 1236-1240. https://doi.org/10.1109/TENCON.2016.7848208

[12] Brown, L.G. (1992). A survey of image registration techniques. ACM Computing Surveys (CSUR), 24(4): 325-376. https://doi.org/10.1145/146370.146374 

[13] Maintz, J.A., Viergever, M.A. (1998). A survey of medical image registration. Medical Image Analysis, 2(1): 1-36. https://doi.org/10.1016/S1361-8415(01)80026-8 

[14] Lester, H., Arridge, S.R. (1999). A survey of hierarchical non-linear medical image registration. Pattern Recognition, 32(1): 129-149. https://doi.org/10.1016/S0031-3203(98)00095-8 

[15] Wachowiak, M.P., Smolíková, R., Zheng, Y., Zurada, J.M., Elmaghraby, A.S. (2004). An approach to multimodal biomedical image registration utilizing particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3): 289-301. https://doi.org/10.1109/TEVC.2004.826068 

[16] Taha, H.M., El-Bendary, N., Hassanien, A.E., Badr, Y., Snasel, V. (2011). Retinal feature-based registration schema. In: Abd Manaf A., Zeki A., Zamani M., Chuprat S., El-Qawasmeh E. (eds) Informatics Engineering and Information Science. ICIEIS 2011. Communications in Computer and Information Science, vol 252. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25453-6_3

[17] Orchard, J. (2007). Efficient least squares multimodal registration with a globally exhaustive alignment search. IEEE Transactions on Image Processing, 16(10): 2526-2534. https://doi.org/10.1109/TIP.2007.904956 

[18] Chanwimaluang, T., Fan, G., Fransen, S.R. (2006). Hybrid retinal image registration. IEEE Transactions on Information Technology in Biomedicine, 10(1): 129-142. https://doi.org/10.1109/TITB.2005.856859 

[19] Stewart, C.V., Tsai, C.L., Roysam, B. (2003). The dual-bootstrap iterative closest point algorithm with application to retinal image registration. IEEE Transactions on Medical Imaging, 22(11): 1379-1394. https://doi.org/10.1109/TMI.2003.819276 

[20] Heneghan, C., Maguire, P., Ryan, N., De Chazal, P. (2002). Retinal image registration using control points. Proceedings IEEE International Symposium on Biomedical Imaging, Washington, DC, USA, pp. 349-352. https://doi.org/10.1109/ISBI.2002.1029265

[21] Park, J., Keller, J.M., Gader, P.D., Schuchard, R.A. (1998). Hough-based registration of retinal images. SMC'98 Conference Proceedings. 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.98CH36218), San Diego, CA, USA, pp. 4550-4555. https://doi.org/10.1109/ICSMC.1998.727568 

[22] Ranade, S., Rosenfeld, A. (1980). Point pattern matching by relaxation. Pattern Recognition, 12(4): 269-275. https://doi.org/10.1016/0031-3203(80)90067-9 

[23] Laliberté, F., Gagnon, L., Sheng, Y. (2003). Registration and fusion of retinal images-an evaluation study. IEEE Transactions on Medical Imaging, 22(5): 661-673. https://doi.org/10.1109/TMI.2003.812263 

[24] Lowe, D.G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2): 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94 

[25] Lowe, D.G. (1999). Object recognition from local scale-invariant features. Proceedings of the seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, pp. 1150-1157. https://doi.org/10.1109/ICCV.1999.790410 

[26] Lindeberg, T. (1998). Feature detection with automatic scale selection. International Journal of Computer Vision, 30(2): 79-116. https://doi.org/10.1023/A:1008045108935 

[27] Bay, H., Tuytelaars, T., Van Gool, L. (2006). Surf: Speeded up robust features. In: Leonardis A., Bischof H., Pinz A. (eds) Computer Vision – ECCV 2006. ECCV 2006. Lecture Notes in Computer Science, vol 3951. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11744023_32

[28] Cattin, P.C., Bay, H., Van Gool, L., Székely, G. (2006). Retina mosaicing using local features. In: Larsen R., Nielsen M., Sporring J. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006. MICCAI 2006. Lecture Notes in Computer Science, vol 4191. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11866763_23

[29] Tsai, C.L., Li, C.Y., Yang, G., Lin, K.S. (2009). The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence. IEEE Transactions on Medical Imaging, 29(3): 636-649. https://doi.org/10.1109/TMI.2009.2030324 

[30] Mikolajczyk, K., Schmid, C. (2005). A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10): 1615-1630. https://doi.org/10.1109/TPAMI.2005.188 

[31] Se, S., Lowe, D., Little, J. (2001). Vision-based mobile robot localization and mapping using scale-invariant features. Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), Seoul, South Korea, pp. 2051-2058. https://doi.org/10.1109/ROBOT.2001.932909 

[32] Chen, J., Tian, J., Lee, N., Zheng, J., Smith, R.T., Laine, A.F. (2010). A partial intensity invariant feature descriptor for multimodal retinal image registration. IEEE Transactions on Biomedical Engineering, 57(7): 1707-1718. https://doi.org/10.1109/TBME.2010.2042169 

[33] Chen, L., Xiang, Y., Chen, Y., Zhang, X. (2011). Retinal image registration using bifurcation structures. 2011 18th IEEE International Conference on Image Processing, Brussels, pp. 2169-2172. https://doi.org/10.1109/ICIP.2011.6116041 

[34] Özdemir, H., Sever, R., Polat, Ö. (2019). GA-based optimization of SURF algorithm and realization based on Vivado-HLS. Traitement du Signal, 36(5): 377-382. https://doi.org/10.18280/ts.360501 

[35] Wang, G., Wang, Z., Chen, Y., Zhao, W. (2015). Robust point matching method for multimodal retinal image registration. Biomedical Signal Processing and Control, 19: 68-76. https://doi.org/10.1016/j.bspc.2015.03.004 

[36] Leutenegger, S., Chli, M., Siegwart, R.Y. (2011). BRISK: Binary robust invariant scalable keypoints. 2011 International Conference on Computer Vision, Barcelona, pp. 2548-2555. https://doi.org/10.1109/ICCV.2011.6126542 

[37] Saroj, S.K., Kumar, R., Singh, N.P. (2020). Fréchet PDF based matched filter approach for retinal blood vessels segmentation. Computer Methods and Programs in Biomedicine, 194: 105490. https://doi.org/10.1016/j.cmpb.2020.105490 

[38] Chaudhuri, S., Chatterjee, S., Katz, N., Nelson, M., Goldbaum, M. (1989). Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on Medical Imaging, 8(3): 263-269. https://doi.org/10.1109/42.34715 

[39] Zolfagharnasab, H., Naghsh-Nilchi, A.R. (2014). Cauchy based matched filter for retinal vessels detection. Journal of Medical Signals and Sensors, 4(1): 1-9. https://doi.org/10.4103/2228-7477.128432

[40] Singh, N.P., Srivastava, R. (2016). Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter. Computer Methods and Programs in Biomedicine, 129: 40-50. https://doi.org/10.1016/j.cmpb.2016.03.001 

[41] Singh, N.P., Srivastava, R. (2019). Extraction of retinal blood vessels by using an extended matched filter based on second derivative of gaussian. Proceedings of the National Academy of Sciences, India Section A: Physical Sciences, 89(2): 269-277. https://doi.org/10.1007/s40010-017-0465-3 

[42] Singh, N.P., Srivastava, R. (2017). Weibull probability distribution function-based matched filter approach for retinal blood vessels segmentation. In: Sahana S., Saha S. (eds) Advances in Computational Intelligence. Advances in Intelligent Systems and Computing, vol 509. Springer, Singapore. https://doi.org/10.1007/978-981-10-2525-9_40

[43] Mair, E., Hager, G.D., Burschka, D., Suppa, M., Hirzinger, G. (2010). Adaptive and generic corner detection based on the accelerated segment test. In: Daniilidis K., Maragos P., Paragios N. (eds) Computer Vision – ECCV 2010. ECCV 2010. Lecture Notes in Computer Science, vol 6312. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15552-9_14

[44] Rosten, E., Drummond, T. (2005). Fusing points and lines for high performance tracking. In Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, Beijing, pp. 1508-1515. https://doi.org/10.1109/ICCV.2005.104 

[45] Chli, M., Davison, A.J. (2008). Active matching. In: Forsyth D., Torr P., Zisserman A. (eds) Computer Vision – ECCV 2008. ECCV 2008. Lecture Notes in Computer Science, vol 5302. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88682-2_7

[46] Tsai, D.M., Lin, C.T. (2003). Fast normalized cross correlation for defect detection. Pattern Recognition Letters, 24(15): 2625-2631. https://doi.org/10.1016/S0167-8655(03)00106-5

[47] Al-Rawi, M., Qutaishat, M., Arrar, M. (2007). An improved matched filter for blood vessel detection of digital retinal images. Computers in Biology and Medicine, 37(2): 262-267. https://doi.org/10.1016/j.compbiomed.2006.03.003

[48] Zhang, B., Zhang, L., Zhang, L., Karray, F. (2010). Retinal vessel extraction by matched filter with first-order derivative of Gaussian. Computers in Biology and Medicine, 40(4): 438-445. https://doi.org/10.1016/j.compbiomed.2010.02.008

[49] Cinsdikici, M.G., Aydın, D. (2009). Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm. Computer Methods and Programs in Biomedicine, 96(2): 85-95. https://doi.org/10.1016/j.cmpb.2009.04.005

[50] Amin, M.A., Yan, H. (2011). High speed detection of retinal blood vessels in fundus image using phase congruency. Soft Computing, 15(6): 1217-1230. https://doi.org/10.1007/s00500-010-0574-2

[51] Jiang, X., Mojon, D. (2003). Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(1): 131-137. https://doi.org/10.1109/TPAMI.2003.1159954

[52] Marín, D., Aquino, A., Gegúndez-Arias, M.E., Bravo, J.M. (2011). A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Transactions on Medical Imaging, 30(1): 146-158. https://doi.org/10.1109/TMI.2010.2064333

[53] Lam, B.S., Gao, Y., Liew, A.W.C. (2010). General retinal vessel segmentation using regularization-based multiconcavity modeling. IEEE Transactions on Medical Imaging, 29(7): 1369-1381. https://doi.org/10.1109/TMI.2010.2043259