Article

A Novel Skin Lesion Detection Approach Using Neutrosophic Clustering and Adaptive Region Growing in Dermoscopy Images

Yanhui Guo 1,*, Amira S. Ashour 2 and Florentin Smarandache 3
1 Department of Computer Science, University of Illinois at Springfield, Springfield, IL 62703, USA
2 Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University, Tanta 31527, Egypt
3 Department of Mathematics, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA
* Author to whom correspondence should be addressed.
Submission received: 26 March 2018 / Revised: 9 April 2018 / Accepted: 14 April 2018 / Published: 18 April 2018

Abstract: This paper proposes a novel skin lesion detection approach, called NCARG, based on neutrosophic clustering and adaptive region growing algorithms applied to dermoscopic images. First, the dermoscopic images are mapped into a neutrosophic set domain using their shearlet transform results. The images are described via three memberships: true, indeterminate, and false. An indeterminacy filter is then defined in the neutrosophic set to reduce the indeterminacy of the images. A neutrosophic c-means clustering algorithm is applied to segment the dermoscopic images. With the clustering results, skin lesions are identified precisely using an adaptive region growing method. To evaluate the performance of this algorithm, a public data set (ISIC 2017) is employed to train and test the proposed method. Fifty images are randomly selected for training and 500 images for testing. Several metrics are measured to quantitatively evaluate the performance of NCARG. The results establish that the proposed approach detects lesions with high accuracy, 95.3% on average, compared with the 80.6% average accuracy obtained when employing the neutrosophic similarity score and level set (NSSLS) segmentation approach.

1. Introduction

Dermoscopy is an in vivo, noninvasive technique that assists clinicians in examining pigmented skin lesions and investigating amelanotic lesions. It visualizes subsurface skin structures in the superficial dermis, the dermoepidermal junction, and the epidermis [1]. Dermoscopic images are complex and inhomogeneous, but they play a significant role in the early identification of skin cancer. Recognizing skin subsurface structures is performed by visually searching for individual features and salient details [2]. However, visual assessment of dermoscopic images is subjective, time-consuming, and prone to errors [3]. Consequently, researchers are interested in developing automated clinical assessment systems for lesion detection to assist dermatologists [4,5]. These systems require efficient image segmentation and detection techniques for further feature extraction and skin cancer lesion classification. However, skin cancer segmentation and detection are complex due to dissimilar lesion color, texture, size, shape, and type, as well as the irregular boundaries of various lesions and the low contrast between skin and lesion. Moreover, dark hair covering the skin and lesions leads to specular reflections.
Traditional skin cancer detection techniques involve image feature analysis to delineate cancerous areas from normal skin. Thresholding techniques use low-level features, including intensity and color, to separate normal skin from cancerous regions. Garnavi et al. [6] applied Otsu's method to identify the core lesion; nevertheless, such a process is sensitive to skin tone variations and lighting. Moreover, dermoscopic images include artifacts due to water bubbles, dense hair, and gel, which pose a great challenge for accurate detection. Silveira et al. [7] evaluated six skin lesion segmentation techniques for dermoscopic images, including the gradient vector flow (GVF), level set, adaptive snake, adaptive thresholding, fuzzy-based split and merge (FSM), and expectation-maximization level set (EMLV) methods. The results established that the adaptive snake and EMLV were the superior semi-supervised techniques, and that FSM achieved the best fully automated results.
In dermoscopic skin lesion images, Celebi et al. [8] applied an unsupervised method using a modified JSEG algorithm for border detection, where the original JSEG algorithm is an adjusted version of the generalized Lloyd algorithm (GLA) for color quantization. The main idea of this method is to perform segmentation in two independent stages, namely color quantization and spatial segmentation. However, one of its main limitations occurs when the bounding box does not entirely include the lesion. This method was evaluated on 100 dermoscopic images, and the border detection error was calculated. Dermoscopic images from the initial consultation were analyzed by Argenziano et al. [9] and compared with images from the last follow-up consultation and with the symmetrical/asymmetrical structural changes. Xie and Bovik [10] implemented a dermoscopic image segmentation approach by integrating a genetic algorithm (GA) and a self-generating neural network (SGNN). The GA was used to select the optimal samples as initial neuron trees, and the SGNN was then used to train the remaining samples. The number of clusters was determined by adjusting the SD of cluster validity, and clustering was accomplished by treating each neuron tree as a cluster. A comparative study between this method and other segmentation approaches (namely k-means, statistical region merging, Otsu's thresholding, and fuzzy c-means) revealed that the optimized method provided improved segmentation and more accurate results.
Barata et al. [11] proposed a machine learning-based computer-aided diagnosis system for melanoma using features of medical importance. This system used text labels to detect several significant dermoscopic criteria, where an image annotation scheme was applied to associate the image regions with the criteria (texture, color, and color structures). Feature fusion was then used to combine the lesions' diagnosis and the medical information. The proposed approach achieved 84.6% sensitivity and 74.2% specificity on 804 images of a multi-source data set.
Set theory, such as the fuzzy set method, has been successfully employed in image segmentation. Fuzzy sets were introduced into image segmentation applications to handle uncertainty, and several researchers have developed efficient fuzzy clustering techniques for skin cancer segmentation and other applications. Fuzzy c-means (FCM) uses the membership function to segment images into one or several regions. Lee and Chen [12] proposed a segmentation technique for different skin cancer types using classical FCM clustering. An optimum threshold-based segmentation technique using type-2 fuzzy sets was applied to outline skin cancerous areas. The results established the superiority of this method compared to Otsu's algorithm, due to its robustness against skin tone variations and shadow effects. Jaisakthi et al. [13] proposed an automated skin lesion segmentation technique for dermoscopic images using a semi-supervised learning algorithm. A k-means clustering procedure was employed to cluster the pre-processed skin images, and the skin lesions were identified from these clusters according to color features. However, fuzzy set techniques cannot assess the indeterminacy of each element in the set. Zhou et al. [14] introduced an FCM procedure based on mean shift for detecting regions within dermoscopic images.
Recently, neutrosophy has provided a powerful technique, namely the neutrosophic set (NS), to handle indeterminacy during image processing. Guo and Sengur [15] integrated the NS and FCM frameworks to resolve the inability of FCM to handle uncertain data. A clustering approach called neutrosophic c-means (NCM) clustering was proposed to cluster typical data points, and the results proved the efficiency of the NCM for image segmentation and data clustering. Mohan et al. [16] proposed automated brain tumor segmentation based on a neutrosophic and k-means clustering technique. A non-local neutrosophic Wiener filter was used to improve the quality of magnetic resonance images (MRI) before applying the k-means clustering approach. The results reported a detection rate of 100% with 98.37% accuracy and 99.52% specificity. Sengur and Guo [17] carried out an automated technique using a multiresolution wavelet transform and the NS. The color/texture features were mapped onto the NS and the wavelet domain, and the c-k-means clustering approach was then employed for segmentation. Nevertheless, wavelets [18] suffer from poor directionality when analyzing functions in multi-dimensional applications. Hence, wavelets are relatively ineffective at representing edges and anisotropic features in dermoscopic images. Subsequently, enhanced multi-scale procedures, including curvelets and shearlets, were established to resolve the limitations of wavelet analysis. These methods can encode directional information for multi-scale analysis. Shearlets provide a sparse representation of two-dimensional information with edge discontinuities [19], and shearlet-based techniques were established to be superior to wavelet-based methods [20].
Dermoscopic images include several artifacts, such as hair, air bubbles, and other noise factors, that constitute indeterminate information. The above-mentioned skin lesion segmentation methods either need preprocessing to deal with this indeterminate information or have their detection results degraded by it. To overcome this disadvantage, we introduce the neutrosophic set to deal with indeterminate information in dermoscopic images: we use a shearlet transform and the neutrosophic c-means (NCM) method along with an indeterminacy filter (IF) to eliminate the indeterminacy for accurate skin cancer segmentation. An adaptive region growing method is also employed to identify the lesions accurately.
The rest of the paper is organized as follows. In the second section, the proposed method is presented. Then the experimental results are discussed in the third section. The conclusions are drawn in the final section.

2. Methodology

The current work proposes a skin lesion detection algorithm using neutrosophic clustering and adaptive region growing in dermoscopic images. In this study, the red channel is used to detect the lesion, since healthy skin regions tend to be reddish, while darker pixels often occur in skin lesion regions [21]. First, the shearlet transform is applied to the red channel of the dermoscopic image to extract shearlet features. Then, the red channel of the image is mapped into the neutrosophic set domain, where the map functions are defined using the shearlet features. In the neutrosophic set, an indeterminacy filtering operation is performed to remove indeterminate information, such as noise and hair, without using any de-noising or hair removal approaches. Then, segmentation is performed by the neutrosophic c-means (NCM) clustering algorithm. Finally, the lesions are identified precisely by applying adaptive region growing to the segmentation results.

2.1. Shearlet Transform

Shearlets are based on a rigorous and simple mathematical framework for the geometric representation of multidimensional data and for multiresolution analysis [22]. The shearlet transform (ST) resolves the limitations of wavelet analysis, where wavelets fail to represent geometric regularities and surface singularities due to their isotropic support. Shearlets comprise nearly parallel, elongated functions that achieve anisotropy along edges. The ST is a two-dimensional extension of the wavelet transform that uses directional and multiscale filter banks to capture the smooth contours corresponding to the prevailing features in an image. Typically, the ST is a function of three parameters a, s, and t, denoting the scale, shear, and translation parameters, respectively. The shearlet can automatically capture both the locations of singularities and the curves along which they occur. For $a > 0$, $s \in \mathbb{R}$, $t \in \mathbb{R}^2$, the ST of a function $p$ can be defined by the following expression [23]:
$$ST_{\varsigma}\,p(a,s,t) = \langle p, \varsigma_{a,s,t} \rangle,$$
where $\varsigma_{a,s,t}(f) = |\det N_{a,s}|^{-1/2}\,\varsigma\!\left(N_{a,s}^{-1}(f - t)\right)$ and $N_{a,s} = \begin{bmatrix} a & s\sqrt{a} \\ 0 & \sqrt{a} \end{bmatrix}$. Each matrix $N_{a,s}$ can be defined as:
$$N_{a,s} = V_s D_a,$$
where the shear matrix is expressed by:
$$V_s = \begin{bmatrix} 1 & s \\ 0 & 1 \end{bmatrix}$$
and the anisotropic dilation matrix is given by:
$$D_a = \begin{bmatrix} a & 0 \\ 0 & \sqrt{a} \end{bmatrix}.$$
During the selection of a proper decomposition function, for any $\tau = (\tau_1, \tau_2) \in \mathbb{R}^2$ with $\tau_2 \neq 0$, $\varsigma$ can be expressed by:
$$\varsigma(\tau) = \varsigma(\tau_1, \tau_2) = \varsigma_1(\tau_1)\,\varsigma_2\!\left(\frac{\tau_1}{\tau_2}\right),$$
where $\varsigma_1 \in L^2(\mathbb{R})$ and $\|\varsigma_2\|_{L^2} = 1$.
From the preceding equations, the discrete shearlet transform (DST) is formed by translation, shearing, and scaling to provide the precise orientations and locations of edges in an image. The DST is obtained by sampling the continuous ST and offers decent anisotropic feature extraction. Thus, the DST system is properly defined by sampling the continuous ST on a discrete subset of the shearlet group as follows, where $(j, k, m) \in \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}^2$ [24]:
$$ST(\varsigma) = \left\{ \varsigma_{j,k,m} = a^{-\frac{3}{4}}\,\varsigma\!\left(D_a^{-1} V_s^{-1}(\cdot - t)\right) : (j,k,m) \in \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}^2 \right\}.$$
The DST can be divided into two steps: multi-scale subdivision and direction localization [25], where the Laplacian pyramid algorithm is first applied to an image in order to obtain the low- and high-frequency components at each scale j, and direction localization is then achieved with a shear filter on a pseudo-polar grid.
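To make the affine parameterization above concrete, the following minimal NumPy sketch builds the shear matrix $V_s$, the anisotropic dilation matrix $D_a$, and their composite $N_{a,s} = V_s D_a$. It only illustrates the geometry of the system and its normalization factor; the actual shearlet decomposition used in this work relies on a Laplacian pyramid plus shear filtering, which is not reimplemented here.

```python
import numpy as np

def shear_matrix(s: float) -> np.ndarray:
    """Shear matrix V_s."""
    return np.array([[1.0, s],
                     [0.0, 1.0]])

def dilation_matrix(a: float) -> np.ndarray:
    """Anisotropic (parabolic) dilation matrix D_a = diag(a, sqrt(a))."""
    return np.array([[a, 0.0],
                     [0.0, np.sqrt(a)]])

def shearlet_matrix(a: float, s: float) -> np.ndarray:
    """Composite matrix N_{a,s} = V_s D_a of the shearlet affine system."""
    return shear_matrix(s) @ dilation_matrix(a)

# Each shearlet is |det N_{a,s}|^{-1/2} * varsigma(N_{a,s}^{-1}(x - t));
# the normalization factor for one illustrative (a, s) pair:
N = shearlet_matrix(a=0.25, s=1.0)
norm_factor = 1.0 / np.sqrt(abs(np.linalg.det(N)))
print(N, norm_factor)
```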

2.2. Neutrosophic Images

Neutrosophy has been successfully used in many applications to describe uncertain or indeterminate information. Every event in a neutrosophic set (NS) has certain degrees of truth (T), indeterminacy (I), and falsity (F), which are independent of each other. Previously reported studies have demonstrated the role of NS in image processing [26,27].
A pixel P ( i , j ) in an image is denoted as P N S ( i , j ) = { T ( i , j ) , I ( i , j ) , F ( i , j ) } in the NS domain, where T ( i , j ) , I ( i , j ) , and F ( i , j ) are the membership values belonging to the brightest pixel set, indeterminate set, and non-white set, respectively.
In the proposed method, the red channel of the dermoscopic image is transformed into the NS domain using shearlet feature values as follows:
$$T(x,y) = \frac{ST_L(x,y) - ST_{L\min}}{ST_{L\max} - ST_{L\min}}, \qquad I(x,y) = \frac{ST_H(x,y) - ST_{H\min}}{ST_{H\max} - ST_{H\min}},$$
where T and I are the true and indeterminate membership values in the NS. S T L ( x , y ) is the low-frequency component of the shearlet feature at the current pixel P(x, y). In addition, S T L max and S T L min are the maximum and minimum of the low-frequency component of the shearlet feature in the whole image, respectively. S T H ( x , y ) is the high-frequency component of the shearlet feature at the current pixel P(x, y). Moreover, S T H max and S T H min are the maximum and minimum of the high-frequency component of the shearlet feature in the whole image, respectively. In the proposed method, we only use T and I for segmentation because we are only interested in the degree to which a pixel belongs to the high intensity set of the red channel.
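As a sketch of this mapping, the min-max normalization above can be written directly in NumPy. Here `st_low` and `st_high` stand for the low- and high-frequency shearlet responses of the red channel; the variable names are illustrative, not from the paper.

```python
import numpy as np

def to_neutrosophic(st_low: np.ndarray, st_high: np.ndarray):
    """Map the shearlet low/high-frequency responses to the true (T) and
    indeterminate (I) memberships by min-max normalization over the image."""
    eps = 1e-12  # avoids division by zero for a constant response
    T = (st_low - st_low.min()) / (st_low.max() - st_low.min() + eps)
    I = (st_high - st_high.min()) / (st_high.max() - st_high.min() + eps)
    return T, I
```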

2.3. Neutrosophic Indeterminacy Filtering

In an image, noise can be considered indeterminate information, which can be handled efficiently using NS. Such noise and artifacts include hair, air bubbles, and blurred boundaries. In addition, NS can be integrated with different clustering approaches for image segmentation [16,28]; however, the boundary information and fine details may be blurred by a principal low-pass filter, leading to inaccurate segmentation of the boundary pixels. A novel NS-based clustering procedure, namely the NCM, has been carried out for data clustering [15], which defines the neutrosophic membership subsets using attributes of the data. Nevertheless, when applied to image processing, it does not account for local spatial information. Several side effects can arise when classical filters are used in the NS domain, including blurred edge information, incorrect boundary segmentation, and an inability to combine local spatial information with the global intensity distribution.
After the red channel of the dermoscopic image is mapped into the NS domain, an indeterminacy filter (IF) is defined based on the neutrosophic indeterminacy value, and the spatial information is utilized to eliminate the indeterminacy. The IF is defined using the indeterminacy value $I(x,y)$ and has the following kernel function [28]:
$$O_I(u,v) = \frac{1}{2\pi\sigma_I^2}\, e^{-\frac{u^2 + v^2}{2\sigma_I^2}},$$
$$\sigma_I(x,y) = f(I(x,y)) = r\,I(x,y) + q,$$
where $\sigma_I$ is the standard deviation of the Gaussian kernel, defined as a linear function $f(\cdot)$ of the indeterminacy degree. Since $\sigma_I$ becomes large when the indeterminacy degree is high, the IF smooths the current pixel strongly using its neighbors. Conversely, with a low indeterminacy degree, $\sigma_I$ is small and the IF performs less smoothing of the current pixel with its neighbors.
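A straightforward (if unoptimized) way to realize this spatially adaptive filter is to recompute the Gaussian kernel at each pixel from $\sigma_I(x,y) = r\,I(x,y) + q$, as in the sketch below; the kernel radius and the default parameter values are illustrative choices.

```python
import numpy as np

def indeterminacy_filter(channel, I, r=1.0, q=0.05, radius=3):
    """Spatially adaptive Gaussian smoothing: sigma_I(x, y) = r * I(x, y) + q,
    so highly indeterminate pixels are smoothed more by their neighbours."""
    H, W = channel.shape
    padded = np.pad(channel.astype(float), radius, mode='reflect')
    out = np.empty((H, W), dtype=float)
    v, u = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    for y in range(H):
        for x in range(W):
            sigma = r * I[y, x] + q
            kernel = np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma ** 2))
            kernel /= kernel.sum()                  # normalize truncated kernel
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = np.sum(kernel * window)
    return out
```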

2.4. Neutrosophic C-Means (NCM)

In the NCM algorithm, an objective function and membership are considered as follows [29]:
$$J(T,I,F,A) = \sum_{i=1}^{N}\sum_{j=1}^{A}(\varpi_1 T_{ij})^m \left\| x_i - a_j \right\|^2 + \sum_{i=1}^{N}(\varpi_2 I_i)^m \left\| x_i - \bar{a}_{i\max} \right\|^2 + \sum_{i=1}^{N}\delta^2 (\varpi_3 F_i)^m,$$
$$\bar{a}_{i\max} = \frac{a_{p_i} + a_{q_i}}{2}, \qquad p_i = \underset{j=1,2,\ldots,A}{\arg\max}\,(T_{ij}), \qquad q_i = \underset{j \neq p_i,\; j=1,2,\ldots,A}{\arg\max}\,(T_{ij}),$$
where m is a constant, usually equal to 2. The value of $\bar{a}_{i\max}$ is computed once $p_i$ and $q_i$ are identified as the cluster indexes with the largest and second largest values of T, respectively. The parameter $\delta$ controls the number of objects considered as outliers, and the $\varpi_i$ are weight factors.
In our NS domain, we only defined the membership values of T and I. Therefore, the objective function reduces to:
$$J(T,I,F,A) = \sum_{i=1}^{N}\sum_{j=1}^{A}(\varpi_1 T_{ij})^m \left\| x_i - a_j \right\|^2 + \sum_{i=1}^{N}(\varpi_2 I_i)^m \left\| x_i - \bar{a}_{i\max} \right\|^2.$$
To minimize the objective function, the membership values T and I are updated at each iteration as:
$$T_{ij} = \frac{K}{\varpi_1}\,(x_i - a_j)^{-\frac{2}{m-1}}, \qquad I_i = \frac{K}{\varpi_2}\,(x_i - \bar{a}_{i\max})^{-\frac{2}{m-1}},$$
$$K = \left[ \frac{1}{\varpi_1}\sum_{j=1}^{A}(x_i - a_j)^{-\frac{2}{m-1}} + \frac{1}{\varpi_2}(x_i - \bar{a}_{i\max})^{-\frac{2}{m-1}} \right]^{-1},$$
where $\bar{a}_{i\max}$ is calculated from the indexes of the largest and second largest values of $T_{ij}$. The iteration stops when $|T_{ij}^{(k+1)} - T_{ij}^{(k)}| < \varepsilon$, where $\varepsilon$ is a termination criterion between 0 and 1, and $k$ is the iteration step. In the proposed method, the neutrosophic image after indeterminacy filtering is used as the input for the NCM algorithm, and the segmentation is performed using the final clustering results. For pixels whose indeterminacy membership values are higher than their true membership values, it is hard to determine which cluster they belong to. To solve this problem, the indeterminacy filter is employed again on all pixels, and each pixel is then assigned to the cluster with the largest true membership value after the IF operation.
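The membership update can be sketched in vectorized NumPy for a flattened single-channel image. Only the simplified T and I memberships of the reduced objective are computed, with weights following the experimental settings reported later ($\varpi_1 = 0.75$, $\varpi_2 = 0.25$, $m = 2$); this is a one-iteration sketch, not the full alternating optimization of memberships and centers.

```python
import numpy as np

def ncm_memberships(x, centers, w1=0.75, w2=0.25, m=2.0, eps=1e-12):
    """One update of the simplified NCM memberships (T and I only).
    x: flattened pixel values, shape (N,); centers: cluster centers, shape (A,)."""
    d = np.abs(x[:, None] - centers[None, :]) + eps        # (N, A) distances
    # The largest and second-largest T correspond to the two closest centers,
    # so a_bar_imax is the midpoint of those two centers.
    nearest_two = np.argsort(d, axis=1)[:, :2]
    a_bar = centers[nearest_two].mean(axis=1)               # (N,)
    d_bar = np.abs(x - a_bar) + eps
    p = -2.0 / (m - 1.0)
    K = 1.0 / (np.sum(d ** p, axis=1) / w1 + (d_bar ** p) / w2)
    T = (K[:, None] / w1) * d ** p                           # (N, A)
    I = (K / w2) * d_bar ** p                                # (N,)
    return T, I
```

By construction, the row sums of T plus I equal one for every pixel, which is the normalization the K term enforces.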

2.5. Lesion Detection

After segmentation, the pixels in an image are grouped into several clusters according to their true membership values. Because lesions have low intensities, especially in the core part inside a lesion, the cluster with the lowest true membership value is initially considered as the lesion candidate pixels. Then an adaptive region growing algorithm is employed to precisely detect the lesion boundary parts, which have higher intensity and lower contrast than the core. A contrast ratio is defined adaptively to control the growing speed:
$$D_R(t) = \frac{\operatorname{mean}(R_a - R_b)}{\operatorname{mean}(R_b)},$$
where $D_R(t)$ is the contrast ratio at the t-th iteration of growing, $R_b$ and $R_a$ are the regions before and after the t-th iteration of growing, respectively, and $R_a - R_b$ denotes the set of pixels added at that iteration.
A connected component analysis is then performed to extract the components' morphological features. Because there is only one lesion in a dermoscopic image, the region with the largest area is identified as the final lesion region. The block diagram of the proposed neutrosophic clustering and adaptive region growing (NCARG) method is illustrated in Figure 1.
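The detection step can be sketched as follows using SciPy morphology: the candidate mask from the NCM clustering seeds an iterative dilation whose acceptance is controlled by the contrast ratio $D_R(t)$, and the largest connected component is kept as the lesion. The growth threshold `max_dr` and the 3x3 structuring element are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_lesion(red, seed_mask, max_dr=1.2, max_iter=200):
    """Adaptive region growing seeded by the lowest-intensity NCM cluster."""
    region = seed_mask.astype(bool).copy()
    struct = np.ones((3, 3), dtype=bool)
    for _ in range(max_iter):
        grown = ndimage.binary_dilation(region, structure=struct)
        new_pixels = grown & ~region
        if not new_pixels.any():
            break
        # D_R(t): mean intensity of the newly reached ring (R_a - R_b)
        # relative to the mean intensity of the current region R_b.
        dr = red[new_pixels].mean() / red[region].mean()
        if dr > max_dr:          # new pixels are too bright: stop growing
            break
        region = grown
    # keep the largest connected component as the final lesion region
    labels, n = ndimage.label(region)
    if n > 1:
        sizes = ndimage.sum(region, labels, index=range(1, n + 1))
        region = labels == (int(np.argmax(sizes)) + 1)
    return region
```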
Figure 1 illustrates the steps of the proposed skin lesion segmentation method (NCARG) using neutrosophic c-means and region growing algorithms. Initially, the red channel of the dermoscopic image is transformed using a shearlet transform, and the shearlet features of the image are used to map the image into the NS domain. In the NS domain, an indeterminacy filtering operation is taken to remove the indeterminate information. Afterward, the segmentation is performed through NCM clustering on the filtered image. Finally, the lesion is accurately identified using an adaptive region growing algorithm where the growing speed is controlled by a newly defined contrast ratio.
To illustrate the steps of the proposed method, we use an example to demonstrate the intermediate results in Figure 2. Figure 2a,b show the original image and its segmentation ground truth. Figure 2c shows its red channel. Figure 2d,e show the results after indeterminacy filtering and NCM clustering, respectively. In Figure 2f, the final detection result is outlined in blue and the ground truth in red; the detection result is very close to the ground truth.

2.6. Evaluation Metrics

Several performance metrics are measured to evaluate the proposed skin cancer segmentation approach, namely the Jaccard index (JAC), Dice coefficient, sensitivity, specificity, and accuracy [30]. Each of these metrics is defined in the remainder of this section. JAC is a statistical metric that compares the diversity between sample sets based upon the union and intersection operators as follows:
$$JAC(Y,Q) = \frac{Ar_Y \cap Ar_Q}{Ar_Y \cup Ar_Q},$$
where $\cap$ and $\cup$ are the intersection and union of two sets, respectively. In addition, $Ar_Y$ and $Ar_Q$ are the automatically segmented skin lesion area and the reference gold-standard skin lesion area enclosed by the boundaries Y and Q, respectively. Typically, a JAC value of 1 specifies complete similarity, while a JAC value of 0 specifies no similarity.
The Dice index compares the similarity of two sets X and Y, and is given as follows:
$$DSC = \frac{2\,|X \cap Y|}{|X| + |Y|}.$$
Furthermore, the sensitivity, specificity, and accuracy are related to the detection of the lesion region. The sensitivity indicates the true positive rate, showing how well the algorithm successfully predicts the skin lesion region, which is expressed as follows:
$$\text{Sensitivity} = \frac{\text{Number of true positives}}{\text{Number of true positives} + \text{Number of false negatives}}.$$
The specificity indicates the true negative rate, showing how well the algorithm predicts the non-lesion regions, which is expressed as follows:
$$\text{Specificity} = \frac{\text{Number of true negatives}}{\text{Number of condition negatives}}.$$
The accuracy is the proportion of true results (either positive or negative), which measures the reliability degree of a diagnostic test:
$$\text{Accuracy} = \frac{\text{Number of true positives} + \text{Number of true negatives}}{\text{Total population}}.$$
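For completeness, the following small sketch shows how these five metrics can be computed from a predicted binary mask and a ground-truth mask:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """JAC, Dice, sensitivity, specificity, and accuracy from binary masks
    (pred: automated segmentation, gt: ground-truth lesion mask)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return {
        "jaccard":     tp / (tp + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }
```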
These metrics are measured to compare the proposed NCARG method with another efficient segmentation algorithm based on the neutrosophic similarity score (NSS) and level set (LS), called NSSLS [31]. In the NSSLS segmentation method, the three membership subsets are used to transfer the input image into the NS domain, the NSS is then applied to measure the fitting degree to the true tumor region, and finally the LS method is employed to segment the tumor in the NSS image. In the current work, when NSSLS is applied to the skin images, the images are interpreted using the NSS, and the skin lesion boundary is extracted using the level set algorithm. Moreover, the statistical significance of the differences between the evaluated metrics of the two segmentation methods is measured by calculating the p-value, which refers to the probability of error; the difference between the two methods is considered statistically significant when p ≤ 0.05.
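The paper reports p-values computed with SPSS without naming the specific test; as an assumption, one common choice for paired per-image metric values is a paired t-test, sketched here with SciPy:

```python
from scipy import stats

def compare_methods(metric_ncarg, metric_nssls, alpha=0.05):
    """Paired t-test on per-image metric values for the two methods
    (same image order in both lists); the choice of test is an assumption."""
    t_stat, p_value = stats.ttest_rel(metric_ncarg, metric_nssls)
    return p_value, p_value <= alpha   # significant if p <= alpha
```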

3. Experimental Results and Discussion

3.1. Dataset

The International Skin Imaging Collaboration (ISIC) Archive [32] contains over 13,000 dermoscopic images of skin lesions. Using the images in the ISIC Archive, the 2017 ISBI Challenge on Skin Lesion Analysis Towards Melanoma Detection was proposed to help participants develop image analysis tools for the automated diagnosis of melanoma from dermoscopic images. Image analysis of skin lesions includes lesion segmentation, detection and localization of visual dermoscopic features/patterns, and disease classification. All cases include training images and binary masks as ground truth files.
In our experiment, 50 images were selected to tune the parameters of the proposed NCARG algorithm, and 500 images were used as the testing dataset. The parameters are set to r = 1, q = 0.05, $\varpi_1$ = 0.75, $\varpi_2$ = 0.25, and $\varepsilon$ = 0.001.

3.2. Detection Results

Skin lesions are visible to the naked eye; however, early-stage melanomas are difficult to distinguish from benign skin lesions with a similar appearance. Detecting and recognizing melanoma at its earliest stages reduces melanoma mortality. Digital dermoscopic images are employed in the present study to detect skin lesions for accurate automated diagnosis and clinical decision support. The ISIC images are used to test and validate the proposed approach. Figure 3 demonstrates the detection results using the proposed NCARG approach compared to the ground truth images. In Figure 3d, the boundary detection results are marked in blue and the ground truth results in red. The detection results match the ground truth results, and their boundaries are very close. Figure 3 establishes that the proposed approach accurately detects skin lesion regions, even for lesions of different shapes and sizes.

3.3. Evaluation

Table 1 reports the average values as well as the standard deviations (SD) of the evaluation metrics on the proposed approach’s performance over 500 images.
Table 1 establishes that the proposed approach achieved a detection accuracy for the skin lesion regions of 95.3% with a 6% standard deviation, compared to the ground truth images. In addition, the mean values of the Dice index, Jaccard index, sensitivity, and specificity are 90.38%, 83.2%, 97.5%, and 88.8%, respectively, with standard deviations (SD) of 7.6%, 10.5%, 3.5%, and 11.4%, respectively. These experimental results demonstrate that the proposed NCARG approach correctly detects skin lesions of different shapes and sizes with high accuracy. Ten dermoscopic images were randomly selected; their segmentation results are shown in Figure 4, and the evaluation metrics are reported in Table 2.

3.4. Comparative Study with NSSLS Method

The proposed NCARG approach is compared with the NSSLS algorithm [31] for detecting skin lesions. Figure 4(a1–a10), Figure 4(b1–b10), Figure 4(c1–c10), and Figure 4(d1–d10) show the original dermoscopic images, the ground truth images, the segmented images using the NSSLS algorithm, and the results of the proposed NCARG approach, respectively.
Figure 4 illustrates different samples from the test images with different sizes, shapes, light illumination, skin surface roughness/smoothness, and the presence of hair and/or air bubbles. For these different samples, the segmentation produced by the proposed NCARG algorithm matches the ground truth, while the NSSLS fails to accurately match it. Thus, Figure 4 demonstrates that the proposed approach accurately detects the skin lesion in these different cases compared with the NSSLS method. The superiority of the proposed approach is due to the ability of the NCM, along with the IF, to handle indeterminate information. In addition, the shearlet transform captures the anisotropic regularity of surfaces along edges, enabling the algorithm to capture the smooth contours corresponding to the dominant features in the image. For the same images in Figure 4, the comparative results of the previously mentioned evaluation metrics are plotted for the NCARG and NSSLS in Figure 5 and Figure 6, respectively. In both figures, the X-axis denotes the image under study, and the Y-axis denotes the value of the corresponding metric in the bar graph.
Figure 5, together with Table 2, illustrates the accuracy of the proposed algorithm, which achieves an average accuracy of 97.638% for the segmentation of the ten skin lesion samples, while Figure 6 shows an average accuracy of about 44% for the NSSLS method. Thus, Figure 5 and Figure 6 establish the superiority of the proposed approach compared with the NSSLS method, owing to the removal of the indeterminate information and the efficiency of the shearlet transform. The same results are confirmed by measuring the same metrics on 500 images, as reported in Figure 7.
Figure 7 shows that the proposed method achieves about a 15% improvement in accuracy and about a 25% improvement in JAC over the NSSLS method. Generally, Figure 7 proves the superiority of the proposed method compared with the NSSLS method. In addition, Table 3 reports the statistical results on the testing images; it compares the detection performance, with reference to the ground truth segmented images, for the NSSLS and the proposed NCARG methods. The p-values are used to estimate the differences between the metric results of the two methods. The statistical significance level was set at 0.05; a p-value < 0.05 indicates a statistically significant difference.
The p-values reported in Table 3 establish a significant difference in the performance metric values between the proposed NCARG and NSSLS methods. The mean and standard deviation of the accuracy, Dice, JAC, sensitivity, and specificity for the NSSLS and NCARG methods, along with the p-values, establish that the proposed NCARG method improves skin lesion segmentation compared with the NSSLS method. Figure 7, together with Table 3, shows that the NCARG achieved 95.3% average accuracy, which is superior to the 80.6% average accuracy of the NSSLS approach. Furthermore, the proposed algorithm achieved a 90.4% average Dice coefficient, 83.2% average JAC, 97.5% average sensitivity, and 88.8% average specificity. The segmentation accuracy improved from 80.6 ± 22.1 using the NSSLS to 95.3 ± 6 using the proposed method, which is a significant difference. The skin lesion segmentation improvement is statistically significant (p < 0.05) for all measured performance metrics, as computed using SPSS software.
The cumulative percentage (CP) is used to measure the percentage of images whose metric value exceeds a given threshold. The CP curves of the measured metrics are plotted to compare the performance of the NSSLS and NCARG algorithms. Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the cumulative percentage of images for the five measurements; the X-axis represents the threshold value on the metric, and the Y-axis is the percentage of images whose metric values are greater than this threshold. These figures compare the performances in terms of the cumulative percentage of the different metrics, namely the accuracy, Dice value, JAC, sensitivity, and specificity, respectively.
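A cumulative percentage curve of this kind can be computed directly from the per-image metric values, as in the following sketch (the example values are illustrative):

```python
import numpy as np

def cumulative_percentage(metric_values, thresholds):
    """Percentage of images whose metric value is greater than each threshold,
    as plotted on the Y-axis of Figures 8-12."""
    metric_values = np.asarray(metric_values, dtype=float)
    return [100.0 * np.mean(metric_values > t) for t in thresholds]

# Example with illustrative accuracies of five images:
# cumulative_percentage([0.96, 0.93, 0.98, 0.88, 0.95], [0.90, 0.95])
```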
Figure 8 illustrates a comparison of performance in terms of the cumulative percentage of the NCARG and NSSLS segmentation accuracy. Using the proposed NCARG, about 80% of the images achieve 95% segmentation accuracy, while with the NSSLS the achieved accuracy for 80% of the images is about 65%.
Figure 9 compares the performances, in terms of the cumulative percentage of the Dice index values, of the NCARG and NSSLS segmentations. Figure 9 depicts that 100% of the images have about 82% Dice CP values using the NCARG method, while 58% of the images achieved the same 82% Dice CP values when using the NSSLS method.
Figure 10 compares the performances, in terms of the cumulative percentage of the JAC values, of the NCARG and NSSLS segmentation. About 50% of the images have 83% CP JAC values using the NCARG method, while the obtained CP JAC using the NSSLS for the same number of images is about 72%.
Figure 11 compares the performances, in terms of the cumulative percentage of the sensitivity, using the NCARG and NSSLS segmentation methods. About 50% of the images have 97% sensitivity value using the NCARG method, while the NSSLS achieves about 92% sensitivity value.
Figure 12 demonstrates the comparison of performances in terms of the cumulative percentage of the specificity using the NCARG and NSSLS segmentation methods. A larger number of images have specificity values in the range of 85% to 100% when using the NSSLS compared to the proposed method. However, about 100% of the images have 63% CP specificity values using the NCARG method, while the NSSLS achieved about 20% cumulative specificity for 90% of the images. Generally, the cumulative percentage of each metric establishes the superiority of the proposed NCARG method compared with the NSSLS method.

3.5. Comparison with Other Segmentation Methods Using the ISIC Archive

For lesion segmentation, the variability in the images is very high; therefore, performance results depend heavily on the data set used in the experiments. Several studies and challenges have been conducted to address this problem [33]. In order to validate the performance of the proposed NCARG method, a comparison is conducted against previously published results on the same ISIC dermoscopic image data set. Yu et al. [34] leveraged very deep convolutional neural networks (CNN) for melanoma image recognition using the ISIC data set. The results proved that deeper networks, of more than 50 layers, provided more discriminating features and more accurate recognition. For accurate skin lesion segmentation, fully convolutional residual networks (FCRN) with a multi-scale contextual information integration structure were applied, followed by a further classification stage. Increasing the network depth enhanced the discrimination capability of the CNN. The FCRN of 38 layers achieved 0.929 accuracy, 0.856 Dice, 0.785 JAC, and 0.882 sensitivity; thus, our proposed NCARG provides superior performance in terms of these metrics. With an FCRN depth of more than 50 layers, the performance improves relative to our proposed method, but the complexity also increases. In addition, Yu et al. compared their study with the fully convolutional VGG-16 network [34,35] and the fully convolutional GoogleNet [34,36], establishing the superiority of our work compared to both of those methods. Table 4 reports a comparative study between these preceding studies, which used the same ISIC data set, and the proposed NCARG method.
The preceding results and the comparative study establish the superiority of the proposed NCARG method compared with other methods. This superiority arises due to the effectiveness of the shearlet transform, the indeterminacy filtering, and the adaptive region growing, yielding an overall accuracy of 95.3%. Moreover, in comparison with previously conducted studies on the same ISIC dermoscopic image data set, the proposed method can be considered an effective method. In addition, the studies in References [37,38] can be improved and compared with the proposed method on the same dataset.

4. Conclusions

In this study, a novel skin lesion detection algorithm is proposed based on neutrosophic c-means clustering and adaptive region growing algorithms applied to dermoscopic images. The dermoscopic images are mapped into the neutrosophic domain using the shearlet transform results of the image. An indeterminacy filter is used to reduce the indeterminacy in the image, and the image is segmented via a neutrosophic c-means clustering algorithm. Finally, the skin lesion is accurately identified using a newly defined adaptive region growing algorithm. A public data set was employed to test the proposed method: fifty images were selected randomly for tuning, and five hundred images were used for testing. Several metrics were measured to evaluate the proposed method's performance. The evaluation results demonstrate that the proposed method achieves better performance in detecting skin lesions than the neutrosophic similarity score and level set (NSSLS) segmentation approach.
The proposed NCARG approach achieved an average accuracy of 95.3% on 500 dermoscopic images, including ones with different shapes, sizes, colors, uniformity, skin surface roughness, light illumination during image capture, and the presence of air bubbles. The significant p-values of the measured metrics comparing the NSSLS and the proposed NCARG prove the superiority of the proposed method. The proposed method determines possible skin lesions in dermoscopic images, which can be employed for further accurate automated diagnosis and clinical decision support.

Author Contributions

Yanhui Guo, Amira S. Ashour and Florentin Smarandache conceived and worked together to achieve this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marghoob, A.A.; Swindle, L.D.; Moricz, C.Z.; Sanchez, F.A.; Slue, B.; Halpern, A.C.; Kopf, A.W. Instruments and new technologies for the in vivo diagnosis of melanoma. J. Am. Acad. Dermatol. 2003, 49, 777–797. [Google Scholar] [CrossRef]
  2. Wolfe, J.M.; Butcher, S.J.; Lee, C.; Hyle, M. Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. J. Exp. Psychol. Hum. Percept. Perform. 2003, 29, 483–502. [Google Scholar] [CrossRef] [PubMed]
  3. Binder, M.; Schwarz, M.; Winkler, A.; Steiner, A.; Kaider, A.; Wolff, K.; Pehamberger, M. Epiluminescence microscopy. A useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Arch. Dermatol. 1995, 131, 286–291. [Google Scholar] [CrossRef] [PubMed]
  4. Celebi, M.E.; Wen, Q.; Iyatomi, H.; Shimizu, K.; Zhou, H.; Schaefer, G. A State-of-the-Art Survey on Lesion Border Detection in Dermoscopy Images. In Dermoscopy Image Analysis; Celebi, M.E., Mendonca, T., Marques, J.S., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 97–129. [Google Scholar]
  5. Celebi, M.E.; Iyatomi, H.; Schaefer, G.; Stoecker, W.V. Lesion Border Detection in Dermoscopy Images. Comput. Med. Imaging Graph. 2009, 33, 148–153. [Google Scholar] [CrossRef] [PubMed]
  6. Garnavi, R.; Aldeen, M.; Celebi, M.E.; Varigos, G.; Finch, S. Border detection in dermoscopy images using hybrid thresholding on optimized color channels. Comput. Med. Imaging Graph. 2011, 35, 105–115. [Google Scholar] [CrossRef] [PubMed]
  7. Silveira, M.; Nascimento, J.C.; Marques, J.S.; Marçal, A.R.; Mendonça, T.; Yamauchi, S.; Maeda, J.; Rozeira, J. Comparison of segmentation methods for melanoma diagnosis in dermoscopy images. IEEE J. Sel. Top. Signal Process. 2009, 3, 35–45. [Google Scholar] [CrossRef]
  8. Celebi, M.E.; Aslandogan, Y.A.; Stoecker, W.V.; Iyatomi, H.; Oka, H.; Chen, X. Unsupervised border detection in dermoscopy images. Skin Res. Technol. 2007, 13, 454–462. [Google Scholar] [CrossRef] [PubMed]
  9. Argenziano, G.; Kittler, H.; Ferrara, G.; Rubegni, P.; Malvehy, J.; Puig, S.; Cowell, L.; Stanganelli, I.; de Giorgi, V.; Thomas, L.; et al. Slow-growing melanoma: A dermoscopy follow-up study. Br. J. Dermatol. 2010, 162, 267–273. [Google Scholar] [CrossRef] [PubMed]
  10. Xie, F.; Bovik, A.C. Automatic segmentation of dermoscopy images using self-generating neural networks seeded by genetic algorithm. Pattern Recognit. 2013, 46, 1012–1019. [Google Scholar] [CrossRef]
  11. Barata, C.; Celebi, M.E.; Marques, J.S. Development of a clinically oriented system for melanoma diagnosis. Pattern Recognit. 2017, 69, 270–285. [Google Scholar] [CrossRef]
  12. Lee, H.; Chen, Y.P.P. Skin cancer extraction with optimum fuzzy thresholding technique. Appl. Intell. 2014, 40, 415–426. [Google Scholar] [CrossRef]
  13. Jaisakthi, S.M.; Chandrabose, A.; Mirunalini, P. Automatic Skin Lesion Segmentation using Semi-supervised Learning Technique. arXiv, 2017; arXiv:1703.04301. [Google Scholar]
  14. Zhou, H.; Schaefer, G.; Sadka, A.H.; Celebi, M.E. Anisotropic mean shift based fuzzy c-means segmentation of dermoscopy images. IEEE J. Sel. Top. Signal Process. 2009, 3, 26–34. [Google Scholar] [CrossRef] [Green Version]
  15. Guo, Y.; Sengur, A. NCM: Neutrosophic c-means clustering algorithm. Pattern Recognit. 2015, 48, 2710–2724. [Google Scholar] [CrossRef]
  16. Mohan, J.; Krishnaveni, V.; Guo, Y. Automated Brain Tumor Segmentation on MR Images Based on Neutrosophic Set Approach. In Proceedings of the 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India, 26–27 February 2015; pp. 1078–1083. [Google Scholar]
  17. Sengur, A.; Guo, Y. Color texture image segmentation based on neutrosophic set and wavelet transformation. Comput. Vis. Image Underst. 2011, 115, 1134–1144. [Google Scholar] [CrossRef]
  18. Khalid, S.; Jamil, U.; Saleem, K.; Akram, M.U.; Manzoor, W.; Ahmed, W.; Sohail, A. Segmentation of skin lesion using Cohen–Daubechies–Feauveau biorthogonal wavelet. SpringerPlus 2016, 5, 1603. [Google Scholar] [CrossRef] [PubMed]
  19. Guo, K.; Labate, D. Optimally Sparse Multidimensional Representation using Shearlets. SIAM J. Math. Anal. 2007, 39, 298–318. [Google Scholar] [CrossRef]
  20. Guo, K.; Labate, D. Characterization and analysis of edges using the continuous shearlet transform. SIAM J. Imaging Sci. 2009, 2, 959–986. [Google Scholar] [CrossRef]
  21. Cavalcanti, P.G.; Scharcanski, J. Automated prescreening of pigmented skin lesions using standard cameras. Comput. Med. Imaging Graph. 2011, 35, 481–491. [Google Scholar] [CrossRef] [PubMed]
  22. Labate, D.; Lim, W.; Kutyniok, G.; Weiss, G. Sparse Multidimensional Representation Using Shearlets. In Proceedings of the Wavelets XI, San Diego, CA, USA, 31 July–4 August 2005; SPIE: Bellingham, WA, USA, 2005; Volume 5914, pp. 254–262. [Google Scholar]
  23. Zhou, H.; Niu, X.; Qin, H.; Zhou, J.; Lai, R.; Wang, B. Shearlet Transform Based Anomaly Detection for Hyperspectral Image. In Proceedings of the 6th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optoelectronic Materials and Devices for Sensing, Imaging, and Solar Energy, Xiamen, China, 26–29 April 2012; Volume 8419. [Google Scholar]
  24. Theresa, M.M. Computer aided diagnostic (CAD) for feature extraction of lungs in chest radiograph using different transform features. Biomed. Res. 2017, S208–S213. Available online: http://www.biomedres.info/biomedical-research/computer-aided-diagnostic-cad-for-feature-extraction-of-lungs-in-chest-radiograph-using-different-transform-features.html (accessed on 26 March 2018).
  25. Liu, X.; Zhou, Y.; Wang, Y.J. Image fusion based on shearlet transform and regional features. AEU Int. J. Electron. Commun. 2014, 68, 471–477. [Google Scholar] [CrossRef]
  26. Mohan, J.; Krishnaveni, V.; Guo, Y. A new neutrosophic approach of Wiener filtering for MRI denoising. Meas. Sci. Rev. 2013, 13, 177–186. [Google Scholar] [CrossRef]
  27. Mohan, J.; Guo, Y.; Krishnaveni, V.; Jeganathan, K. MRI Denoising Based on Neutrosophic Wiener Filtering. In Proceedings of the 2012 IEEE International Conference on Imaging Systems and Techniques, Manchester, UK, 16–17 July 2012; pp. 327–331. [Google Scholar]
  28. Cheng, H.; Guo, Y.; Zhang, Y. A novel image segmentation approach based on neutrosophic set and improved fuzzy c-means algorithm. New Math. Nat. Comput. 2011, 7, 155–171. [Google Scholar] [CrossRef]
  29. Guo, Y.; Xia, R.; Şengür, A.; Polat, K. A novel image segmentation approach based on neutrosophic c-means clustering and indeterminacy filtering. Neural Comput. Appl. 2017, 28, 3009–3019. [Google Scholar] [CrossRef]
  30. Guo, Y.; Zhou, C.; Chan, H.P.; Chughtai, A.; Wei, J.; Hadjiiski, L.M.; Kazerooni, E.A. Automated iterative neutrosophic lung segmentation for image analysis in thoracic computed tomography. Med. Phys. 2013, 40. [Google Scholar] [CrossRef] [PubMed]
  31. Guo, Y.; Şengür, A.; Tian, J.W. A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set. Comput. Methods Programs Biomed. 2016, 123, 43–53. [Google Scholar]
  32. ISIC. Available online: http://www.isdis.net/index.php/isic-project (accessed on 26 March 2018).
  33. Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv, 2016; arXiv:1605.01397. [Google Scholar]
  34. Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans. Med. Imaging 2017, 36, 994–1004. [Google Scholar] [CrossRef] [PubMed]
  35. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2014; arXiv:1409.1556. [Google Scholar]
  36. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  37. Ma, Z.; Tavares, J.M.R. A novel approach to segment skin lesions in dermoscopic images based on a deformable model. IEEE J. Biomed. Health Inform. 2016, 20, 615–623. [Google Scholar]
  38. Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), 2017, hosted by the international skin imaging collaboration (ISIC). arXiv, 2017; arXiv:1710.05006. [Google Scholar]
Figure 1. Flowchart of the proposed neutrosophic clustering and adaptive region growing (NCARG) skin lesion detection algorithm.
Figure 2. Intermediate results of an example image: ISIC_0000015: (a) Original skin lesion image; (b) Ground truth image; (c) Red channel of the original image; (d) Result after indeterminate filtering; (e) Result after NCM; (f) Detected lesion region after adaptive region growing, where the blue line is for the boundary of the detection result and the red line is the boundary of the ground truth result.
Figure 3. Detection results: (a) Skin cancer image number; (b) Original skin lesion image; (c) Ground truth image; and (d) Detected lesion region using the proposed approach.
Figure 4. Comparative segmentation results, where (a1–a10): original dermoscopic test images; (b1–b10): ground truth images; (c1–c10): segmented images using the neutrosophic similarity score and level set (NSSLS) algorithm; and (d1–d10): the proposed NCARG approach.
Figure 5. Evaluation metrics of the ten test images using the proposed segmentation NCARG approach.
Figure 6. Evaluation metrics of the ten test images using the NSSLS segmentation approach for comparison.
Figure 7. Comparative results of the performance evaluation metrics of the proposed NCARG and NSSLS methods.
Figure 8. Comparison of performances in terms of the cumulative percentage of the accuracy using the NCARG and NSSLS segmentation methods.
Figure 9. Comparison of performances in terms of the cumulative percentage of the Dice values using the NCARG and NSSLS segmentation methods.
Figure 10. Comparison of performances in terms of the cumulative percentage of the JAC values using NCARG and NSSLS segmentation methods.
Figure 11. Comparison of performances in terms of the cumulative percentage of the sensitivity using the NCARG and NSSLS segmentation methods.
Figure 12. Comparison of performances in terms of the cumulative percentage of the specificity using the NCARG and NSSLS segmentation methods.
Table 1. The performance of computer segmentation using the proposed NCARG method with reference to ground truth boundaries (Average ± SD).
Metric             | Accuracy (%) | Dice (%) | JAC (%) | Sensitivity (%) | Specificity (%)
Average            | 95.3         | 90.38    | 83.2    | 97.5            | 88.8
Standard deviation | 6            | 7.6      | 10.5    | 3.5             | 11.4
Table 2. The performance of computer segmentation using the proposed method with reference to the ground truth boundaries (Average ± SD) of ten images during the test phase.
Image ID      | Accuracy (%) | Dice (%) | JAC (%) | Sensitivity (%) | Specificity (%)
ISIC_0012836  | 99.7819      | 93.2747  | 87.397  | 99.9909         | 87.851
ISIC_0013917  | 99.1485      | 90.4852  | 82.6237 | 100             | 82.6237
ISIC_0014647  | 99.4684      | 92.8643  | 86.6791 | 99.7929         | 91.2339
ISIC_0014649  | 98.8823      | 95.2268  | 90.8886 | 98.8313         | 99.2854
ISIC_0014773  | 98.9017      | 97.3678  | 94.8707 | 98.6294         | 99.9692
ISIC_0014968  | 89.5888      | 89.2267  | 80.5489 | 81.7035         | 99.9913
ISIC_0014994  | 98.9242      | 93.0613  | 87.023  | 100             | 87.023
ISIC_0015019  | 93.8788      | 93.9689  | 88.6239 | 88.6218         | 99.602
ISIC_0015941  | 99.7687      | 94.3589  | 89.3203 | 100             | 89.3203
ISIC_0015563  | 98.0344      | 83.939   | 72.3232 | 97.9281         | 100
Average (%)   | 97.63777     | 92.3774  | 86.0298 | 96.54978        | 93.68998
SD (%)        | 3.31069      | 3.7373   | 6.2549  | 6.26068         | 6.76053
Table 3. The average values (mean ± SD) of the evaluation metrics using the NCARG approach compared to the NSSLS approach.
Method                | Accuracy (%) | Dice (%)    | JAC (%)     | Sensitivity (%) | Specificity (%)
NSSLS method          | 80.6 ± 22.1  | 66.4 ± 32.6 | 57.9 ± 33.7 | 82.1 ± 24       | 83.1 ± 30.4
Proposed NCARG method | 95.3 ± 6     | 90.4 ± 7.6  | 83.2 ± 10.5 | 97.5 ± 6.3      | 88.8 ± 11.4
p-value               | <0.0001      | <0.0001     | <0.0001     | <0.0001         | <0.0001
Table 4. Performance metrics comparison of different studies using the ISIC dataset for segmentation.
Method                    | Accuracy (%) | Dice (%) | JAC (%) | Sensitivity (%) | Specificity (%)
FCRNs of 38 layers [34]   | 92.9         | 85.6     | 78.5    | 88.2            | 93.2
FCRNs of 101 layers [34]  | 93.7         | 87.2     | 80.3    | 90.3            | 93.5
VGG-16 [34,35]            | 90.3         | 79.4     | 70.7    | 79.6            | 94.5
GoogleNet [34,36]         | 91.6         | 84.8     | 77.6    | 90.1            | 91.6
Proposed NCARG method     | 95.3         | 90.4     | 83.2    | 97.5            | 88.8
