Article

Research on the Region-Growing and Segmentation Technology of Micro-Particle Microscopic Images Based on Color Features

School of Mechanical Engineering, Hubei University of Technology, Wuhan 430062, China
* Author to whom correspondence should be addressed.
Submission received: 15 November 2021 / Revised: 28 November 2021 / Accepted: 29 November 2021 / Published: 4 December 2021

Abstract

Silkworm microparticle (pebrine) disease is a statutory quarantine object in silkworm disease detection worldwide. The current standard method, Pasteur manual microscopy, has low detection efficiency, which makes the application of machine vision technology to the detection of microparticle spores an important step in advancing silkworm disease research. To address the low contrast, varying illumination conditions and complex image backgrounds of microscopic images of the ellipsoidal, symmetrical silkworm microparticle spores collected in the detection solution, a region-growing segmentation method based on microparticle color and grayscale information is proposed. In this method, a fuzzy contrast enhancement algorithm is used to enhance the color information of the microparticles and improve their discrimination from the background. In the HSV color space, whose color components are stable, the color information of the microparticles is extracted as seed points, which eliminates the influence of lighting, reduces the interference of impurities and locates the distribution area of the microparticles accurately. Combined with a neighborhood gamma transformation, the highlight feature of the microparticle target in the grayscale image is enhanced for region growing. In this way, accurate and complete microparticle targets are segmented from the complex background, reducing the background impurity segmentation caused by relying on a single feature. To evaluate the segmentation performance, we calculated the IOU between the microparticle sample images segmented by this method and their corresponding ground-truth images. The experiments show that combining color and grayscale features in the region-growing technique can accurately and completely segment microparticle targets in complex backgrounds, with a segmentation accuracy (IOU) of 83.1%.

1. Introduction

Bombyx mori micro-particle (pebrine) disease is one of the five major infectious diseases of Bombyx mori. It causes devastating harm to silkworms and is listed as a quarantine object by silkworm-breeding countries around the world [1]. Currently, manual microscopic examination is the main detection method for the prevention and control of silkworm micro-particle disease in China. Using computer image processing technology to realize the rapid identification and detection of micro-particles in microscopic images is therefore an inevitable choice for promoting the industrialization of sericulture.
The image of Bombyx mori micro-particles is obtained by grinding silkworm moths, centrifugally separating the suspension and placing it under a microscope. The grinding fluid contains a large number of silkworm moth fragments, green pulp spores, bubbles, moth powder and other sources of complex image background. For the segmentation of microscopic images of silkworm micro-particles, Huang Honghua [2] used morphological segmentation combined with the shape features of the micro-particles to segment micro-particle images from pure samples, achieving a segmentation rate of 88% on 50 samples. Zhang Xiangqin [3] used histogram equalization for contrast enhancement and the fast two-dimensional OTSU thresholding method for image binarization to separate micro-particles from impurities. The above methods are all based on a single feature of the micro-particles combined with a specific theory to complete the segmentation. Currently, there are few published data on machine vision-based silkworm micro-particle segmentation. Nevertheless, there is much research on microscopic image segmentation in other fields, most of it involving medical cell images. The methods used in medical cell image segmentation can be divided into four categories: threshold-based, edge-based, region-based and specific theory-based segmentation.
For threshold segmentation, Amer and Aimi [4] converted malaria and blood cell images from the RGB space to the HSV space and compared the cell images under each component, segmenting the cells in combination with the OTSU algorithm; segmentation on the green component of the RGB color space was found to be the best method for segmenting leukemia images, reaching an accuracy rate of 83.84%. Improving on the limitation of OTSU's single threshold and exploiting the characteristics of white blood cell images, E. P. Mandyartha et al. [5] set multi-level automatic thresholds to segment blood cell images; with this method, the average Zijdenbos similarity index (ZSI) and recall reached 92.5% and 94.03%, respectively. These methods all rely on accurate threshold selection and do not adapt well to varying images.
For edge segmentation, Caya et al. [6] used the Canny edge extraction algorithm to segment red blood cells in urine; the percentage error of automatic counting with this method was 9.561% compared with manual counting by medical technicians. Wang Y.L. and Pan Y.Y. [7] used a custom convolution kernel for edge extraction in cancer cell diagnosis and extracted cancer cells from the enhanced edges combined with the top-hat method, achieving at least 30% higher edge closure than other edge detection algorithms (e.g., Roberts, LoG, Canny and Prewitt). These methods require relatively high-contrast cell images, and edge segmentation performs poorly on low-contrast cell micrographs.
For region segmentation, Kim Tae Hoon et al. [8] used the color information of stained cells and a distance-transform watershed algorithm to extract white blood cells, achieving an accuracy of about 94% for single-cell segmentation of 47 DAPI-stained MKN-28 images, an improvement of about 6% over existing algorithms. Monteiro et al. [9] used the color difference between red and white blood cells and then applied the watershed algorithm to segment them separately; on 30 blood picture samples, the detection accuracy for red and white blood cells was 73% and 60%, respectively, with good timeliness. Before applying the watershed algorithm, this approach needs a suitable "dam" to be built; when the background is not pure, parts of the background similar to the targets are easily segmented as targets.
In terms of specific theoretical segmentation, Pan et al. [10] used the bacterial foraging optimization algorithm to optimize the cell image after edge detection, highlighting strong edges and suppressing weak edges to segment cell images. In this method, the whole image needs to be solved iteratively, and the memory overhead is large.
Although the above segmentation techniques use different methods, they are all designed to improve the segmentation accuracy of microscopic images. However, in cell image segmentation, the image is obtained after staining specimens made from pure samples, so these methods are not suited to segmenting images with complex backgrounds. To solve the problem of microscopic image segmentation against complex backgrounds, an algorithm based on color space conversion and fuzzy contrast enhancement is proposed in this paper, exploiting the color and grayscale characteristics of silkworm micro-particles. The micro-particle targets in the image are extracted from the color information, and the segmentation result is further refined by a region-growing algorithm using the gray information.

2. Characteristic Analysis of the Micro-Particle Image

The micro-particle spores are ellipsoidal and symmetrical, with a size of 3–4 μm × 1.5–2.5 μm; they appear light green when observed under a microscope at 600× magnification and have a certain refractive index [11]. Images of the micro-particles collected under the microscope are shown in Figure 1.
It can be seen from Figure 1 that the micro-particle image has the following characteristics: (1) under the microscope, the image background is green as a whole, and the difference from the target color is not obvious; (2) the micro-particle image contains many impurities; (3) under different extraction layers, the light is uneven, and the color is different; and (4) the micro-particles have refractive properties, and the brightness in the image is high.
In light of the above characteristics and the actual situation of the micro-particle images, the algorithm flow chart is shown in Figure 2. In the segmentation process, because the particles are not clearly distinguished from the background, the image first needs color enhancement: the color information is enhanced by the contrast enhancement method so that the target stands out in the image. To make full use of the color characteristics of the micro-particles, the HSV color space, in which the color information is stable, is selected to carry out color pre-segmentation of the image. After the color-feature segmentation, the micro-particle targets may be incomplete and the impurities may not be completely removed. Therefore, exploiting the high-brightness characteristic of the micro-particles, the image is converted to grayscale, the color information is used to find the seed points, and region growing is performed in the grayscale image to segment the complete micro-particle target from the complex background.

3. Pre-Segmentation of Micro-Particle Images Based on the Color Features

3.1. Fuzzy Enhancement Preprocessing of Color Micro-Particle Images

Because the color difference between the micro-particle target and the background is not obvious, the color information is used to initially extract the particle target, which requires distinguishing the foreground and background colors in the color image. This in turn requires some degree of image enhancement. Based on the fuzzy theory proposed by Pal and King [12], a fuzzy contrast enhancement algorithm is used to process the image.
The membership function defined by the traditional fuzzy enhancement algorithm is as follows:
\mu_{ij} = G(X_{ij}) = \left( 1 + \frac{X_{\max} - X_{ij}}{F_d} \right)^{-F_e}  (1)

\mu_c = G(X_c) = \left( 1 + \frac{X_{\max} - X_c}{F_d} \right)^{-F_e} = 0.5  (2)

F_d = \frac{X_{\max} - X_c}{2^{1/F_e} - 1}  (3)
where X_c is the crossover point of the fuzzy membership function, F_d is the denominational fuzzification factor (greater than zero) and F_e is the exponential fuzzification factor.
From Equations (1)–(3), the traditional fuzzy enhancement algorithm does not specify how the crossover point X_c should be chosen during enhancement, and the selection of the fuzzification factors F_d and F_e in the defined membership function also depends on the crossover point. In the micro-particle image, however, the background is similar in color to the target, which makes it difficult to select the crossover point. Therefore, a global fuzzy contrast enhancement algorithm is adopted in this experiment. The algorithm steps are as follows:
1.
Determination of the fuzzy membership function.
To address the problem that the crossover point of the traditional fuzzy membership function is not clearly defined, a log membership function is used to map the spatial domain into the fuzzy domain. The log membership function is defined as follows:
\mu_{ij} = G(X_{ij}) = \log_2 \left( 1 + \frac{\bar{X}_{ij} - X_{\min}}{X_{\max}} \right)  (4)
where X̄_ij is the average gray value in the neighborhood of pixel X_ij, μ_ij is the corresponding membership degree, and X_min and X_max are the minimum and maximum gray values of the image.
2.
Enhancement processing of fuzzy domain images.
The following intensification function is used to stretch the membership values μ_ij:
T(\mu_{ij}) = \begin{cases} 2\mu_{ij}^{2}, & 0 \le \mu_{ij} \le 0.5 \\ 1 - 2\left(1 - \mu_{ij}\right)^{2}, & 0.5 < \mu_{ij} \le 1 \end{cases}  (5)

\mu'_{ij} = T(\mu_{ij}), \qquad T^{k}(\mu_{ij}) = T\left( T^{k-1}(\mu_{ij}) \right)  (6)
3.
Restore the fuzzy domain to the spatial domain:
X'_{ij} = G^{-1}(\mu'_{ij}) = X_{\min} + X_{\max}\left( 2^{\mu'_{ij}} - 1 \right), \quad 0 \le \mu'_{ij} \le 1  (7)
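For readers who wish to experiment with this step, the following is a minimal NumPy/SciPy sketch of the three steps above (log membership, iterative intensification, inverse mapping), applied to a single 8-bit channel. The 3 × 3 neighborhood mean and the number of intensification passes are illustrative choices, not values specified in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_fuzzy_contrast_enhance(channel, passes=2):
    """Global fuzzy contrast enhancement following Eqs. (4)-(7).

    channel: 2-D uint8 array (one colour channel). The 3 x 3 neighbourhood
    mean and the number of intensification passes are illustrative choices.
    """
    x = channel.astype(np.float64)
    x_min, x_max = x.min(), x.max()

    # Step 1: map to the fuzzy domain with the log membership function (Eq. 4),
    # using the neighbourhood average of each pixel.
    x_bar = uniform_filter(x, size=3)
    mu = np.log2(1.0 + (x_bar - x_min) / x_max)

    # Step 2: stretch the membership values with the intensification operator T
    # (Eq. 5), applied iteratively as in Eq. (6).
    for _ in range(passes):
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

    # Step 3: map back from the fuzzy domain to the spatial domain (Eq. 7).
    out = x_min + x_max * (2.0 ** mu - 1.0)
    return np.clip(out, 0, 255).astype(np.uint8)
```

For a color image, the function can be applied to each channel separately.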
The enhancement results are compared in Figure 3, from which the following can be seen:
(1)
Compared with the original image (Figure 3a), the fuzzy contrast enhancement algorithm effectively enhances the light green color feature of the micro-particles in the image.
(2)
With traditional contrast enhancement (Figure 3b), the background color is also enhanced during the enhancement process; using the color information for image segmentation can then easily cause mis-segmentation of background impurities.
(3)
With the global fuzzy contrast enhancement algorithm (Figure 3c), the background color also changes greatly while the particle color information is enhanced, but compared with Figure 3b it offers better discrimination, making it easier to extract the micro-particle target using the color features in preparation for the subsequent segmentation.

3.2. Color Target Extraction from Silkworm Micro-Particle Images

3.2.1. Color Feature Space Selection of Micro-Particle Images

In micro-particle images, the color information of the micro-particles is an important basis for segmentation. Therefore, the correct selection of the color space is crucial for achieving accurate micro-particle segmentation.
When analyzing color images, the commonly used color models are the RGB color model and the HSV color model. The RGB color space model is based on the principle of three primary colors and has three channels, which are highly correlated and sensitive to light. Because the micro-particles lie at different depths in the extraction liquid and their apparent color depth is easily affected by the illumination, the RGB color model is not the first choice for segmenting the color features of the micro-particles.
In contrast, the HSV color space (where the H component represents hue, the S component saturation and the V component brightness) conforms to the human perception of color [13]. The three channels of the HSV color space model are independent of each other; when one dimension is processed, the other two are not disturbed.
In this paper, the color characteristics of the micro-particle target are used for a preliminary segmentation, so the HSV color model with stable color information is selected. The collected RGB image is converted to the HSV space, and the color information is extracted using the H and S components of the color dimensions. This effectively avoids the influence of unstable illumination and the problem of micro-particles showing different color depths in different layers.
The model for converting the RGB color space to HSV color space is as follows:
V = \max(R, G, B)

S = \begin{cases} \dfrac{\left( V - \min(R, G, B) \right) \times 255}{V}, & \text{if } V \neq 0 \\ 0, & \text{if } V = 0 \end{cases}

H = \begin{cases} \dfrac{(G - B) \times 60}{S}, & \text{if } V = R \\ 180^{\circ} + \dfrac{(B - R) \times 60}{S}, & \text{if } V = G \\ 240^{\circ} + \dfrac{(R - G) \times 60}{S}, & \text{if } V = B \end{cases}

\text{if } H < 0^{\circ}, \text{ then } H = H + 360^{\circ}  (8)
where H is the hue, S is the saturation, V is the brightness, and R, G and B are the red, green and blue components, respectively.
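In practice, the conversion in Equation (8) is usually performed with a library call; the OpenCV-based sketch below is one possibility. Note that OpenCV scales H to 0–179 and S, V to 0–255, so H is rescaled here to match the 0–255 normalization used later in Section 3.2.2; the file name is a placeholder.

```python
import cv2
import numpy as np

# bgr: 8-bit microscope image loaded with cv2.imread (OpenCV uses BGR channel order);
# the file name is a placeholder.
bgr = cv2.imread("microparticle_field.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)      # implements the conversion model of Eq. (8)

h, s, v = cv2.split(hsv)
# OpenCV scales H to 0-179 and S, V to 0-255; rescale H to 0-255 to match the
# normalization used for the statistics in Section 3.2.2.
h_255 = np.round(h.astype(np.float32) * 255.0 / 179.0).astype(np.uint8)
hsv_255 = cv2.merge([h_255, s, v])              # HSV image with all components on 0-255
```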
According to the color conversion model, the image color space is converted. Figure 4a shows the image after conversion to the HSV color space; Figure 4b shows the color components after the original image is converted to the HSV color space and the brightness component V is removed; and Figure 4c shows the color components, with V removed, after the fuzzy-contrast-enhanced image is converted to the HSV color space.
The results in Figure 4 show the following:
(1)
In the HSV space (Figure 4a), the color of the micro-particle target is significantly different from the background, indicating that the color feature can effectively segment the micro-particle target.
(2)
Comparing Figure 4b and Figure 4c, in the HSV space with the V component removed, the green feature of the un-enhanced micro-particles (Figure 4b) is dark; it differs little from the background, and the edge information is blurred.
(3)
In the image with enhanced color information (Figure 4c), the color of the micro-particle target is well distinguished from the background, and the particle target is complete and has clear edges.

3.2.2. Determination of the Color Feature Extraction Criterion of Micro-Particle Images

After the color space is determined, the determination of the micro-particle color extraction criteria is also key to accurately segmenting the micro-particles. According to the color characteristics of the micro-particles, a statistical method is used to process the collected images of typical silkworm micro-particles. In the experiment, 50 micro-particle targets were selected and transformed into the HSV color space after enhancement, and the average value of each corresponding color component in the HSV color space was then computed. To simplify the calculation, each component was uniformly normalized to 0–255 for the statistics. The statistical results are shown in Figure 5.
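Assuming each of the 50 selected targets is available as a binary mask over the enhanced HSV image (all components normalized to 0–255), the per-target component means can be gathered with a few lines of NumPy, as in this illustrative sketch; the variable names are assumptions of the sketch.

```python
import numpy as np

def hsv_component_means(hsv_255, target_masks):
    """Per-target mean of each HSV component (all components normalized to 0-255).

    hsv_255: HSV image as built in the previous sketch; target_masks: list of
    boolean masks, one per selected micro-particle target (illustrative names).
    """
    means = []
    for mask in target_masks:
        pixels = hsv_255[mask]            # (n, 3) array of H, S, V values inside the target
        means.append(pixels.mean(axis=0))
    return np.array(means)                # shape (number of targets, 3)
```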
According to the statistical results of the HSV color space components of some typical target objects, the following can be seen:
(1)
In the HSV space, the H component fluctuated around 65, and the S component varied mainly in the range 60 < S < 140. The values were relatively concentrated, making it feasible to extract micro-particle images by using the color features.
(2)
The value of the brightness component V fluctuated stably above 200, indicating that the high gray feature brought about by particle refraction was reliable.
(3)
Since the luminance component V had nothing to do with the micro-particle color information, the range of the V component was not set when formulating the color extraction criteria.
Based on the above results, the color extraction criterion was determined as follows: the corresponding range of the H component is 50–80, and the corresponding range of the S component is 60–140. According to this color distribution, the micro-particle target can be extracted preliminarily:
f(i, j) = \begin{cases} f(i, j), & (H, S, V) \in D \\ 0, & (H, S, V) \notin D \end{cases}  (9)

D = \left\{ 50 \le H \le 80,\; 60 \le S \le 140,\; 0 \le V \le 255 \right\}  (10)
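A possible OpenCV implementation of Equations (9) and (10) is sketched below, assuming the H component has already been rescaled to 0–255 as in the earlier conversion sketch.

```python
import cv2
import numpy as np

def color_pre_segment(bgr, hsv_255):
    """Colour pre-segmentation by Eqs. (9) and (10).

    bgr: (enhanced) colour image; hsv_255: its HSV version with every
    component scaled to 0-255, as in the earlier conversion sketch.
    """
    lower = np.array([50, 60, 0], dtype=np.uint8)       # H >= 50, S >= 60, V unconstrained
    upper = np.array([80, 140, 255], dtype=np.uint8)    # H <= 80, S <= 140
    mask = cv2.inRange(hsv_255, lower, upper)            # 255 inside region D, 0 outside
    return cv2.bitwise_and(bgr, bgr, mask=mask), mask    # keep f(i, j) only inside D
```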

4. Research on the Improved Micro-Particle Image Segmentation Algorithm Based on Region Growing

The basic idea of the region-growing algorithm is to combine pixels with similar characteristics. For each region, a seed point is first specified as the starting point of growth; the pixels in the neighborhood of the seed point are then compared with the seed point, points with similar properties are merged into the region, and the region continues to grow outward until no more qualifying pixels can be included [14].
The keys to segmenting an image with the region-growing method are the selection of seed points and the determination of the growing criterion [15,16,17]. Given the random distribution of micro-particles across the whole image and the complex backgrounds, the key to correctly and efficiently segmenting micro-particle images is to select seed points automatically and accurately, and the key to complete micro-particle segmentation is the reliability and stability of the growing criterion. In the traditional region-growing algorithm, seed points are generally selected manually; although manual selection can ensure the effectiveness of the seed points, the detection efficiency is low. The growing criterion is usually a threshold built from the comparison of the gray values of the seed point and its neighborhood; it does not consider the shape, size or other parameters of the target, which often leads to over-segmentation and cannot achieve precise segmentation of the micro-particles. Therefore, this paper redefines both the seed-point selection and the growing criterion.

4.1. Automatic Selection of Multiple Seed Points

According to the color features of the micro-particles in the complex background, the incompletely segmented images extracted from the color information are processed with a morphological method to remove the small speckle impurities whose color features were not completely filtered out. The target areas of the micro-particles thus located are taken as seed points.
The flow chart is shown in Figure 6.
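The following sketch illustrates one way to implement this step with OpenCV: morphological opening removes the small speckle impurities, and the centroid of each remaining connected component is taken as a seed point. The structuring-element size and the minimum-area threshold are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def select_seed_points(color_mask, min_area=20):
    """Seed-point selection from the colour pre-segmentation mask (Section 4.1).

    color_mask: binary (0/255) mask from the colour criterion; min_area is an
    illustrative speckle-removal threshold in pixels, not a value from the paper.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(color_mask, cv2.MORPH_OPEN, kernel)   # remove small speckle impurities

    num, labels, stats, centroids = cv2.connectedComponentsWithStats(opened)
    seeds = []
    for i in range(1, num):                                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cx, cy = centroids[i]
            seeds.append((int(round(cy)), int(round(cx))))           # (row, col) seed per particle region
    return seeds
```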

4.2. Improved Growing Criteria

The traditional region-growing method generally grows the target in the same image from which the seeds were taken, using a fixed growing criterion. However, the seed points have already been screened on a certain feature of the image; if the same image is used for region growing, the same feature is effectively used twice, and the result is not ideal. Therefore, this paper performs the preliminary segmentation on the HSV image of the micro-particles and selects the seed points automatically, and then exploits the refraction of the micro-particles (i.e., their high gray values in the grayscale image) by applying a gamma transformation to enhance the gray information of the micro-particle targets. An appropriate growing threshold is then selected to grow the image. During growing, it is necessary not only to ensure the accuracy of the two-dimensional color and gray-level features of the micro-particles but also to ensure the integrity of the micro-particle target image. The pixel size of the micro-particle target in the image must also be considered to prevent over-segmentation during the growing process.

4.2.1. Growing Image Determination Based on the G Component

To solve the problem of inconsistent gray levels of the micro-particles in the image, a gamma transformation was used because it can improve image quality, correct the gray levels effectively and is simple to compute [18]. The micro-particles are light green in the image and have a certain degree of refraction; thus, in the RGB color model, the G component has richer gray information than the B and R components, and the micro-particle target is better distinguished from the background. Therefore, the grayscale image of the G component in RGB space was selected, the gamma transformation was applied to it, and the stretched result was used as the image to be grown for region growing. The RGB component diagrams are shown in Figure 7.
Assuming that the pixel neighborhood is U, the formulas for the gray mean f̄ and the standard deviation σ in the neighborhood are
\bar{f} = \frac{1}{n} \sum_{(i, j) \in U} f(i, j)  (11)

\sigma = \sqrt{ \frac{1}{n} \sum_{(i, j) \in U} \left| f(i, j) - \bar{f} \right|^{2} }  (12)
where f ( i , j ) is the gray value of the pixels in U and n is the number of pixels in U.
Based on the above two formulas, the improved gamma transformation function is
f'(i, j) = 255 \times \left( \frac{\bar{f}}{f_{\max}} \right)^{2.5 \pm \frac{\sigma}{\bar{f}}}  (13)
where f max is the image maximum (usually 255).
We selected a 3 × 3 neighborhood to perform the above operations on the G component; the effect is shown in Figure 8.
As shown in Figure 8a,b, the improved gamma transformation not only enhances the high-brightness features of the micro-particles but also processes the gray information in the neighborhood. Replacing each original pixel with the neighborhood average reduces the image noise and improves the dispersion of the globally enhanced gray levels. By combining the mean and standard deviation, the stretch exponent is adjusted adaptively, which makes the image smoother and makes it convenient to set a threshold for region growing.
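A NumPy sketch of the neighborhood gamma transform of Equations (11)–(13) applied to the G component is given below. The paper leaves the sign in the exponent (2.5 ± σ/f̄) open; the sketch resolves it by brightening neighborhoods brighter than the global mean and darkening the rest, which is an assumption of this sketch rather than a rule stated in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_gamma(g_channel, f_max=255.0):
    """Improved gamma transform of Eqs. (11)-(13) over a 3 x 3 neighbourhood.

    g_channel: 2-D uint8 G-component image. The sign in the exponent
    (2.5 +/- sigma / f_bar) is resolved by brightening neighbourhoods brighter
    than the global mean and darkening the rest -- an assumption of this sketch.
    """
    f = g_channel.astype(np.float64)
    f_bar = uniform_filter(f, size=3)                                  # Eq. (11): neighbourhood mean
    var = np.maximum(uniform_filter(f ** 2, size=3) - f_bar ** 2, 0.0)
    sigma = np.sqrt(var)                                               # Eq. (12): neighbourhood std

    sign = np.where(f_bar >= f.mean(), -1.0, 1.0)                      # assumed resolution of the +/- sign
    gamma = 2.5 + sign * sigma / np.maximum(f_bar, 1e-6)               # avoid division by zero
    out = 255.0 * (f_bar / f_max) ** gamma                             # Eq. (13)
    return np.clip(out, 0, 255).astype(np.uint8)
```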

4.2.2. Determination of the Growing Criteria Based on the Grayscale Enhanced Images

Let R be the seed point region and A its mapping on the grayscale-enhanced image, and let f̄ be the average gray value of region A. Based on the gray level of the area to be grown, a seed point threshold T_s is set to eliminate background impurities that passed the color criterion. A growing threshold T_a is also set, and the gray value of the point to be grown (x, y) is compared with the average gray value of the region, namely:
\begin{cases} f(x, y) \ge T_s \\ \left| f(x, y) - \bar{f} \right| \le T_a \end{cases}  (14)
where f(x, y) is the gray value of the pixel to be grown in A, f̄ is the average gray value of the grown region in A, and T_a is the growing threshold. If the above conditions hold (i.e., the region-growing criterion is satisfied), the point is merged into the growing region and f̄ is updated after each merge; otherwise, the point is not grown.
Since the micro-particle targets are small in the image (only about 10 pixels in width and height) and close to the background in gray level, the grayscale criterion alone cannot avoid incorrect segmentation of the background region. Therefore, an area criterion determined by the micro-particle size is added; that is, the coordinates (x, y) of the pixel to be grown are compared with the seed coordinates (x', y') during growing:
\begin{cases} \left| x - x' \right| \le 10 \\ \left| y - y' \right| \le 10 \end{cases}  (15)
Based on the above two points, the modified growing criterion was as follows:
\begin{cases} f(x, y) \ge T_s \\ \left| f(x, y) - \bar{f} \right| \le T_a \\ \left| x - x' \right| \le 10 \\ \left| y - y' \right| \le 10 \end{cases}  (16)
When no pixel in the neighborhood of the growing region meets the requirements, the growing stops and the image segmentation is complete. Because the growing area is limited, the threshold T_a can be increased appropriately to prevent under-segmentation caused by grayscale variation. For an image with gray levels in (0, 255), since the gray level of the micro-particle target is relatively large, T_s is taken as 0.8 times the maximum pixel value, and a reasonable range for T_a is (50, 70).
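A compact Python sketch of a growing loop that applies the criterion in Equation (16) is given below, using the threshold settings above (T_s = 0.8 × 255, T_a = 60). The 4-connectivity and the queue-based traversal are implementation assumptions not specified in the paper.

```python
import numpy as np
from collections import deque

def region_grow(gray, seeds, t_s=0.8 * 255, t_a=60, max_offset=10):
    """Region growing on the gamma-enhanced grayscale image using Eq. (16).

    gray: 2-D uint8 image after the neighbourhood gamma transform;
    seeds: list of (row, col) seed points from the colour pre-segmentation.
    4-connectivity and the queue-based traversal are implementation assumptions.
    """
    h, w = gray.shape
    segmented = np.zeros((h, w), dtype=np.uint8)

    for sr, sc in seeds:
        if gray[sr, sc] < t_s:            # seed threshold T_s removes colour-only impurities
            continue
        region_sum, region_n = float(gray[sr, sc]), 1
        segmented[sr, sc] = 255
        queue = deque([(sr, sc)])

        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or segmented[nr, nc]:
                    continue
                # Size constraint of Eq. (15): stay within +/- max_offset pixels of the seed.
                if abs(nr - sr) > max_offset or abs(nc - sc) > max_offset:
                    continue
                f_bar = region_sum / region_n          # mean of the grown region, updated as it grows
                if gray[nr, nc] >= t_s and abs(float(gray[nr, nc]) - f_bar) <= t_a:
                    segmented[nr, nc] = 255
                    region_sum += float(gray[nr, nc])
                    region_n += 1
                    queue.append((nr, nc))
    return segmented
```

Chained with the earlier sketches, a call such as region_grow(neighborhood_gamma(bgr[:, :, 1]), select_seed_points(mask)) would reproduce the overall flow of Figure 2 (the G channel is index 1 in OpenCV's BGR ordering).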

5. Experimental Results and Analysis

To verify the effectiveness of the color-feature-based region-growing segmentation algorithm for micro-particle microscopic images proposed in this paper, the experiment was divided into two parts: (1) micro-particle color pre-segmentation based on the enhanced color, and (2) region growing of the pre-segmented image combined with the gray information. Based on these two parts, micro-particle images collected under different visual fields (Figure 9) were selected to compare and analyze the segmentation results.
Figure 9 shows the original micro-particle images with complex backgrounds under different visual fields, collected under a 60× objective lens and a 10× eyepiece. The images contain many impurities, and the backgrounds are complex. The experiment compares the color pre-segmentation and the region-growing algorithms on these images; these two aspects respectively verify the proposed complex-background micro-particle segmentation method based on joint color and gray information and region growing.
  • Comparison of Color Pre-Segmentation Effects
Figure 10a–c shows the enhanced micro-particle images collected under the different visual fields, Figure 10d–f shows the pre-segmentation results after color enhancement, and Figure 10g–i shows the color pre-segmentation results of the unenhanced images.
The results of Figure 10d–i show the following:
  • For microscopic micro-particle images with uneven illumination, low contrast and complex backgrounds, HSV-based color information extraction could effectively extract micro-particle targets.
  • After color enhancement, segmentation according to the color range removes many more impurity targets than segmentation of the original image, and the result is more accurate.
  • Performance Comparison of the Improved Region-Growing Algorithm
Based on the above color pre-segmentation results, an incomplete image of the micro-particles can be obtained. Region growing is then carried out on these results, impurity seeds are removed according to the gray threshold, and the complete shape of the micro-particles is recovered. In this experiment, the traditional region-growing method and the automatic region-growing method proposed in [14] were used to segment the experimental images, and the accuracy of the automatic multi-seed-point selection and of the micro-particle segmentation were compared and verified. The number of pixels per micro-particle was used to measure the completeness of the segmentation, and the growing time was used to evaluate the performance of the algorithms.
  • Comparison of Different Region-Growing Algorithms
In the experiment, when the traditional region-growing method and the algorithm in [14] were used, the growing image was the original gray image without gamma transformation; the difference between the target and the background was small, and the initial threshold was set to T = 25. The segmentation results of the traditional region-growing method and the automatic region-growing method from [14] are shown in Figure 11a–f. For the algorithm in this paper, the growing image was the grayscale image after the neighborhood gamma transformation, the growing threshold was T = 60, and the results are shown in Figure 11g–i.
It can be seen from Figure 11a–f that when the neighborhood gamma image was not used for region growing, the micro-particle targets were too close to the background gray level, and misjudgments occurred when applying the seed threshold. As a result, visual fields 1 and 3 show background mis-segmentation during region growing, as in Figure 11a,c,d.
  • Growing Algorithm Comparison
The comparison between the method in this paper and the other two algorithms is shown in Table 1 and Table 2.
It can be seen from Table 1 that with the traditional algorithm, the background and the micro-particle targets were too close in gray level, resulting in too many growth points and indistinct micro-particle edges; its average IOU was 83.0%. The algorithm from [14] adjusts the growth threshold with an aberration coefficient computed from the standard deviation and the seed mean and adds a gradient threshold to limit the growing conditions; these extra restrictions during growing led to an average IOU of 76.7%. The algorithm in this paper, which uses the enhanced grayscale image for region growing and adds the growing-region restriction, produced a better growing effect with clear edges, and its average IOU of 83.1% was the highest of the three algorithms.
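For reference, the IOU reported in Table 1 can be computed from a predicted segmentation mask and its ground-truth mask as in this short sketch.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over union of a predicted and a ground-truth binary mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0                    # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum()) / union
```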
It can be seen from Table 2 that the real-time performance of this paper's algorithm during region growing is not much worse than that of the traditional region-growing algorithm. The algorithm from [14] needs to update the gray mean and standard deviation of the growing region frequently, which prolongs the running time and fails to meet real-time requirements.
In general, the algorithm in this paper can independently and efficiently determine the particle targets and suspected targets in the segmented image, saving the time needed to select seed points. It also segments the size and shape of the micro-particle targets more accurately and is not prone to under- or over-segmentation. The reliability and accuracy of the algorithm are thus verified.

6. Conclusions

In this paper, we proposed a micro-particle image segmentation method based on combined color features and region growing, derived from the characteristics of micro-particle images under the microscope, and we draw the following conclusions:
  • In the extraction of color information, global fuzzy contrast enhancement alleviates the problem of micro-particle targets not being clearly distinguished in the image, and extracting micro-particle targets in the color-stable HSV color space filters out most of the impurity target points.
  • After color pre-segmentation, growing the result with the region-growing algorithm on the grayscale image enhanced by the improved gamma transform segments complete and clear micro-particle targets. The growing time for an image was only 0.792 s, and the IOU accuracy was 83.1%.
A limitation of this work is that the segmentation of adherent (touching) micro-particle spores in the original images was not addressed. In future work, we will collect images of adherent spores for study and develop a fast and sensitive focusing system for the acquisition of micro-particle images.

Author Contributions

Conceptualization, X.H. and Q.C.; methodology, X.H. and Q.C.; software, Q.C.; validation, Y.T., Q.C. and X.Y.; formal analysis, J.Y. and X.Y.; investigation, X.H.; resources, X.H. and Q.C.; data curation, Y.T.; writing—original draft preparation, X.H. and Q.C.; writing—review and editing, Q.C. and Y.T.; visualization, J.Y. and Y.T.; supervision, X.H. and D.Z.; project administration, D.Z.; funding acquisition, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 61976083).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data sets used or analyzed during the current study are available from the corresponding author upon reasonable request.

Acknowledgments

We are grateful to the Hubei University of Technology for creating good experimental conditions for us. We are grateful to the National Natural Science Foundation of China for the grant support (No. 61976083).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shang, X.; Wang, C. Microscopic examination technology and precautions of protogynous moths. North Seric. 2020, 41, 2. [Google Scholar]
  2. Huang, H.; Cai, J. Improvement of Computer Vision Technology in Detecting Pebrine in Silkworm. J. Jiangsu Univ. Nat. Sci. Ed. 2003, 05, 43–46. [Google Scholar]
  3. Zhang, X.; Fang, R.; Wang, P.; Cai, J.; Xu, L. Research on Image Recognition Technique for Pebrine in Silkworm. Trans. Chin. Soc. Agric. Mach. 2001, 32, 65–68. [Google Scholar]
  4. Din, A.F.; Nasir, A. Automated Cells Counting for Leukaemia and Malaria Detection Based on RGB and HSV Colour Spaces Analysis; Lecture Notes in Electrical Engineering. In Proceedings of the 11th National Technical Symposium on Unmanned System Technology, NUSYS 2019, Kuantan, Malaysia, 2–3 December 2019; Springer: Manhattan, NY, USA, 2020. [Google Scholar]
  5. Mandyartha, E.P.; Anggraeny, F.T.; Muttaqin, F.; Akbar, F.A. Global and Adaptive Thresholding Technique for White Blood Cell Image Segmentation. In Proceedings of the 3rd International Conference on Science and Technology 2019, ICST 2019, Surabaya, Indonesia, 17–18 October 2019; IOP Publishing Ltd.: Bristol, UK, 2020; Volume 1569, p. 022054. [Google Scholar]
  6. Caya, M.V.; Padilla, D.; Ombay, G.; Hernandez, A.J. Detection and Counting of Red Blood Cells in Human Urine using Canny Edge Detection and Circle Hough Transform Algorithms. In Proceedings of the 2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Laoag, Philippines, 29 November–1 December 2019; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2020. [Google Scholar]
  7. Wang, Y.L.; Pan, Y.Y.; Yu, X.L. Image Edge Detection of Medical Cell Based on Morphology. In Proceedings of the 2018 Eighth International Conference on Instrumentation & Measurement, Computer, Communication and Control (IMCCC), Harbin, China, 19–21 July 2018; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA. [Google Scholar]
  8. Kim, T.H.; Kim, D.; Li, S. Cell Counting Algorithm Using Radius Variation, Watershed and Distance Transform. J. Inf. Process. Syst. 2020, 16, 113–119. [Google Scholar]
  9. Monteiro, A.C.B.; Iano, Y.; Frana, R.P.; Arthur, R.; Estrela, V.V. A Comparative Study Between Methodologies Based on the Hough Transform and Watershed Transform on the Blood Cell Count. In Proceedings of the 4th Brazilian Technology Symposium (BTSym’18); Springer: Manhattan, NY, USA, 2019. [Google Scholar]
  10. Pan, Y.; Xia, Y.; Zhou, T.; Fulham, M.J. Cell image segmentation using bacterial foraging optimization. Appl. Soft Comput. 2017, 58, 770–782. [Google Scholar] [CrossRef]
  11. Cai, J.; Xu, L. Image analysis for shape properties of pebrine in Chinese silkworm. J. Jiangsu Univ. Sci. Technol. 1998, 05, 24–27. [Google Scholar]
  12. Jiang, T.; Zhao, C.; Chen, M.; Yang, X.; Sun, C. Fast Adaptive Image Fuzzy Enhancement Algorithm. Comput. Eng. 2011, 37, 213–214, 223. [Google Scholar]
  13. Wang, D.; Dong, S.; Cheng, F.; Zhao, Y.; Li, J. Improved Intuitionistic Fuzzy C-means Clustering Pork Image Detection in HSV Space. Acta Metrol. Sin. 2021, 42, 986–992. [Google Scholar]
  14. Peng, D.; Yin, L.; Qi, E.; Hu, J.; Yang, X. Power Plant Pipeline Defect Detection and Segmentation Based on Otsu’s and Region Growing Algorithms. Infrared Technol. 2021, 43, 502–509. [Google Scholar]
  15. Yang, Z.; Zhao, Y.; Liao, M.; Di, S.; Zeng, Y. Semi-automatic liver tumor segmentation with adaptive region growing and graph cuts. Biomed. Signal Process. Control. 2021, 68, 102670. [Google Scholar] [CrossRef]
  16. Zanaty, E.A.; El-Zoghdy, S.F. A novel approach for color image segmentation based on region growing. Int. J. Comput. Appl. 2017, 39, 123–139. [Google Scholar] [CrossRef]
  17. Charifi, R.; Essbai, N.; Mansouri, A.; Zennayi, Y. Comparative Study of Color Image Segmentation by the Seeded Region Growing Algorithm. In Proceedings of the 5th International Congress on Information Science and Technology, CiSt 2018, Marrakech, Morocco, 22–24 October 2018; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2018. [Google Scholar]
  18. Yang, X.; Jiang, X.; Du, J. Low illumination image enhancement algorithm based on gamma transformation and fractional order. Comput. Eng. Design. 2021, 42, 762–769. [Google Scholar]
Figure 1. Micro-particle image in a complex background.
Figure 2. Segmentation flowchart.
Figure 3. Micro-particle blur enhancement pretreatment: (a) original image collected under the microscope; (b) the image enhanced by traditional contrast; and (c) the image enhanced by global fuzzy contrast.
Figure 4. HSV space picture: (a) HSV space, (b) original HS component and (c) enhanced HS component.
Figure 5. Color component distribution: (a) H-component range; (b) S-component range; (c) V-component range.
Figure 6. Seed point selection.
Figure 7. RGB component diagram: (a) R component, (b) G component and (c) B component.
Figure 8. Comparison of grayscale enhancement effects: (a) gamma transformation, (b) improved gamma transformation, (c) gamma transformation histogram and (d) improved gamma transformation histogram.
Figure 9. Micro-particle images in different visual fields: (a) visual field 1, (b) visual field 2 and (c) visual field 3.
Figure 10. Color pre-segmentation effect: (a) color enhancement map of visual field 1, (b) color enhancement map of visual field 2, (c) color enhancement map of visual field 3, (d) enhanced color pre-segmentation of visual field 1, (e) enhanced color pre-segmentation of visual field 2, (f) enhanced color pre-segmentation of visual field 3, (g) original color pre-segmentation of visual field 1, (h) original color pre-segmentation of visual field 2 and (i) original color pre-segmentation of visual field 3.
Figure 11. Comparison of segmentation effects: (a) traditional regional growing of visual field 1, (b) traditional regional growing of visual field 2, (c) traditional regional growing of visual field 3, (d) self-growing from [14] of visual field 1, (e) self-growing from [14] of visual field 2, (f) self-growing from [14] of visual field 3, (g) algorithm in this paper for visual field 1, (h) algorithm in this paper for visual field 2 and (i) algorithm in this paper for visual field 3.
Table 1. Growing region pixels and IOU.

Algorithm              | Metric         | Visual Field 1 | Visual Field 2 | Visual Field 3
-----------------------|----------------|----------------|----------------|---------------
Traditional algorithm  | Total points   | 1318           | 1113           | 906
                       | Average points | 94             | 92             | 100
                       | Average IOU    | 81.76%         | 83.69%         | 83.81%
Algorithm from [14]    | Total points   | 717            | 628            | 471
                       | Average points | 51             | 52             | 52
                       | Average IOU    | 74.11%         | 77.39%         | 78.6%
This paper's algorithm | Total points   | 1037           | 1067           | 723
                       | Average points | 74             | 88             | 80
                       | Average IOU    | 80.58%         | 83.31%         | 85.41%
Table 2. Algorithm performance comparison.

Algorithm              | Growing Image      | Growing Time
-----------------------|--------------------|-------------
Traditional algorithm  | Original grayscale | 0.087 s
Algorithm from [14]    | Original grayscale | 2.416 s
This paper's algorithm | Enhanced grayscale | 0.792 s