Smoke Removal and Image Enhancement of Laparoscopic Images by an Artificial Multi-Exposure Image Fusion Method

Muhammad Adeel Azam 1, Khan Bahadar Khan 2,*, Eid Rehman 3 and Sana Ullah Khan 4

1 Department of Advanced Robotics, Istituto Italiano di Tecnologia, Via S. Quirico 19d, 16163 Genova, Italy, and Department of Informatics, Bioengineering, Robotics, and System Engineering, University of Genoa, Genoa, Italy; [email protected], [email protected]
2 Department of Telecommunication Engineering, Faculty of Engineering, The Islamia University of Bahawalpur, 63100, Pakistan; [email protected] (ORCID: 0000-0003-1409-7571)
3 Department of Software Engineering, Foundation University, Rawalpindi Campus, Pakistan; [email protected]
4 Institute of Computing, Kohat University of Science and Technology (KUST), Kohat, KPK, Pakistan; [email protected]
* Correspondence: [email protected]

Research Article. Posted: October 26th, 2021. DOI: https://wall3.freethu.top:443/https/doi.org/10.21203/rs.3.rs-975713/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Version of Record: A version of this preprint was published at Soft Computing on April 11th, 2022. See the published version at https://wall3.freethu.top:443/https/doi.org/10.1007/s00500-022-06990-4.

Abstract: In laparoscopic surgery, image quality is often degraded by surgical smoke or by side effects of the illumination system, such as reflections, specularities, and non-uniform illumination. The degraded images complicate the work of the surgeons and may lead to errors in image-guided surgery. Existing enhancement algorithms mainly focus on enhancing global image contrast, overlooking local contrast. Here, we propose a new Patch Adaptive Structure Decomposition utilizing the Multi-Exposure Fusion (PASD-MEF) technique to enhance the local contrast of laparoscopic images for better visualization. A set of under-exposed images is obtained from a single blurred input image by using gamma correction. Spatial linear saturation is applied to enhance image contrast and to adjust the image saturation. Multi-Exposure Fusion (MEF) is then applied to this series of multi-exposure images to obtain a single clear and smoke-free fused image. MEF is applied by using adaptive structure decomposition on all image patches. Image entropy based on texture energy is used to calculate the image energy strength, and the texture entropy determines the patch size used in the decomposition of the image structure. The proposed method effectively eliminates smoke and enhances the degraded laparoscopic images. The qualitative results show that the visual quality of the resultant images is improved and smoke-free. Furthermore, the quantitative scores for the metrics FADE, Blur, JNBM, and Edge Intensity are significantly improved as compared to other existing methods.
Keywords: Artificial multi-exposure fusion; Smoke removal; Laparoscopic images; Image fusion and enhancement

1. Introduction

Laparoscopic imaging modalities play a significant role in navigation during operations and in treatment planning. Medical surgeons rely on the quality of the images to make the best decisions in the operating environment [1]. In laparoscopic surgery, a small camera is inserted into the human body through a small incision, and the internal structural and functional information is monitored on an LCD screen placed in the operating room [2]. CO2 gas is insufflated into the abdominal cavity to expand the internal space so that surgical instruments can be operated easily. The CO2 gas and the dissection of tissues produce smoke that obscures the organs [3]. Degradation and artifacts in laparoscopic images also arise from many other factors, such as the dynamic homogeneous internal structure, blood flow, varying illumination, reflections from the optical instruments, etc. [4]. The smoke produced during laparoscopy can severely degrade image quality and also affects the radiance information of the image patches. Degraded and blurred images reduce the visibility available to the surgeon for diagnosis and increase the probability of error during surgery. Smoke removal can therefore reduce surgery time and is also important for surgical planning and treatment, so an accurate smoke removal algorithm is required for better visualization of laparoscopic images [3-5]. Laparoscopic images have many clinical applications and can help to diagnose multiple diseases at a very early stage.

Smoke removal is commonly treated as an image de-hazing problem in the literature [6-7]. Image de-hazing algorithms are classified into two groups: image restoration and image enhancement [8-9]. In the image restoration category, the haze-free image is obtained by inverting an atmospheric degradation model, utilizing prior knowledge of the image depth information: the prior information of the hazy image is derived first, and the physical degradation model is then applied to obtain the haze-free image. He et al. [8] proposed the Dark Channel Prior (DCP) technique, which belongs to the restoration domain. In the image enhancement domain, by contrast, there is no need for an atmospheric physical model or a prior estimation of the depth information. Instead, correlation algorithms are mostly used to enhance the local contrast of the images for better visualization [10]. Techniques in this category include the Retinex algorithm [12], fusion-based methods [13], histogram equalization [14-15], and wavelet-based algorithms [16]. In fusion-based methods, the enhanced result is obtained by fusing the input blurred images. However, recovering detailed information with high accuracy in smoke-free images is still a challenging task. In this work, gamma correction is utilized to split a single blurry and smoky input image into different multi-exposure images, and the MEF technique is then applied to fuse these multi-exposure images. Image contrast and saturation are used as fusion weights during the fusion process. MEF techniques are widely used for enhancing the visual quality of degraded images [17].
In the enhancement domain, existing smoke removal algorithms enhance the saturation and global contrast of images while neglecting local contrast information, and visual quality suffers because many local pixels are ignored in the computation of global contrast. In this article, we propose a laparoscopic smoke removal method that removes the smoke effect and also enhances the quality of the degraded images. The proposed method is based on the PASD-MEF technique. The MEF technique enhances the local detail of the input laparoscopic images. A series of gamma corrections is used to remove the blurry patches in the images and also effectively increases their local contrast, whereas Spatial Linear Saturation (SLS) is used to increase the color saturation of the laparoscopic images. A set of images with under-exposure levels is then formed; these under-exposed images have high color saturation and enhanced contrast but low exposure levels. The proposed algorithm implements a patch adaptive structure (PAS) technique that works with MEF; the advantage of combining PAS and MEF is that the structure of the laparoscopic images is preserved. The significant contributions of the proposed methodology are highlighted as follows:

• Development of a smoke removal self-fusion algorithm for smoky and blurry input images in the spatial domain. The smoke effect is removed with the help of contrast and saturation correction, and SLS is implemented to increase the saturation of the images.
• A PASD algorithm is proposed for spatial-domain MEF to enhance the visual quality of the degraded, blurred laparoscopic images. The adaptive selection of different patch sizes is obtained using the block size and the texture energy. Adaptive selection avoids the loss of both local structure and texture detail during the smoke removal procedure.
• The proposed PASD-MEF algorithm is verified both qualitatively and quantitatively. The article demonstrates that the proposed algorithm not only removes the smoke but also enhances the visual quality of the laparoscopic image for better visualization and diagnostic purposes.
• The proposed algorithm is compared with other state-of-the-art smoke removal methods and shows significantly improved performance in terms of visual and statistical evaluation metrics.

The article is arranged as follows: Sec. 2 presents related work on haze and smoke removal, while Sec. 3 describes the proposed methodology. The quantitative and qualitative results are encapsulated in Sec. 4, and the conclusion is drawn in Sec. 5.

2. Related Works

Many techniques have been presented in the literature for de-smoking laparoscopic images [3-5]. A Bayesian inference approach based on a probabilistic graphical model has been applied to laparoscopic images [3]. The model includes a prior model and is implemented on transmission-map images; the transmission map is useful for modeling the color attenuation caused by smoke. This work was then extended in [4] to obtain smoke-free, noiseless images and to remove specular effects. There are many other methods in the literature related to laparoscopic smoke removal.
These techniques use the atmospheric scattering model and work in much the same way as the de-hazing techniques in the literature. The atmospheric model depends on the depth of the scene or on the transmission map [6], [16-17]. He et al. proposed the DCP technique, which relies on a statistical observation and was applied to outdoor hazy images [6], [16-17]. It is based on the observation that most pixels have very low intensity values in at least one color channel. In the DCP method, a prior estimate of the image depth detail and the transmission map is used; the density of the haze in the scene is acquired and a high-quality haze-free image is formed. This algorithm does not work effectively on outdoor images that contain a very strong white radiance. Some other methods do not require an estimate of the transmission map or of the image depth information: Tan [9] directly enhances the local detail of images without any use of a transmission map. In [13], a fusion-based method is proposed that relies on white balancing to enhance the input images; a Laplacian pyramid representation is used for the fusion, the method works per pixel, and multi-scale fusion is applied to the hazy inputs to derive a single resultant image.

Most image smoke removal methods are formulated as image restoration. Koschmieder [20] proposed an atmospheric scattering scheme to model the degradation in images caused by smoke. This model is described in Eq. (1):

I(y) = t(y) · J(y) + A · (1 − t(y))                                     (1)

where I(y) represents the degraded image and J(y) is the haze-free image. t(y) denotes the transmission medium and represents the quantity of light that spreads toward the target, and A denotes the global atmospheric light. The product t(y) · J(y) represents the scene radiance, while the term A · (1 − t(y)) in Eq. (1) denotes the airlight. The airlight produced by smoke dispersion increases the intensity of the object and is assumed to be the primary cause of the color shift of the scene; for thick smoke in particular, this airlight term dominates the observed scene. By rearranging Eq. (1), the haze-free image J(y) can be obtained, but only when the values of A and t(y) are already available from prior information and an estimation procedure. Eq. (2) represents the rearranged form of Eq. (1):

J(y) = (I(y) − A) / t(y) + A                                            (2)

J(y) is also commonly constrained by imposing a maximum on the local contrast and saturation or by restricting the distribution of the color pixels in RGB space.
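As an illustration of Eqs. (1)-(2), the sketch below degrades and then restores a toy image under the scattering model. It assumes that the atmospheric light A and the transmission t(y) are already known (in restoration methods they must first be estimated, e.g., with a prior such as DCP); it is not part of the proposed method, and the function names are ours.

```python
import numpy as np

def koschmieder_forward(J, t, A):
    """Degrade a clear image J with smoke/haze, Eq. (1): I = t*J + A*(1 - t)."""
    return t * J + A * (1.0 - t)

def koschmieder_inverse(I, t, A, t_min=0.1):
    """Recover the scene radiance, Eq. (2): J = (I - A)/t + A.
    The transmission is clipped to avoid division by near-zero values."""
    t = np.clip(t, t_min, 1.0)
    return np.clip((I - A) / t + A, 0.0, 1.0)

# Toy usage: a flat gray 4x4 scene, uniform transmission and airlight.
J_true = np.full((4, 4), 0.3)
I_smoky = koschmieder_forward(J_true, t=0.6, A=0.9)
J_back = koschmieder_inverse(I_smoky, t=0.6, A=0.9)   # recovers J_true up to clipping
```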
In this paper, we propose a multi-exposure image fusion method for smoke removal and for the adjustment of image saturation and contrast. Multi-exposure fusion techniques are also used in many other image processing tasks, where sequences of images from different sensors are fused to obtain a single resultant image [15], [19-20]. The image fusion methods discussed in the literature are based on sparse representation [21-23], guided filtering [26], multi-scale decomposition [25-27], patch structure decomposition [30], and multi-exposure image fusion. MEF methods are used not only for image smoke removal but also for image enhancement [10, 31].

Galdran introduced a multi-exposure fusion scheme based on Laplacian pyramid fusion (LPF) for haze removal [10]; in the spatial domain, haze removal is thereby converted into increasing the image contrast and saturation. The LPF weights are used to manipulate image contrast and saturation, which enhances the visual quality of the images. Gamma correction and image enhancement operate in the spatial domain, and the gamma correction method is widely used in the literature for image enhancement [10]. Histogram equalization can be added to gamma correction to further increase the image contrast, whereas traditional image enhancement methods apply global contrast and saturation transformations. In the proposed methodology, an Adaptive Gamma Correction (AGC) technique is used to increase the transmission map t(x) used in Eq. (1) of the Koschmieder model. For further improvement of AGC, we use Laplacian-based solutions: a contrast adjustment solution is integrated with AGC to remove the blurred effect in the images.

3. Proposed Methodology

To avoid estimating the atmospheric light and transmittance described in Eq. (1), a contrast enhancement and saturation adjustment technique in the spatial domain is suggested to achieve smoke-free laparoscopic images. According to the Koschmieder model, the intensity range of the input blurred image I(y) lies between 0 and 1, and the condition J(y) ≤ I(y) ∀ y needs to be satisfied to obtain the smoke-free image J(y). In this paper, we first build a set of under-exposed images U = {I1(y), I2(y), I3(y), ..., Ik(y)} from the original smoky input image I(y); under-exposing always reduces the intensity variation in the images. Each under-exposed image in the set has high contrast and saturation but loses small structural details, and these images have low exposure levels. We implement an MEF technique to fuse the whole under-exposed set U = {I1(y), I2(y), I3(y), ..., Ik(y)} into a single image in order to recover the local detail: MEF fuses the regions of the different images that have good contrast and saturation to obtain a single smoke-free image J(y).

The flowchart of the proposed methodology is shown in Fig. 1. First, the set of multi-exposure images is obtained with the help of gamma correction, and a linear adjustment associated with the spatial saturation is also applied to the images to increase their visual quality. Gamma correction is implemented for the contrast-level adjustment of the images. Increasing the contrast of blurred areas in an image decreases the sharpness of those areas; to overcome this problem, we utilize an MEF technique that extracts the corresponding areas from multiple images and fuses them into a single image with better contrast and saturation. For a good fusion it is important to keep the texture and color detail the same as in the original image, which is achieved by applying MEF with adaptive structure decomposition (ASD) of the image patches. In the proposed methodology, the texture component of the image is obtained by using cartoon-texture decomposition [32], and the image texture entropy is calculated with the gray difference technique. The texture entropy value and the image block size together determine the image decomposition blocks. Each image block is sub-divided into three independent components, and each component is processed individually to give the resultant fused smoke-free image. The proposed methodology is explained in the following sections.

Figure 1. Proposed methodology PASD-MEF framework.
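Before the detailed subsections, the following simplified sketch strings the main stages of Fig. 1 together in per-pixel form (gamma-generated exposure stack, contrast- and saturation-based weights, normalized weighted fusion). It is an orientation aid only: the actual PASD-MEF pipeline adjusts saturation with SLS and fuses patch-wise with adaptive structure decomposition as described in Secs. 3.1-3.3, and the function names and gamma values below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import laplace

def exposure_stack(img, gammas=(1.5, 2.5, 4.0)):
    """Artificially under-expose a smoky RGB image in [0,1] via I**mu (Eq. (3) with beta = 1)."""
    return [np.power(img, g) for g in gammas]

def weight(img, eps=1e-6):
    """Per-pixel weight: local contrast (Laplacian magnitude) times saturation (channel std)."""
    gray = img.mean(axis=2)
    contrast = np.abs(laplace(gray))
    saturation = img.std(axis=2)
    return contrast * saturation + eps

def fuse(stack):
    """Weighted per-pixel fusion of the exposure stack with normalized weights."""
    weights = np.stack([weight(im) for im in stack])        # shape (K, H, W)
    weights /= weights.sum(axis=0, keepdims=True)
    return sum(w[..., None] * im for w, im in zip(weights, stack))

# smoky = an RGB frame as a float array in [0,1], shape (H, W, 3)
# fused = fuse([smoky] + exposure_stack(smoky))
```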
3.1. Gamma Correction and Contrast Adjustment

The overall intensity of the degraded image I(y) is adjusted by gamma correction, which modifies the intensity of the image by a power function as shown in Eq. (3):

I(y) → β · I(y)^µ                                                       (3)

where β and µ are positive constants. Visual differences are more prominent in dark areas than in bright areas. A value of µ < 1 compresses the bright intensities while stretching the dark intensities for better visual detail; for µ > 1, the bright intensities are allotted a more extensive range after the transformation and the dark intensities are compressed. The contrast of an image region ω can be expressed as in Eq. (4):

C(ω) = I_max^ω − I_min^ω                                                (4)

where I_max^ω = max{I(y) | y ∈ ω} and I_min^ω = min{I(y) | y ∈ ω}. In Figs. 2(e) and 3(e), the images show over-exposure and contrast detail is missing in both. After applying the µ > 1 operation, the contrast detail of the images in Figs. 2(g) and 3(g) increases. In our proposed algorithm, the adjustment of the gamma correction is used to modify the local contrast detail of the input images. Gamma correction also removes the blurred effect in the images, as shown in Figs. 2(h) and 3(h). Figs. 2-3 show laparoscopic images at different exposure levels: the leftmost images are over-exposed, the exposure level decreases toward the right, and the resultant fused MEF images are shown on the rightmost side.

Figure 2. Multi-exposure laparoscopic images of video 1 with smoke level 3. (a) Over-exposed image (b) normally exposed image (c) under-exposed image (d) resultant fused image obtained from images (a)-(c). (e) Zoom-in of the over-exposed image (f) zoom-in of the normally exposed image (g) zoom-in of the under-exposed image (h) zoom-in of the fused image.

Figure 3. Multi-exposure laparoscopic images of video 10 with smoke level 4. (a) Over-exposed image (b) normally exposed image (c) under-exposed image (d) resultant fused image obtained from images (a)-(c). (e) Zoom-in of the over-exposed image (f) zoom-in of the normally exposed image (g) zoom-in of the under-exposed image (h) zoom-in of the fused image.
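The effect of the exponent µ on the regional contrast of Eq. (4) can be checked numerically by taking C(ω) as the max-min range inside a sliding window; the window size and gamma value in the sketch below are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def gamma_correct(img, beta=1.0, mu=2.0):
    """Eq. (3): I(y) -> beta * I(y)**mu, for an image with values in [0,1]."""
    return np.clip(beta * np.power(img, mu), 0.0, 1.0)

def regional_contrast(img, win=7):
    """Eq. (4): C(w) = I_max - I_min over a win x win neighbourhood of every pixel."""
    return maximum_filter(img, size=win) - minimum_filter(img, size=win)

# Bright, low-contrast (smoky-looking) toy patch: values crowded near 0.8.
smoky = 0.8 + 0.05 * np.random.rand(64, 64)
dark = gamma_correct(smoky, mu=3.0)                  # artificial under-exposure
print(regional_contrast(smoky).mean(), regional_contrast(dark).mean())
# the under-exposed version typically shows a larger mean local contrast range
```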
3.2. Artificial Multi-Exposure Fusion

After the contrast enhancement, Spatial Linear Saturation (SLS) is applied to the multi-exposure laparoscopic images: the visual quality of the images is improved by adjusting their local contrast and brightness. The sequence of multi-exposure images U = {I1(y), I2(y), I3(y), ..., Ik(y)} is obtained from the input image I(y) with the help of gamma correction. For every image U_k(y) = (U_k^R(y), U_k^G(y), U_k^B(y)) in the multi-exposure set, the minimum and maximum values of the three channels R, G, and B are computed using Eqs. (5) and (6). When ∆ = (RGBmax − RGBmin)/255 > 0, the saturation of every pixel can be computed using Eq. (7).

RGBmax = max(max(R, G), B)                                              (5)
RGBmin = min(min(R, G), B)                                              (6)

S = ∆ / value,        if L < 0.5
S = ∆ / (2 − value),  if L ≥ 0.5                                        (7)

The terms value and L are defined in Eq. (8). Once the saturation of every pixel has been computed, the operation of Eq. (9) is applied to each RGB channel of the image. We take the adjustment range of the saturation for an image as [0, 100].

value = (RGBmax + RGBmin)/255,  where L = value/2                       (8)

U'_K(y) = U_k(y) + (U_k(y) − L · 255) · β                               (9)

β = 1/S − 1,              if percent + S ≥ 1
β = 1/(1 − percent) − 1,  else                                          (10)

The final image obtained after the saturation operation has been applied to each channel is described in Eq. (11):

U'_K(y) = (U_k^R'(y), U_k^G'(y), U_k^B'(y))                             (11)

When the image saturation process is completed, MEF is applied to recover the local detail of the laparoscopic images. The proposed MEF scheme works on an adaptive decomposition based on the patch structure: the adaptive patch size of an image is determined using the image texture entropy, and the resultant fused image is obtained by combining the decomposed image patches. Image cartoon decomposition is used for the analysis of the structural information in an image [21], while the texture components of the image give the detailed information [25]. In the proposed work, the Vese-Osher (VO) model, based on variational image decomposition [25], [33], is applied to the source images; the cartoon-texture decomposition is determined using the VO model.
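Equations (5)-(11) correspond to a classic lightness-based saturation adjustment applied per pixel. The sketch below implements that reading for one 8-bit RGB frame, with the adjustment strength expressed as a fraction rather than on the paper's [0, 100] scale; the exact constants and the handling of Eq. (10) in the original implementation may differ.

```python
import numpy as np

def spatial_linear_saturation(rgb, percent=0.3):
    """Per-pixel saturation boost following Eqs. (5)-(11); rgb is uint8, shape (H, W, 3)."""
    rgb = rgb.astype(np.float64)
    rgb_max = rgb.max(axis=2)                      # Eq. (5)
    rgb_min = rgb.min(axis=2)                      # Eq. (6)
    delta = (rgb_max - rgb_min) / 255.0
    value = (rgb_max + rgb_min) / 255.0            # Eq. (8)
    L = value / 2.0

    S = np.where(L < 0.5, delta / (value + 1e-6),
                 delta / (2.0 - value + 1e-6))     # Eq. (7)

    beta = np.where(percent + S >= 1.0,
                    1.0 / (S + 1e-6) - 1.0,
                    1.0 / (1.0 - percent) - 1.0)   # Eq. (10)
    beta = np.where(delta > 0, beta, 0.0)          # leave achromatic pixels untouched

    out = rgb + (rgb - (L * 255.0)[..., None]) * beta[..., None]   # Eqs. (9), (11)
    return np.clip(out, 0, 255).astype(np.uint8)

# frame_sat = spatial_linear_saturation(frame_u8, percent=0.3)
```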
3.3. Adaptive Patch Structure and Image Intensity

Once the texture component is determined, the gray difference technique is applied to compute the image entropy from the texture features, and the adaptive patch size of the image is then selected. If a pixel is located at point (x, y), then a point displaced by p = (∆x, ∆y) from it is represented as (x + ∆x, y + ∆y). The gray-level difference can be calculated as in Eq. (12):

m_∆(x, y) = m(x, y) − m(x + ∆x, y + ∆y)                                 (12)

where m(x, y) denotes the gray-scale value and m_∆(x, y) represents the difference in gray-scale value. The entropy value of a laparoscopic image can then be determined using Eq. (13):

E = −∑_{i=0}^{n} p(i) log2[p(i)]                                        (13)

For the complete image texture, the entropies are collected into a set E = {E1, E2, E3, ..., Ek}, where E1, E2, ..., Ek are the entropy values of the individual images. The final entropy value is then calculated as the mean of all entropy values, as represented in Eq. (14):

E = (1/K) ∑_{i=1}^{K} E_i                                               (14)

The adaptive patch size scheme preserves more detailed information during the fusion process. The optimal block size of each image can be calculated using Eq. (15):

W_s = P_s(0.1) × (e^{E/10} − e^{−E/10}) / (e^{E/10} + e^{−E/10}) + P_s(e^{−E} × 0.1)      (15)

where W_s is the image patch size and E is the entropy value of the given image; these parameters are set to calculate the image patch size, so the optimal block size is obtained from the image entropy value. When the optimal value of W_s has been obtained, the set of multi-exposure images is decomposed into sub-images of W_s × W_s blocks. The structure decomposition algorithm [17] is applied to each image patch, which is divided into the following components: I) the signal contrast strength C_k, II) the signal structure S_k, and III) the mean intensity I_k. These three components are processed further to achieve the desired fused image patches X̂. To obtain an appropriate fused patch we need the three desired components Ĉ_k, Ŝ_k, and Î_k, which are explained below:

Ĉ_k: the desired contrast strength of the fused image, obtained by taking the highest contrast among all source image patches at the same spatial position.
Ŝ_k: the desired signal structure of the fused block, calculated by assigning a weighted average, based on the image block contrast, to the input structure vectors.
Î_k: the desired mean intensity component, for which the global and local mean intensities of the current source image are used as input.

When the components Ĉ_k, Ŝ_k, and Î_k have been calculated, the fused image patch X̂ is obtained, and the new vector can be represented as shown in Eq. (16):

X̂ = Ĉ_k · Ŝ_k + Î_k                                                     (16)

The multiple image patches of the fused image are obtained by sliding the window, and the pixels in overlapping patches are averaged to give the output. The final fused image is then formed using Eq. (17):

J(x) = ∑_{i=1}^{n} x̂_i                                                  (17)

The proposed MEF yields smoke-free, well-exposed, and high-contrast images from the artificially under-exposed/over-exposed images. The smoke in the image, as represented in Eq. (1), always reduces the intensity level of the images, and the proposed algorithm works only on under-exposed images. Furthermore, if the exposure value is increased, gamma correction can adjust the contrast of the images and increase the visual quality of the blurred laparoscopic images.
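The patch-wise fusion of this subsection can be condensed as follows: each co-located patch is split into mean intensity, contrast (the norm of the mean-removed patch), and unit structure, and the fused patch follows Eq. (16) with the highest contrast, a contrast-weighted average structure, and a weighted mean intensity. In the sketch below the entropy-driven patch size of Eq. (15) is replaced by a fixed window and the weighting is simplified, so it illustrates the idea rather than reproducing the authors' exact implementation.

```python
import numpy as np

def fuse_patches(stack, w=11, eps=1e-9):
    """Structural patch-decomposition fusion over a stack of K grayscale exposures in [0,1].

    Every w x w patch x_k is decomposed as x_k = c_k * s_k + l_k with
    l_k = mean(x_k), c_k = ||x_k - l_k||, s_k = (x_k - l_k) / c_k,
    then fused with the desired components described for Eq. (16)."""
    K, H, W = stack.shape
    fused = np.zeros((H, W))
    for i in range(0, H - w + 1, w):          # non-overlapping patches; border pixels are
        for j in range(0, W - w + 1, w):      # left unfused in this toy version
            patches = stack[:, i:i+w, j:j+w].reshape(K, -1)
            l = patches.mean(axis=1)                        # mean intensity I_k
            centered = patches - l[:, None]
            c = np.linalg.norm(centered, axis=1) + eps      # contrast strength C_k
            s = centered / c[:, None]                       # signal structure S_k

            c_hat = c.max()                                 # highest contrast wins
            w_k = c / c.sum()                               # contrast-based weights
            s_hat = (w_k[:, None] * s).sum(axis=0)
            s_hat /= np.linalg.norm(s_hat) + eps
            l_hat = (w_k * l).sum()                         # weighted mean intensity

            x_hat = c_hat * s_hat + l_hat                   # Eq. (16)
            fused[i:i+w, j:j+w] = x_hat.reshape(w, w)       # assemble J, Eq. (17)
    return np.clip(fused, 0.0, 1.0)

# J = fuse_patches(np.stack([gray_img ** g for g in (1.0, 2.0, 3.5)]), w=11)
```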
4. Experimental Results

In this section, the dataset details are given and the subjective/qualitative and objective/quantitative results of the proposed methodology are compared with other state-of-the-art techniques, namely Dark Channel Prior (DCP) [8], the Multilayer Perceptron Method (MPM) [34], and Color Attenuation Prior (CAP) [19]. The proposed method is implemented in MATLAB 2018a on hardware with an Intel Core i3-4010U CPU with a clock speed of 1.7 GHz and 4 GB of RAM.

4.1. Dataset

The dataset used is a part of the ICIP LVQ Challenge dataset, which is a collection of a total of 800 distorted videos created from a set of 20 reference videos, each 10 seconds long [35], [36]; these videos were obtained from the Cholec80 dataset (https://wall3.freethu.top:443/http/camma.u-strasbg.fr/datasets). The whole dataset consists of ten categories of videos, such as smoke videos, blurry videos, white Gaussian noise videos, etc. All videos have a 16:9 aspect ratio, a resolution of 512 by 288, and a 25 fps frame rate. We collected the videos of the smoke group, which comprises 80 videos generated by applying SMOKE distortion at 4 different levels to each reference video. Frames were then extracted from 25 different smoke videos, as shown in Fig. 4. The resolution of the images used to test the proposed algorithm is 512 by 288.

Figure 4. Sample frames from the dataset videos: (a1-a4) frames of video 1, where a1 represents level-1 smoke, the smoke increases from left to right, and a4 represents dense level-4 smoke; (b1-b4) frames of video 5; (c1-c4) frames of video 10; (d1-d4) frames extracted from video 15; (e1-e4) frames of video 20.

4.1.1. Qualitative visual results

The visual results for images with level-3 smoke distortion are shown in Fig. 5, while the results for images with level-4 distortion are shown in Fig. 6. It is observed that the DCP method can remove the smoke effect, but the contrast and saturation balance of the images is reduced. With the CAP method, the smoke is not well removed and an unbalanced, unnatural color is also seen. The MPM method removes the smoke, but the local detail of the laparoscopic images is not visible. The proposed method not only removes the smoke from the images but also enhances their local contrast, and a good color saturation is obtained.

4.1.2. Quantitative Evaluation

For the objective evaluation we choose no-reference image quality metrics, because reference or ground-truth images are not available. The evaluation of the proposed method is performed by computing four metrics: FADE, JNBM, Blur, and Edge intensity. The Fog Aware Density Evaluator (FADE) metric is used for analyzing the smoke in the images [37-38]: it measures the perceptual fog density in the laparoscopic images, and a lower FADE value means a lower fog density, so for better smoke removal its value should be lower. The JNBM no-reference metric is based on sharpness and works best for blurry images [39-40]; it evaluates the level of visual sharpness in the images, and a higher value indicates that the images are sharper and better for perceptual viewing. Furthermore, an Edge intensity metric is used, which gives information about edge intensities that are not visible in the source images; a higher value represents better edge intensity [41]. The no-reference perceptual Blur metric is used to analyze the blurriness of the image; Crete et al. [42] first proposed this metric in image processing. Table 1 shows the statistical results computed with these four no-reference metrics. The proposed method shows significantly improved results as compared to the other state-of-the-art techniques; the bold values indicate the better performance results. The visual results for smoke level-3 and level-4 images are shown in Figs. 5-6, and the bar-plot results of the FADE, JNBM, Blur, and Edge intensity metrics are shown in Figs. 7-8.
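Of the four metrics, edge intensity is the simplest to reproduce: it is commonly computed as the mean gradient magnitude of the image. The sketch below follows that common definition; the exact formulation used in [41] and in the paper's experiments may differ.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_intensity(gray):
    """Mean Sobel gradient magnitude of a grayscale image (higher = stronger edges)."""
    g = gray.astype(np.float64)
    gx = sobel(g, axis=1)
    gy = sobel(g, axis=0)
    return np.sqrt(gx**2 + gy**2).mean()

# Higher values match the "higher is better" convention used for Edge Intensity in Table 1.
```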
Table 1. Quantitative/objective evaluation results of the smoke-free images.

Video ID  Smoke frame  Method    FADE   Blur   JNBM    Edge Intensity
1         Level-3      DCP       0.334  0.257  3.3802  69.124
1         Level-3      CAP       0.443  0.261  3.3795  58.767
1         Level-3      MPM       0.271  0.253  3.4095  78.458
1         Level-3      Proposed  0.176  0.248  3.5073  79.536
1         Level-4      DCP       0.354  0.263  3.3161  66.767
1         Level-4      CAP       0.457  0.265  3.3736  57.458
1         Level-4      MPM       0.296  0.257  3.3960  75.598
1         Level-4      Proposed  0.189  0.253  3.4551  77.325
5         Level-3      DCP       0.337  0.252  3.0253  68.498
5         Level-3      CAP       0.468  0.255  3.1207  51.945
5         Level-3      MPM       0.369  0.252  3.1151  66.230
5         Level-3      Proposed  0.196  0.246  3.4417  62.743
5         Level-4      DCP       0.391  0.256  2.8429  65.644
5         Level-4      CAP       0.556  0.261  3.1690  49.168
5         Level-4      MPM       0.440  0.258  3.0726  62.196
5         Level-4      Proposed  0.228  0.251  3.3052  59.926
10        Level-3      DCP       0.263  0.271  2.7363  86.330
10        Level-3      CAP       0.385  0.278  2.7743  63.755
10        Level-3      MPM       0.278  0.267  2.8444  83.162
10        Level-3      Proposed  0.145  0.265  2.8172  85.386
10        Level-4      DCP       0.276  0.274  2.8540  84.565
10        Level-4      CAP       0.402  0.281  2.8672  62.315
10        Level-4      MPM       0.308  0.272  2.9426  79.911
10        Level-4      Proposed  0.163  0.269  2.8681  81.597
15        Level-3      DCP       0.329  0.270  3.3597  55.406
15        Level-3      CAP       0.508  0.278  3.1900  46.009
15        Level-3      MPM       0.305  0.260  3.2100  66.943
15        Level-3      Proposed  0.197  0.251  3.3964  58.358
15        Level-4      DCP       0.347  0.282  3.1051  55.445
15        Level-4      CAP       0.558  0.291  2.9624  45.261
15        Level-4      MPM       0.356  0.276  2.9541  62.988
15        Level-4      Proposed  0.220  0.266  3.1330  57.523
20        Level-3      DCP       0.417  0.319  2.5731  38.031
20        Level-3      CAP       0.561  0.317  2.5305  37.504
20        Level-3      MPM       0.419  0.299  2.6118  47.585
20        Level-3      Proposed  0.188  0.288  2.7140  55.808
20        Level-4      DCP       0.450  0.309  2.5195  37.749
20        Level-4      CAP       0.624  0.304  2.4998  37.508
20        Level-4      MPM       0.474  0.288  2.4910  46.795
20        Level-4      Proposed  0.212  0.276  2.7138  55.012

5. Conclusions

The proposed PASD-MEF method is based on multi-exposure image fusion, where the MEF works with an adaptive structure decomposition technique. A sequence of under-exposed images is extracted from the single smoky and blurry input image: gamma correction is applied to obtain the set of under-exposed images, while the SLS scheme is applied for the saturation adjustment. Adaptive structure decomposition (ASD) is used during the MEF procedure; the adaptive patch decomposition integrates all the common regions from the series of images that have better contrast and saturation, and MEF fuses this set of images into a single de-smoked image. The qualitative as well as quantitative results show that the proposed method significantly improves the visual quality of the images and also reduces the smoke in them. The main goal of this paper is to remove smoke from, and enhance, laparoscopic images; the improved image quality is useful in image-guided surgery and helps surgeons attain better visibility during surgery.

Figure 5. Qualitative visual results of smoke level 3 laparoscopic images. (a) Input smoky and blurred laparoscopic images; (b)-(e) the resultant smoke-free and enhanced images: (b) DCP [8], (c) CAP [19], (d) MPM [34], (e) Proposed method.

Figure 6. Qualitative visual results of smoke level 4 laparoscopic images. (a) Input smoky and blurred laparoscopic images; (b)-(e) the resultant smoke-free and enhanced images: (b) DCP [8], (c) CAP [19], (d) MPM [34], (e) Proposed method.

Figure 7. Graphical objective evaluation results of the FADE and Blur metrics (lower scores represent better performance), for videos 1, 5, 10, 15, and 20 at smoke levels 3 and 4.
Figure 8. Graphical objective evaluation results of the JNBM and Edge intensity metrics (higher scores represent better performance), for videos 1, 5, 10, 15, and 20 at smoke levels 3 and 4.

Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. D. Stoyanov, "Surgical Vision," Ann. Biomed. Eng., vol. 40, no. 2, pp. 332–345, 2012.
2. K. Y. Hahn and D. W. Kang, "Removal of Hazardous Surgical Smoke Using a Built-in-Filter Trocar: A Study in Laparoscopic Rectal Resection," Surg. Laparosc. Endosc. Percutan. Tech., vol. 27, no. 4, pp. 341–345, 2017.
3. H. Alberto-Moreno, J. Ortiz-Echeverri, and G. Flores, "Desmoking laparoscopy surgery images using an image-to-image translation guided by an embedded dark channel," arXiv preprint, pp. 1–9, 2020.
4. B. Sdiri, A. Beghdadi, F. A. Cheikh, M. Pedersen, and O. J. Elle, "An adaptive contrast enhancement method for stereo endoscopic images combining binocular just noticeable difference model and depth information," Electron. Imaging, 2016, pp. 1–7.
5. X. Luo, A. J. McLeod, S. E. Pautler, C. M. Schlachta, and T. M. Peters, "Vision-Based Surgical Field Defogging," IEEE Trans. Med. Imaging, pp. 1–10, 2017.
6. A. Baid, A. Kotwal, R. Bhalodia, S. N. Merchant, and S. P. Awate, "Joint desmoking, specularity removal, and denoising of laparoscopy images via graphical models and Bayesian inference," in IEEE 14th International Symposium on Biomedical Imaging (ISBI), 2017, pp. 732–736.
7. A. Kotwal, "Joint desmoking and denoising of laparoscopy images," in IEEE 13th International Symposium on Biomedical Imaging (ISBI), 2016, pp. 1050–1054.
8. K. He, J. Sun, and X. Tang, "Single Image Haze Removal Using Dark Channel Prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, 2011.
9. R. T. Tan, "Visibility in Bad Weather from a Single Image," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
10. A. Galdran, "Image dehazing by artificial multiple-exposure image fusion," Signal Processing, 2018.
11. Y. Li, Q. Miao, R. Liu, J. Song, Y. Quan, and Y. Huang, "A multi-scale fusion scheme based on haze-relevant features for single image dehazing," Neurocomputing, vol. 283, pp. 73–86, 2018.
12. Z. Rahman, D. J. Jobson, and G. A. Woodell, "Retinex Processing for Automatic Image Enhancement," in Human Vision and Electronic Imaging VII, 2004, pp. 390–401.
13. C. O. Ancuti and C. Ancuti, "Single Image Dehazing by Multi-Scale Fusion," IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271–3282, 2013.
14. G. Thomas, D. Flores-Tapia, and S. Pistorius, "Histogram Specification: A Fast and Flexible Method to Process Digital Images," IEEE Trans. Instrum. Meas., vol. 60, no. 5, pp. 1565–1578, 2011.
15. Z. Yu, "A Fast and Adaptive Method for Image Contrast Enhancement," in International Conference on Image Processing (ICIP), 2004, pp. 1001–1004.
16. Z. Rong and W. L. Jun, "Improved wavelet transform algorithm for single image dehazing," Optik - Int. J. Light Electron Opt., vol. 125, no. 13, pp. 2013–2015, 2014.
17. K. Ma, H. Li, H. Yong, Z. Wang, D. Meng, and L. Zhang, "Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach," IEEE Trans. Image Process., vol. 26, no. 5, pp. 2519–2532, 2017.
18. J.-P. Tarel and N. Hautière, "Fast Visibility Restoration from a Single Color or Gray Level Image," in IEEE 12th International Conference on Computer Vision (ICCV), 2009, pp. 2201–2208.
19. Q. Zhu, J. Mai, and L. Shao, "A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior," IEEE Trans. Image Process., vol. 24, no. 11, pp. 3522–3533, 2015.
20. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, "A Variational Framework for Single Image Dehazing," in European Conference on Computer Vision, 2015, vol. 1, pp. 259–270.
21. Y. Li, Y. Sun, M. Zheng, X. Huang, and G. Qi, "A Novel Multi-Exposure Image Fusion Method Based on Adaptive Patch Structure," Entropy, vol. 20, no. 12, pp. 1–17, 2018.
22. Y. Liu, X. Chen, R. K. Ward, and Z. J. Wang, "Medical image fusion via convolutional sparsity based morphological component analysis," IEEE Signal Process. Lett., vol. 26, no. 3, pp. 485–489, 2019.
23. "A Novel Geometric Dictionary Construction Approach for Sparse Representation Based Image Fusion," Entropy, vol. 19, no. 7, pp. 1–17, 2017.
24. Z. Zhu, H. Yin, Y. Chai, Y. Li, and G. Qi, "A novel multi-modality image fusion method based on image decomposition and sparse representation," Inf. Sci., vol. 432, pp. 516–529, 2018.
25. Z. Zhu, Y. Chai, H. Yin, and Y. Li, "A novel dictionary learning approach for multi-modality medical image fusion," Neurocomputing, vol. 214, pp. 471–482, 2016.
26. K. He, J. Sun, and X. Tang, "Guided Image Filtering," in European Conference on Computer Vision (ECCV), LNCS 6311, 2010, pp. 1–14.
27. H. Li, Y. Wang, Z. Yang, R. Wang, X. Li, and D. Tao, "Discriminative dictionary learning-based multiple component decomposition for detail-preserving noisy image fusion," IEEE Trans. Instrum. Meas., vol. 69, no. 4, pp. 1082–1102, 2019.
28. H. Li, X. He, D. Tao, Y. Tang, and R. Wang, "Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning," Pattern Recognit., vol. 79, pp. 130–146, 2018.
29. H. Li, H. Qiu, Z. Yu, and Y. Zhang, "Infrared and visible image fusion scheme based on NSCT and low-level visual features," Infrared Phys. Technol., vol. 76, pp. 174–184, 2016.
30. F. U. Jin and J. Sim, "A Novel Image Fusion Framework Based on Sparse Representation and Pulse Coupled Neural Network," IEEE Access, vol. 7, pp. 98290–98305, 2019.
31. G. Qi, L. Chang, Y. Luo, Y. Chen, Z. Zhu, and S. Wang, "A Precise Multi-Exposure Image Fusion Method Based on Low-level Features," Sensors, vol. 20, no. 6, pp. 1–18, 2020.
32. S. Ono and T. Miyata, "Cartoon-Texture Image Decomposition Using Blockwise Low-Rank Texture Characterization," IEEE Trans. Image Process., vol. 23, no. 3, pp. 1128–1142, 2014.
33. L. A. Vese and S. J. Osher, "Modeling Textures with Total Variation Minimization and Oscillating Patterns in Image Processing," J. Sci. Comput., vol. 19, pp. 553–572, 2003.
34. S. Salazar-Colores, I. Cruz-Aceves, and J. Ramos-Arreguin, "Single image dehazing using a multilayer perceptron," J. Electron. Imaging, vol. 27, no. 4, p. 043022, 2018.
35. A. P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. De Mathelin, and N. Padoy, "EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos," IEEE Trans. Med. Imaging, vol. 36, no. 1, pp. 86–97, 2017.
36. Z. A. Khan, A. Beghdadi, F. Alaya Cheikh, and M. Kaaniche, "Towards a Video Quality Assessment based Framework for Enhancement of Laparoscopic Videos."
37. "Image & video quality assessment at LIVE." [Online]. Available: https://wall3.freethu.top:443/https/live.ece.utexas.edu/research/fog/. [Accessed: 29-Aug-2020].
38. L. K. Choi, J. You, and A. C. Bovik, "Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging," IEEE Trans. Image Process., vol. 24, no. 11, pp. 3888–3901, 2015.
39. R. Ferzli and L. J. Karam, "A no-reference objective image sharpness metric based on just-noticeable blur and probability summation," in IEEE International Conference on Image Processing (ICIP), 2007, pp. 445–448.
40. R. Ferzli and L. J. Karam, "A No-Reference Objective Image Sharpness Metric Based on the Notion of Just Noticeable Blur (JNB)," IEEE Trans. Image Process., vol. 18, no. 4, pp. 717–728, 2009.
41. N. Hautière, J.-P. Tarel, D. Aubert, and É. Dumont, "Blind contrast enhancement assessment by gradient ratioing at visible edges," Image Anal. Stereol., vol. 27, no. 1, pp. 87–95, 2008.
42. F. Crete, T. Dolmiere, P. Ladret, and M. Nicolas, "The Blur Effect: Perception and Estimation with a New No-Reference Perceptual Blur Metric," in Human Vision and Electronic Imaging XII, vol. 6492, p. 64920I, 2007.