Search Results (9)

Search Parameters:
Keywords = K-CGAN

20 pages, 10237 KiB  
Article
Conditional Generative Adversarial Networks with Optimized Machine Learning for Fault Detection of Triplex Pump in Industrial Digital Twin
by Amged Sayed, Samah Alshathri and Ezz El-Din Hemdan
Processes 2024, 12(11), 2357; https://doi.org/10.3390/pr12112357 - 27 Oct 2024
Cited by 1 | Viewed by 960
Abstract
In recent years, digital twin (DT) technology has garnered significant interest from both academia and industry. However, the development of effective fault detection and diagnosis models remains challenging due to the lack of comprehensive datasets. To address this issue, we propose the use of Generative Adversarial Networks (GANs) to generate synthetic data that replicate real-world data, capturing essential features indicative of health-related information without directly referencing actual industrial DT systems. This paper introduces an intelligent fault detection and diagnosis framework for industrial triplex pumps, enhancing fault recognition capabilities and offering a robust solution for real-time industrial applications within the DT paradigm. The proposed framework leverages Conditional GANs (CGANs) alongside Harris Hawk Optimization (HHO), a metaheuristic method, to optimize feature selection from the input data and thereby enhance the performance of machine learning (ML) models such as Bagged Ensemble (BE), AdaBoost (AD), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT), and Naive Bayes (NB). The efficacy of the approach is evaluated using key performance metrics, namely accuracy, precision, recall, and F-measure, on a triplex pump dataset. Experimental results indicate that the hybrid-optimized ML algorithms (denoted "ML-HHO") generally outperform or match their classical counterparts across these metrics, highlighting the framework's effectiveness for real-time fault detection in DT systems. BE-HHO achieves the highest accuracy at 95.24%. SVM-HHO attains 94.86% accuracy, marginally higher than SVM's 94.48%; KNN-HHO outperforms KNN with 94.73% accuracy compared to 93.14%; DT-HHO and DT both reach 94.73% accuracy, with DT-HHO exhibiting slightly better precision and recall; and NB-HHO and NB show near-equivalent performance, at 94.73% versus 94.6% accuracy. Overall, the optimized algorithms demonstrate consistent, albeit marginal, improvements over their classical versions.
(This article belongs to the Special Issue Fault Diagnosis Process and Evaluation in Systems Engineering)
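The HHO step above is a wrapper-style feature selection: candidate feature subsets are scored by the downstream classifier's accuracy and the best-scoring subset is kept. The sketch below illustrates only that wrapper loop; the hawk-inspired besiege/escape update equations of real HHO are replaced by random bit-flip mutation, and the synthetic two-class data is a hypothetical stand-in for the triplex pump dataset, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 2 informative + 4 noise features.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 3.0 * y          # informative feature
X[:, 1] -= 3.0 * y          # informative feature

def fitness(mask):
    """Nearest-centroid accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float((pred == y).mean())

# Wrapper search over binary feature masks. Genuine HHO moves a population of
# candidates with hawk-inspired update rules; random mutation of the current
# best mask is used here purely to illustrate the select-evaluate-keep loop.
best_mask = rng.random(6) < 0.5
best_fit = fitness(best_mask)
for _ in range(100):
    cand = best_mask ^ (rng.random(6) < 0.2)  # flip ~20% of the bits
    f = fitness(cand)
    if f > best_fit:
        best_mask, best_fit = cand, f
```

After the loop, `best_mask` marks the retained features and `best_fit` the wrapper accuracy; in the paper's framework this subset would then feed the BE/SVM/KNN/DT/NB classifiers.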

18 pages, 6007 KiB  
Article
Instantaneous Extraction of Indoor Environment from Radar Sensor-Based Mapping
by Seonmin Cho, Seungheon Kwak and Seongwook Lee
Remote Sens. 2024, 16(3), 574; https://doi.org/10.3390/rs16030574 - 2 Feb 2024
Viewed by 1614
Abstract
In this paper, we propose a method for extracting the structure of an indoor environment using radar. When radar is used in an indoor environment, ghost targets are observed due to the multipath propagation of radio waves. These ghost targets obstruct accurate mapping of the indoor environment and consequently hinder extraction of its structure. We therefore propose a deep learning-based method that uses image-to-image translation to extract the structure of the indoor environment by removing ghost targets from the indoor environment map. The proposed method employs a conditional generative adversarial network (CGAN) comprising a U-Net-based generator and a patch-GAN-based discriminator. By repeatedly judging whether the generated indoor environment structure is real or fake, the CGAN ultimately returns a structure similar to the real environment. First, we generate a map of the indoor environment using radar, which includes ghost targets. Next, the structure of the indoor environment is extracted from the map using the proposed method. Then, using the structural similarity index and structural content as evaluation metrics, we compare the proposed method with the k-nearest neighbors algorithm, the Hough transform, and a density-based spatial clustering of applications with noise (DBSCAN)-based environment extraction method. In this comparison, the proposed method extracts a more accurate environment without requiring parameter adjustments, even when the environment changes.
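A U-Net generator paired with a patch-based discriminator is usually trained with a pix2pix-style objective: a per-patch adversarial term plus a weighted L1 term that keeps the translated map close to the clean target. The sketch below shows only that loss formulation; the λ weight of 100 and the patch-grid shape are the common pix2pix defaults, assumed here rather than taken from the paper.

```python
import numpy as np

LAMBDA = 100.0  # weight on the L1 term; the pix2pix default, assumed here

def bce(pred, target):
    """Binary cross-entropy over a patch grid of discriminator scores."""
    pred = np.clip(pred, 1e-7, 1.0 - 1e-7)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def generator_loss(d_patch_scores, fake_img, real_img):
    """Adversarial term (fool D on every patch) plus L1 term (match the clean map)."""
    adv = bce(d_patch_scores, np.ones_like(d_patch_scores))
    l1 = float(np.abs(fake_img - real_img).mean())
    return adv + LAMBDA * l1
```

Because the discriminator scores a grid of local patches rather than the whole image, the adversarial term pushes every local region of the de-ghosted map toward realistic structure.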

11 pages, 2472 KiB  
Article
Synthetic Megavoltage Cone Beam Computed Tomography Image Generation for Improved Contouring Accuracy of Cardiac Pacemakers
by Hana Baroudi, Xinru Chen, Wenhua Cao, Mohammad D. El Basha, Skylar Gay, Mary Peters Gronberg, Soleil Hernandez, Kai Huang, Zaphanlene Kaffey, Adam D. Melancon, Raymond P. Mumme, Carlos Sjogreen, January Y. Tsai, Cenji Yu, Laurence E. Court, Ramiro Pino and Yao Zhao
J. Imaging 2023, 9(11), 245; https://doi.org/10.3390/jimaging9110245 - 8 Nov 2023
Viewed by 2140
Abstract
In this study, we aimed to enhance the contouring accuracy of cardiac pacemakers by improving their visualization using deep learning models to predict MV CBCT images based on kV CT or CBCT images. Ten pacemakers and four thorax phantoms were included, creating a total of 35 combinations. Each combination was imaged on a Varian Halcyon (kV/MV CBCT images) and a Siemens SOMATOM CT scanner (kV CT images). Two generative adversarial network (GAN)-based models, cycleGAN and conditional GAN (cGAN), were trained to generate synthetic MV (sMV) CBCT images from kV CT/CBCT images using twenty-eight datasets (80%). The pacemakers in the sMV CBCT images and original MV CBCT images were manually delineated and reviewed by three users. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to compare contour accuracy. Visual inspection showed improved visualization of pacemakers on sMV CBCT images compared to the original kV CT/CBCT images. Moreover, cGAN demonstrated superior performance in enhancing pacemaker visualization compared to cycleGAN. Using the cGAN model, the mean DSC, HD95, and MSD for contours on sMV CBCT images generated from kV CT/CBCT images were 0.91 ± 0.02/0.92 ± 0.01, 1.38 ± 0.31 mm/1.18 ± 0.20 mm, and 0.42 ± 0.07 mm/0.36 ± 0.06 mm, respectively. Deep learning-based methods, specifically cycleGAN and cGAN, can effectively enhance the visualization of pacemakers in thorax kV CT/CBCT images, thereby improving the contouring precision of these devices.
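The Dice similarity coefficient used above to score the pacemaker contours has a simple closed form: twice the overlap of two binary masks divided by their total size. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

Two equal-sized masks that overlap on half their voxels score 0.5; identical masks score 1.0, matching the ~0.9 values reported above as high agreement.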

15 pages, 3567 KiB  
Article
End-to-End Deep Learning of Joint Geometric Probabilistic Shaping Using a Channel-Sensitive Autoencoder
by Yuzhe Li, Huan Chang, Ran Gao, Qi Zhang, Feng Tian, Haipeng Yao, Qinghua Tian, Yongjun Wang, Xiangjun Xin, Fu Wang and Lan Rao
Electronics 2023, 12(20), 4234; https://doi.org/10.3390/electronics12204234 - 13 Oct 2023
Cited by 4 | Viewed by 1578
Abstract
In this paper, we propose an innovative channel-sensitive autoencoder (CSAE)-aided end-to-end deep learning (E2EDL) technique for joint geometric probabilistic shaping. A pretrained conditional generative adversarial network (CGAN) is introduced into the CSAE to serve as a differentiable substitute for the optical fiber channel model under variable input optical power (IOP) levels. This enables the CSAE-aided E2EDL to design optimal joint geometric probabilistic shaping schemes for optical fiber communication systems at varying IOPs. The results of the proposed CSAE-aided E2EDL technique show that for a dual-polarization 64-Gbaud signal with a transmission distance of 5 × 80 km, when the modulation format is 64-quadrature amplitude modulation (QAM) or 128-QAM, the maximum generalized mutual information (GMI) level learned via CSAE-aided E2EDL is 5.9826 or 6.8384 bits/symbol under varying IOPs, respectively. In addition, the pretrained CGAN, as a substitute for the optical fiber transmission model, accurately characterizes the distortion of signals with different IOPs, with an average bit error ratio (BER) difference of only 1.83%, an average mean square error (MSE) of 0.0041, and an average K-L divergence of 0.0046. In summary, this paper delivers new insights into the application of E2EDL and demonstrates the feasibility of joint geometric probabilistic shaping-based E2EDL for fiber optic communication systems with varying IOPs.
(This article belongs to the Special Issue High-Speed Optical Communication and Information Processing)
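Probabilistic shaping, one half of the joint scheme above, assigns a non-uniform prior over constellation points so that low-energy points are transmitted more often, trading entropy (rate) for average power. The paper learns this distribution end-to-end; the sketch below instead uses the classical closed-form Maxwell-Boltzmann family on 8-ASK amplitudes as an illustrative baseline, with the shaping parameter ν chosen arbitrarily.

```python
import numpy as np

def maxwell_boltzmann(points, nu):
    """MB prior over constellation points; larger nu favors low-energy points."""
    w = np.exp(-nu * np.abs(points) ** 2)
    return w / w.sum()

points = np.array([-7., -5., -3., -1., 1., 3., 5., 7.])  # 8-ASK amplitudes
p = maxwell_boltzmann(points, nu=0.05)                   # nu is an assumed value
entropy = float(-(p * np.log2(p)).sum())     # shaped rate, bits/symbol (< 3 bits)
avg_power = float((p * points ** 2).sum())   # vs. 21.0 under the uniform prior
```

The trade-off is visible directly: relative to the uniform prior, the shaped distribution lowers average transmit power at the cost of a slightly reduced symbol entropy.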

11 pages, 5290 KiB  
Proceeding Paper
Quality-Aware Conditional Generative Adversarial Networks for Precipitation Nowcasting
by Jahnavi Jonnalagadda and Mahdi Hashemi
Eng. Proc. 2023, 39(1), 11; https://doi.org/10.3390/engproc2023039011 - 28 Jun 2023
Cited by 1 | Viewed by 1056
Abstract
Accurate precipitation forecasting is essential for emergency management, aviation, and marine agencies to prepare for potential weather impacts. However, traditional radar echo extrapolation has limitations in capturing sudden weather changes caused by convective systems. Deep learning models, an alternative to radar echo extrapolation, have shown promise in precipitation nowcasting. However, the quality of the forecasted radar images deteriorates as the forecast lead time increases when models are trained with mean absolute error (MAE, a.k.a. L1) or mean squared error (MSE, a.k.a. L2) losses, which do not consider the perceptual quality of the image, such as edge sharpness, texture, and contrast. To improve the quality of the forecasted radar images, we propose using the Structural Similarity (SSIM) metric as a regularization term in the Conditional Generative Adversarial Network (CGAN) objective function. Our experiments on satellite images over the region 83° W–76.5° W and 33° N–40° N in 2020 show that the CGAN model trained with both L1 and SSIM regularization outperforms CGAN models trained with L1, L2, or SSIM alone. Moreover, the forecast accuracy of CGAN is compared with other state-of-the-art models, such as U-Net and Persistence. Persistence assumes that rainfall remains constant for the next few hours, resulting in higher forecast accuracies for shorter lead times (i.e., <2 h) as measured by the critical success index (CSI), probability of detection (POD), and Heidke skill score (HSS). In contrast, CGAN trained with L1 and SSIM regularization achieves higher CSI, POD, and HSS for lead times greater than 2 h and higher SSIM for all lead times.
(This article belongs to the Proceedings of The 9th International Conference on Time Series and Forecasting)
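The SSIM regularization described above combines a pixel-wise L1 term with a structural term 1 − SSIM. The sketch below uses a single-window (global) SSIM rather than the windowed metric used in practice, and the weight α on the structural term is an assumed value, not the paper's.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM; the practical metric averages this over local windows."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def gen_loss(fake, real, alpha=0.5):
    """Sketch of an L1 + alpha * (1 - SSIM) objective; alpha is an assumed weight."""
    return float(np.abs(fake - real).mean() + alpha * (1.0 - ssim_global(fake, real)))
```

Unlike L1 or L2 alone, the 1 − SSIM term penalizes loss of local contrast and structure, which is what keeps long-lead-time forecasts from blurring.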

18 pages, 3279 KiB  
Article
A Machine Learning-Based Model for Flight Turbulence Identification Using LiDAR Data
by Zibo Zhuang, Hui Zhang, Pak-Wai Chan, Hongda Tai and Zheng Deng
Atmosphere 2023, 14(5), 797; https://doi.org/10.3390/atmos14050797 - 27 Apr 2023
Cited by 2 | Viewed by 1808
Abstract
To address the imbalanced proportions of data category samples in the velocity structure function of the LiDAR turbulence identification model, we propose a flight turbulence identification model utilizing both a conditional generative adversarial network (CGAN) and extreme gradient boosting (XGBoost). This model can fully learn from small- and medium-sized turbulence samples, reduce the false alarm rate, improve robustness, and maintain stability. Model training involves constructing a balanced dataset by generating samples that conform to the original data distribution via the CGAN. Subsequently, the XGBoost model is iteratively trained on the sample set to obtain the flight turbulence classification level. Experiments show that the turbulence recognition accuracy achieved on the CGAN-generated augmented sample set improves by 15%. Additionally, when incorporating LiDAR-obtained wind field data, the XGBoost model outperforms traditional classification algorithms such as k-nearest neighbors, support vector machines, and random forests by 14%, 8%, and 5%, respectively, confirming its effectiveness for turbulence classification. Moreover, a comparative analysis against Zhongchuan Airport flight crew reports showed that the model achieved a 78% turbulence identification accuracy, indicating enhanced recognition ability under data-imbalanced conditions. In conclusion, our CGAN/XGBoost model effectively addresses the class imbalance issue.
(This article belongs to the Section Meteorology)
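The balancing step above samples synthetic minority-class examples from a CGAN conditioned on the class label until the class counts match. As a minimal stand-in for the generative model, the sketch below fits a per-class Gaussian to the minority class and samples from it; the data is synthetic and the 9:1 imbalance ratio is an assumption for illustration.

```python
import numpy as np

def balance(X, y, minority_label, rng):
    """Append synthetic minority samples until both classes have equal counts.

    A fitted Gaussian stands in for the CGAN sampler used in the paper.
    """
    Xm = X[y == minority_label]
    n_needed = int((y != minority_label).sum()) - len(Xm)
    mu, sigma = Xm.mean(axis=0), Xm.std(axis=0) + 1e-9
    synth = rng.normal(mu, sigma, size=(n_needed, X.shape[1]))
    return (np.vstack([X, synth]),
            np.concatenate([y, np.full(n_needed, minority_label)]))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)  # 9:1 imbalance, like rare strong-turbulence events
Xb, yb = balance(X, y, minority_label=1, rng=rng)
```

The classifier (XGBoost in the paper) is then trained on the balanced `(Xb, yb)` rather than the skewed original set.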

16 pages, 2594 KiB  
Article
A Deep Learning Framework for Cardiac MR Under-Sampled Image Reconstruction with a Hybrid Spatial and k-Space Loss Function
by Walid Al-Haidri, Igor Matveev, Mugahed A. Al-antari and Mikhail Zubkov
Diagnostics 2023, 13(6), 1120; https://doi.org/10.3390/diagnostics13061120 - 15 Mar 2023
Cited by 1 | Viewed by 2654
Abstract
Magnetic resonance imaging (MRI) is an efficient, non-invasive diagnostic imaging tool for a variety of disorders. In modern MRI systems, the scanning procedure is time-consuming, which leads to problems with patient comfort and causes motion artifacts. Accelerated or parallel MRI has the potential to minimize patient stress as well as reduce scanning time and medical costs. In this paper, a new deep learning MR image reconstruction framework is proposed to provide more accurate reconstructed MR images when under-sampled or aliased images are generated. The proposed reconstruction model is based on conditional generative adversarial networks (CGANs), where the generator is designed as an encoder–decoder U-Net. A hybrid spatial and k-space loss function is also proposed to improve the reconstructed image quality by minimizing the L1-distance in both the spatial and frequency domains simultaneously. The proposed reconstruction framework is directly compared when CGAN and U-Net are adopted individually, using the proposed hybrid loss function against the conventional L1-norm. Finally, the framework with the extended loss function is evaluated against the traditional SENSE reconstruction technique using the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) metrics. To fine-tune and evaluate the proposed methodology, the public multi-coil k-space OCMR dataset for cardiovascular MR imaging is used. The proposed framework achieves better image reconstruction quality than SENSE, improving PSNR by 6.84 and 9.57 when U-Net and CGAN are used, respectively, while the SSIM of the reconstructed MR images is comparable to that provided by the SENSE algorithm in both cases. Likewise, when the proposed hybrid loss function is used instead of the simple L1-norm, reconstruction performance improves by 6.84 and 9.57 for U-Net and CGAN, respectively. In conclusion, the proposed framework using CGAN provides the best reconstruction performance compared with U-Net or the conventional SENSE reconstruction technique, and it appears useful for practical cardiac image reconstruction, providing better image quality in terms of SSIM and PSNR.
(This article belongs to the Special Issue Artificial Intelligence Advances for Medical Computer-Aided Diagnosis)
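The hybrid loss described above penalizes the L1 distance in two domains at once: the image domain and k-space, reached via a 2-D Fourier transform. A minimal sketch, with the domain weighting λ an assumed value rather than the paper's:

```python
import numpy as np

def hybrid_loss(recon, target, lam=0.5):
    """L1 distance in the image domain plus L1 distance in k-space (via 2-D FFT).

    lam is an assumed weighting; the paper minimizes both domains simultaneously.
    """
    spatial = np.abs(recon - target).mean()
    kspace = np.abs(np.fft.fft2(recon) - np.fft.fft2(target)).mean()
    return float(spatial + lam * kspace)
```

Because MR data is acquired (and under-sampled) in k-space, the frequency-domain term penalizes exactly the components the sampling pattern corrupts, which a purely spatial L1 loss cannot target directly.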

27 pages, 11471 KiB  
Article
Improving Classification Performance in Credit Card Fraud Detection by Using New Data Augmentation
by Emilija Strelcenia and Simant Prakoonwit
AI 2023, 4(1), 172-198; https://doi.org/10.3390/ai4010008 - 31 Jan 2023
Cited by 29 | Viewed by 10077
Abstract
In many industrialized and developing nations, credit cards are one of the most widely used methods of payment for online transactions. The invention of the credit card has streamlined, facilitated, and enhanced internet transactions. It has, however, also given criminals more opportunities to commit fraud, which has raised the fraud rate. Credit card fraud has a concerning global impact; many businesses and ordinary users have lost millions of US dollars as a result. Given the large number of transactions, many businesses and organizations rely heavily on machine learning techniques to automatically classify or identify fraudulent transactions. As the performance of machine learning techniques depends greatly on the quality of the training data, imbalance in the data is not a trivial issue: in general, only a small percentage of transactions present in the data are fraudulent, which greatly affects the performance of machine learning classifiers. To deal with the rarity of fraudulent occurrences, this paper investigates a variety of data augmentation techniques to address the imbalanced data problem and introduces a new data augmentation model, K-CGAN, for credit card fraud detection. A number of the main classification techniques are then used to evaluate the performance of the augmentation techniques. The results show that B-SMOTE, K-CGAN, and SMOTE have the highest Precision and Recall compared with other augmentation methods; among those, K-CGAN has the highest F1 Score and Accuracy.
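SMOTE, one of the augmentation baselines compared against K-CGAN above, creates each synthetic minority point by interpolating between a minority sample and one of its k nearest minority neighbors. The sketch below implements that core idea directly (brute-force neighbor search, k assumed to be the common default of 5, random synthetic minority data).

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Interpolate between each chosen minority sample and a random one of its
    k nearest minority neighbors to create n_new synthetic samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]            # k nearest, excluding self
    idx = rng.integers(0, len(X_min), n_new)          # base sample per new point
    nbr = nn[idx, rng.integers(0, nn.shape[1], n_new)]
    gap = rng.random((n_new, 1))                      # interpolation factor in [0, 1)
    return X_min[idx] + gap * (X_min[nbr] - X_min[idx])

rng = np.random.default_rng(2)
X_min = rng.normal(size=(20, 4))   # hypothetical minority (fraud) feature vectors
synth = smote(X_min, n_new=30, rng=rng)
```

Every synthetic point lies on a segment between two real minority points, which is both SMOTE's strength (plausible samples) and the limitation that generative approaches like K-CGAN aim to overcome.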

16 pages, 7209 KiB  
Article
Adversarial Resolution Enhancement for Electrical Capacitance Tomography Image Reconstruction
by Wael Deabes, Alaa E. Abdel-Hakim, Kheir Eddine Bouazza and Hassan Althobaiti
Sensors 2022, 22(9), 3142; https://doi.org/10.3390/s22093142 - 20 Apr 2022
Cited by 18 | Viewed by 2682
Abstract
High-quality image reconstruction is essential for many electrical capacitance tomography (ECT) applications. In the literature, raw capacitance measurements are used to generate low-resolution images; however, such low-resolution images are not sufficient for the proper functionality of most systems. In this paper, we propose a novel adversarial resolution enhancement (ARE-ECT) model to reconstruct high-resolution images of inner distributions from low-quality initial images generated from the capacitance measurements. The proposed model uses a U-Net as the generator of a conditional generative adversarial network (CGAN). The generator's input is the low-resolution image rather than the typical random signal, and the CGAN is additionally conditioned on the input low-resolution image itself. For evaluation purposes, a massive ECT dataset of 320 K synthetic image–measurement pairs was created and used for training, validating, and testing the proposed model. New flow patterns, not exposed to the model during the training phase, are used to evaluate the feasibility and generalization ability of the ARE-ECT model. The evaluation results show that ARE-ECT generates more accurate ECT images than traditional and other deep learning-based image reconstruction algorithms, achieving an average image correlation coefficient of more than 98.8% and an average relative image error of about 0.1%.
(This article belongs to the Section Sensing and Imaging)
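The two quality metrics reported above have standard closed forms: the image correlation coefficient is the Pearson correlation of pixel values between the reconstruction and the ground truth, and the relative image error is a normalized L2 distance. A minimal sketch:

```python
import numpy as np

def image_corr(g, g_hat):
    """Image correlation coefficient (Pearson correlation of pixel values)."""
    gm, hm = g - g.mean(), g_hat - g_hat.mean()
    return float((gm * hm).sum() / np.sqrt((gm ** 2).sum() * (hm ** 2).sum()))

def relative_error(g, g_hat):
    """Relative image error: ||g_hat - g|| / ||g||."""
    return float(np.linalg.norm(g_hat - g) / np.linalg.norm(g))
```

A correlation above 98.8% with a relative error near 0.1%, as reported, means the reconstruction tracks the true distribution almost pixel-for-pixel.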
