Search Results (1,081)

Search Parameters:
Keywords = computer-aided diagnosis

19 pages, 2250 KiB  
Article
Training State-of-the-Art Deep Learning Algorithms with Visible and Extended Near-Infrared Multispectral Images of Skin Lesions for the Improvement of Skin Cancer Diagnosis
by Laura Rey-Barroso, Meritxell Vilaseca, Santiago Royo, Fernando Díaz-Doutón, Ilze Lihacova, Andrey Bondarenko and Francisco J. Burgos-Fernández
Abstract
An estimated 60,000 people die annually from skin cancer, predominantly melanoma. The diagnosis of skin lesions primarily relies on visual inspection, but around half of lesions pose diagnostic challenges, often necessitating a biopsy. Non-invasive detection methods like Computer-Aided Diagnosis (CAD) using Deep Learning (DL) are becoming more prominent. This study focuses on the use of multispectral (MS) imaging to improve skin lesion classification by DL models. We trained two convolutional neural networks (CNNs)—a simple CNN with six two-dimensional (2D) convolutional layers and a custom VGG-16 model with three-dimensional (3D) convolutional layers—using a dataset of MS images. The dataset included spectral cubes from 327 nevi, 112 melanomas, and 70 basal cell carcinomas (BCCs). We compared the performance of the CNNs trained with full spectral cubes versus using only three spectral bands closest to RGB wavelengths. The custom VGG-16 model achieved a classification accuracy of 71% with full spectral cubes and 45% with RGB-simulated images. The simple CNN achieved an accuracy of 83% with full spectral cubes and 36% with RGB-simulated images, demonstrating the added value of spectral information. These results confirm that MS imaging provides complementary information beyond traditional RGB images, contributing to improved classification performance. Although the dataset size remains a limitation, the findings indicate that MS imaging has significant potential for enhancing skin lesion diagnosis, paving the way for further advancements as larger datasets become available.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
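
The full-cube-versus-RGB comparison described above can be illustrated with a small helper that picks, from a spectral cube, the three bands nearest nominal R, G, and B wavelengths. This is a minimal NumPy sketch; the target wavelengths and band layout are illustrative assumptions, not the authors' values.

```python
import numpy as np

def select_rgb_bands(cube, band_wavelengths, rgb_targets=(620, 545, 460)):
    """Pick the three spectral bands nearest nominal R, G, B wavelengths (nm).

    cube: (H, W, B) spectral cube; band_wavelengths: length-B sequence in nm.
    rgb_targets are illustrative nominal wavelengths, not the paper's values.
    """
    wl = np.asarray(band_wavelengths, dtype=float)
    # index of the band closest to each target wavelength
    idx = [int(np.argmin(np.abs(wl - t))) for t in rgb_targets]
    return cube[..., idx], idx
```

Training the same CNN on `cube` versus on the three-band subset is what isolates the contribution of the extra spectral information.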

28 pages, 3337 KiB  
Article
Lung and Colon Cancer Classification Using Multiscale Deep Features Integration of Compact Convolutional Neural Networks and Feature Selection
by Omneya Attallah
Abstract
The automated and precise classification of lung and colon cancer from histopathological images continues to pose a significant challenge in medical diagnosis, as current computer-aided diagnosis (CAD) systems are frequently constrained by their dependence on singular deep learning architectures, elevated computational complexity, and their ineffectiveness in utilising multiscale features. To this end, the present research introduces a CAD system that integrates several lightweight convolutional neural networks (CNNs) with dual-layer feature extraction and feature selection to overcome the aforementioned constraints. Initially, it extracts deep attributes from two separate layers (pooling and fully connected) of three pre-trained CNNs (MobileNet, ResNet-18, and EfficientNetB0). Second, the system uses the benefits of canonical correlation analysis for dimensionality reduction in pooling layer attributes to reduce complexity. In addition, it integrates the dual-layer features to encapsulate both high- and low-level representations. Finally, to benefit from multiple deep network architectures while reducing classification complexity, the proposed CAD merges dual deep layer variables of the three CNNs and then applies the analysis of variance (ANOVA) and Chi-Squared for the selection of the most discriminative features from the integrated CNN architectures. The CAD is assessed on the LC25000 dataset leveraging eight distinct classifiers, encompassing various Support Vector Machine (SVM) variants, Decision Trees, Linear Discriminant Analysis, and k-nearest neighbours. The experimental results exhibited outstanding performance, attaining 99.8% classification accuracy with cubic SVM classifiers employing merely 50 ANOVA-selected features, exceeding the performance of individual CNNs while markedly diminishing computational complexity. The framework’s capacity to sustain exceptional accuracy with a limited feature set renders it especially advantageous for clinical applications where diagnostic precision and efficiency are critical. These findings confirm the efficacy of the multi-CNN, multi-layer methodology in enhancing cancer classification precision while mitigating the computational constraints of current systems.
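
The ANOVA-based selection step the abstract describes can be sketched by computing a one-way ANOVA F statistic per feature and keeping the top k. This is a simplified NumPy stand-in for the paper's pipeline, not its actual code.

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic per feature (column of X) across classes in y."""
    classes = np.unique(y)
    n, p = X.shape
    grand = X.mean(axis=0)
    ss_between = np.zeros(p)
    ss_within = np.zeros(p)
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_between = len(classes) - 1
    df_within = n - len(classes)
    return (ss_between / df_between) / (ss_within / df_within)

def top_k_features(X, y, k=50):
    """Indices of the k features with the largest F statistics."""
    scores = anova_f_scores(X, y)
    return np.argsort(scores)[::-1][:k]
```

With features concatenated from several CNN layers, keeping only the 50 highest-scoring columns is what shrinks the classifier's input as described above.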

20 pages, 5206 KiB  
Article
Explainable AI for Bipolar Disorder Diagnosis Using Hjorth Parameters
by Mehrnaz Saghab Torbati, Ahmad Zandbagleh, Mohammad Reza Daliri, Amirmasoud Ahmadi, Reza Rostami and Reza Kazemi
Diagnostics 2025, 15(3), 316; https://doi.org/10.3390/diagnostics15030316 - 29 Jan 2025
Abstract
Background: Despite the prevalence and severity of bipolar disorder (BD), current diagnostic approaches remain largely subjective. This study presents an automatic diagnostic framework using electroencephalography (EEG)-derived Hjorth parameters (activity, mobility, and complexity), aiming to establish objective neurophysiological markers for BD detection and provide insights into its underlying neural mechanisms. Methods: Using resting-state eyes-closed EEG data collected from 20 BD patients and 20 healthy controls (HCs), we developed a novel diagnostic approach based on Hjorth parameters extracted across multiple frequency bands. We employed a rigorous leave-one-subject-out cross-validation strategy to ensure robust, subject-independent assessment, combined with explainable artificial intelligence (XAI) to identify the most discriminative neural features. Results: Our approach achieved remarkable classification accuracy (92.05%), with the activity Hjorth parameters from beta and gamma frequency bands emerging as the most discriminative features. XAI analysis revealed that anterior brain regions in these higher frequency bands contributed most significantly to BD detection, providing new insights into the neurophysiological markers of BD. Conclusions: This study demonstrates the exceptional diagnostic utility of Hjorth parameters, particularly in higher frequency ranges and anterior brain regions, for BD detection. Our findings not only establish a promising framework for automated BD diagnosis but also offer valuable insights into the neurophysiological basis of bipolar and related disorders. The robust performance and interpretability of our approach suggest its potential as a clinical tool for objective BD diagnosis.
(This article belongs to the Special Issue A New Era in Diagnosis: From Biomarkers to Artificial Intelligence)
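
The three Hjorth parameters named above (activity, mobility, complexity) have compact definitions in terms of the variances of a signal and its successive differences. A minimal NumPy sketch, with band filtering and the rest of the EEG pipeline omitted:

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal.

    activity   = var(x)
    mobility   = sqrt(var(x') / var(x))
    complexity = mobility(x') / mobility(x)
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)    # first difference approximates the derivative
    ddx = np.diff(dx)  # second difference
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid, complexity is 1 by construction; broadband EEG yields larger values, which is what makes the parameter informative per frequency band.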

17 pages, 16539 KiB  
Article
A Novel Framework for Whole-Slide Pathological Image Classification Based on the Cascaded Attention Mechanism
by Dehua Liu and Bin Hu
Sensors 2025, 25(3), 726; https://doi.org/10.3390/s25030726 - 25 Jan 2025
Abstract
This study introduces an innovative deep learning framework to address the limitations of traditional pathological image analysis and the pressing demand for medical resources in tumor diagnosis. With the global rise in cancer cases, manual examination by pathologists is increasingly inadequate, being both time-consuming and subject to the scarcity of professionals and individual subjectivity, thus impacting diagnostic accuracy and efficiency. Deep learning, particularly in computer vision, offers significant potential to mitigate these challenges. Automated models can rapidly and accurately process large datasets, revolutionizing tumor detection and classification. However, existing methods often rely on single attention mechanisms, failing to fully exploit the complexity of pathological images, especially in extracting critical features from whole-slide images. We developed a framework incorporating a cascaded attention mechanism, enhancing meaningful pattern recognition while suppressing irrelevant background information. Experiments on the Camelyon16 dataset demonstrate superior classification accuracy, model generalization, and result interpretability compared to state-of-the-art techniques. This advancement promises to enhance diagnostic efficiency, reduce healthcare costs, and improve patient outcomes.
(This article belongs to the Section Biomedical Sensors)
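
Attention-based aggregation of whole-slide patch features is commonly implemented as a softmax-weighted pooling. The sketch below shows a single attention stage with a fixed score vector, whereas the paper cascades several learned attention mechanisms; it is an illustration of the general idea, not the authors' architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(patch_feats, w):
    """Aggregate (n_patches, d) features into one slide vector.

    w: (d,) score vector (a stand-in for a learned attention head).
    Returns the pooled vector and the per-patch attention weights.
    """
    scores = patch_feats @ w   # one scalar relevance score per patch
    alpha = softmax(scores)    # attention weights sum to 1
    return alpha @ patch_feats, alpha
```

High-weight patches dominate the slide representation, which is also what makes the weights usable as an interpretability map.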

19 pages, 15983 KiB  
Article
Advanced Deep Learning Models for Melanoma Diagnosis in Computer-Aided Skin Cancer Detection
by Ranpreet Kaur, Hamid GholamHosseini and Maria Lindén
Sensors 2025, 25(3), 594; https://doi.org/10.3390/s25030594 - 21 Jan 2025
Abstract
The most deadly type of skin cancer is melanoma. A visual examination does not provide an accurate diagnosis of melanoma during its early to middle stages. Therefore, an automated model could be developed that assists with early skin cancer detection. It is possible to limit the severity of melanoma by detecting it early and treating it promptly. This study aims to develop efficient approaches for various phases of melanoma computer-aided diagnosis (CAD), such as preprocessing, segmentation, and classification. The first step of the CAD pipeline includes the proposed hybrid method, which uses morphological operations and context aggregation-based deep neural networks to remove hairlines and improve poor contrast in dermoscopic skin cancer images. An image segmentation network based on deep learning is then used to extract lesion regions for detailed analysis and calculate the optimized classification features. Lastly, a deep neural network is used to distinguish melanoma from benign lesions. The proposed approaches use a benchmark dataset named International Skin Imaging Collaboration (ISIC) 2020. In this work, two forms of evaluations are performed with the classification model. The first experiment involves the incorporation of the results from the preprocessing and segmentation stages into the classification model. The second experiment involves the evaluation of the classifier without employing these stages, i.e., using raw images. From the study results, it can be concluded that a classification model using segmented and cleaned images contributes more to achieving an accurate classification rate of 93.40% with a 1.3 s test time on a single image.
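
The morphological side of the hairline-removal step mentioned above can be approximated with a grayscale closing (dilation followed by erosion), which fills thin dark structures such as hairs. This is a deliberately slow, loop-based NumPy sketch for clarity, not the authors' hybrid method:

```python
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation: each pixel becomes the max of its k x k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def gray_erode(img, k=3):
    """Grayscale erosion: each pixel becomes the min of its k x k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def remove_dark_hairlines(img, k=3):
    """Morphological closing: fills dark structures thinner than the kernel."""
    return gray_erode(gray_dilate(img, k), k)
```

In practice a vectorized or library implementation would be used; the point is that a one-pixel-wide dark line vanishes under a 3x3 closing.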

15 pages, 3242 KiB  
Article
Deep Transfer Learning for Classification of Late Gadolinium Enhancement Cardiac MRI Images into Myocardial Infarction, Myocarditis, and Healthy Classes: Comparison with Subjective Visual Evaluation
by Amani Ben Khalifa, Manel Mili, Mezri Maatouk, Asma Ben Abdallah, Mabrouk Abdellali, Sofiene Gaied, Azza Ben Ali, Yassir Lahouel, Mohamed Hedi Bedoui and Ahmed Zrig
Diagnostics 2025, 15(2), 207; https://doi.org/10.3390/diagnostics15020207 - 17 Jan 2025
Abstract
Background/Objectives: To develop a computer-aided diagnosis (CAD) method for the classification of late gadolinium enhancement (LGE) cardiac MRI images into myocardial infarction (MI), myocarditis, and healthy classes using a fine-tuned VGG16 model hybridized with multi-layer perceptron (MLP) (VGG16-MLP) and assess our model’s performance in comparison to various pre-trained base models and MRI readers. Methods: This study included 361 LGE images for MI, 222 for myocarditis, and 254 for the healthy class. The left ventricle was extracted automatically using a U-net segmentation model on LGE images. Fine-tuned VGG16 was performed for feature extraction. A spatial attention mechanism was implemented as a part of the neural network architecture. The MLP architecture was used for the classification. The evaluation metrics were calculated using a separate test set. To compare the VGG16 model’s performance in feature extraction, various pre-trained base models were evaluated: VGG19, DenseNet121, DenseNet201, MobileNet, InceptionV3, and InceptionResNetV2. The Support Vector Machine (SVM) classifier was evaluated and compared to MLP for the classification task. The performance of the VGG16-MLP model was compared with a subjective visual analysis conducted by two blinded independent readers. Results: The VGG16-MLP model allowed high-performance differentiation between MI, myocarditis, and healthy LGE cardiac MRI images. It outperformed the other tested models with 96% accuracy, 97% precision, 96% sensitivity, and 96% F1-score. Our model surpassed the accuracy of Reader 1 by 27% and Reader 2 by 17%. Conclusions: Our study demonstrated that the VGG16-MLP model permits accurate classification of MI, myocarditis, and healthy LGE cardiac MRI images and could be considered a reliable computer-aided diagnosis approach specifically for radiologists with limited experience in cardiovascular imaging.
(This article belongs to the Special Issue Diagnostic AI and Cardiac Diseases)
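
An MLP head over frozen-backbone features, as in the VGG16-MLP design described above, can be sketched as one hidden ReLU layer followed by a softmax over the three classes (MI, myocarditis, healthy). Weights and layer sizes here are placeholders, not the paper's configuration:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def mlp_head(features, W1, b1, W2, b2):
    """Classification head on backbone features.

    features: (n, d) feature vectors; returns (n, 3) class probabilities.
    """
    h = relu(features @ W1 + b1)          # hidden layer
    logits = h @ W2 + b2                  # one logit per class
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # row-wise softmax
```

Swapping this head for an SVM on the same features is the comparison the study reports.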

13 pages, 2472 KiB  
Article
Ischemic Stroke Lesion Segmentation on Multiparametric CT Perfusion Maps Using Deep Neural Network
by Ankit Kandpal, Rakesh Kumar Gupta and Anup Singh
Abstract
Background: Accurate delineation of lesions in acute ischemic stroke is important for determining the extent of tissue damage and the identification of potentially salvageable brain tissues. Automatic segmentation on CT images is challenging due to the poor contrast-to-noise ratio. Quantitative CT perfusion images improve the estimation of the perfusion deficit regions; however, they are limited by a poor signal-to-noise ratio. The study aims to investigate the potential of deep learning (DL) algorithms for the improved segmentation of ischemic lesions. Methods: This study proposes a novel DL architecture, DenseResU-NetCTPSS, for stroke segmentation using multiparametric CT perfusion images. The proposed network is benchmarked against state-of-the-art DL models. Its performance is assessed using the ISLES-2018 challenge dataset, a widely recognized dataset for stroke segmentation in CT images. The proposed network was evaluated on both training and test datasets. Results: The final optimized network takes three image sequences, namely CT, cerebral blood volume (CBV), and time to max (Tmax), as input to perform segmentation. The network achieved a dice score of 0.65 ± 0.19 and 0.45 ± 0.32 on the training and testing datasets. The model demonstrated a notable improvement over existing state-of-the-art DL models. Conclusions: The optimized model combines CT, CBV, and Tmax images, enabling automatic lesion identification with reasonable accuracy and aiding radiologists in faster, more objective assessments. Full article
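
The Dice score used to evaluate the segmentations above has a standard definition: twice the overlap divided by the total mask sizes. A minimal implementation, assuming binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of equal shape.

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1 means perfect overlap; the reported 0.65 / 0.45 values are averages of this quantity over the training and test cases.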

32 pages, 3661 KiB  
Systematic Review
Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It
by Yasir Hafeez, Khuhed Memon, Maged S. AL-Quraishi, Norashikin Yahya, Sami Elferik and Syed Saad Azhar Ali
Diagnostics 2025, 15(2), 168; https://doi.org/10.3390/diagnostics15020168 - 13 Jan 2025
Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for differential diagnoses of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases. We also present medical domain experts’ opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies have also been discussed. In addition, the opinions of seven medical experts from around the world have been presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to be focused on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough and human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors along with the legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.

21 pages, 6639 KiB  
Article
Efficient Generative-Adversarial U-Net for Multi-Organ Medical Image Segmentation
by Haoran Wang, Gengshen Wu and Yi Liu
J. Imaging 2025, 11(1), 19; https://doi.org/10.3390/jimaging11010019 - 12 Jan 2025
Abstract
Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model’s capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA). This mechanism enables the model to concentrate more effectively on regions of interest. Additionally, we have integrated Efficient Mapping Convolutional Blocks (EMCB) into the feature-learning process, allowing for the extraction of multi-scale spatial information and the adjustment of feature map channels through optimized weight values. Moreover, the proposed framework progressively enhances its performance by utilizing a generative-adversarial learning strategy, which contributes to improvements in segmentation accuracy. Consequently, EGAUNet demonstrates exemplary segmentation performance on public multi-organ datasets while maintaining high efficiency. For instance, in evaluations on the CHAOS T2SPIR dataset, EGAUNet achieves approximately 2% higher performance on the Jaccard metric, 1% higher on the Dice metric, and nearly 3% higher on the precision metric in comparison to advanced networks such as Swin-Unet and TransUnet.
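
The channel half of a spatial-channel attention mechanism like the GSCA named above is commonly built squeeze-and-excitation style: pool each channel globally, pass the descriptor through a small gating network, and rescale the channels. A hedged sketch of that general pattern; the paper's exact GSCA design may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap, W1, W2):
    """Rescale the channels of an (H, W, C) feature map.

    W1: (C, r) and W2: (r, C) form a small bottleneck gate (placeholder weights).
    """
    desc = fmap.mean(axis=(0, 1))                   # squeeze: one value per channel
    gate = sigmoid(np.maximum(desc @ W1, 0) @ W2)   # excitation: gate in (0, 1) per channel
    return fmap * gate                              # broadcast over H and W
```

Because every gate value lies in (0, 1), informative channels are preserved while weak ones are attenuated.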

15 pages, 4352 KiB  
Article
Automatic Lower-Limb Length Measurement Network (A3LMNet): A Hybrid Framework for Automated Lower-Limb Length Measurement in Orthopedic Diagnostics
by Se-Yeol Rhyou, Yongjin Cho, Jaechern Yoo, Sanghoon Hong, Sunghoon Bae, Hyunjae Bae and Minyung Yu
Abstract
Limb Length Discrepancy (LLD) is a common condition that can result in gait abnormalities, pain, and an increased risk of early degenerative osteoarthritis in the lower extremities. Epidemiological studies indicate that mild LLD, defined as a discrepancy of 10 mm or less, affects approximately 60–90% of the population. While more severe cases are less frequent, they are associated with secondary conditions such as low back pain, scoliosis, and osteoarthritis of the hip or knee. LLD not only impacts daily activities, but may also lead to long-term complications, making early detection and precise measurement essential. Current LLD measurement methods include physical examination and imaging techniques, with physical exams being simple and non-invasive but prone to operator-dependent errors. To address these limitations and reduce measurement errors, we have developed an AI-based automated lower-limb length measurement system. This method employs semantic segmentation to accurately identify the positions of the femur and tibia and extracts key anatomical landmarks, achieving a margin of error within 4 mm. By automating the measurement process, this system reduces the time and effort required for manual measurements, enabling clinicians to focus more on treatment and improving the overall quality of care. Full article
(This article belongs to the Section Artificial Intelligence)
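
Once the landmarks are extracted, limb length and discrepancy reduce to scaled Euclidean distances. A minimal sketch; the landmark names and the pixel spacing are hypothetical, not taken from the paper:

```python
import numpy as np

def limb_length_mm(hip, ankle, mm_per_pixel):
    """Length between two landmark pixel coordinates, scaled to millimetres."""
    return float(np.linalg.norm(np.asarray(ankle) - np.asarray(hip)) * mm_per_pixel)

def discrepancy_mm(left_len, right_len):
    """Absolute limb length discrepancy in millimetres."""
    return abs(left_len - right_len)
```

Against the 10 mm threshold for mild LLD quoted above, such a function would flag a case as mild or severe directly from the two measured lengths.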

13 pages, 15384 KiB  
Article
Computational Mutagenesis of GPx7 and GPx8: Structural and Stability Insights into Rare Genetic and Somatic Missense Mutations and Their Implications for Cancer Development
by Adebiyi Sobitan, Nosimot Buhari, Zainab Youssri, Fayuan Wen, Dawit Kidane and Shaolei Teng
Abstract
Background/Objectives: Somatic and genetic mutations in glutathione peroxidases (GPxs), including GPx7 and GPx8, have been linked to intellectual disability, microcephaly, and various tumors. GPx7 and GPx8 evolved the latest among the GPx enzymes and are present in the endoplasmic reticulum. Although lacking a glutathione binding domain, GPx7 and GPx8 possess peroxidase activity that helps the body respond to cellular stress. However, the protein mutations in these peroxidases remain relatively understudied. Methods: By elucidating the structural and stability consequences of missense mutations, this study aims to provide insights into the pathogenic mechanisms involved in different cancers, thereby aiding clinical diagnosis, treatment strategies, and the development of targeted therapies. We performed saturated computational mutagenesis to analyze 2926 and 3971 missense mutations of GPx7 and GPx8, respectively. Results: The results indicate that G153H and G153F in GPx7 are highly destabilizing, while E93M and W142F are stabilizing. In GPx8, N74W and G173W caused the most instability while S70I and S119P increased stability. Our analysis shows that highly destabilizing somatic and genetic mutations are more likely pathogenic compared to stabilizing mutations. Conclusions: This comprehensive analysis of missense mutations in GPx7 and GPx8 provides critical insights into their impact on protein structure and stability, contributing to a deeper understanding of the roles of somatic mutations in cancer development and progression. These findings can inform more precise clinical diagnostics and targeted treatment approaches for cancers. Full article
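
Saturated computational mutagenesis as described above enumerates all 19 possible substitutions at every residue (which is how counts like 2926 and 3971 arise: 19 times the number of scanned positions). A small sketch of that enumeration; the sequence shown is a toy example, not GPx7:

```python
AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def saturated_mutations(seq):
    """All single-residue missense variants in WT-position-mutant notation,
    e.g. 'G153F': 19 substitutions per position."""
    muts = []
    for i, wt in enumerate(seq, start=1):
        for aa in AA:
            if aa != wt:
                muts.append(f"{wt}{i}{aa}")
    return muts
```

Each variant would then be scored (e.g., a predicted folding free-energy change) to label it stabilizing, neutral, or destabilizing.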

18 pages, 8911 KiB  
Article
Diagnosis of Autism Spectrum Disorder (ASD) by Dynamic Functional Connectivity Using GNN-LSTM
by Jun Tang, Jie Chen, Miaojun Hu, Yao Hu, Zixi Zhang and Liuming Xiao
Sensors 2025, 25(1), 156; https://doi.org/10.3390/s25010156 - 30 Dec 2024
Abstract
Early detection of autism spectrum disorder (ASD) is particularly important given its insidious qualities and the high cost of the diagnostic process. Currently, static functional connectivity studies have achieved significant results in the field of ASD detection. However, with the deepening of clinical research, more and more evidence suggests that dynamic functional connectivity analysis can more comprehensively reveal the complex and variable characteristics of brain networks and their underlying mechanisms, thus providing more solid scientific support for computer-aided diagnosis of ASD. To overcome the lack of time-scale information in static functional connectivity analysis, this paper proposes an innovative GNN-LSTM model, which combines the advantages of long short-term memory (LSTM) and graph neural networks (GNNs). The model captures the spatial features in fMRI data by GNN and aggregates the temporal information of dynamic functional connectivity using LSTM to generate a more comprehensive spatio-temporal feature representation of fMRI data. Further, a dynamic graph pooling method is proposed to extract the final node representations from the dynamic graph representations for classification tasks. To address the variable dependence of dynamic feature connectivity on time scales, the model introduces a jump connection mechanism to enhance information extraction between internal units and capture features at different time scales. The model achieves remarkable results on the ABIDE dataset, with accuracies of 80.4% on ABIDE I and 79.63% on ABIDE II, which strongly demonstrates the effectiveness and potential of the model for ASD detection. This study not only provides new perspectives and methods for computer-aided diagnosis of ASD but also provides useful references for research in related fields.
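
The dynamic functional connectivity input to a model like the GNN-LSTM above is typically built as sliding-window correlation matrices over ROI time series. A simplified stand-in for that preprocessing; window and step sizes here are arbitrary:

```python
import numpy as np

def dynamic_fc(ts, win, step):
    """Sliding-window correlation matrices from ROI time series.

    ts: (T, R) array of T time points for R regions of interest.
    Returns (n_windows, R, R) correlation matrices, one per window.
    """
    T, R = ts.shape
    mats = []
    for start in range(0, T - win + 1, step):
        w = ts[start:start + win]
        mats.append(np.corrcoef(w.T))  # corrcoef expects variables as rows
    return np.stack(mats)
```

Each window's matrix defines one graph snapshot; the sequence of snapshots is what carries the time-scale information that a static analysis discards.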

18 pages, 6407 KiB  
Article
ViT-Based Face Diagnosis Images Analysis for Schizophrenia Detection
by Huilin Liu, Runmin Cao, Songze Li, Yifan Wang, Xiaohan Zhang, Hua Xu, Xirong Sun, Lijuan Wang, Peng Qian, Zhumei Sun, Kai Gao and Fufeng Li
Brain Sci. 2025, 15(1), 30; https://doi.org/10.3390/brainsci15010030 - 29 Dec 2024
Abstract
Objectives: Computer-aided schizophrenia (SZ) detection methods mainly depend on electroencephalogram and brain magnetic resonance images, which both capture physical signals from patients’ brains. These inspection techniques take too much time and affect patients’ compliance and cooperation, and it is difficult for clinicians to comprehend the principle of the detection decisions. This study proposes a novel method using face diagnosis images based on traditional Chinese medicine principles, providing a non-invasive, efficient, and interpretable alternative for SZ detection. Methods: An innovative face diagnosis image analysis method for SZ detection is proposed, which learns feature representations directly from face diagnosis images with a Vision Transformer (ViT). It provides a visualization of the facial feature distribution and a quantitative importance score for each facial region, supplementing interpretation and increasing efficiency in SZ detection while keeping a high detection accuracy. Results: A benchmarking platform comprising 921 face diagnostic images, 6 benchmark methods, and 4 evaluation metrics was established. The experimental results demonstrate that our method significantly improves SZ detection performance with a 3–10% increase in accuracy scores. Additionally, it is found that facial regions rank in descending order according to importance in SZ detection as eyes, mouth, forehead, cheeks, and nose, which is exactly consistent with the clinical traditional Chinese medicine experience. Conclusions: Our method fully leverages semantic feature representations of first-introduced face diagnosis images in SZ, offering strong interpretability and visualization capabilities. It not only opens a new path for SZ detection but also brings new tools and concepts to the research and application in the field of mental illness.
(This article belongs to the Section Neuropsychiatry)
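The per-region importance scoring described above can be illustrated with a small sketch: given a ViT CLS-token attention map over the patch grid, aggregate attention within coarse facial regions and normalize. The 14×14 grid (ViT-B/16 on 224×224 inputs), the region boundaries, and the mean-pooling rule below are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

GRID = 14  # ViT-B/16 on a 224x224 input yields a 14x14 patch grid

# Rough row bands of the patch grid assigned to each facial region.
# These boundaries are hypothetical and for illustration only.
REGIONS = {
    "forehead": (0, 4),
    "eyes": (4, 6),
    "nose": (6, 9),
    "cheeks": (6, 10),   # overlaps the nose rows; illustrative only
    "mouth": (10, 14),
}

def region_importance(attn: np.ndarray) -> dict:
    """attn: (GRID, GRID) CLS-to-patch attention map, non-negative.

    Returns normalized importance scores that sum to 1 across regions.
    """
    scores = {name: float(attn[r0:r1, :].mean())
              for name, (r0, r1) in REGIONS.items()}
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

# Stand-in attention map; in practice this would come from the ViT.
rng = np.random.default_rng(0)
attn = rng.random((GRID, GRID))
imp = region_importance(attn)
```

Ranking the resulting dictionary by value would reproduce an ordering of regions analogous to the eyes/mouth/forehead/cheeks/nose ranking reported in the abstract.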
19 pages, 2762 KiB  
Review
The Role of Advanced Cardiac Imaging in Monitoring Cardiovascular Complications in Patients with Extracardiac Tumors: A Descriptive Review
by Annamaria Tavernese, Valeria Cammalleri, Rocco Mollace, Giorgio Antonelli, Mariagrazia Piscione, Nino Cocco, Myriam Carpenito, Carmelo Dominici, Massimo Federici and Gian Paolo Ussia
J. Cardiovasc. Dev. Dis. 2025, 12(1), 9; https://doi.org/10.3390/jcdd12010009 - 29 Dec 2024
Abstract
Cardiac involvement in cancer is increasingly important in the diagnosis and follow-up of patients. A thorough cardiovascular evaluation using multimodal imaging is crucial to assess any direct cardiac involvement from oncological disease progression and to determine the cardiovascular risk of patients undergoing oncological therapies. Early detection of cardiac dysfunction, particularly due to cardiotoxicity from chemotherapy or radiotherapy, is essential to establish the disease’s overall prognostic impact. Comprehensive cardiovascular imaging should be integral to the clinical management of cancer patients. Echocardiography remains highly effective for assessing cardiac function, including systolic performance and ventricular filling pressures, with speckle-tracking echocardiography offering early insights into chemotoxicity-related myocardial damage. Cardiac computed tomography (CT) provides precise anatomical detail, especially for cardiac involvement due to metastasis or adjacent mediastinal or lung tumors. Coronary assessment is also important for initial risk stratification and monitoring potential coronary artery disease progression after radiotherapy or chemotherapeutic treatment. Finally, cardiac magnetic resonance (CMR) is the gold standard for myocardial tissue characterization, aiding in the differential diagnosis of cardiac masses. CMR’s mapping techniques allow for early detection of myocardial inflammation caused by cardiotoxicity. This review explores the applicability of echocardiography, cardiac CT, and CMR in cancer patients with extracardiac tumors. Full article
20 pages, 3238 KiB  
Article
Enhanced Disc Herniation Classification Using Grey Wolf Optimization Based on Hybrid Feature Extraction and Deep Learning Methods
by Yasemin Sarı and Nesrin Aydın Atasoy
Abstract
Background/Objectives: As the number of people working at computers in professional settings grows, the incidence of lumbar disc herniation is rising. Early diagnosis and treatment of lumbar disc herniation are much more likely to yield favorable results, allowing the hernia to be treated before it develops further. The aim of this study was to classify lumbar disc herniations in a computer-aided, fully automated manner using magnetic resonance images (MRIs). Methods: This study presents a hybrid method integrating a residual network (ResNet50), grey wolf optimization (GWO), and machine learning classifiers such as a multi-layer perceptron (MLP) and a support vector machine (SVM) to improve classification performance. The proposed approach begins with feature extraction using ResNet50, a deep convolutional neural network known for its robust feature representations. ResNet50's residual connections allow for effective training and high-quality feature extraction from input images. Following feature extraction, the GWO algorithm, inspired by the social hierarchy and hunting behavior of grey wolves, is employed to optimize the feature set by selecting the most relevant features. Finally, the optimized feature set is fed into the machine learning classifiers (MLP and SVM) for classification. The use of various activation functions (e.g., ReLU, identity, logistic, and tanh) in the MLP and various kernel functions (e.g., linear, RBF, sigmoid, and polynomial) in the SVM allows for a thorough evaluation of the classifiers' performance. Results: The proposed methodology demonstrates significant improvements in metrics such as accuracy, precision, recall, and F1 score, outperforming traditional approaches in several cases. These results highlight the effectiveness of combining deep learning-based feature extraction with optimization and machine learning classifiers.
Conclusions: Compared to other methods, such as capsule networks (CapsNet), EfficientNetB6, and DenseNet169, the proposed ResNet50-GWO-SVM approach achieved superior performance across all metrics, including accuracy, precision, recall, and F1 score, demonstrating its robustness and effectiveness in classification tasks. Full article
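The GWO-based feature selection step can be sketched as follows. This is a minimal illustration under assumed settings: a synthetic fitness function stands in for a cross-validated classifier score, and continuous wolf positions are thresholded at 0.5 to obtain binary feature masks. It is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, n_informative=5, penalty=0.1):
    # Synthetic objective: reward selecting the first n_informative
    # features, lightly penalize the rest. In practice this would be
    # a validation score of an MLP/SVM trained on the selected features.
    return float(mask[:n_informative].sum() - penalty * mask[n_informative:].sum())

def gwo_select(n_features=20, n_wolves=10, n_iter=30):
    # Continuous wolf positions in [0, 1]; thresholding at 0.5 turns
    # each position vector into a binary feature-selection mask.
    X = rng.random((n_wolves, n_features))
    best_mask, best_fit = None, -np.inf
    for t in range(n_iter):
        masks = (X > 0.5).astype(int)
        fits = np.array([fitness(m) for m in masks])
        order = np.argsort(fits)[::-1]
        if fits[order[0]] > best_fit:
            best_fit = float(fits[order[0]])
            best_mask = masks[order[0]].copy()
        # Alpha, beta, delta: the three fittest wolves lead the pack.
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / n_iter  # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            X_new = np.zeros(n_features)
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(n_features) - 1)
                C = 2 * rng.random(n_features)
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D
            X[i] = np.clip(X_new / 3, 0, 1)
    return best_mask, best_fit

mask, fit = gwo_select()
```

In the paper's pipeline, the mask would be applied to ResNet50 feature vectors before training the MLP or SVM; here the averaged pull toward the three leaders is the standard GWO position update, with clipping added so the thresholding stays meaningful.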