- research-article, January 2025
Interpretable neuro-cognitive diagnostic approach incorporating multidimensional features
Knowledge-Based Systems (KNBS), Volume 304, Issue C. https://doi.org/10.1016/j.knosys.2024.112432
Abstract: Cognitive diagnostics is a pivotal area within educational data mining, focusing on deciphering students’ cognitive status via their academic performance. Traditionally, cognitive diagnostic models (CDMs) have evolved from manually designed ...
Highlights
- Tri-channel fusion in neurocognitive diagnostic model.
- Attention mechanism enhances cognitive feature fusion.
- Improved accuracy in modeling high-dimensional features.
- research-article, November 2024
DCEnt‐PredictiveNet: A novel explainable hybrid model for time series forecasting
Neurocomputing (NEUROC), Volume 608, Issue C. https://doi.org/10.1016/j.neucom.2024.128389
Abstract: This work presents a novel hybrid framework called DCEnt-PredictiveNet (deep convolutional neural network (DCNN) + entropy + support vector regressor (SVR)) that concatenates both deep and handcrafted features for time series data analysis and ...
Highlights
- An explainable hybrid framework called DCEnt-PredictiveNet for Time Series Forecasting (TSF) is proposed.
- The proposed architecture consists of Discrete Wavelet Transform, Convolutional Neural Network, Entropy and Support Vector Regression.
- research-article, October 2024
HGGN: Prediction of microRNA-Mediated drug sensitivity based on interpretable heterogeneous graph global-attention network
Future Generation Computer Systems (FGCS), Volume 160, Issue C, Pages 274–282. https://doi.org/10.1016/j.future.2024.06.010
Abstract: Drug sensitivity significantly influences therapeutic outcomes. Recent discoveries have highlighted the pivotal role of miRNAs in regulating drug sensitivity by modulating genes associated with drug metabolism and action. As crucial regulators of ...
Highlights
- We design a dual-channel feature representation for heterogeneous networks to predict miRNA–drug sensitivity associations.
- The global attention mechanism of HGGN enhanced the performance of algorithms in data-limited scenarios.
- ...
- research-article, January 2025
AMCW-DFFNSA: An interpretable deep feature fusion network for noise-robust machinery fault diagnosis
Knowledge-Based Systems (KNBS), Volume 301, Issue C. https://doi.org/10.1016/j.knosys.2024.112361
Abstract: Deep learning is applicable in mechanical fault diagnosis, ensuring the secure operation of mechanical systems. However, the lack of interpretability and noise robustness in deep learning methods has been a common challenge faced by academia and ...
- research-article, November 2024
Nondestructive in-ovo sexing of Hy-Line Sonia eggs by EggFormer using hyperspectral imaging
Computers and Electronics in Agriculture (COEA), Volume 225, Issue C. https://doi.org/10.1016/j.compag.2024.109298
Abstract: Early identification of egg gender during incubation is crucial for animal welfare and commercial poultry production, as day-old male chicks are often culled due to their low economic value. Hyperspectral imaging (HSI) recognition presents a ...
Highlights
- Hyperspectral images of Hy-Line Sonia eggs were collected on even days from day 0 to 14 for gender identification.
- Feature bands were selected by RF, PCA, SPA and CARS, and then the recombined images were processed by ViT-Base16.
- ...
- research-article, July 2024
Intelligent medical diagnosis and treatment for diabetes with deep convolutional fuzzy neural networks
Information Sciences: an International Journal (ISCI), Volume 677, Issue C. https://doi.org/10.1016/j.ins.2024.120802
Abstract: The advent of smart healthcare has significantly heightened the importance of computer technologies in supporting medical diagnosis and treatment. Nevertheless, the challenges of mining latent knowledge within diagnostic data and explaining ...
- research-article, July 2024
SurvivalLVQ: Interpretable supervised clustering and prediction in survival analysis via Learning Vector Quantization
Pattern Recognition (PATT), Volume 153, Issue C. https://doi.org/10.1016/j.patcog.2024.110497
Abstract: Identifying subgroups with similar survival outcomes is a pivotal challenge in survival analysis. Traditional clustering methods often neglect the outcome variable, potentially leading to inaccurate representation of risk profiles. To address ...
Highlights
- SurvivalLVQ: Learning Vector Quantization is adapted to survival analysis.
- SurvivalLVQ groups cases by survival probability and assigns unique survival curves.
- SurvivalLVQ demonstrates strong clustering and competitive predictive ...
- research-article, July 2024
An interpretable lightweight deep network with ℓp (0 < p < 1) model-driven for single image super-resolution
Neurocomputing (NEUROC), Volume 580, Issue C. https://doi.org/10.1016/j.neucom.2024.127521
Abstract: In order to address the expensive computation cost of deep networks, some Single Image Super-Resolution (SISR) methods have tried to design lightweight networks by means of recursion or expert priors. However, they discuss the theoretical ...
- Article, March 2024
eval-rationales: An End-to-End Toolkit to Explain and Evaluate Transformers-Based Models
Advances in Information Retrieval, Pages 212–217. https://doi.org/10.1007/978-3-031-56069-9_20
Abstract: State-of-the-art (SOTA) transformer-based models in the domains of Natural Language Processing (NLP) and Information Retrieval (IR) are often characterized by their opacity in terms of decision-making processes. This limitation has given rise to ...
- research-article, April 2024
Opinion Mining with Interpretable Random Density Forests
ICMLSC '24: Proceedings of the 2024 8th International Conference on Machine Learning and Soft Computing, Pages 66–72. https://doi.org/10.1145/3647750.3647761
Interpreting and explaining complex models such as ensemble machine learning models for opinion mining is essential to increase the level of transparency, fairness and reliability of positive and negative opinion prediction results. Although ensemble ...
- research-article, January 2024
Interpretable Task-inspired Adaptive Filter Pruning for Neural Networks Under Multiple Constraints
International Journal of Computer Vision (IJCV), Volume 132, Issue 6, Pages 2060–2076. https://doi.org/10.1007/s11263-023-01972-x
Abstract: Existing methods for filter pruning mostly rely on specific data-driven paradigms but lack interpretability. Besides, these approaches usually assign layer-wise compression ratios automatically only under given FLOPs by neural architecture ...
- research-article, November 2023
One and one make eleven: An interpretable neural network for image recognition
Knowledge-Based Systems (KNBS), Volume 279, Issue C. https://doi.org/10.1016/j.knosys.2023.110926
Abstract: Although non-interpretable (black-box) deep learning models are well known for their accuracy, interpretable deep learning models should be used for high-stakes decisions, such as healthcare. In this paper, we present a novel technique of ...
Highlights
- Comb-ProtoPNet introduces a novel technique to form ensemble algorithms.
- Comb-ProtoPNet combines the algorithms themselves instead of combining their outputs.
- Comb-ProtoPNet uses prototypes with rectangular and square spatial ...
- research-article, December 2023
Research on Interpretable Fake News Detection Technology Based on Co-Attention Mechanism
ICCVIT '23: Proceedings of the 2023 International Conference on Computer, Vision and Intelligent Technology, Article No.: 41, Pages 1–6. https://doi.org/10.1145/3627341.3630379
The wide dissemination of fake news increasingly threatens both individuals and society. To address the insufficient interpretability of fake news detection models, TCIN, a model based on a co-attention mechanism, is proposed. The ...
- research-article, June 2023
SR-AttNet: An Interpretable Stretch–Relax Attention based Deep Neural Network for Polyp Segmentation in Colonoscopy Images
Computers in Biology and Medicine (CBIM), Volume 160, Issue C. https://doi.org/10.1016/j.compbiomed.2023.106945
Abstract: Background: Colorectal polyp is a common structural gastrointestinal (GI) anomaly, which can in certain cases turn malignant. Colonoscopic image inspection is, thereby, an important step for isolating the polyps as well as removing them if ...
Highlights
- Utilization of both un-dilated and dilated filters in the encoder and decoder.
- Stretch–Relax type Attention system within encoder and decoder pipeline.
- Additional Feature-to-Mask Pipeline for feature aggregation and prediction.
- research-article, February 2023
Explainable AI: To Reveal the Logic of Black-Box Models
New Generation Computing (NEWG), Volume 42, Issue 1, Pages 53–87. https://doi.org/10.1007/s00354-022-00201-2
Abstract: Artificial intelligence (AI) is continuously evolving; however, in the last 10 years, it has become considerably more difficult to explain AI models. With the help of explanations, end users can understand the outcomes generated by AI models. The ...
- research-article, January 2023
Why did AI get this one wrong? — Tree-based explanations of machine learning model predictions
- Enea Parimbelli,
- Tommaso Mario Buonocore,
- Giovanna Nicora,
- Wojtek Michalowski,
- Szymon Wilk,
- Riccardo Bellazzi
Artificial Intelligence in Medicine (AIIM), Volume 135, Issue C. https://doi.org/10.1016/j.artmed.2022.102471
Abstract: Increasingly complex learning methods such as boosting, bagging and deep learning have made ML models more accurate, but harder to interpret and explain, culminating in black-box machine learning models. Model developers and users ...
Highlights
- A novel XAI method for local, model-agnostic, post-hoc explanations is presented.
- research-article, October 2022
Inverted papilloma and nasal polyp classification using a deep convolutional network integrated with an attention mechanism
Computers in Biology and Medicine (CBIM), Volume 149, Issue C. https://doi.org/10.1016/j.compbiomed.2022.105976
Abstract: Background: Inverted papilloma (IP) is a common sinus neoplasm with a probability of malignant transformation. Nasal polyps (NP) are the most frequent masses in the sinus. The classification of IP and NP using computed tomography (CT) is highly ...
Highlights
- A deep CNN with an attention mechanism, composed of DenseNet and SENet, is applied to classify IP and NP in CT images.
- The classification performance of SE-DenseNet is evaluated, and the accuracy of our method can achieve 88.4% ...
- Article, March 2023
Benchmarking Considerations for Trustworthy and Responsible AI (Panel)
Performance Evaluation and Benchmarking, Pages 110–119. https://doi.org/10.1007/978-3-031-29576-8_8
Abstract: Continuing growth of Artificial Intelligence (AI) adoption across enterprises and governments around the world has fueled the demand for trustworthy AI systems and applications. The need ranges from the so-called Explainable or Interpretable AI to ...
- research-article, July 2022
Think positive: An interpretable neural network for image recognition
Neural Networks (NENE), Volume 151, Issue C, Pages 178–189. https://doi.org/10.1016/j.neunet.2022.03.034
Abstract: The COVID-19 pandemic is ongoing and is placing an additional burden on healthcare systems around the world. Timely and effective detection of the virus can help reduce the spread of the disease. Although RT-PCR is still a ...
Highlights
- Quasi-ProtoPNet uses the positive reasoning process.
- The positive reasoning ...