- Research Article, November 2024
Soft-GNN: towards robust graph neural networks via self-adaptive data utilization
Frontiers of Computer Science: Selected Publications from Chinese Universities (FCS), Volume 19, Issue 4. https://doi.org/10.1007/s11704-024-3575-5
Abstract: Graph neural networks (GNNs) have gained traction and have been applied to various graph-based data analysis tasks due to their high performance. However, a major concern is their robustness, particularly when faced with graph data that has been ...
- Research Article, November 2024
Robust domain adaptation with noisy and shifted label distribution
Frontiers of Computer Science: Selected Publications from Chinese Universities (FCS), Volume 19, Issue 3. https://doi.org/10.1007/s11704-024-3810-0
Abstract: Unsupervised Domain Adaptation (UDA) aims to achieve excellent results by transferring knowledge from labeled source domains to unlabeled target domains in which the data or label distribution changes. Previous UDA methods have acquired great ...
- Research Article, November 2024
KD-Crowd: a knowledge distillation framework for learning from crowds
Frontiers of Computer Science: Selected Publications from Chinese Universities (FCS), Volume 19, Issue 1. https://doi.org/10.1007/s11704-023-3578-7
Abstract: Recently, crowdsourcing has established itself as an efficient labeling solution by distributing tasks to crowd workers. As workers with diverse expertise can make mistakes, one core learning task is to estimate each worker’s expertise, and ...
- Research Article, October 2024
ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 270–280. https://doi.org/10.1145/3627673.3679552
Abstract: Deep learning has achieved remarkable success in graph-related tasks, yet this accomplishment heavily relies on large-scale high-quality annotated datasets. However, acquiring such datasets can be cost-prohibitive, leading to the practical use of labels ...
- Article, October 2024
Noise-Robust Conformal Prediction for Medical Image Classification
Machine Learning in Medical Imaging, Pages 159–168. https://doi.org/10.1007/978-3-031-73290-4_16
Abstract: Conformal Prediction (CP) quantifies network uncertainty by building a small prediction set with a pre-defined probability that the correct class is within this set. In this study we tackle the problem of CP calibration based on a validation set ...
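The CP mechanism this abstract describes can be stated concretely. Below is a minimal split-conformal sketch in Python/NumPy, assuming held-out softmax probabilities are available for calibration; the function names and the toy Dirichlet data are illustrative, and this is the standard CP baseline, not the paper's noise-robust calibration method.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration with nonconformity score 1 - p(true class).
    Returns a threshold qhat so that, for exchangeable data, the set
    {y : 1 - p(y) <= qhat} contains the true class with prob >= 1 - alpha."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, qhat):
    """Return the indices of all classes kept in the prediction set."""
    return np.where(1.0 - test_probs <= qhat)[0]

# Toy usage with random softmax outputs (illustration only).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(rng.dirichlet(np.ones(5)), qhat))
```

The ceil((n+1)(1-alpha))/n correction is what turns the empirical quantile into a finite-sample coverage guarantee under exchangeability; noisy validation labels break exactly this calibration step, which is the problem the paper targets.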
- Article, November 2024
Distributed Active Client Selection With Noisy Clients Using Model Association Scores
Computer Vision – ECCV 2024, Pages 75–92. https://doi.org/10.1007/978-3-031-73027-6_5
Abstract: Active client selection (ACS) strategically identifies clients for model updates during each training round of federated learning. In scenarios with limited communication resources, ACS emerges as a superior alternative to random client selection, ...
- Research Article, August 2024
Graph Cross Supervised Learning via Generalized Knowledge
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 4083–4094. https://doi.org/10.1145/3637528.3671830
Abstract: The success of GNNs relies heavily on accurate data labeling. Existing methods of ensuring accurate labels, such as weakly-supervised learning, mainly focus on the existing nodes in the graphs. However, in reality, new nodes continuously ...
- Research Article, June 2024
GBRAIN: Combating Textual Label Noise by Granular-ball based Robust Training
ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval, Pages 357–365. https://doi.org/10.1145/3652583.3658084
Abstract: Most natural language processing tasks rely on massive amounts of labeled data to train a strong neural network model. However, label noise (i.e., wrong labels) is inevitably introduced when annotating large-scale text datasets, which significantly ...
- Research Article, December 2023
A robust optimization method for label noisy datasets based on adaptive threshold: Adaptive-k
Frontiers of Computer Science: Selected Publications from Chinese Universities (FCS), Volume 18, Issue 4. https://doi.org/10.1007/s11704-023-2430-4
Abstract: Using all samples in the optimization process does not produce robust results on datasets with label noise, because the gradients calculated from the losses of noisy samples steer the optimization process in the wrong ...
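To make the idea tangible: a common loss-based selection heuristic keeps only the samples whose loss falls below an adaptive, batch-dependent threshold, so high-loss (likely mislabeled) samples contribute no gradient. The sketch below is a generic small-loss filter with a mean-plus-std threshold; the threshold rule and names are assumptions for illustration, not the paper's exact Adaptive-k rule.

```python
import numpy as np

def select_small_loss(losses, threshold_scale=0.5):
    """Keep samples whose loss is below an adaptive threshold computed
    from the batch itself (mean + scale * std). Generic small-loss
    heuristic, not the Adaptive-k algorithm."""
    mu, sigma = losses.mean(), losses.std()
    keep = losses <= mu + threshold_scale * sigma
    return np.nonzero(keep)[0]

# Toy batch: clean samples tend to have small loss, mislabeled ones large loss.
rng = np.random.default_rng(1)
losses = np.concatenate([rng.exponential(0.2, 90),   # mostly clean
                         rng.exponential(3.0, 10)])  # likely noisy
kept = select_small_loss(losses)
print(f"kept {len(kept)} of {len(losses)} samples")
```

Only the kept indices would then be used when forming the gradient for the optimizer step.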
- Research Article, October 2023
ALEX: Towards Effective Graph Transfer Learning with Noisy Labels
MM '23: Proceedings of the 31st ACM International Conference on Multimedia, Pages 3647–3656. https://doi.org/10.1145/3581783.3612026
Abstract: Graph Neural Networks (GNNs) have garnered considerable interest due to their exceptional performance in a wide range of graph machine learning tasks. Nevertheless, the majority of GNN-based approaches have been examined using well-annotated benchmark ...
- Article, October 2023
Improving Medical Image Classification in Noisy Labels Using only Self-supervised Pretraining
Data Engineering in Medical Imaging, Pages 78–90. https://doi.org/10.1007/978-3-031-44992-5_8
Abstract: Noisy labels hurt deep learning-based supervised image classification performance as the models may overfit the noise and learn corrupted feature extractors. For natural image classification training with noisy labeled data, model initialization ...
- Research Article, August 2023
To Aggregate or Not? Learning with Separate Noisy Labels
KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 2523–2535. https://doi.org/10.1145/3580305.3599522
Abstract: Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing). A typical way of using these separate labels is to first aggregate them into one and apply standard training ...
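The "aggregate first" baseline this abstract refers to is typically majority voting over annotators. A minimal sketch follows; the label-matrix layout and the -1 convention for missing annotations are assumptions for illustration.

```python
import numpy as np

def majority_vote(label_matrix):
    """Aggregate separate annotator labels into one label per example.
    Rows are examples, columns are annotators, -1 marks a missing
    annotation. Ties resolve to the smallest class index via argmax."""
    aggregated = []
    for row in label_matrix:
        votes = row[row >= 0]          # drop missing annotations
        counts = np.bincount(votes)    # per-class vote counts
        aggregated.append(int(np.argmax(counts)))
    return np.array(aggregated)

# Three annotators labeling four examples; -1 = not annotated.
labels = np.array([[1, 1, 0],
                   [0, -1, 0],
                   [1, 0, 1],
                   [-1, 2, 2]])
print(majority_vote(labels))  # -> [1 0 1 2]
```

The paper's question is precisely whether collapsing the matrix this way, versus training on the separate labels directly, is the better use of the annotations.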
- Article, July 2023
On the Impact of Noisy Labels on Supervised Classification Models
Computational Science – ICCS 2023, Pages 111–119. https://doi.org/10.1007/978-3-031-36021-3_8
Abstract: The amount of data generated daily grows tremendously in virtually all domains of science and industry, and its efficient storage, processing, and analysis pose significant practical challenges. To automate the process of extracting useful ...
- Research Article, September 2023
GBSMOTE: A Robust Sampling Method Based on Granular-ball Computing and SMOTE for Class Imbalance
ICMAI '23: Proceedings of the 2023 8th International Conference on Mathematics and Artificial Intelligence, Pages 19–24. https://doi.org/10.1145/3594300.3594304
Abstract: In recent years, the imbalanced classification problem has received much attention. SMOTE is one of the most popular methods for improving the performance of imbalanced data classification models. SMOTE changes the data distribution of imbalanced data ...
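For reference, classic SMOTE synthesizes minority samples by interpolating between a minority point and one of its k nearest minority neighbors. Below is a plain-NumPy sketch of that baseline, not the granular-ball variant proposed in the paper; the brute-force neighbor search is an illustrative simplification.

```python
import numpy as np

def smote(minority, n_new, k=5, rng=None):
    """Classic SMOTE: for each synthetic point, pick a random minority
    sample, pick one of its k nearest minority neighbors, and place a
    new point at a random spot on the segment between them."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(minority)
    # Pairwise distances within the minority class (brute force).
    dists = np.linalg.norm(minority[:, None] - minority[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # exclude self-matches
    neighbors = np.argsort(dists, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = neighbors[i, rng.integers(min(k, n - 1))]
        lam = rng.random()                      # interpolation weight
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

rng = np.random.default_rng(2)
minority = rng.normal(size=(8, 2))              # 8 minority points in 2-D
print(smote(minority, n_new=5, k=3, rng=rng).shape)  # (5, 2)
```

Because every synthetic point lies between existing minority points, SMOTE can amplify label noise near the class boundary, which is the weakness robust variants like GBSMOTE aim at.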
- Research Article, March 2023
TP-FER: An Effective Three-phase Noise-tolerant Recognizer for Facial Expression Recognition
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), Volume 19, Issue 3, Article No.: 113, Pages 1–17. https://doi.org/10.1145/3570329
Abstract: Single-label facial expression recognition (FER), which aims to classify a single expression for each facial image, usually suffers from noisy and incomplete labels, where the manual annotations for some training images are wrong or incomplete ...
- Research Article, February 2023
Robust Training of Graph Neural Networks via Noise Governance
WSDM '23: Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, Pages 607–615. https://doi.org/10.1145/3539597.3570369
Abstract: Graph Neural Networks (GNNs) have become widely used models for semi-supervised learning. However, the robustness of GNNs in the presence of label noise remains a largely under-explored problem. In this paper, we consider an important yet challenging ...
- Research Article, March 2024
Random feature amplification: feature learning and generalization in neural networks
The Journal of Machine Learning Research (JMLR), Volume 24, Issue 1, Article No.: 303, Pages 14357–14405
Abstract: In this work, we provide a characterization of the feature-learning process in two-layer ReLU networks trained by gradient descent on the logistic loss following random initialization. We consider data with binary labels that are generated by an XOR-like ...
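The training setup the abstract analyzes (a randomly initialized two-layer ReLU network trained by gradient descent on the logistic loss over XOR-like labeled data) fits in a few lines. The sketch below fixes the second-layer signs at initialization and trains only the first layer, a common simplification in this line of theory; the width, learning rate, and step count are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# XOR-like data: label = sign(x1 * x2).
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])

m = 64                                    # hidden width
W = rng.normal(size=(m, 2)) * 0.1         # random first-layer init
a = rng.choice([-1.0, 1.0], size=m) / m   # fixed second-layer signs

def forward(X, W):
    """f(x) = a . relu(W x)"""
    return np.maximum(X @ W.T, 0.0) @ a

lr = 0.5
for step in range(500):
    margins = y * forward(X, W)
    # Gradient of the logistic loss log(1 + exp(-y f(x))) w.r.t. f.
    g = -y / (1.0 + np.exp(margins))            # shape (n,)
    relu_mask = (X @ W.T > 0).astype(float)     # active units, (n, m)
    grad_W = (relu_mask * g[:, None] * a).T @ X / len(X)
    W -= lr * grad_W

print(f"train accuracy: {np.mean(np.sign(forward(X, W)) == y):.2f}")
```

The paper's contribution is a theoretical characterization of how runs like this amplify the random features aligned with the XOR structure; the sketch only reproduces the experimental setting.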
- Research Article, March 2024
Risk bounds for positive-unlabeled learning under the selected at random assumption
The Journal of Machine Learning Research (JMLR), Volume 24, Issue 1, Article No.: 107, Pages 4853–4883
Abstract: Positive-Unlabeled learning (PU learning) is a special case of semi-supervised binary classification where only a fraction of positive examples is labeled. The challenge is then to find the correct classifier despite this lack of information. Recently, ...
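To illustrate the data regime: in PU learning the learner observes features plus a "labeled" flag s, where s = 1 only for some positives and never for negatives. Under a selected-at-random mechanism, each positive is labeled with a feature-dependent propensity score. A toy construction of such a dataset (the logistic propensity and all parameters are arbitrary illustrative choices, unrelated to the paper's bounds):

```python
import numpy as np

rng = np.random.default_rng(4)

# Fully labeled binary data (the ground truth y stays hidden from the learner).
n = 1000
x = rng.normal(size=n)
y = (x + rng.normal(scale=0.5, size=n) > 0).astype(int)

def propensity(x):
    """Feature-dependent probability that a positive gets labeled."""
    return 1.0 / (1.0 + np.exp(-x))

# Selected-at-random labeling: positives labeled with prob e(x),
# negatives never labeled.
s = (y == 1) & (rng.random(n) < propensity(x))

# A PU learner sees only (x, s), never y.
print(f"positives: {y.sum()}, labeled positives: {s.sum()}")
```

The "selected completely at random" special case makes the propensity a constant; the paper's setting allows it to vary with the features, which is what its risk bounds have to account for.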
- Short Paper, October 2022
LCD: Adaptive Label Correction for Denoising Music Recommendation
CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Pages 3903–3907. https://doi.org/10.1145/3511808.3557625
Abstract: Music recommendation is usually modeled as a Click-Through Rate (CTR) prediction problem, which estimates the probability of a user listening to a recommended song. CTR prediction can be formulated as a binary classification problem where the played songs ...
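The CTR formulation mentioned here reduces to binary classification: played songs are positives, skipped recommendations negatives, and the model outputs a listen probability. A minimal logistic-regression sketch on synthetic features (all data and names are illustrative; the paper's LCD label-correction mechanism is not shown):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy user/song feature vectors and play (1) vs. skip (0) outcomes.
X = rng.normal(size=(500, 8))
true_w = rng.normal(size=8)
y = (1 / (1 + np.exp(-X @ true_w)) > rng.random(500)).astype(float)

# Logistic regression by gradient descent: p(listen) = sigmoid(w . x).
w = np.zeros(8)
lr = 0.1
for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w))
    w -= lr * X.T @ (p - y) / len(y)   # gradient of the log loss

print("predicted CTR for first song:", 1 / (1 + np.exp(-X[0] @ w)))
```

The denoising problem arises because implicit feedback makes the 0/1 targets noisy (a skip does not always mean dislike), which is what the adaptive label correction addresses.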
- Research Article, October 2022
FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning
CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Pages 972–981. https://doi.org/10.1145/3511808.3557322
Abstract: Robustness is becoming another important challenge for federated learning, as the data collection process at each client is naturally accompanied by noisy labels. However, the problem is far more complex and challenging owing to varying levels of data ...