Research article
DOI: 10.1145/3664647.3680835

Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach

Published: 28 October 2024

Abstract

Label noise, an inevitable issue in various real-world datasets, tends to impair the performance of deep neural networks. A large body of literature focuses on symmetric co-training, aiming to enhance model robustness by exploiting interactions between models with distinct capabilities. However, the symmetric training processes employed in existing methods often culminate in model consensus, diminishing their efficacy in handling noisy labels. To this end, we propose an Asymmetric Co-Training (ACT) method to mitigate the detrimental effects of label noise. Specifically, we introduce an asymmetric training framework in which one model (i.e., RTM) is robustly trained with a selected subset of clean samples while the other (i.e., NTM) is conventionally trained using the entire training set. We propose two novel criteria based on agreement and discrepancy between models, establishing asymmetric sample selection and mining. Moreover, a metric, derived from the divergence between models, is devised to quantify label memorization, guiding our method in determining the optimal stopping point for sample mining. Finally, we propose to dynamically re-weight identified clean samples according to their reliability inferred from historical information. We additionally employ consistency regularization to achieve further performance improvement. Extensive experimental results on synthetic and real-world datasets demonstrate the effectiveness and superiority of our method.
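The asymmetric selection pipeline described above can be sketched in a few lines. Note that everything below is an illustrative reconstruction from the abstract alone: the function names, the use of argmax agreement with the given label as the "agreement" criterion, KL divergence as the inter-model divergence metric, and an exponential moving average over historical clean flags as the reliability weight are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions (assumed divergence)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def select_clean(rtm_probs, ntm_probs, labels):
    """Asymmetric selection sketch.

    Agreement criterion (assumed form): a sample is treated as clean when
    both the RTM and the NTM predict its given label.
    Discrepancy criterion (assumed form): samples where the two models
    disagree with each other are flagged as candidates for mining.
    """
    rtm_pred = rtm_probs.argmax(axis=1)
    ntm_pred = ntm_probs.argmax(axis=1)
    clean = (rtm_pred == labels) & (ntm_pred == labels)
    mined = rtm_pred != ntm_pred
    return clean, mined

def memorization_metric(rtm_probs, ntm_probs):
    """Mean per-sample divergence between the two models' predictions.

    A rising value would indicate the NTM diverging from the robustly
    trained RTM, i.e., beginning to memorize noisy labels; sample mining
    would stop once this metric peaks or starts to grow.
    """
    return float(np.mean([kl_divergence(q, p)
                          for p, q in zip(rtm_probs, ntm_probs)]))

def reliability_weights(clean_history, decay=0.9):
    """Dynamic re-weighting sketch: an exponential moving average of the
    per-sample clean flags across epochs, so samples that were consistently
    selected as clean receive larger weights."""
    w = np.zeros_like(clean_history[0], dtype=float)
    for flags in clean_history:
        w = decay * w + (1 - decay) * flags
    return w
```

In a full training loop, the RTM would be updated only on the `clean` subset (weighted by `reliability_weights`), while the NTM trains on the entire noisy set, preserving the asymmetry that prevents the two models from collapsing to consensus.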



    Published In

    MM '24: Proceedings of the 32nd ACM International Conference on Multimedia
    October 2024
    11719 pages
    ISBN: 9798400706868
    DOI: 10.1145/3664647

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. asymmetric co-training
    2. noisy labels
    3. sample selection


    Conference

    MM '24: The 32nd ACM International Conference on Multimedia
    October 28 - November 1, 2024
    Melbourne VIC, Australia

    Acceptance Rates

    MM '24 Paper Acceptance Rate 1,150 of 4,385 submissions, 26%;
    Overall Acceptance Rate 2,145 of 8,556 submissions, 25%
