- invited-talk, August 2024
Lessons Learned while Running ML Models in Harsh Environments
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Page 4734, https://doi.org/10.1145/3637528.3672499
A very large payment processor client once told us: 'if we are down for 5 minutes, we open the evening news - so don't screw up'. Processing billions of dollars per day, many financial institutions need to continuously fight organized crime in the form ...
- research-article, August 2024
Hypformer: Exploring Efficient Transformer Fully in Hyperbolic Space
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3770–3781, https://doi.org/10.1145/3637528.3672039
Hyperbolic geometry has shown significant potential in modeling complex structured data, particularly those with underlying tree-like and hierarchical structures. Despite the impressive performance of various hyperbolic neural networks across numerous ...
- research-article, August 2024
BitLINK: Temporal Linkage of Address Clusters in Bitcoin Blockchain
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 4583–4594, https://doi.org/10.1145/3637528.3672037
In the Bitcoin blockchain, an entity (e.g., a gambling service) may control multiple distinct address clusters. Links (i.e., trust relationships) between these disjoint address clusters can be established when one cluster is abandoned, and a new one is ...
- research-article, August 2024
Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 4059–4070, https://doi.org/10.1145/3637528.3672013
The public sharing of user information opens the door for adversaries to infer private data, leading to privacy breaches and facilitating malicious activities. While numerous studies have concentrated on privacy leakage via public user attributes, the ...
- research-article, August 2024
Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3001–3012, https://doi.org/10.1145/3637528.3672009
Prior research on neural architecture search (NAS) for adversarial robustness has revealed that a lightweight and adversarially robust sub-network could exist in a non-robust large teacher network. Such a sub-network is generally discovered based on ...
- research-article, August 2024
ReCDA: Concept Drift Adaptation with Representation Enhancement for Network Intrusion Detection
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3818–3828, https://doi.org/10.1145/3637528.3672007
The deployment of learning-based models to detect malicious activities in network traffic flows is significantly challenged by concept drift. With evolving attack technology and dynamic attack behaviors, the underlying data distribution of recently ...
- research-article, August 2024
Using Self-supervised Learning Can Improve Model Fairness
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3942–3953, https://doi.org/10.1145/3637528.3671991
Self-supervised learning (SSL) has become the de facto training paradigm of large models, where pre-training is followed by supervised fine-tuning using domain-specific data and labels. Despite demonstrating comparable performance with supervised methods, ...
- research-article, August 2024
FLAIM: AIM-based Synthetic Data Generation in the Federated Setting
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 2165–2176, https://doi.org/10.1145/3637528.3671990
Preserving individual privacy while enabling collaborative data sharing is crucial for organizations. Synthetic data generation is one solution, producing artificial data that mirrors the statistical properties of private data. While numerous techniques ...
- research-article, August 2024
Cross-Context Backdoor Attacks against Graph Prompt Learning
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 2094–2105, https://doi.org/10.1145/3637528.3671956
Graph Prompt Learning (GPL) bridges significant disparities between pretraining and downstream applications to alleviate the knowledge transfer bottleneck in real-world graph learning. While GPL offers superior effectiveness in graph knowledge transfer ...
- research-article, August 2024
Money Never Sleeps: Maximizing Liquidity Mining Yields in Decentralized Finance
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 2248–2259, https://doi.org/10.1145/3637528.3671942
The popularity of decentralized finance has drawn attention to liquidity mining (LM). In LM, a user deposits her cryptocurrencies into liquidity pools to provide liquidity for exchanges and earn yields. Different liquidity pools offer varying yields and ...
- research-article, August 2024
Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 4386–4397, https://doi.org/10.1145/3637528.3671910
Graph Neural Networks (GNNs) have shown remarkable performance in various tasks. However, recent works reveal that GNNs are vulnerable to backdoor attacks. Generally, a backdoor attack poisons the graph by attaching backdoor triggers and the target class ...
- research-article, August 2024
FedRoLA: Robust Federated Learning Against Model Poisoning via Layer-based Aggregation
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3667–3678, https://doi.org/10.1145/3637528.3671906
Federated Learning (FL) is increasingly vulnerable to model poisoning attacks, where malicious clients degrade the global model's accuracy with manipulated updates. Unfortunately, most existing defenses struggle to handle the scenarios when multiple ...
- research-article, August 2024
FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3484–3494, https://doi.org/10.1145/3637528.3671899
Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server. However, existing FL methods still face challenges when dealing with scarce and label-...
- research-article, August 2024
Evading Community Detection via Counterfactual Neighborhood Search
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 131–140, https://doi.org/10.1145/3637528.3671896
Community detection techniques are useful for social media platforms to discover tightly connected groups of users who share common interests. However, this functionality often comes at the expense of potentially exposing individuals to privacy breaches ...
- research-article, August 2024
Attacking Graph Neural Networks with Bit Flips: Weisfeiler and Leman Go Indifferent
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 1428–1439, https://doi.org/10.1145/3637528.3671890
Prior attacks on graph neural networks have focused on graph poisoning and evasion, neglecting the network's weights and biases. For convolutional neural networks, however, the risk arising from bit flip attacks is well recognized. We show that the ...
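As a purely illustrative sketch of why bit flips in stored parameters matter (this is not the paper's attack on GNNs; the helper and values below are assumptions for illustration), flipping a single high exponent bit of an IEEE-754 float32 weight can change it by dozens of orders of magnitude, while a low mantissa bit barely changes it:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value` (illustrative only)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    return struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))[0]

w = 0.125                   # a typical small network weight
print(flip_bit(w, 30))      # high exponent bit flipped: 0.125 -> roughly 4.3e+37
print(flip_bit(w, 0))       # lowest mantissa bit flipped: value is nearly unchanged
```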
- research-article, August 2024
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 1944–1955, https://doi.org/10.1145/3637528.3671879
Federated Learning (FL) is susceptible to poisoning attacks, wherein compromised clients manipulate the global model by modifying local datasets or sending manipulated model updates. Experienced defenders can readily detect and mitigate the poisoning ...
- research-article, August 2024
SEBot: Structural Entropy Guided Multi-View Contrastive Learning for Social Bot Detection
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3841–3852, https://doi.org/10.1145/3637528.3671871
Recent advancements in social bot detection have been driven by the adoption of Graph Neural Networks. The social graph, constructed from social network interactions, contains benign and bot accounts that influence each other. However, previous graph-...
- research-article, August 2024
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 2596–2607, https://doi.org/10.1145/3637528.3671852
Deep visual graph matching (GM) is a challenging combinatorial task that involves finding a permutation matrix that indicates the correspondence between keypoints from a pair of images. Like many learning systems, empirical studies have shown that visual ...
- research-article, August 2024
Compact Decomposition of Irregular Tensors for Data Compression: From Sparse to Dense to High-Order Tensors
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 1451–1462, https://doi.org/10.1145/3637528.3671846
An irregular tensor is a collection of matrices with different numbers of rows. Real-world data from diverse domains, including medical and stock data, are effectively represented as irregular tensors due to the inherent variations in data length. For ...
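As a minimal, hypothetical illustration of the data structure defined above (the shapes and variable names are assumptions, not taken from the paper), an irregular tensor can simply be held as a list of matrices that share a column dimension but differ in row count:

```python
import numpy as np

# Hypothetical example: three slices (e.g., patients or stocks) observed for
# different numbers of time steps, all sharing the same feature dimension (4).
irregular_tensor = [
    np.random.rand(12, 4),   # slice 1: 12 time steps
    np.random.rand(30, 4),   # slice 2: 30 time steps
    np.random.rand(7, 4),    # slice 3: 7 time steps
]
print([m.shape for m in irregular_tensor])  # [(12, 4), (30, 4), (7, 4)]
```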
- research-article, August 2024
CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 2284–2295, https://doi.org/10.1145/3637528.3671837
Recently, Large Language Model (LLM)-empowered recommender systems (RecSys) have brought significant advances in personalized user experience and have attracted considerable attention. Despite the impressive progress, the research question regarding the ...