In situ neighborhood sampling for large-scale GNN training
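As background for the technique named in the title: neighborhood sampling bounds the cost of mini-batch GNN training by expanding only a sampled fan-out of neighbors per layer, rather than the full k-hop neighborhood of each seed node. The sketch below is a generic uniform layer-wise sampler for illustration only; it is not the in situ variant this paper proposes, and every name in it (`sample_neighborhood`, `fanouts`, the toy adjacency-list graph) is hypothetical.

```python
import random

# Toy graph as adjacency lists: node -> list of neighbors.
# Illustrative only; real systems store CSR arrays on disk or in device memory.
graph = {
    0: [1, 2, 3],
    1: [0, 2],
    2: [0, 1, 3, 4],
    3: [0, 2],
    4: [2],
}

def sample_neighborhood(graph, seeds, fanouts):
    """Uniform layer-wise neighbor sampling.

    Starting from the mini-batch `seeds`, sample at most `fanouts[i]`
    neighbors per node at hop i, returning the frontier of each hop.
    This caps the k-hop neighborhood explosion that makes full-graph
    GNN training expensive.
    """
    frontiers = [list(seeds)]
    current = set(seeds)
    for fanout in fanouts:
        next_frontier = set()
        for node in current:
            neighbors = graph.get(node, [])
            k = min(fanout, len(neighbors))
            next_frontier.update(random.sample(neighbors, k))
        frontiers.append(sorted(next_frontier))
        current = next_frontier
    return frontiers

# One mini-batch: seed nodes {0, 4}, fan-out 2 at each hop for a 2-layer GNN.
print(sample_neighborhood(graph, [0, 4], fanouts=[2, 2]))
```

With `fanouts=[2, 2]`, each node contributes at most two sampled neighbors per hop, so the mini-batch working set grows with the fan-out rather than with full neighborhood sizes. The "in situ" approach in the title presumably performs this sampling step where the graph data resides rather than in a separate host-side stage, which this sketch does not model.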
Recommendations
HongTu: Scalable Full-Graph GNN Training on Multiple GPUs
PACMMOD: Full-graph training of graph neural networks (GNNs) has emerged as a promising training method for its effectiveness. Full-graph training requires extensive memory and computation resources. To accelerate this training process, researchers have proposed ...
ADGNN: Towards Scalable GNN Training with Aggregation-Difference Aware Sampling
PACMMOD: Distributed computing is a promising way to enable large-scale graph neural network (GNN) model training. However, care is needed to avoid excessive computational and communication overheads. Sampling is promising in terms of enabling scalability, and sampling ...
A Unified CPU-GPU Protocol for GNN Training
CF '24: Proceedings of the 21st ACM International Conference on Computing Frontiers. Training a Graph Neural Network (GNN) model on large-scale graphs involves a high volume of data communication and computation. While state-of-the-art CPUs and GPUs feature high computing power, the standard GNN training protocol adopted in existing GNN ...
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Short-paper
- Research
- Refereed limited
Article Metrics
- Total Citations: 0
- Total Downloads: 360
- Downloads (last 12 months): 360
- Downloads (last 6 weeks): 54