GNNear: Accelerating Full-Batch Training of Graph Neural Networks with Near-Memory Processing
Published in International Conference on Parallel Architectures and Compilation Techniques (PACT), 2022
In this paper we propose GNNear, a hybrid accelerator architecture that leverages near-memory processing to accelerate full-batch GNN training on large graphs. GNNear matches the heterogeneous nature of GNN training by offloading the memory-intensive Reduce operations to in-DIMM Near-Memory Engines (NMEs) and processing the computation-intensive Update operations on a Centralized Acceleration Engine (CAE). To deal with the irregularity of graphs, we also propose several optimization strategies covering data reuse, graph partitioning, and dataflow scheduling. Evaluations on 16 tasks demonstrate that GNNear achieves 30.8× / 2.5× geomean speedup and 79.6× / 7.3× geomean higher energy efficiency compared to a Xeon E5-2698-v4 CPU and a V100 GPU, respectively.
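The sketch below illustrates the Reduce/Update decomposition of a single GNN layer that the abstract refers to; it is a minimal NumPy example (not the GNNear implementation), assuming a GCN-style layer with sum aggregation. The irregular, memory-bound Reduce step is the kind of work GNNear offloads to the in-DIMM NMEs, while the dense, compute-bound Update step maps to the CAE.

```python
# Minimal sketch of the Reduce/Update split in one GNN layer (NumPy only).
# Names and the toy graph below are illustrative, not from the paper.
import numpy as np

def reduce_step(neighbors, features):
    """Reduce: sum each vertex's neighbor features.

    Irregular, memory-bound gather traffic -- the part offloaded to the NMEs.
    """
    num_nodes, dim = features.shape
    aggregated = np.zeros((num_nodes, dim), dtype=features.dtype)
    for v, nbrs in enumerate(neighbors):
        if nbrs:
            aggregated[v] = features[nbrs].sum(axis=0)
    return aggregated

def update_step(aggregated, weight):
    """Update: dense transformation followed by a ReLU.

    Regular, compute-bound GEMM -- the part handled by the centralized CAE.
    """
    return np.maximum(aggregated @ weight, 0.0)

# Toy usage: 4 vertices with 8-dim features, adjacency given as neighbor lists.
neighbors = [[1, 2], [0], [0, 3], [2]]
h = np.random.rand(4, 8).astype(np.float32)
W = np.random.rand(8, 16).astype(np.float32)
h_next = update_step(reduce_step(neighbors, h), W)
print(h_next.shape)  # (4, 16)
```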
@inproceedings{zhou2022gnnear,
title={GNNear: Accelerating Full-Batch Training of Graph Neural Networks with Near-Memory Processing},
author={Zhou, Zhe and Li, Cong and Wei, Xuechao and Wang, Xiaoyang and Sun, Guangyu},
booktitle={Proceedings of the International Conference on Parallel Architectures and Compilation Techniques},
pages={54--68},
year={2022}
}