Paper Title
OpSparse: a Highly Optimized Framework for Sparse General Matrix Multiplication on GPUs
Paper Authors
Paper Abstract
Sparse general matrix multiplication (SpGEMM) is an important and expensive computation primitive in many real-world applications. Due to SpGEMM's inherent irregularity and the vast diversity of its input matrices, developing high-performance SpGEMM implementations on modern processors such as GPUs is challenging. The state-of-the-art SpGEMM libraries (i.e., $nsparse$ and $spECK$) adopt several algorithms to tackle the challenges of global load balance, local load balance, and allocation of the result matrix. While these libraries focus on the high-level algorithm design for SpGEMM, they neglect several low-level architecture-specific optimizations, which leads to inefficiencies in their implementations. In this paper, we classify their inefficient implementations into seven categories. Based on our observations, we propose a highly optimized SpGEMM library called $OpSparse$. The optimizations in $OpSparse$ include 1) optimizing the binning method by improving the utilization of the shared memory, 2) optimizing the hashing method by reducing the accesses to the hash table, 3) improving the trade-off between hash collision rate and hardware utilization in the hashing method by setting appropriate binning ranges, 4) reducing the overheads of global memory utilization by minimizing the global memory usage of the metadata, and 5) improving the execution parallelism by overlapping global memory allocation with kernel execution. Performance evaluations with 26 commonly used matrices on an Nvidia Tesla V100 GPU show that $OpSparse$ achieves up to $27.8\times$, $1.81\times$, and $2.04\times$ speedups over three state-of-the-art libraries: $cuSPARSE$, $nsparse$, and $spECK$, respectively.
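To make the binning and hashing concepts in the abstract concrete, below is a minimal serial Python sketch of the underlying technique: row-wise SpGEMM (Gustavson's algorithm) with a per-row hash table for accumulation, plus a simplified binning pass that groups rows by their intermediate-product count. This is an illustration of the general method only, not OpSparse's CUDA implementation; the dict-of-rows matrix format, the `bin_rows` helper, and the bin thresholds are all hypothetical choices for this sketch.

```python
def spgemm_row_hash(A, B):
    """Multiply two sparse matrices given as dicts: row index -> list of (col, val).

    For each row i of A, the partial products A[i][k] * B[k][:] are
    accumulated into a per-row hash table keyed by column index. This
    mirrors (on the CPU, serially) the shared-memory hash tables that
    GPU SpGEMM libraries such as nsparse, spECK, and OpSparse use.
    """
    C = {}
    for i, row_a in A.items():
        acc = {}  # per-row hash table: column index -> accumulated value
        for k, a_val in row_a:
            for j, b_val in B.get(k, []):
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        if acc:
            C[i] = sorted(acc.items())  # emit the row in column order
    return C


def bin_rows(A, B, thresholds=(16, 256, 4096)):
    """Simplified binning pass (thresholds are illustrative, not OpSparse's).

    Each row of A is assigned to a bin by an upper bound on its number of
    intermediate products, so that each bin can later be processed by a
    kernel sized for its workload (small bins fit hash tables in shared
    memory; oversized rows fall back to global memory).
    """
    bins = {t: [] for t in thresholds}
    bins['large'] = []
    for i, row_a in A.items():
        nprod = sum(len(B.get(k, [])) for k, _ in row_a)
        for t in thresholds:
            if nprod <= t:
                bins[t].append(i)
                break
        else:
            bins['large'].append(i)
    return bins


# Usage: C = A * B for two tiny sparse matrices.
A = {0: [(0, 1.0), (1, 2.0)], 1: [(1, 3.0)]}
B = {0: [(0, 4.0)], 1: [(0, 1.0), (2, 5.0)]}
C = spgemm_row_hash(A, B)
# C == {0: [(0, 6.0), (2, 10.0)], 1: [(0, 3.0), (2, 15.0)]}
```

On a GPU, each bin is launched with a kernel configuration matched to its hash-table size; the trade-off the abstract mentions (optimization 3) is that wider binning ranges waste shared memory and hardware utilization, while narrower ones raise the hash collision rate.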