Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

Title: Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster
Publication Type: Journal Article
Year of Publication: 2009
Authors: Allada V, Benjegerdes T, Bode B
Journal Title: 2009 IEEE International Conference on Cluster Computing and Workshops
Pages: 654-662
Accession Number: ISI:000275023800082
Keywords: CUBLAS, CUDA, GPU cluster, graphics, math kernel library, NetPIPE, performance, Tesla
Abstract

Commodity clusters augmented with application accelerators are evolving into competitive high performance computing systems. The Graphics Processing Unit (GPU), with its very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementations of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, we study the performance of the memory copies and GEMM subroutines that are crucial to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE [1] framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results have been compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.

URL<Go to ISI>://000275023800082
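
The measurement methodology summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' benchmark: it times a single host-to-device copy with CUDA events and issues one single-precision GEMM through CUBLAS. The transfer size, matrix dimension, and use of the modern `cublas_v2` interface are assumptions for the sketch (the paper evaluated the CUBLAS 2.0 library of its era, and NetPIPE-style benchmarks sweep many message sizes rather than one).

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    // --- Host-to-device copy bandwidth (one NetPIPE-style sample) ---
    const size_t bytes = 64 << 20;                  // 64 MiB, illustrative size
    float *h = (float *)malloc(bytes);              // pageable host buffer;
    float *d;                                       // cudaMallocHost() would give
    cudaMalloc(&d, bytes);                          // pinned memory, usually faster

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms;
    cudaEventElapsedTime(&ms, t0, t1);              // elapsed time in milliseconds
    printf("H2D bandwidth: %.2f GB/s\n", bytes / ms / 1e6);

    // --- Single-precision GEMM: C = alpha*A*B + beta*C on the device ---
    const int n = 1024;                             // illustrative matrix order
    float *dA, *dB, *dC, alpha = 1.0f, beta = 0.0f;
    cudaMalloc(&dA, (size_t)n * n * sizeof(float));
    cudaMalloc(&dB, (size_t)n * n * sizeof(float));
    cudaMalloc(&dC, (size_t)n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);
    cudaDeviceSynchronize();                        // wait before timing/reading C

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC); cudaFree(d);
    free(h);
    return 0;
}
```

Dividing 2*n^3 floating-point operations by the measured GEMM time gives the GFLOP/s figure typically reported in such comparisons; the equivalent MKL measurement uses `sgemm`/`dgemm` on the host with the same operand shapes.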