R – GPU Programming for All with ‘gpuR’
GPUs (Graphics Processing Units) have become much more popular in recent years for computationally intensive calculations. Despite these gains, the use of this hardware has been very limited in the R programming language. Although possible, the prospect of programming in either OpenCL or CUDA is difficult for many programmers unaccustomed to working with such a low-level interface. Creating bindings for R's high-level programming that abstract away the complex GPU code would make GPUs far more accessible to R users. This is the core idea behind the gpuR package. There are three novel aspects of gpuR, each described below: an OpenCL backend that works on any GPU, S4 classes that let users keep programming in R as normal, and companion classes that keep data resident in GPU memory.
The ‘gpuR’ package was created to bring the power of GPU computing to any R user with a GPU device. Although there are a handful of packages that provide some GPU capability (e.g.gputools, cudaBayesreg, HiPLARM, HiPLARb, and gmatrix) all are strictly limited to NVIDIA GPUs. As such, a backend that is based upon OpenCL would allow all users to benefit from GPU hardware. The ‘gpuR’ package therefore utilizes the ViennaCL linear algebra library which contains auto-tuned OpenCL kernels (among others) that can be leveraged for GPUs. The headers have been conveniently repackaged in the RViennaCL package. It also allows for a CUDA backend for those with NVIDIA GPUs that may see further improved performance (contained within the companion gpuRcuda package not yet formally released).
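As a quick orientation, gpuR also provides helpers to inspect the available OpenCL hardware. The snippet below is a minimal sketch; detectGPUs() and gpuInfo() are recalled from the gpuR documentation, so treat the exact names and output format as assumptions:

    library(gpuR)

    # Number of OpenCL-visible GPU devices (assumed gpuR helper)
    detectGPUs()

    # Details (name, memory, compute units) for the current device (assumed helper)
    gpuInfo()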
The gpuR package uses the S4 object oriented system to provide explicit classes and methods that allow the user to simply cast their matrix or vector and continue programming in R as normal. For example:
    ORDER = 1024

    A = matrix(rnorm(ORDER^2), nrow=ORDER)
    B = matrix(rnorm(ORDER^2), nrow=ORDER)

    gpuA = gpuMatrix(A, type="double")
    gpuB = gpuMatrix(B, type="double")

    C = A %*% B
    gpuC = gpuA %*% gpuB

    all(C == gpuC[])
    [1] TRUE
The gpuMatrix object points to a matrix in RAM which is then computed by the GPU when a desired function is called. This avoids R's habit of copying the memory of objects. For example:
    library(pryr)

    # Initially points to same object
    x = matrix(rnorm(16), 4)
    y = x

    address(x)
    [1] "0x16177f28"

    address(y)
    [1] "0x16177f28"

    # But once you modify the second object it creates a copy
    y[1,1] = 0

    address(x)
    [1] "0x16177f28"

    address(y)
    [1] "0x15fbb1d8"
In contrast, the same syntax for a gpuMatrix will modify the original object in-place without any copy.
    library(pryr)
    library(gpuR)

    # Initially points to same object
    x = gpuMatrix(rnorm(16), 4, 4)
    y = x

    x@address
    [1] <pointer: 0x6baa040>

    y@address
    [1] <pointer: 0x6baa040>

    # Modification affects both objects without copy
    y[1,1] = 0

    x@address
    [1] <pointer: 0x6baa040>

    y@address
    [1] <pointer: 0x6baa040>
Each new variable assigned to this object will only copy the pointer, thereby making the program more memory efficient. However, the gpuMatrix class does still require allocating GPU memory and copying data to the device for each function call. The most commonly used methods have been overloaded, such as %*%, +, -, *, /, crossprod, tcrossprod, and trig functions among others. In this way, an R user can create these objects and leverage GPU resources without needing to learn a new set of functions that would break existing algorithms. A sketch of these overloads is shown below.
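This is a minimal sketch of the overloaded methods in action, assuming a working OpenCL device; the specific element-wise math functions available (e.g. sin()) may vary by gpuR version, so take them as illustrative:

    library(gpuR)

    gpuA = gpuMatrix(rnorm(64), 8, 8, type = "double")
    gpuB = gpuMatrix(rnorm(64), 8, 8, type = "double")

    gpuC = gpuA + gpuB             # element-wise addition on the GPU
    gpuD = gpuA * gpuB             # element-wise multiplication
    gpuE = crossprod(gpuA, gpuB)   # t(gpuA) %*% gpuB without an explicit transpose
    gpuF = sin(gpuA)               # an overloaded trig function (illustrative)

    # The indexing operator copies a result back into an ordinary R matrix
    resultMatrix = gpuE[]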
For the gpuMatrix and gpuVector classes there are companion vclMatrix and vclVector classes that point to objects that persist in the GPU RAM. In this way, the user explicitly decides when data needs to be moved back to the host. By avoiding unnecessary data transfers between host and device, performance can improve significantly. For example:
    vclA = vclMatrix(rnorm(10000), nrow = 100)
    vclB = vclMatrix(rnorm(10000), nrow = 100)
    vclC = vclMatrix(rnorm(10000), nrow = 100)

    # GEMM
    vclD = vclA %*% vclB

    # Element-wise addition
    vclD = vclD + vclC
In this code, the three initial matrices already exist in GPU memory, so no data transfer takes place in the GEMM call. Furthermore, the returned matrix remains in GPU memory: the vclD object lives in GPU RAM, so the subsequent element-wise addition also happens directly on the GPU with no data transfers. It is also worth noting that the user can still modify elements, rows, or columns with the exact same syntax as a normal R matrix.
    vclD[1,1] = 42
    vclD[,2] = rep(12, 100)
    vclD[3,] = rep(23, 100)
These operations simply copy the new elements to the GPU and modify the object in-place within the GPU memory. The ‘vclD’ object is never copied to the host.
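When a result is finally needed on the host, the indexing operator shown earlier performs the explicit copy back. A minimal sketch, continuing the example above:

    # Data returns to host memory only when explicitly requested
    hostD = vclD[]   # copies the GPU-resident matrix back into an ordinary R matrix
    class(hostD)
    [1] "matrix"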
With all that in mind, how does gpuR perform? Here are some general benchmarks of the popular GEMM operation. I currently only have access to a single NVIDIA GeForce GTX 970 for these simulations. Users should expect to see differences with high-performance GPUs (e.g. AMD FirePro, NVIDIA Tesla, etc.). Speedup relative to the CPU will also vary depending upon user hardware.
R is known to support only two numeric types (integer and double). As such, Figure 1 shows the fold speedup achieved by using the gpuMatrix and vclMatrix classes. Since R is already known to not be the fastest language, an implementation with the OpenBLAS backend is included as well for reference, using a 4-core Intel i5-2500 CPU @ 3.30GHz. As can be seen, there is a dramatic speedup from just using OpenBLAS or the gpuMatrix class (essentially equivalent). Of interest is the impact of the host-device-host transfer time that is typical in many GPU implementations. This cost is eliminated by using the vclMatrix class, which continues to scale with matrix size.
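For readers who want a rough feel for this comparison on their own hardware, here is a hypothetical timing sketch (not the benchmark harness used for the figures); it simply times one GEMM per class with system.time():

    library(gpuR)

    ORDER = 2048
    A = matrix(rnorm(ORDER^2), nrow = ORDER)
    B = matrix(rnorm(ORDER^2), nrow = ORDER)

    gpuA = gpuMatrix(A, type = "double")
    gpuB = gpuMatrix(B, type = "double")
    vclA = vclMatrix(A, type = "double")
    vclB = vclMatrix(B, type = "double")

    system.time(A %*% B)        # CPU BLAS reference
    system.time(gpuA %*% gpuB)  # pays host-device-host transfer on each call
    system.time(vclA %*% vclB)  # operands already resident in GPU memory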
In many GPU benchmarks, float operations are often measured as well. As noted above, R does not provide a float type by default. One way to get around this is to use the RcppArmadillo package and explicitly cast R objects as float types. The Armadillo library will also default to using the BLAS backend provided (i.e. OpenBLAS). Figure 2 shows the impact of using float data types. OpenBLAS continues to provide a noticeable speedup, but gpuMatrix begins to outperform it once the matrix order exceeds 1500. The vclMatrix class continues to demonstrate the value of retaining objects in GPU memory and avoiding memory transfers.
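On the gpuR side, single precision is requested through the constructor's type argument, mirroring the "double" examples above. A minimal sketch:

    # Single-precision (float) storage and computation on the GPU
    A = matrix(rnorm(1024^2), nrow = 1024)
    gpuAf = gpuMatrix(A, type = "float")
    vclAf = vclMatrix(A, type = "float")
    gpuCf = gpuAf %*% gpuAf   # GEMM carried out in single precision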
An additional view of the performance achieved by gpuMatrix and vclMatrix is a direct comparison against OpenBLAS. The gpuMatrix class reaches a ~2-3 fold speedup over OpenBLAS, whereas vclMatrix scales to over a 100-fold speedup! It is curious why the performance with vclMatrix is so much faster, given the two classes differ only in host-device-host transfers. Further optimization of gpuMatrix will need to be explored (fresh eyes are welcome), accepting the limitations of bus transfer speed. Performance will certainly improve with improved hardware capabilities such as NVIDIA's NVLink.
The gpuR package has been created to bring GPU computing to as many R users as possible. The intention is to use gpuR to more easily supplement current and future algorithms that could benefit from GPU acceleration. The gpuR package is currently available on CRAN. The development version can be found on my github, in addition to existing issues and wiki pages (assisting primarily with installation). Future developments include solvers (e.g. QR, SVD, Cholesky, etc.), scaling across multiple GPUs, 'sparse' class objects, and custom OpenCL kernels.
As noted above, this package is intended to be used with a multitude of hardware and operating systems (it has been tested on Windows, Mac, and multiple Linux flavors). I only have access to a limited set of hardware (I can't access every GPU, let alone the most expensive). As such, the development of gpuR depends upon the R user community. Volunteers who possess different hardware are always welcomed and encouraged to submit issues regarding any discovered bugs. I have started a gitter account for users to report on successful usage with alternate hardware. Suggestions and general conversation about gpuR are welcome.
Source: http://www.parallelr.com/r-gpu-programming-for-all-with-gpur/