Source: https://developer.nvidia.com/mvapich
MVAPICH2 is an open source implementation of Message Passing Interface (MPI) that delivers the best performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RoCE networking technologies. MVAPICH2 simplifies the task of porting MPI applications to run on clusters with NVIDIA GPUs by supporting standard MPI calls from GPU device memory. It optimizes the data movement between host and GPU, and between GPUs in the best way possible while requiring minimal or no effort from the application developer.
In short, MVAPICH2 is an open-source MPI implementation known for the excellent scalability and fault tolerance it delivers to high-performance clusters built on InfiniBand, 10GigE/iWARP, and RoCE networks, and for letting applications pass GPU device memory directly to standard MPI calls, with data movement between host and GPU, and between GPUs, handled by the library with little or no extra effort from the developer.
Among its GPU-related features is high-performance, RDMA-based inter-node MPI point-to-point communication from/to GPU device memory (GPU-GPU, GPU-Host and Host-GPU).
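To make this concrete, here is a minimal sketch (not from the original post) of what "standard MPI calls from GPU device memory" looks like in practice: two ranks exchange a buffer that lives in GPU memory by passing the cudaMalloc'd pointer directly to MPI_Send/MPI_Recv. The file name, buffer size, and message tags are invented for illustration, and the code assumes a CUDA-aware MPI library such as MVAPICH2 1.8 or later built with CUDA support.

```c
/* cuda_aware_pingpong.cu -- illustrative sketch only (names and sizes are made up).
 * Assumes a CUDA-aware MPI implementation (e.g. MVAPICH2 >= 1.8 built with CUDA
 * support), so device pointers may be passed directly to MPI calls. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "Run with exactly 2 ranks.\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Bind each rank to a GPU (simple round-robin; real codes usually
     * select by local rank on the node). */
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > 0) cudaSetDevice(rank % ndev);

    const int count = 1 << 20;                      /* ~1M doubles */
    double *d_buf = NULL;
    cudaMalloc((void **)&d_buf, count * sizeof(double));
    cudaMemset(d_buf, 0, count * sizeof(double));

    /* The device pointer goes straight into MPI_Send/MPI_Recv; the MPI
     * library handles host/GPU staging and inter-node (e.g. RDMA) movement. */
    if (rank == 0) {
        MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(d_buf, count, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(d_buf, count, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    }

    if (rank == 0) printf("GPU-to-GPU ping-pong of %d doubles completed.\n", count);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With MVAPICH2, the GPU path is enabled when the library is built with its CUDA option and, in the releases discussed here, turned on at run time with the MV2_USE_CUDA environment variable; check the MVAPICH2 user guide for the exact build and run-time settings for your version.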
The latest performance results using MVAPICH2 for MPI communication from/to/between GPU devices can be found on the OSU Microbenchmark page for GPUs.
The latest version of MVAPICH2 can be downloaded from: http://mvapich.cse.ohio-state.edu/download/mvapich2/
NVIDIA GPU-related features are available in MVAPICH2 releases starting from 1.8.
A full list of MVAPICH2 features is available at: http://mvapich.cse.ohio-state.edu/overview/mvapich2/features.shtml
Original post: http://www.cnblogs.com/jianyingzhou/p/3989952.html