
The Improvement of Reduction Summation on GPU Using CUDA

   We can never be satisfied with a program that merely runs correctly. The reduction-summation program described in the previous blog post needs to be optimized.

1. Make the best use of the hardware, and do not forget the CPU!

   During the second part of the reduction summation, the amount of data left to process has been greatly reduced by the time the second kernel runs: it equals the number of threads per block. Given the architectural differences between CPUs and GPUs, CPUs are designed for running a small number of potentially quite complex tasks, while GPUs are designed for running a large number of potentially quite simple tasks. When only a small amount of data remains, don't forget the CPU, which can finish such a small job faster than launching another kernel on the GPU.

   We can delete the second kernel function, copy the per-block partial sums back to the host, and add them up on the CPU:

// Copy one partial sum per block back to the host.
cudaMemcpy(a, dev_a, BlockPerGrid * sizeof(int), cudaMemcpyDeviceToHost);

// Finish the reduction on the CPU.
int c = 0;
for (int i = 0; i < BlockPerGrid; i++)
{
    c += a[i];
}
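
   For context, here is a minimal sketch of the first kernel that produces those per-block partial sums. The names (reducePartial, dev_in, dev_a) and the fixed block size of 256 are assumptions for illustration, not the exact code from the previous post: each block reduces its slice of the input into shared memory, and thread 0 writes one partial sum.

#define ThreadPerBlock 256   // assumed block size; see section 2 below

__global__ void reducePartial(const int *in, int *partial, int n)
{
    __shared__ int cache[ThreadPerBlock];

    // Each thread accumulates a private sum over a grid-stride loop.
    int sum = 0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += blockDim.x * gridDim.x)
        sum += in[i];
    cache[threadIdx.x] = sum;
    __syncthreads();

    // Tree reduction in shared memory (blockDim.x must be a power of two).
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2)
    {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }

    // Thread 0 writes this block's partial sum.
    if (threadIdx.x == 0)
        partial[blockIdx.x] = cache[0];
}

// Launch, then run the host-side loop shown above:
// reducePartial<<<BlockPerGrid, ThreadPerBlock>>>(dev_in, dev_a, N);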

2. The appropriate number of threads per block: more is not always better.

   As we all know, if there are too few threads, the GPU cannot hide memory latency with other ready work. Therefore we had better not choose too few threads per block.

   However, the thread count also matters when there are synchronization points in the kernel; for threads per block, more is not always better.

   The time to execute a given block is undefined, and a block cannot be retired from an SM until it has completed its entire execution. Sometimes all the other warps sit idle waiting for a single slow warp to finish, leaving the SM idle as well.

   It follows that the larger the thread block, the greater the chance of waiting for a slow warp to catch up. As a general rule, a value of 256 gives you 100% occupancy across all levels of the hardware, so we had better aim for either 192 or 256. Alternatively, you can look up the occupancy table and select the smallest number of threads that gives the highest device utilization. A possible launch configuration is sketched below.
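
   As a concrete illustration (N, dev_in, dev_a, and reducePartial are the same assumed names as in section 1), the host picks 256 threads per block and derives the grid size from the input length:

// With ThreadPerBlock defined as 256 (192 is the other good choice),
// round the grid size up so every element is covered.
const int BlockPerGrid = (N + ThreadPerBlock - 1) / ThreadPerBlock;

reducePartial<<<BlockPerGrid, ThreadPerBlock>>>(dev_in, dev_a, N);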

3. Not too many branches

   The hardware can only fetch a single instruction stream per warp, so when a branch appears, the threads that do not take the current side of it stall, and device utilization drops. However, the actual scheduler, in terms of instruction execution, is half-warp based rather than warp based. Therefore, if we arrange the divergence to fall on a half-warp (16-thread) boundary, the device can execute both sides of the branch condition:

if ((thread_idx % 32) < 16)
{
    // First half-warp (lanes 0-15) takes this side of the branch.
    do something;
}
else
{
    // Second half-warp (lanes 16-31) takes the other side.
    do something;
}

   However, this only works when the data being processed is contiguous in memory. Sometimes we can pad the end of the array with zeros, as the previous post mentioned, up to a length that is an integral multiple of 32. That helps keep the number of divergent branches to a minimum.
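
   A minimal sketch of that padding, assuming a host array a of true length N and a device buffer dev_in large enough to hold the padded data (all names hypothetical here): round the length up to the next multiple of 32 and zero-fill the tail, so the extra elements contribute nothing to the sum while every warp stays full.

// Round N up to the next multiple of 32 and zero-pad the tail.
int paddedN = ((N + 31) / 32) * 32;

int *a_padded = (int *)calloc(paddedN, sizeof(int));   // zero-initialized
memcpy(a_padded, a, N * sizeof(int));                  // copy the real data

// Copy the padded array to the device; the trailing zeros do not change the sum.
cudaMemcpy(dev_in, a_padded, paddedN * sizeof(int), cudaMemcpyHostToDevice);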

Original article: http://www.cnblogs.com/little-hobbit/p/4488958.html