I use 63 registers/thread, so (with 32768 registers being the maximum) I can run about 520 threads. I am using 512 threads in this example.
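(That estimate is just 32768 registers / 63 registers per thread ≈ 520 threads; rounding down to a multiple of the 32-thread warp size gives 512.)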
(The parallelism is over the function "computeEvec" inside the global "computeEHfields" kernel function.) The problems are:
1) The memcheck errors below.
2) When I use numPointsRp > 2000, I get "out of memory", but (unless I've done something wrong) my calculation of the global memory usage says it should fit.
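For reference, this is roughly how I estimate the explicit global allocations made below (a sketch only; the byte counts mirror the gpuarray variables, and it deliberately ignores whatever the kernel needs for its per-thread local arrays):

def estimate_global_bytes(numPointsRs, numPointsRp):
    # float32 = 4 bytes, complex64 = 8 bytes
    rs = numPointsRs*3*4          # Rs_gpu
    rp = numPointsRp*3*4          # Rp_gpu
    jm = 2*numPointsRs*3*8        # J_gpu and M_gpu
    eh = 2*numPointsRp*3*8        # Evec_gpu and Hvec_gpu
    al = numPointsRp*8            # All_gpu
    return rs+rp+jm+eh+al

print(estimate_global_bytes(1000, 2000))   # ~196 KB, nowhere near the 1 GB on the card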
---------------------------------- UPDATE ----------------------------------
I ran the program with cuda-memcheck, and (only when numPointsRs > numPointsRp) it gives me:
========= Invalid __global__ read of size 4
=========     at 0x00000428 in computeEHfields
=========     by thread (2,0,0) in block (0,0,0)
=========     Address 0x4001076e0 is out of bounds
=========
========= Invalid __global__ read of size 4
=========     at 0x00000428 in computeEHfields
=========     by thread (1,0,0) in block (0,0,0)
=========     Address 0x4001076e0 is out of bounds
=========
========= Invalid __global__ read of size 4
=========     at 0x00000428 in computeEHfields
=========     by thread (0,0,0) in block (0,0,0)
=========     Address 0x4001076e0 is out of bounds
=========
========= ERROR SUMMARY: 160 errors
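(For reference, I invoke it roughly as: cuda-memcheck python myscript.py, where myscript.py is just a placeholder name for the script below.)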
---------------------------------- EDIT ----------------------------------
Also, sometimes (I have only tested this with threads, not yet with blocks): if, for example, I run with numPointsRs = 1000 and numPointsRp = 100, then change numPointsRp to 200, and then change numPointsRp back to 100, I do not get the same results as the first run!
import pycuda.gpuarray as gpuarray
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np
import cmath
import pycuda.driver as drv
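# The sizes and the kp/eta parameters are not defined in the snippet as
# posted; to make it self-contained I use the sizes mentioned above and
# placeholder values for kp and eta (their real values are not shown here).
numPointsRs = 1000
numPointsRp = 100
kp  = np.complex64(1.0+0.0j)     # placeholder value
eta = np.complex64(1.0+0.0j)     # placeholder value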
Rs=np.zeros((numPointsRs,3)).astype(np.float32)
for k in range (numPointsRs):
    Rs[k]=[0,k,0]          # points at (0,k,0), along the y-axis

Rp=np.zeros((numPointsRp,3)).astype(np.float32)
for k in range (numPointsRp):
    Rp[k]=[1+k,0,0]        # points at (1+k,0,0), along the x-axis
#---- Initialization and passing(allocate memory and transfer data) to GPU -------------------------
Rs_gpu=gpuarray.to_gpu(Rs)
Rp_gpu=gpuarray.to_gpu(Rp)
J_gpu=gpuarray.to_gpu(np.ones((numPointsRs,3)).astype(np.complex64))
M_gpu=gpuarray.to_gpu(np.ones((numPointsRs,3)).astype(np.complex64))
Evec_gpu=gpuarray.to_gpu(np.zeros((numPointsRp,3)).astype(np.complex64))
Hvec_gpu=gpuarray.to_gpu(np.zeros((numPointsRp,3)).astype(np.complex64))
All_gpu=gpuarray.to_gpu(np.ones(numPointsRp).astype(np.complex64))
mod =SourceModule("""
#include <pycuda-complex.hpp>
#include <cmath>
#define RowRsSize %(numrs)d
#define RowRpSize %(numrp)d
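// RowRsSize and RowRpSize are filled in by the Python %-formatting at the
// end of this string, so the per-thread array sizes are compile-time constants.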
typedef pycuda::complex<float> cmplx;
extern "C"{
__device__ void computeEvec(float Rs_mat[][3], int numPointsRs,
cmplx J[][3],
cmplx M[][3],
float *Rp,
cmplx kp,
cmplx eta,
cmplx *Evec,
cmplx *Hvec, cmplx *All)
{
int c=0;                     // index over the numPointsRs source points
while (c<numPointsRs){
...
c++;
}
}
__global__ void computeEHfields(float *Rs_mat_, int numPointsRs,
float *Rp_mat_, int numPointsRp,
cmplx *J_,
cmplx *M_,
cmplx kp,
cmplx eta,
cmplx E[][3],
cmplx H[][3], cmplx *All )
{
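/* NOTE: these four arrays are per-thread *local* arrays; every thread
   gets its own copy, sized at compile time from the #defines above. */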
float Rs_mat[RowRsSize][3];
float Rp_mat[RowRpSize][3];
cmplx J[RowRsSize][3];
cmplx M[RowRsSize][3];
int k=threadIdx.x+blockIdx.x*blockDim.x;
while (k<numPointsRp)
{
computeEvec( Rs_mat, numPointsRs, J, M, Rp_mat[k], kp, eta, E[k], H[k], All );
k+=blockDim.x*gridDim.x;
}
}
}
"""% { "numrs":numPointsRs, "numrp":numPointsRp},no_extern_c=1)
func = mod.get_function("computeEHfields")
func(Rs_gpu,np.int32(numPointsRs),Rp_gpu,np.int32(numPointsRp),J_gpu, M_gpu, np.complex64(kp), np.complex64(eta),Evec_gpu,Hvec_gpu, All_gpu, block=(128,1,1),grid=(200,1))
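(With block=(128,1,1) and grid=(200,1) the launch has 128*200 = 25600 threads in total; the while loop in the kernel strides by blockDim.x*gridDim.x, so the points of Rp are shared out across however many threads are launched.)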
print(" \n")
#----- get data back from GPU-----
Rs=Rs_gpu.get()
Rp=Rp_gpu.get()
J=J_gpu.get()
M=M_gpu.get()
Evec=Evec_gpu.get()
Hvec=Hvec_gpu.get()
All=All_gpu.get()
-------------------------------- GPU model --------------------------------
Device 0: "GeForce GTX 560"
CUDA Driver Version / Runtime Version 4.20 / 4.10
CUDA Capability Major/Minor version number: 2.1
Total amount of global memory: 1024 MBytes (1073283072 bytes)
( 7) Multiprocessors x (48) CUDA Cores/MP:     336 CUDA Cores