I have been working on a performance evaluation of the clFFT library on an AMD Radeon R7 260X. The CPU is an Intel Xeon and the OS is CentOS.
I have been studying the performance of a 2D 16x16 clFFT with different batch sizes (parallel FFTs), and I was surprised by how different the results from event profiling and gettimeofday turned out to be.
The results of the 2D 16x16 clFFT with different batch sizes are as follows.
Using event profiling:

batch    kernel exec time (us)
1        320.7
16       461.1
256      458.3
512      537.7
1024     1016.8
Here, batch is the number of parallel FFTs and the kernel execution time is given in microseconds.
Using gettimeofday:

batch    HtoD (us)    kernelExecTime (us)    DtoH (us)
1        29653        10850                  39227
16       28313        10786                  32474
256      26995        11167                  39672
512      26145        10773                  32273
1024     26856        11948                  31060
Here, batch is the number of parallel FFTs, HtoD is the data transfer time from host to device, kernelExecTime is the kernel execution time, and DtoH is the data transfer time from device to host; all times are in microseconds.
(I am sorry that I cannot present the results in a nicer table format; I am not able to add tables here. I hope they are still readable.)
Here are my questions:
1a) Why are the kernel times obtained from event profiling so different from those obtained with gettimeofday?
1b) And which of the two results is correct?
2) The amount of data transferred grows with the batch size, but in the gettimeofday results the HtoD and DtoH transfer times stay almost constant as the batch size increases from 1 to 1024, instead of growing. Why is that?
The relevant host code follows.
clFinish( cl_queue);
// Copy data from host to device
gettimeofday(&t_s_gpu1, NULL);
clEnqueueWriteBuffer( cl_queue, d_data, CL_TRUE, 0, width*height*batchSize*sizeof(cl_compl_flt), h_src, 0, NULL, &event1);
clFinish( cl_queue);
clWaitForEvents(1, &event1);
gettimeofday(&t_e_gpu1, NULL);
checkCL( clAmdFftBakePlan( fftPlan, 1, &cl_queue, NULL, NULL) );
clAmdFftSetPlanBatchSize( fftPlan, batchSize );
clFinish( cl_queue);
gettimeofday(&t_s_gpu, NULL);
checkCL( clAmdFftEnqueueTransform( fftPlan, CLFFT_FORWARD, 1, &cl_queue, 0, NULL, &event, &d_data, NULL, NULL) );
clFinish( cl_queue);
clWaitForEvents(1, &event);
gettimeofday(&t_e_gpu, NULL);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START, sizeof(time_start), &time_start, NULL);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END, sizeof(time_end), &time_end, NULL);
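// Note: the CL_PROFILING_COMMAND_START/END timestamps are cl_ulong values in nanoseconds,
// so totaltime below accumulates nanoseconds.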
totaltime=totaltime+time_end - time_start;
clFinish( cl_queue);
// Copy result from device to host
gettimeofday(&t_s_gpu2, NULL);
checkCL( clEnqueueReadBuffer(cl_queue, d_data, CL_TRUE, 0, width*height*batchSize*sizeof(cl_compl_flt), h_res, 0, NULL, &event2));
clFinish( cl_queue);
clWaitForEvents(1, &event2);
gettimeofday(&t_e_gpu2, NULL);
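(Not shown above: the elapsed microseconds reported in the tables are computed from the timeval pairs. A minimal sketch of that conversion, with an illustrative helper name elapsed_us, would be:)

#include <sys/time.h>

/* Elapsed time between two gettimeofday() samples, in microseconds. */
static double elapsed_us(const struct timeval *start, const struct timeval *end)
{
    return (double)(end->tv_sec - start->tv_sec) * 1e6
         + (double)(end->tv_usec - start->tv_usec);
}

/* e.g. host-to-device time: elapsed_us(&t_s_gpu1, &t_e_gpu1) */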
I look forward to your comments and answers. Many thanks in advance.
Related
I'm working on a heightmap erosion compute shader in Unity, where each point on the map is eroded separately. This is working well for small maps, but the project I'm working on requires 4096x4096 maps. This means 4096^2 = 16777216 points to simulate. With the default thread dimensions of [64,1,1], this creates 262144 thread groups, way more than the allowed limit of 65535.
My question is:
Can I simply raise the thread dimensions, and what do I have to consider in terms of performance when I do?
Is it maybe possible to simply run the shader multiple times, with different ranges of heightmap coordinates?
This is my first time working with shaders. The tutorials I've seen online quickly go too deep into GPU hardware specifications, so I didn't pick up much from them.
With 64x64 threads per work group, you can Dispatch 64x64 work groups to do what you need: remember that 64x64 threads will be invoked for each work group you dispatch, so you will have 64x64 work groups x 64x64 threads per group = 4096 x 4096 = 16,777,216 threads executed in total.
computeShader.Dispatch(computeShader.FindKernel("kernel"), 64, 64, 1);
[numthreads(64, 64, 1)]
void kernel(uint3 id : SV_DispatchThreadID)
{
// ...
// 0 <= id.x < 4096
// 0 <= id.y < 4096
}
As for the performance implications, the general answer is "try it out!": run your kernel with different sizes for threads and work groups. The results may vary depending on your computations and on your hardware.
But, in case you need to bypass the 65535 limit, you can use DispatchIndirect. Basically, it's the same as Dispatch but the arguments are passed through a ComputeBuffer.
ComputeBuffer argsBuffer = new ComputeBuffer(3, sizeof(uint), ComputeBufferType.IndirectArguments);
uint[] args = { 64, 64, 1 }; // work groups
argsBuffer.SetData(args);
computeShader.DispatchIndirect(computeShader.FindKernel("kernel"), argsBuffer);
PS: working on a GPU requires understanding its architecture, because (1) you work at a low level, close to the hardware, and many of the features you work with are actually implemented in hardware (e.g. textures); (2) you want to get the best performance out of your programs (e.g. make the best use of blocks, warps, caches, ...) ;)
I'm using YOLOv3 and YOLOv3-Tiny from AlexeyAB's fork of Darknet. I understand that the image size must be a multiple of 32, and that batch divided by subdivisions determines the number of images that will be processed in parallel.
For example, the batch size in the default yolov3.cfg file is 64, and subdivision is 16, meaning 4 images will be loaded at once, and it will take 16 of these mini batches to complete one iteration.
What I don't see documented in the wiki:
Are there restrictions on these values? Do they need to be a multiple of 16? Power of 2? Can I have batch=25 and subdivisions=5?
I believe it does not have to be a power of 2; the important thing is that batch must be divisible by subdivisions, since the code uses mini batches of batch / subdivisions, as you can see in parser.c:
net->batch /= subdivs;
Then the number of images processed in every step is defined in detector.c as:
int imgs = net.batch * net.subdivisions * ngpus;
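As a quick sanity check of that arithmetic, here is a small sketch (not actual darknet code) using the cfg defaults and the hypothetical batch=25 / subdivisions=5 from the question:

#include <stdio.h>

int main(void)
{
    int batch = 64, subdivisions = 16, ngpus = 1;
    int mini_batch = batch / subdivisions;          /* 64 / 16 = 4 images per forward pass */
    int imgs = mini_batch * subdivisions * ngpus;   /* 4 * 16 * 1 = 64 images per iteration */
    printf("mini_batch=%d, imgs=%d\n", mini_batch, imgs);

    /* batch = 25, subdivisions = 5 works the same way: mini_batch = 5, imgs = 25 */
    return 0;
}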
Although BLOCK is defined as 512 in dark_cuda.h, the num_blocks used in the kernels doesn't have to be divisible by 2, as can be seen in dark_cuda.c:
int get_number_of_blocks(int array_size, int block_size)
{
    return array_size / block_size + ((array_size % block_size > 0) ? 1 : 0);
}
I think the only problem could be a performance issue: since CUDA executes threads in warps of 32, any number that is not a multiple of 32 may cause part of the hardware resources to not be fully utilized.
However, I recommend that you try training your network with these parameters to confirm that it works as desired.
For OpenCL optimization, my idea is to try to match
1 work group (kernel code) to 1 compute unit (GPU hardware)
1 work item (kernel code) to 1 processing element (GPU hardware)
(Maybe my idea is not correct, please teach me.)
for example:
1. I have a global work size of 4000 by 3000.
2. My GPU OpenCL device has a maximum work-group size of 8192.
3. I call clEnqueueNDRangeKernel with the desired local-work-size (along with all other necessary parameters)
4. By these function calls:
a. clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(size_t), (void*)&workGroupSizeUsed, NULL);
b. clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, sizeof(size_t), (void*)&workGroupSizeUsed, NULL);
Both a and b return 8192.
The maximum work-group size, CL_KERNEL_WORK_GROUP_SIZE, and CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE are all 8192.
I have no idea what I should follow to define my local work size...
(Q1) Any good ideas for setting the local work size? (10x10? 40x30? X by Y?)
clEnqueueNDRangeKernel(command_queue, kernel, 2, NULL, global_work_item_size, local_work_item_size, 0, NULL, NULL);
It is a real headache to define this "local_work_item_size" argument of the clEnqueueNDRangeKernel function.
(Q2) Could someone explain the difference between setting local work size = 1,1 and local work size = 4000,3000?
Thank you in advance!
(Q1) Any good ideas for setting the local work size? (10x10? 40x30? X by Y?)
As pmdj pointed out, this highly depends on your application. Since it is unclear how you selected your global_work_size, and it is also linked to the local_work_size, I would like to explain that one first. Usually you want to map the size of the data you want to process to the global_work_size, e.g. if you have an array with 1024 values you would also pick a global_work_size of 1024, because then you can easily use the global ID as an index in your OpenCL kernel:
int index = get_global_id(0);
input_array[index]++; // your data processing
However, the global_work_size is limited to a maximum of 2^32 - 1. If you have more data to process than that, you can pass your global_work_size and data size as parameters and use a loop like the following one:
int index = get_global_id(0);
for (int i = index; i < data_size; i += global_work_size) {
input_array[i]++; // your data processing
}
The last important fact about the global_work_size is that it needs to be divisible by the local_work_size. This can result in your global_work_size being bigger than your data size, e.g. you could have 1000 values while your local_work_size is 32. Then you would make your global_work_size 1024 and ensure, through a condition like the one above (i < data_size), that the redundant work items do not do anything weird like accessing unallocated memory.
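For the padded case, a minimal kernel sketch of that guard could look like this (kernel and argument names are just illustrative):

__kernel void process(__global int *input_array, const uint data_size)
{
    uint i = get_global_id(0);
    if (i < data_size) {       /* padded work items beyond the data do nothing */
        input_array[i]++;      /* your data processing */
    }
}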
The local_work_size depends on your platform. First of all, you should always have a local_work_size which is a multiple of 32 for NVIDIA or a multiple of 64 for AMD GPUs; this is the number of work items that get scheduled together anyway. If you use a different number, the GPU will have idle threads that do nothing except decrease your performance.
Not only the manufacturer but also the specific type of your GPU has to be considered to find the optimal local_work_size. The global_work_size divided by the local_work_size is the number of work groups. Each work group is executed by one thread inside your CPU/GPU. If you use OpenCL to run your application on powerful hardware you want to make sure that it runs as parallel as possible. E.g. if you use an Intel i7 with 8 threads you would want to make sure that you have at least 8 work groups (global_work_size / local_work_size >= 8). If you use a NVIDIA GeForce GTX 1060 with 1280 CUDA Cores you would want to have at least 1280 work groups. But never at the cost of having a local_work_size of less than 32 which is more important!
If you have more work groups than your hardware has threads, that does not matter; they will be processed sequentially. Hence, for most applications you can always set your local_work_size to 32/64. The only exception is if you require synchronization among more work items than that. E.g. barriers only work inside a work group, not across different work groups. An example: if you need to sum up chunks of 1024 values before being able to proceed with your algorithm, you would need to set your local_work_size to 1024 for the barrier to work as desired, as in the sketch below.
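A rough sketch of that last case, assuming a local_work_size of 1024, an input length that is a multiple of 1024, and a __local scratch buffer supplied by the host (all names are illustrative):

__kernel void sum_chunks(__global const float *in,
                         __global float *chunk_sums,
                         __local float *scratch)
{
    uint lid = get_local_id(0);
    uint lsz = get_local_size(0);                 /* 1024 in this example */

    scratch[lid] = in[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);                 /* synchronizes only within the work group */

    /* tree reduction inside the work group */
    for (uint stride = lsz / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0)
        chunk_sums[get_group_id(0)] = scratch[0]; /* one partial sum per 1024-value chunk */
}

On the host side the __local buffer would be sized with something like clSetKernelArg(kernel, 2, 1024 * sizeof(float), NULL).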
(Q2) Could someone explain the difference between setting local work size = 1,1 and local work size = 4000,3000?
Both the global_work_size and the local_work_size can have more than one dimension; whether you use that depends solely on the preference of the programmer. All algorithms can be implemented in one dimension as well, and the number of work groups is calculated by dividing the total sizes, e.g. if your global_work_size is 20*20 and your local_work_size is 10*10, you would run the program with (20*20) / (10*10) = 4 work groups of 100 work items each.
I personally like to use multiple dimensions if I am processing data which has multiple dimensions. Imagine your input is a two-dimensional image: you could simply use its width and height as the global_work_size (e.g. 1024 * 1024) and the local_work_size accordingly (e.g. 32 * 32). In your code you could then use the following indices:
int x = get_global_id(0);
int y = get_global_id(1);
input_array[y * get_global_size(0) + x]++; // your data processing (row-major buffer)
I'm developing a Litecoin miner for a processor that has only 32KB of internal memory. So I was looking at the scrypt algorithm, and Litecoin uses N = 1024, which gives approximately 2^10 * 1 * 128 = 128KB of memory use.
So I was looking into the GPU algorithms that have the lookup gap parameter. For reference I'm reading the Kepler kernel from CudaMiner:
https://github.com/cbuchner1/CudaMiner/blob/master/kepler_kernel.cu (Line 535)
So I understand that the lookup gap is a tradeoff between CPU and memory: the higher it is, the higher my CPU use and the lower my memory use. What I didn't understand is how exactly it works.
In the code I have
int pos = c_N_1/LOOKUP_GAP, loop = 1 + (c_N_1-pos*LOOKUP_GAP);
That will make it access the scratchpad only at every LOOKUP_GAP-th entry (if it is 2, that is 0, 2, 4, 6, 8, 10, ...), but where does the extra CPU use of the algorithm come from?
My implementation will not be highly optimized; it is more of a "just try to get it running" attempt.
I also saw an FPGA implementation that uses interpolation ( https://github.com/kramble/FPGA-Litecoin-Miner ); this is even stranger to me. I don't know how they could do interpolation of the values in the scratchpad.
Thanks!
The increased CPU usage comes when you do not hit a pre-calculated entry. With a lookup gap of 2 you still calculate entries 0-1023, but you only store 0, 2, 4, etc. So if you need the data for scratchpad entry 3, you have to calculate it on the fly from the data stored for entry 2. This is an extra calculation compared to having all entries stored permanently. As the lookup gap increases, the number of on-the-fly calculations you have to do increases.
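A rough pseudo-C sketch of the idea; here block and mix() are placeholders for scrypt's 128-byte working block and its sequential BlockMix step, not code from CudaMiner:

#define N          1024
#define LOOKUP_GAP 2

typedef struct { unsigned char bytes[128]; } block;   /* placeholder block type */

static block V[N / LOOKUP_GAP];    /* reduced scratchpad: only every LOOKUP_GAP-th entry is stored */

static block mix(block x);         /* placeholder for the real sequential mixing function */

static block load_entry(int j)
{
    block x = V[j / LOOKUP_GAP];               /* nearest stored entry at or below j */
    for (int k = 0; k < j % LOOKUP_GAP; k++)
        x = mix(x);                            /* redo the skipped sequential mixing steps */
    return x;                                  /* at most LOOKUP_GAP - 1 extra mix() calls */
}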
My environment is
Windows 7 x64
Matlab 2012a x64
Cuda SDK 4.2
Tesla C2050 GPU
I am having trouble figuring out why my GPU is crashing with an "uncorrectable ECC error encountered". This error only occurs when I use 512 threads or more. I can't post the kernel, but I will try to describe what it does.
In general, the kernel takes a number of parameters and produces 2 complex matrices whose dimensions are the thread count, M, and another number, N. So the returned matrices are of size MxN. A typical configuration is 512x512, but each number is independent and can vary up or down. The kernel works when the dimensions are 256x256.
Each thread (kernel instance) extracts a 999-element vector out of a 2D array based on the thread ID (i.e. the array is of size 999xM), then cycles through the rows (0 .. N-1) of the output matrices for the calculation. A number of intermediate parameters are calculated, using only pow, sin and cos among the + - * / operators. To calculate one of the output matrices, an additional loop needs to be executed to sum up the contribution of the 999-element vector that was extracted earlier. This loop does some intermediate calculations to determine a range of values that are allowed to contribute. The contribution is then scaled by a factor determined by the cos and sin values of a calculated fractional value. This is where it crashes. If I put in a constant value of 1.0, or any other constant for that matter, the kernel executes without trouble. However, when only one of the calls (cos or sin) is included, the kernel crashes.
Some pseudocode follows:
kernel()
{
    /* Extract a 999-element vector from the 999xM 2D array - one vector for each thread. */
    for (int i = 0; i < 999; i++)
    {
        .....
    }

    /* Cycle through the 2nd dimension of the output matrices */
    for (int j = 0; j < N; j++)
    {
        /* Calculate some intermediate variables */

        /* Calculate the real and imaginary components of the first output matrix */
        /* real = cos(value), imaginary = sin(value) */

        /* Construct the first output matrix from some intermediate variables and the real and imaginary components */

        /* Calculate some more intermediate variables */

        /* Cycle through the extracted vector (0 .. 998) */
        for (int k = 0; k < 999; k++)
        {
            /* Calculate some more intermediate variables */

            /* Determine the range of allowed values to contribute to the second output matrix. */

            /* Calculate the real and imaginary components of the second output matrix */
            /* real = cos(value), imaginary = sin(value) */
            /* This is where it crashes, unless real and imaginary are constant values (1.0) */

            /* Sum up the contributions of the extracted vector to the second output matrix */
        }

        /* Construct the second output matrix from some intermediate variables and the real and imaginary components */
    }
}
I thought this could be due to a register limit, but the occupancy calculator indicates that this is not the case; I'm using fewer than the 32,768 registers available with 512 threads. Can anyone give any suggestions as to what the cause of this could be?
Here is the ptxas info:
ptxas info : Compiling entry function '_Z40KerneliidddddPKdS0_S0_S0_iiiiiiiiiPdS1_S1_S1_S1_S1_S1_S1_S1_S1_' for 'sm_20'
ptxas info : Function properties for _Z40KerneliidddddPKdS0_S0_S0_iiiiiiiiiPdS1_S1_S1_S1_S1_S1_S1_S1_S1_
8056 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Function properties for __internal_trig_reduction_slowpathd
40 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 53 registers, 232 bytes cmem[0], 144 bytes cmem[2], 28 bytes cmem[16]
tmpxft_00001d70_00000000-3_MexFunciton.cudafe1.cpp
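(For what it's worth, those numbers are consistent with that: 53 registers x 512 threads = 27,136 registers per block, which is below the 32,768-register limit mentioned above, so registers do not look like the problem.)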
"Uncorrectable ECC error" usually refers to a hardware failure. ECC is Error Correcting Code, a means to detect and correct errors in bits stored in RAM. A stray cosmic ray can disrupt one bit stored in RAM every once in a great while, but "uncorrectable ECC error" indicates that several bits are coming out of RAM storage "wrong" - too many for the ECC to recover the original bit values.
This could mean that you have a bad or marginal RAM cell in your GPU device memory.
Marginal circuits of any kind may not fail 100%, but are more likely to fail under the stress of heavy use - and associated rise in temperature.
There are diagnostic utilities floating around to stress-test all the RAM banks of your PC to confirm or pinpoint which chip is failing, but I don't know of an analog for testing the device RAM banks of the GPU.
If you have access to another machine with a GPU of similar capability, try running your app on that machine to see how it behaves. If you don't get the ECC error on the second machine, this confirms that the problem is almost certainly in the hardware of the first machine. If you get the same ECC error on the second machine, then ignore everything I've written here and continue looking for your software bug. Unless your code is actually causing hardware damage, the chances of two machines having the same hardware failure are extremely small.