Why doesn't this OpenCL kernel work?

I am trying to write a physics emulator for Mac that uses OpenCL hardware acceleration. At the moment, when I try to run my kernel, the debugger reports a SIGABRT at the gcl_memcpy line that reads the results back from the kernel.
I have managed to simplify the code quite a bit and still reproduce the error. Here is my barebones OpenCL kernel, which still causes a crash:
kernel void pointfield(global const float * points,
                       float posX, float posY,
                       global float * fieldsOut) {
    size_t index = get_global_id(0);
    float2 chargePosition = vload2(0, &points[index * 3]);
    vstore2(chargePosition, 0, &fieldsOut[2 * index]);
}
This literally just loads a two-component vector and then stores it immediately. And I know what you are thinking, but the problem is not an overflow error; this equivalent scalar code works:
kernel void pointfield(global const float * points,
                       float posX, float posY,
                       global float * fieldsOut) {
    size_t index = get_global_id(0);
    fieldsOut[2 * index] = points[3 * index];
    fieldsOut[2 * index + 1] = points[3 * index + 1];
}
What on EARTH is wrong with my vector code?
Also, in case it is necessary, here is the gist of how I am creating the OpenCL context:
dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);
if (!queue) {
    queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_CPU, NULL);
}
cl_device_id gpu = gcl_get_device_id_with_dispatch_queue(queue);
char name[128];
clGetDeviceInfo(gpu, CL_DEVICE_NAME, 128, name, NULL);
dispatch_sync(queue, ^{
    cl_ndrange range = {
        1,
        {0, 0, 0},
        {self.pointChargeCount, 0, 0},
        0
    };
    pointfield_kernel(&range, (cl_float *)_pointCharges.clBuffer, point.x, point.y,
                      (cl_float *)_fieldValues.clBuffer);
    [_fieldValues getCLBuffer]; // calls gcl_memcpy
});
If I omit the [_fieldValues getCLBuffer] call, the abort happens at the end of the dispatch_sync call rather than at the gcl_memcpy call.

Related

STM32 Crash on Flash Sector Erase

I'm trying to write four uint32 values into the flash memory of my STM32F767ZI. I've looked at some examples and at the reference manual, but I still cannot get it to work. My goal is to write the four words into flash, read them back, compare them with the original data, and light different LEDs depending on the result of the comparison.
My code is as follows:
void flash_write(uint32_t offset, uint32_t *data, uint32_t size) {
    FLASH_EraseInitTypeDef EraseInitStruct = {0};
    uint32_t SectorError = 0;
    HAL_StatusTypeDef st = HAL_OK;
    HAL_FLASH_Unlock();
    EraseInitStruct.TypeErase = FLASH_TYPEERASE_SECTORS;
    EraseInitStruct.VoltageRange = FLASH_VOLTAGE_RANGE_3;
    EraseInitStruct.Sector = FLASH_SECTOR_11;
    EraseInitStruct.NbSectors = 1;
    //EraseInitStruct.Banks = FLASH_BANK_1; // or FLASH_BANK_2 or FLASH_BANK_BOTH
    st = HAL_FLASHEx_Erase(&EraseInitStruct, &SectorError);
    if (st == HAL_OK) {
        for (int i = 0; i < size; i += 4) {
            st = HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, FLASH_USER_START_ADDR + offset + i, *(data + i)); // This is what's giving me trouble
            if (st != HAL_OK) {
                // handle the error
                break;
            }
        }
    } else {
        // handle the error
    }
    HAL_FLASH_Lock();
}
void flash_read(uint32_t offset, uint32_t *data, uint32_t size) {
    for (int i = 0; i < size; i += 4) {
        *(data + i) = *(__IO uint32_t*)(FLASH_USER_START_ADDR + offset + i);
    }
}
int main(void) {
    uint32_t data[] = {'a', 'b', 'c', 'd'};
    uint32_t read_data[] = {0, 0, 0, 0};
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    flash_write(0, data, sizeof(data));
    flash_read(0, read_data, sizeof(read_data));
    if (compareArrays(data, read_data, 4))
    {
        HAL_GPIO_WritePin(GPIOB, GPIO_PIN_7, SET);
    }
    else
    {
        HAL_GPIO_WritePin(GPIOB, GPIO_PIN_14, SET);
    }
    return 0;
}
The problem is that before writing data I must erase a sector, and when I do that with the HAL_FLASHEx_Erase(&EraseInitStruct, &SectorError) function, the program always crashes, and sometimes it even corrupts my code space, forcing me to re-flash the firmware.
I've selected the sector farthest from the code space, but it still crashes when I try to erase it.
I've read in the reference manual that
Any attempt to read the Flash memory while it is being written or erased, causes the bus to
stall. Read operations are processed correctly once the program operation has completed.
This means that code or data fetches cannot be performed while a write/erase operation is
ongoing.
which I believe means the code should ideally run from RAM while we operate on the flash. However, I've seen other people online who don't have this issue, so I'm wondering whether that is my only problem, or whether I'm doing something else wrong.
In your loop you add multiples of 4 to i, but then you add i to data. When you add an integer to a pointer, it is automatically scaled by the size of the pointed-to type, so you are actually stepping in multiples of 16 bytes and reading past the end of your input buffer.
Also, make sure you initialize all members of EraseInitStruct: uncomment the Banks line and set the correct value!
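For illustration, here is a minimal sketch of the corrected write loop, keeping i as a byte offset; the same indexing fix applies to flash_read. FLASH_USER_START_ADDR and the error handling are as in the question:
for (uint32_t i = 0; i < size; i += 4) {
    // i advances in bytes, so index the word array with i / 4
    st = HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD,
                           FLASH_USER_START_ADDR + offset + i,
                           data[i / 4]);
    if (st != HAL_OK) {
        // handle the error
        break;
    }
}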

Objective C OpenCL Kernel Error Checking

My kernel seems not to work if I pass in a certain image2d_t, but it works if I pass in another, and I don't know how to check whether an error has occurred.
My kernel code:
kernel void change_color(read_only image2d_t img, write_only image2d_t out) {
    int x = get_global_id(0);
    int y = get_global_id(1);
    int2 coords = {x, y};
    float4 v = {1, 1, 0, 1};
    write_imagef(out, coords, v);
}
Relevant host code:
cl_image_format format = {CL_BGRA, CL_UNSIGNED_INT8};
cl_image screen = gcl_create_image(&format,
                                   width,
                                   height,
                                   0,
                                   frameSurface);
cl_ndrange range = {
    2, {0, 0, 0}, {width, height, 0}, {0, 0, 0}
};
change_color_kernel(&range, screen, output);
If I change
change_color_kernel(&range, screen, output)
to
change_color_kernel(&range, output, output)
the code works perfectly fine and the output image turns yellow; otherwise, the program runs as if the kernel had never been invoked, and the output image remains unchanged.
Does this mean that passing screen results in some kind of error? How do I check what the error is, and what could be the cause?
Note: I do not know if my initialization of screen from frameSurface is correct.
Note #2: This is simply for testing purposes. I know that I'm not using img in the kernel, but the error should not be happening anyway.

How to convert an iOS camera image to greyscale using the Accelerate framework?

It seems like this should be simpler than I'm finding it to be.
I have an AVFoundation frame coming back in the standard delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
where I would like to convert the frame to greyscale using the Accelerate framework.
There is a family of conversion functions in the framework, including vImageConvert_RGBA8888toPlanar8(), which looks like it might be what I need; however, I can't find any examples of how to use them!
So far, I have the code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        /* Lock the image buffer */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        /* Get information about the image */
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);
        // vImage in
        Pixel_8 *bitmap = (Pixel_8 *)malloc(width * height * sizeof(Pixel_8));
        const vImage_Buffer inImage = { bitmap, height, width, stride };
        // How can I take this inImage and convert it to greyscale?????
        // vImageConvert_RGBA8888toPlanar8()??? Is this the correct starting format??
    }
}
So I have two questions:
(1) In the code above, is RGBA8888 the correct starting format?
(2) How can I actually make the Accelerate.Framework call to convert to greyscale?
There is an easier option here. If you change the camera acquisition format to YUV, you already have a greyscale frame that you can use as you like. When setting up your data output, use something like:
dataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
You can then access the Y plane in your capture callback using:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// ... do stuff with your greyscale camera image ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
The vImage method is to use vImageMatrixMultiply_Planar8 with a 1x3 matrix.
vImageConvert_RGBA8888toPlanar8 is the function you use to split an RGBA8888 buffer into 4 planar buffers; those planes are then the inputs to vImageMatrixMultiply_Planar8. vImageMatrixMultiply_ARGB8888 will do it in one pass too, but your gray channel will be interleaved with three other channels in the result. vImageConvert_RGBA8888toPlanar8 itself doesn't do any math; all it does is separate your interleaved image into separate image planes.
If you need to adjust the gamma as well, then vImageConvert_AnyToAny() is probably the easy choice: it will do a fully color-managed conversion from your RGB format to a grayscale colorspace. See vImage_Utilities.h.
I like Tark's answer better, though. It just leaves you having to color-manage the luminance manually (if you care).
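To make the Planar8 route concrete, here is a minimal sketch. It assumes an RGBA8888 source buffer named srcRGBA and Planar8 buffers r, g, b, a, and gray that have already been allocated with matching dimensions; error handling is omitted, and you should double-check the parameter order against Transform.h:
// Split the interleaved image into four planes (no math happens here).
vImageConvert_RGBA8888toPlanar8(&srcRGBA, &r, &g, &b, &a, kvImageNoFlags);
// Weighted sum of the three color planes: gray = 0.299 R + 0.587 G + 0.114 B.
const vImage_Buffer *srcPlanes[3] = { &r, &g, &b };
const vImage_Buffer *destPlanes[1] = { &gray };
int32_t divisor = 256;
int16_t matrix[3] = {
    (int16_t)(0.299f * divisor), // red weight
    (int16_t)(0.587f * divisor), // green weight
    (int16_t)(0.114f * divisor)  // blue weight
};
vImageMatrixMultiply_Planar8(srcPlanes, destPlanes, 3, 1, matrix, divisor, NULL, NULL, kvImageNoFlags);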
Convert BGRA Image to Grayscale with Accelerate vImage
This method illustrates using Accelerate's vImage to convert a BGRA image to grayscale. Your image may well be in RGBA format, in which case you'll need to adjust the matrix accordingly, but the camera outputs BGRA, so I'm using that here. The values in the matrix are the same values OpenCV uses for cvtColor; there are other value sets you might try, such as luminosity weights. I assume you malloc the appropriate amount of memory for the result; for grayscale it is only one channel, i.e. a quarter of the memory used for BGRA. If anyone finds issues with this code, please leave a comment.
Performance note
Converting to grayscale this way may NOT be the fastest. You should check the performance of any method in your environment. Brad Larson's GPUImage might be faster, or even OpenCV's cvtColor. In any case you will want to remove the malloc and free calls for the intermediate buffers and manage those buffers over the app's lifecycle; otherwise the function call will be dominated by the malloc and free. Apple's docs recommend reusing vImage_Buffers whenever possible.
You can also read about solving the same problem with NEON intrinsics.
Finally, the fastest method is to not convert at all. If you're getting image data from the device camera, it is natively in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format, meaning that grabbing the first plane's data (the Y channel, luma) is the fastest way to get grayscale.
BGRA to Grayscale
- (void)convertBGRAFrame:(const CLPBasicVideoFrame &)bgraFrame toGrayscale:(CLPBasicVideoFrame &)grayscaleFrame
{
    vImage_Buffer bgraImageBuffer = {
        .width = bgraFrame.width,
        .height = bgraFrame.height,
        .rowBytes = bgraFrame.bytesPerRow,
        .data = bgraFrame.rawPixelData
    };
    void *intermediateBuffer = malloc(bgraFrame.totalBytes);
    vImage_Buffer intermediateImageBuffer = {
        .width = bgraFrame.width,
        .height = bgraFrame.height,
        .rowBytes = bgraFrame.bytesPerRow,
        .data = intermediateBuffer
    };
    int32_t divisor = 256;
    // int16_t a = (int16_t)roundf(1.0f * divisor);
    int16_t r = (int16_t)roundf(0.299f * divisor);
    int16_t g = (int16_t)roundf(0.587f * divisor);
    int16_t b = (int16_t)roundf(0.114f * divisor);
    const int16_t bgrToGray[4 * 4] = { b, 0, 0, 0,
                                       g, 0, 0, 0,
                                       r, 0, 0, 0,
                                       0, 0, 0, 0 };
    vImage_Error error;
    error = vImageMatrixMultiply_ARGB8888(&bgraImageBuffer, &intermediateImageBuffer, bgrToGray, divisor, NULL, NULL, kvImageNoFlags);
    if (error != kvImageNoError) {
        NSLog(@"%s, vImage error %zd", __PRETTY_FUNCTION__, error);
    }
    vImage_Buffer grayscaleImageBuffer = {
        .width = grayscaleFrame.width,
        .height = grayscaleFrame.height,
        .rowBytes = grayscaleFrame.bytesPerRow,
        .data = grayscaleFrame.rawPixelData
    };
    void *scratchBuffer = malloc(grayscaleFrame.totalBytes);
    vImage_Buffer scratchImageBuffer = {
        .width = grayscaleFrame.width,
        .height = grayscaleFrame.height,
        .rowBytes = grayscaleFrame.bytesPerRow,
        .data = scratchBuffer
    };
    error = vImageConvert_ARGB8888toPlanar8(&intermediateImageBuffer, &grayscaleImageBuffer, &scratchImageBuffer, &scratchImageBuffer, &scratchImageBuffer, kvImageNoFlags);
    if (error != kvImageNoError) {
        NSLog(@"%s, vImage error %zd", __PRETTY_FUNCTION__, error);
    }
    free(intermediateBuffer);
    free(scratchBuffer);
}
CLPBasicVideoFrame.h - For reference
typedef struct
{
    size_t width;
    size_t height;
    size_t bytesPerRow;
    size_t totalBytes;
    unsigned long pixelFormat;
    void *rawPixelData;
} CLPBasicVideoFrame;
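For context, here is a hypothetical way to populate the two frames from a locked CVPixelBufferRef before calling the method above; grayBytes is an assumed caller-allocated buffer of width * height bytes:
CLPBasicVideoFrame bgraFrame = {
    .width        = CVPixelBufferGetWidth(pixelBuffer),
    .height       = CVPixelBufferGetHeight(pixelBuffer),
    .bytesPerRow  = CVPixelBufferGetBytesPerRow(pixelBuffer),
    .totalBytes   = CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer),
    .rawPixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
};
CLPBasicVideoFrame grayscaleFrame = {
    .width        = bgraFrame.width,
    .height       = bgraFrame.height,
    .bytesPerRow  = bgraFrame.width,                 // one byte per pixel
    .totalBytes   = bgraFrame.width * bgraFrame.height,
    .rawPixelData = grayBytes
};
// then: [self convertBGRAFrame:bgraFrame toGrayscale:grayscaleFrame];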
I got through the grayscale conversion, but was having trouble with the quality when I found the book Instant OpenCV for iOS on the web. I personally picked up a copy, and it has a number of gems, although the code is a bit of a mess. On the bright side, it is a very reasonably priced eBook.
I'm very curious about that matrix. I toyed around with it for hours trying to figure out what the arrangement should be. I would have thought the values should be on the diagonal, but the Instant OpenCV guys put them as above.
If you need to use BGRA video streams, you can use this excellent NEON conversion, found here.
This is the function you'll need to take:
void neon_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int numPixels)
{
    int i;
    uint8x8_t rfac = vdup_n_u8 (77);
    uint8x8_t gfac = vdup_n_u8 (151);
    uint8x8_t bfac = vdup_n_u8 (28);
    int n = numPixels / 8;
    // Convert eight pixels per iteration
    for (i = 0; i < n; ++i)
    {
        uint16x8_t temp;
        uint8x8x4_t rgb = vld4_u8 (src);
        uint8x8_t result;
        temp = vmull_u8 (rgb.val[0], bfac);
        temp = vmlal_u8 (temp, rgb.val[1], gfac);
        temp = vmlal_u8 (temp, rgb.val[2], rfac);
        result = vshrn_n_u16 (temp, 8);
        vst1_u8 (dest, result);
        src += 8 * 4;
        dest += 8;
    }
}
More optimisations (using assembly) are in the link.
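For completeness, a hypothetical calling sketch (bgraPixels, width, and height are assumptions; note the function processes eight pixels at a time with no remainder handling, so numPixels should be a multiple of 8):
int numPixels = width * height;
uint8_t *gray = (uint8_t *)malloc(numPixels); // one byte per pixel out
neon_convert(gray, bgraPixels, numPixels);    // bgraPixels holds numPixels * 4 bytes
// ... use the greyscale data ...
free(gray);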
(1) My experience with the iOS camera framework has been with images in the kCMPixelFormat_32BGRA format, which is compatible with the ARGB8888 family of functions. (It may be possible to use other formats as well.)
(2) The simplest way to convert from BGR to grayscale on iOS is to use vImageMatrixMultiply_ARGB8888ToPlanar8():
https://developer.apple.com/documentation/accelerate/1546979-vimagematrixmultiply_argb8888top
Here is a fairly complete example written in Swift. I'm assuming the Objective-C code would be similar.
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    // TODO: report error
    return
}
// Lock the image buffer
if (kCVReturnSuccess != CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly)) {
    // TODO: report error
    return
}
defer {
    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly)
}
// Create input vImage_Buffer
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
let stride = CVPixelBufferGetBytesPerRow(imageBuffer)
var inImage = vImage_Buffer(data: baseAddress, height: UInt(height), width: UInt(width), rowBytes: stride)
// Create output vImage_Buffer
let bitmap = malloc(width * height)
var outImage = vImage_Buffer(data: bitmap, height: UInt(height), width: UInt(width), rowBytes: width)
defer {
    // Make sure to free unless the caller is responsible for this
    free(bitmap)
}
// Arbitrary divisor to scale coefficients to integer values
let divisor: Int32 = 0x1000
let fDivisor = Float(divisor)
// Rec. 709 coefficients
var coefficientsMatrix = [
    Int16(0.0722 * fDivisor), // blue
    Int16(0.7152 * fDivisor), // green
    Int16(0.2126 * fDivisor), // red
    0                         // alpha
]
// Convert to greyscale
if (kvImageNoError != vImageMatrixMultiply_ARGB8888ToPlanar8(
        &inImage, &outImage, &coefficientsMatrix, divisor, nil, 0, vImage_Flags(kvImageNoFlags))) {
    // TODO: report error
    return
}
The code above was inspired by an Apple tutorial on grayscale conversion, which can be found at the link below. It also includes conversion to a CGImage if that is needed. Note that the tutorial assumes RGB order instead of BGR, and it provides only three coefficients instead of four (a mistake?).
https://developer.apple.com/documentation/accelerate/vimage/converting_color_images_to_grayscale

CUDA Crashes for big data set

My computer crashes (I have to reset it manually) when I run my kernel function in a loop 600+ times (it does not crash at around 50 iterations or so), and I'm not sure what's causing the crash.
My main is as follows:
int main()
{
    int *seam = new int[image->height];
    int width = image->width;
    int height = image->height;
    int *fMC = (int*)malloc(width*height*sizeof(int*));
    int *fNew = (int*)malloc(width*height*sizeof(int*));
    for (int i = 0; i < numOfSeams; i++)
    {
        seam = cpufindSeamV2(fMC, width, height, 1);
        fMC = kernel_shiftSeam(fMC, fNew, seam, width, height, nWidth, 1);
        for (int k = 0; k < height; k++)
        {
            fMC[(nWidth-1) + width*k] = INT_MAX;
        }
    }
}
and my kernel wrapper is:
int* kernel_shiftSeam(int *MCEnergyMat, int *newE, int *seam, int width, int height, int x, int direction)
{
    // time measurement
    float elapsed_time_ms = 0;
    cudaEvent_t start, stop;
    // threads per block
    dim3 threads(16,16);
    // blocks
    dim3 blocks((width+threads.x-1)/threads.x, (height+threads.y-1)/threads.y);
    // MCEnergy and Seam arrays on device
    int *device_MC, *device_new, *device_Seam;
    // MCEnergy and Seam arrays on host
    int *host_MC, *host_new, *host_Seam;
    // total number of bytes in array
    int size = width*height*sizeof(int);
    int seamSize;
    if (direction == 1)
    {
        seamSize = height*sizeof(int);
        host_Seam = (int*)malloc(seamSize);
        for (int i = 0; i < height; i++)
            host_Seam[i] = seam[i];
    }
    else
    {
        seamSize = width*sizeof(int);
        host_Seam = (int*)malloc(seamSize);
        for (int i = 0; i < width; i++)
            host_Seam[i] = seam[i];
    }
    cudaMallocHost((void**)&host_MC, size);
    cudaMallocHost((void**)&host_new, size);
    host_MC = MCEnergyMat;
    host_new = newE;
    // allocate 1D flat arrays on device
    cudaMalloc((void**)&device_MC, size);
    cudaMalloc((void**)&device_new, size);
    cudaMalloc((void**)&device_Seam, seamSize);
    // copy host arrays to device
    cudaMemcpy(device_MC, host_MC, size, cudaMemcpyHostToDevice);
    cudaMemcpy(device_new, host_new, size, cudaMemcpyHostToDevice);
    cudaMemcpy(device_Seam, host_Seam, seamSize, cudaMemcpyHostToDevice);
    // measure start time for gpu calculations
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    // perform gpu calculations
    if (direction == 1)
    {
        gpu_shiftSeam<<< blocks,threads >>>(device_MC, device_new, device_Seam, width, height, x);
    }
    // measure end time for gpu calculations
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&elapsed_time_ms, start, stop);
    execTime += elapsed_time_ms;
    // copy the results back to the host
    cudaMemcpy(newE, device_new, size, cudaMemcpyDeviceToHost);
    // free memory
    free(host_Seam);
    cudaFree(host_MC); cudaFree(host_new);
    cudaFree(device_MC); cudaFree(device_new); cudaFree(device_Seam);
    // destroy event objects
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return newE;
}
So, my program crashes when I call kernel_shiftSeam many times. I also freed the memory using cudaFree, so I don't know whether or not it's a memory leak problem. It would be great if someone could point me in the right direction.
Could be heap problems. Try reordering the cudaFree calls in your wrapper to be LIFO (free in the reverse order of allocation). Check the release notes of any newer CUDA drivers for heap/leak fixes. On Windows, try installing Process Explorer 15.12 or newer, as it shows GPU memory usage; a leaky heap is easy to spot.
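To make a leak or a failed free visible, here is a minimal sketch using the standard runtime API; in practice you would wrap every CUDA call this way, only one is shown:
cudaError_t err = cudaFree(device_MC);
if (err != cudaSuccess) {
    // a failed free often points at a corrupted heap or a wrong pointer
    fprintf(stderr, "cudaFree(device_MC): %s\n", cudaGetErrorString(err));
}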

CUDA program causes nvidia driver to crash

My Monte Carlo pi calculation CUDA program causes my NVIDIA driver to crash when I exceed around 500 trials and 256 full blocks. It seems to happen in the monteCarlo kernel function. Any help is appreciated.
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
#include <curand.h>
#include <curand_kernel.h>

#define NUM_THREAD 256
#define NUM_BLOCK 256

///////////////////////////////////////////////////////////////////////////////////////////
// Function to sum an array
__global__ void reduce0(float *g_odata) {
    extern __shared__ int sdata[];
    // each thread loads one element from global to shared mem
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    sdata[tid] = g_odata[i];
    __syncthreads();
    // do reduction in shared mem
    for (unsigned int s = 1; s < blockDim.x; s *= 2) { // step = s x 2
        if (tid % (2*s) == 0) { // only thread IDs divisible by the step participate
            sdata[tid] += sdata[tid + s];
        }
        __syncthreads();
    }
    // write result for this block to global mem
    if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}

///////////////////////////////////////////////////////////////////////////////////////////
__global__ void monteCarlo(float *g_odata, int trials, curandState *states) {
    // unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    unsigned int incircle, k;
    float x, y, z;
    incircle = 0;
    curand_init(1234, i, 0, &states[i]);
    for (k = 0; k < trials; k++) {
        x = curand_uniform(&states[i]);
        y = curand_uniform(&states[i]);
        z = (x*x + y*y);
        if (z <= 1.0f) incircle++;
    }
    __syncthreads();
    g_odata[i] = incircle;
}

///////////////////////////////////////////////////////////////////////////////////////////
int main() {
    float *solution = (float*)calloc(100, sizeof(float));
    float *sumDev, *sumHost, total;
    const char *error;
    int trials;
    curandState *devStates;
    trials = 500;
    total = trials*NUM_THREAD*NUM_BLOCK;
    dim3 dimGrid(NUM_BLOCK,1,1);   // Grid dimensions
    dim3 dimBlock(NUM_THREAD,1,1); // Block dimensions
    size_t size = NUM_BLOCK*NUM_THREAD*sizeof(float); // Array memory size
    sumHost = (float*)calloc(NUM_BLOCK*NUM_THREAD, sizeof(float));
    cudaMalloc((void **)&sumDev, size); // Allocate array on device
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s\n", error);
    cudaMalloc((void **)&devStates, (NUM_THREAD*NUM_BLOCK)*sizeof(curandState));
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s\n", error);
    // Do calculation on device by calling CUDA kernel
    monteCarlo <<<dimGrid, dimBlock>>> (sumDev, trials, devStates);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s\n", error);
    // call reduction function to sum
    reduce0 <<<dimGrid, dimBlock, (NUM_THREAD*sizeof(float))>>> (sumDev);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s\n", error);
    dim3 dimGrid1(1,1,1);
    dim3 dimBlock1(256,1,1);
    reduce0 <<<dimGrid1, dimBlock1, (NUM_THREAD*sizeof(float))>>> (sumDev);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s\n", error);
    // Retrieve result from device and store it in host array
    cudaMemcpy(sumHost, sumDev, sizeof(float), cudaMemcpyDeviceToHost);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s\n", error);
    *solution = 4*(sumHost[0]/total);
    printf("%.*f\n", 1000, *solution);
    free(solution);
    free(sumHost);
    cudaFree(sumDev);
    cudaFree(devStates);
    //*solution = NULL;
    return 0;
}
If smaller numbers of trials work correctly, and if you are running on MS Windows without the NVIDIA Tesla Compute Cluster (TCC) driver and/or the GPU you are using is attached to a display, then you are probably exceeding the operating system's "watchdog" timeout. If the kernel occupies the display device (or any GPU on Windows without TCC) for too long, the OS will kill the kernel so that the system does not become non-interactive.
The solution is to run on a non-display-attached GPU and if you are on Windows, use the TCC driver. Otherwise, you will need to reduce the number of trials in your kernel and run the kernel multiple times to compute the number of trials you need.
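Here is a hedged sketch of that multiple-launch approach, reusing the names from the question. It assumes monteCarlo is changed to accumulate (g_odata[i] += incircle) and that curand_init has been moved out of the kernel (see the edit below); otherwise every launch would replay the same random sequence:
cudaMemset(sumDev, 0, size);              // zero the accumulator once
int chunk = 50;                           // small enough to finish under the watchdog
for (int done = 0; done < trials; done += chunk) {
    int n = (trials - done < chunk) ? trials - done : chunk;
    monteCarlo <<<dimGrid, dimBlock>>> (sumDev, n, devStates);
    cudaDeviceSynchronize();              // give the display a chance between launches
}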
EDIT: According to the CUDA 4.0 curand docs (page 15, "Performance Notes"), you can improve performance by copying the state for a generator to local storage inside your kernel, then storing the state back (if you need it again) when you are finished:
curandState state = states[i];
for (k = 0; k < trials; k++) {
    x = curand_uniform(&state);
    y = curand_uniform(&state);
    z = (x*x + y*y);
    if (z <= 1.0f) incircle++;
}
states[i] = state; // store the state back if you will need it again
Next, the docs mention that setup is expensive, and suggest moving curand_init into a separate kernel. This may help keep the cost of your MC kernel down so you don't run up against the watchdog.
I recommend reading that section of the docs; there are several useful guidelines.
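A minimal sketch of that separate setup kernel, using the same seed and per-thread indexing as the question:
__global__ void initRNG(curandState *states, unsigned long seed) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    // pay the expensive generator setup only once, before any Monte Carlo launches
    curand_init(seed, i, 0, &states[i]);
}

// host side, once, before the trial loop:
initRNG <<<dimGrid, dimBlock>>> (devStates, 1234);
monteCarlo then drops its curand_init call and just copies states[i] into a local variable as shown above.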
For those of you who have a GeForce GPU, which does not support the TCC driver, there is another solution based on:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff569918(v=vs.85).aspx
1. Start regedit.
2. Navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers.
3. Create a new DWORD value called TdrLevel and set it to 0.
4. Restart the PC.
Now your long-running kernels should not be terminated. This answer is based on:
Modifying registry to increase GPU timeout, windows 7
I just thought it might be useful to provide the solution here as well.