Convert (vectorize) code with a per-32-bit-element conditional to SSE2/SSE3 - optimization

I want to vectorize code for Core 2. I think I can use intrinsic functions from gcc or icc, and the SSE, SSE2, SSE3, and SSSE3 instructions are allowed.
My code works on arrays of 8 uint32_t elements and looks like this (only the hotspot is shown):
const uint32_t p[8] = {2147483743, 2147483713, 2147483693, 2147483659,
                       2147483647, 2147483629, 2147483587, 2147483579};

void vector_mod_add(uint32_t *a /* a[8] */, uint32_t *b /* b[8] */) {
    int n;
    for (n = 0; n < 8; n++)
        a[n] += b[n];
    for (n = 0; n < 8; n++)
        if (a[n] >= p[n])
            a[n] -= p[n];
}
Addition is rather easy, but I don't know how to do a conditional subtraction.
Also, I have no experience with manual vectorization using SSE2, so please tell me how I should define all the types here.

You can write it as a[n] -= p[n] & ~(a[n] < p[n]). Note that the < here is not the C one; it's the SSE one (_mm_cmplt_epi32, which compiles to pcmpgtd with the operands swapped) that returns -1 in each true element and 0 in each false element (to allow the AND operation), and &~ is pandn. Here is an attempt at the code:
__m128i a, p;
a = _mm_sub_epi32(a, _mm_andnot_si128(_mm_cmplt_epi32(a, p), p));
Note that this uses signed operations, so your numbers will need to stay at or below 2^31 - 1 for it to work correctly. If you need to go beyond that, change _mm_cmplt_epi32(a, p) to _mm_cmplt_epi32(_mm_xor_si128(a, signs), _mm_xor_si128(p, signs)), where signs is a vector of 32-bit words whose elements are all 0x80000000. Here is a version that seems like it will handle wider ranges more efficiently:
__m128i a, p;
a = _mm_sub_epi32(a, p);
a = _mm_add_epi32(a, _mm_and_si128(_mm_srai_epi32(a, 31), p));
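For completeness, here is a minimal sketch of the whole 8-element function built from these pieces, using the second (subtract-then-fix-up) variant. The function name vector_mod_add_sse2 and the unaligned load/store calls are my own choices, not from the question; if you can guarantee 16-byte alignment, the aligned load/store intrinsics can be used instead.

#include <emmintrin.h> /* SSE2 */
#include <stdint.h>

const uint32_t p[8] = {2147483743u, 2147483713u, 2147483693u, 2147483659u,
                       2147483647u, 2147483629u, 2147483587u, 2147483579u};

void vector_mod_add_sse2(uint32_t *a /* a[8] */, const uint32_t *b /* b[8] */) {
    int n;
    for (n = 0; n < 8; n += 4) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + n));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + n));
        __m128i vp = _mm_loadu_si128((const __m128i *)(p + n));
        va = _mm_add_epi32(va, vb);   /* a += b */
        va = _mm_sub_epi32(va, vp);   /* tentatively subtract p */
        /* if the subtraction went negative, the arithmetic shift produces an
           all-ones mask and p is added back; otherwise the mask is zero */
        va = _mm_add_epi32(va, _mm_and_si128(_mm_srai_epi32(va, 31), vp));
        _mm_storeu_si128((__m128i *)(a + n), va);
    }
}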


Parallel Dynamic Programming with CUDA

This is my first attempt at implementing recursion with CUDA. The goal is to extract all the combinations from the set of chars "12345", using the power of CUDA to dynamically parallelize the task. Here is my kernel:
__device__ char route[31] = { "_________________________" };
__device__ char init[6] = { "12345" };

__global__ void Recursive(int depth) {
    // up to depth 6
    if (depth == 5) return;

    // newroute = route - idx
    int x = depth * 6;
    printf("%s\n", route);
    int o = 0;
    int newlen = 0;
    for (int i = 0; i < 6; ++i)
    {
        if (i != threadIdx.x)
        {
            route[i + x - o] = init[i];
            newlen++;
        }
        else
        {
            o = 1;
        }
    }

    Recursive<<<1, newlen>>>(depth + 1);
}

__global__ void RecursiveCount() {
    Recursive<<<1, 5>>>(0);
}
__global__ void RecursiveCount() {
Recursive <<<1,5>>>(0);
}
The idea is to exclude one item (the item corresponding to threadIdx) in each different thread. In each recursive call, using the variable depth, it works over a different base (variable x) within the route device variable.
I expect the kernel to print something like:
2345_____________________
1345_____________________
1245_____________________
1234_____________________
2345_345_________________
2345_245_________________
2345_234_________________
2345_345__45_____________
2345_345__35_____________
2345_345__34_____________
..
2345_245__45_____________
..
But it prints ...
·_____________
·_____________
·_____________
·_____________
·_____________
·2345
·2345
·2345
·2345
...
What am I doing wrong?
I may not articulate every problem with your code, but these items should get you a lot closer.
I recommend providing a complete example. In my view it is basically required by Stack Overflow (see item 1 here; note the use of the word "must"). Your example is missing any host code, including the original kernel call. It's only a few extra lines of code, so why not include it? Sure, in this case I can deduce what the call must have been, but why not just include it? Anyway, based on the output you indicated, it seems fairly evident that the launch configuration of the host launch would have to be <<<1,1>>>.
This doesn't seem to be logical to me:
I expect the kernel prompts something like:
2345_____________________
The very first thing your kernel does is print out the route variable, before making any changes to it, so I would expect _____________________. However we can "fix" this by moving the printout to the end of the kernel.
You may be confused about what a __device__ variable is. It is a global variable, and there is only one copy of it. Therefore, when you modify it in your kernel code, every thread, in every kernel, is attempting to modify the same global variable, at the same time. That cannot possibly have orderly results, in any thread-parallel environment. I chose to "fix" this by making a local copy for each thread to work on.
You have an off-by-1 error, as well as an extent error in this loop:
for (int i = 0; i<6; ++i)
The off-by-1 error is due to the fact that you are iterating over 6 possible items (that is, i can reach a value of 5) but there are only 5 items in your init variable (the 6th item being a null terminator). The correct indexing starts out over 0-4 (with one of those being skipped). On subsequent iteration depths, it's necessary to reduce this indexing extent by 1. Note that I've chosen to fix the first error here by increasing the length of init. There are other ways to fix it, of course. My method inserts an extra _ between depths in the result.
You assume that at each iteration depth, the correct choice of items is the same, and in the same order, i.e. init. However this is not the case. At each depth, the choices of items must be selected not from the unchanging init variable, but from the choices passed from previous depth. Therefore we need a local, per-thread copy of init also.
A few other comments about CUDA Dynamic Parallelism (CDP). When passing pointers to data from one kernel scope to a child scope, local space pointers cannot be used. Therefore I allocate for the local copy of route from the heap, so it can be passed to child kernels. init can be deduced from route, so we can use an ordinary local variable for myinit.
You're going to quickly hit some dynamic parallelism (and perhaps memory) limits here if you continue this. I believe the total number of kernel launches for this is 5^5, which is 3125 (I'm doing this quickly; I may be mistaken). CDP has a pending launch limit of 2048 kernels by default. We're not hitting this here according to what I see, but you'll run into it sooner or later if you increase the depth or width of this operation. Furthermore, in-kernel allocations from the device heap are by default limited to 8MB. I don't appear to be hitting that limit here, but if you expand the problem, this design would probably need to be modified to account for it.
Finally, in-kernel printf output is limited to the size of a particular buffer. If this technique is not already hitting that limit, it will soon if you increase the width or depth.
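As a side note (not part of the original answer), each of the limits mentioned above can be raised from the host before the first kernel launch using cudaDeviceSetLimit; the sizes below are arbitrary illustrations, not recommendations:

// hypothetical host-side setup, called before any kernel launch
cudaDeviceSetLimit(cudaLimitDevRuntimePendingLaunchCount, 4096); // pending CDP launches
cudaDeviceSetLimit(cudaLimitMallocHeapSize, 32 * 1024 * 1024);   // in-kernel malloc heap
cudaDeviceSetLimit(cudaLimitPrintfFifoSize, 16 * 1024 * 1024);   // in-kernel printf buffer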
Here is a worked example, attempting to address the various items above. I'm not claiming it is defect free, but I think the output is closer to your expectations. Note that due to character limits on SO answers, I've truncated/excerpted some of the output.
$ cat t1639.cu
#include <stdio.h>

__device__ char route[31] = { "_________________________" };
__device__ char init[7] = { "12345_" };

__global__ void Recursive(int depth, const char *oroute) {
    char *nroute = (char *)malloc(31);
    char myinit[7];
    if (depth == 0) memcpy(myinit, init, 6);
    else memcpy(myinit, oroute + (depth - 1) * 6, 6);
    myinit[6] = 0;
    if (nroute == NULL) { printf("oops\n"); return; }
    memcpy(nroute, oroute, 30);
    nroute[30] = 0;
    // up to depth 6
    if (depth == 5) return;
    // newroute = route - idx
    int x = depth * 6;
    //printf("%s\n", nroute);
    int o = 0;
    int newlen = 0;
    for (int i = 0; i < (6 - depth); ++i)
    {
        if (i != threadIdx.x)
        {
            nroute[i + x - o] = myinit[i];
            newlen++;
        }
        else
        {
            o = 1;
        }
    }
    printf("%s\n", nroute);
    Recursive<<<1, newlen>>>(depth + 1, nroute);
}

__global__ void RecursiveCount() {
    Recursive<<<1, 5>>>(0, route);
}

int main() {
    RecursiveCount<<<1, 1>>>();
    cudaDeviceSynchronize();
}
$ nvcc -o t1639 t1639.cu -rdc=true -lcudadevrt -arch=sm_70
$ cuda-memcheck ./t1639
========= CUDA-MEMCHECK
2345_____________________
1345_____________________
1245_____________________
1235_____________________
1234_____________________
2345__345________________
2345__245________________
2345__235________________
2345__234________________
2345__2345_______________
2345__345___45___________
2345__345___35___________
2345__345___34___________
2345__345___345__________
2345__345___45____5______
2345__345___45____4______
2345__345___45____45_____
2345__345___45____5______
2345__345___45____5_____5
2345__345___45____4______
2345__345___45____4_____4
2345__345___45____45____5
2345__345___45____45____4
2345__345___35____5______
2345__345___35____3______
2345__345___35____35_____
2345__345___35____5______
2345__345___35____5_____5
2345__345___35____3______
2345__345___35____3_____3
2345__345___35____35____5
2345__345___35____35____3
2345__345___34____4______
2345__345___34____3______
2345__345___34____34_____
2345__345___34____4______
2345__345___34____4_____4
2345__345___34____3______
2345__345___34____3_____3
2345__345___34____34____4
2345__345___34____34____3
2345__345___345___45_____
2345__345___345___35_____
2345__345___345___34_____
2345__345___345___45____5
2345__345___345___45____4
2345__345___345___35____5
2345__345___345___35____3
2345__345___345___34____4
2345__345___345___34____3
2345__245___45___________
2345__245___25___________
2345__245___24___________
2345__245___245__________
2345__245___45____5______
2345__245___45____4______
2345__245___45____45_____
2345__245___45____5______
2345__245___45____5_____5
2345__245___45____4______
2345__245___45____4_____4
2345__245___45____45____5
2345__245___45____45____4
2345__245___25____5______
2345__245___25____2______
2345__245___25____25_____
2345__245___25____5______
2345__245___25____5_____5
2345__245___25____2______
2345__245___25____2_____2
2345__245___25____25____5
2345__245___25____25____2
2345__245___24____4______
2345__245___24____2______
2345__245___24____24_____
2345__245___24____4______
2345__245___24____4_____4
2345__245___24____2______
2345__245___24____2_____2
2345__245___24____24____4
2345__245___24____24____2
2345__245___245___45_____
2345__245___245___25_____
2345__245___245___24_____
2345__245___245___45____5
2345__245___245___45____4
2345__245___245___25____5
2345__245___245___25____2
2345__245___245___24____4
2345__245___245___24____2
2345__235___35___________
2345__235___25___________
2345__235___23___________
2345__235___235__________
2345__235___35____5______
2345__235___35____3______
2345__235___35____35_____
2345__235___35____5______
2345__235___35____5_____5
2345__235___35____3______
2345__235___35____3_____3
2345__235___35____35____5
2345__235___35____35____3
2345__235___25____5______
2345__235___25____2______
2345__235___25____25_____
2345__235___25____5______
2345__235___25____5_____5
2345__235___25____2______
2345__235___25____2_____2
2345__235___25____25____5
2345__235___25____25____2
2345__235___23____3______
2345__235___23____2______
2345__235___23____23_____
2345__235___23____3______
2345__235___23____3_____3
2345__235___23____2______
2345__235___23____2_____2
2345__235___23____23____3
2345__235___23____23____2
2345__235___235___35_____
2345__235___235___25_____
2345__235___235___23_____
2345__235___235___35____5
2345__235___235___35____3
2345__235___235___25____5
2345__235___235___25____2
2345__235___235___23____3
2345__235___235___23____2
2345__234___34___________
2345__234___24___________
2345__234___23___________
2345__234___234__________
2345__234___34____4______
2345__234___34____3______
2345__234___34____34_____
2345__234___34____4______
2345__234___34____4_____4
2345__234___34____3______
2345__234___34____3_____3
2345__234___34____34____4
2345__234___34____34____3
2345__234___24____4______
2345__234___24____2______
2345__234___24____24_____
2345__234___24____4______
2345__234___24____4_____4
2345__234___24____2______
2345__234___24____2_____2
2345__234___24____24____4
2345__234___24____24____2
2345__234___23____3______
2345__234___23____2______
2345__234___23____23_____
2345__234___23____3______
2345__234___23____3_____3
2345__234___23____2______
2345__234___23____2_____2
2345__234___23____23____3
2345__234___23____23____2
2345__234___234___34_____
2345__234___234___24_____
2345__234___234___23_____
2345__234___234___34____4
2345__234___234___34____3
2345__234___234___24____4
2345__234___234___24____2
2345__234___234___23____3
2345__234___234___23____2
2345__2345__345__________
2345__2345__245__________
2345__2345__235__________
2345__2345__234__________
2345__2345__345___45_____
2345__2345__345___35_____
2345__2345__345___34_____
2345__2345__345___45____5
2345__2345__345___45____4
2345__2345__345___35____5
2345__2345__345___35____3
2345__2345__345___34____4
2345__2345__345___34____3
2345__2345__245___45_____
2345__2345__245___25_____
2345__2345__245___24_____
2345__2345__245___45____5
2345__2345__245___45____4
2345__2345__245___25____5
2345__2345__245___25____2
2345__2345__245___24____4
2345__2345__245___24____2
2345__2345__235___35_____
2345__2345__235___25_____
2345__2345__235___23_____
2345__2345__235___35____5
2345__2345__235___35____3
2345__2345__235___25____5
2345__2345__235___25____2
2345__2345__235___23____3
2345__2345__235___23____2
2345__2345__234___34_____
2345__2345__234___24_____
2345__2345__234___23_____
2345__2345__234___34____4
2345__2345__234___34____3
2345__2345__234___24____4
2345__2345__234___24____2
2345__2345__234___23____3
2345__2345__234___23____2
1345__345________________
1345__145________________
1345__135________________
1345__134________________
1345__1345_______________
1345__345___45___________
1345__345___35___________
1345__345___34___________
1345__345___345__________
1345__345___45____5______
1345__345___45____4______
1345__345___45____45_____
1345__345___45____5______
1345__345___45____5_____5
1345__345___45____4______
1345__345___45____4_____4
1345__345___45____45____5
1345__345___45____45____4
1345__345___35____5______
1345__345___35____3______
1345__345___35____35_____
1345__345___35____5______
1345__345___35____5_____5
1345__345___35____3______
1345__345___35____3_____3
1345__345___35____35____5
1345__345___35____35____3
1345__345___34____4______
1345__345___34____3______
1345__345___34____34_____
1345__345___34____4______
1345__345___34____4_____4
1345__345___34____3______
1345__345___34____3_____3
1345__345___34____34____4
1345__345___34____34____3
1345__345___345___45_____
1345__345___345___35_____
1345__345___345___34_____
1345__345___345___45____5
1345__345___345___45____4
1345__345___345___35____5
1345__345___345___35____3
1345__345___345___34____4
1345__345___345___34____3
1345__145___45___________
1345__145___15___________
1345__145___14___________
1345__145___145__________
1345__145___45____5______
1345__145___45____4______
1345__145___45____45_____
1345__145___45____5______
1345__145___45____5_____5
1345__145___45____4______
1345__145___45____4_____4
1345__145___45____45____5
1345__145___45____45____4
1345__145___15____5______
1345__145___15____1______
1345__145___15____15_____
1345__145___15____5______
1345__145___15____5_____5
1345__145___15____1______
1345__145___15____1_____1
1345__145___15____15____5
1345__145___15____15____1
1345__145___14____4______
1345__145___14____1______
1345__145___14____14_____
1345__145___14____4______
1345__145___14____4_____4
1345__145___14____1______
1345__145___14____1_____1
1345__145___14____14____4
1345__145___14____14____1
1345__145___145___45_____
1345__145___145___15_____
1345__145___145___14_____
1345__145___145___45____5
1345__145___145___45____4
1345__145___145___15____5
1345__145___145___15____1
1345__145___145___14____4
1345__145___145___14____1
1345__135___35___________
1345__135___15___________
1345__135___13___________
1345__135___135__________
1345__135___35____5______
1345__135___35____3______
1345__135___35____35_____
1345__135___35____5______
1345__135___35____5_____5
1345__135___35____3______
1345__135___35____3_____3
1345__135___35____35____5
1345__135___35____35____3
1345__135___15____5______
1345__135___15____1______
1345__135___15____15_____
1345__135___15____5______
1345__135___15____5_____5
1345__135___15____1______
1345__135___15____1_____1
1345__135___15____15____5
1345__135___15____15____1
1345__135___13____3______
1345__135___13____1______
1345__135___13____13_____
1345__135___13____3______
1345__135___13____3_____3
1345__135___13____1______
1345__135___13____1_____1
1345__135___13____13____3
1345__135___13____13____1
1345__135___135___35_____
1345__135___135___15_____
1345__135___135___13_____
1345__135___135___35____5
1345__135___135___35____3
1345__135___135___15____5
1345__135___135___15____1
1345__135___135___13____3
1345__135___135___13____1
1345__134___34___________
1345__134___14___________
1345__134___13___________
1345__134___134__________
1345__134___34____4______
1345__134___34____3______
1345__134___34____34_____
1345__134___34____4______
1345__134___34____4_____4
1345__134___34____3______
1345__134___34____3_____3
1345__134___34____34____4
1345__134___34____34____3
1345__134___14____4______
1345__134___14____1______
1345__134___14____14_____
1345__134___14____4______
1345__134___14____4_____4
1345__134___14____1______
1345__134___14____1_____1
1345__134___14____14____4
1345__134___14____14____1
1345__134___13____3______
1345__134___13____1______
1345__134___13____13_____
1345__134___13____3______
1345__134___13____3_____3
1345__134___13____1______
1345__134___13____1_____1
1345__134___13____13____3
1345__134___13____13____1
1345__134___134___34_____
1345__134___134___14_____
1345__134___134___13_____
1345__134___134___34____4
1345__134___134___34____3
1345__134___134___14____4
1345__134___134___14____1
1345__134___134___13____3
1345__134___134___13____1
1345__1345__345__________
1345__1345__145__________
1345__1345__135__________
1345__1345__134__________
1345__1345__345___45_____
1345__1345__345___35_____
1345__1345__345___34_____
1345__1345__345___45____5
1345__1345__345___45____4
1345__1345__345___35____5
1345__1345__345___35____3
1345__1345__345___34____4
1345__1345__345___34____3
1345__1345__145___45_____
1345__1345__145___15_____
1345__1345__145___14_____
1345__1345__145___45____5
1345__1345__145___45____4
1345__1345__145___15____5
1345__1345__145___15____1
1345__1345__145___14____4
1345__1345__145___14____1
1345__1345__135___35_____
1345__1345__135___15_____
1345__1345__135___13_____
1345__1345__135___35____5
1345__1345__135___35____3
1345__1345__135___15____5
1345__1345__135___15____1
1345__1345__135___13____3
1345__1345__135___13____1
1345__1345__134___34_____
1345__1345__134___14_____
1345__1345__134___13_____
1345__1345__134___34____4
1345__1345__134___34____3
1345__1345__134___14____4
1345__1345__134___14____1
1345__1345__134___13____3
1345__1345__134___13____1
1245__245________________
1245__145________________
1245__125________________
1245__124________________
1245__1245_______________
1245__245___45___________
1245__245___25___________
1245__245___24___________
1245__245___245__________
1245__245___45____5______
1245__245___45____4______
1245__245___45____45_____
1245__245___45____5______
1245__245___45____5_____5
1245__245___45____4______
1245__245___45____4_____4
1245__245___45____45____5
1245__245___45____45____4
1245__245___25____5______
1245__245___25____2______
1245__245___25____25_____
1245__245___25____5______
1245__245___25____5_____5
1245__245___25____2______
1245__245___25____2_____2
1245__245___25____25____5
1245__245___25____25____2
1245__245___24____4______
1245__245___24____2______
1245__245___24____24_____
1245__245___24____4______
1245__245___24____4_____4
1245__245___24____2______
1245__245___24____2_____2
1245__245___24____24____4
1245__245___24____24____2
1245__245___245___45_____
1245__245___245___25_____
1245__245___245___24_____
1245__245___245___45____5
1245__245___245___45____4
1245__245___245___25____5
1245__245___245___25____2
1245__245___245___24____4
1245__245___245___24____2
1245__145___45___________
1245__145___15___________
1245__145___14___________
1245__145___145__________
1245__145___45____5______
1245__145___45____4______
1245__145___45____45_____
1245__145___45____5______
1245__145___45____5_____5
1245__145___45____4______
...
1235__1235__235___25_____
1235__1235__235___23_____
1235__1235__235___35____5
1235__1235__235___35____3
1235__1235__235___25____5
1235__1235__235___25____2
1235__1235__235___23____3
1235__1235__235___23____2
1235__1235__135___35_____
1235__1235__135___15_____
1235__1235__135___13_____
1235__1235__135___35____5
1235__1235__135___35____3
1235__1235__135___15____5
1235__1235__135___15____1
1235__1235__135___13____3
1235__1235__135___13____1
1235__1235__125___25_____
1235__1235__125___15_____
1235__1235__125___12_____
1235__1235__125___25____5
1235__1235__125___25____2
1235__1235__125___15____5
1235__1235__125___15____1
1235__1235__125___12____2
1235__1235__125___12____1
1235__1235__123___23_____
1235__1235__123___13_____
1235__1235__123___12_____
1235__1235__123___23____3
1235__1235__123___23____2
1235__1235__123___13____3
1235__1235__123___13____1
1235__1235__123___12____2
1235__1235__123___12____1
1234__234________________
1234__134________________
1234__124________________
1234__123________________
1234__1234_______________
1234__234___34___________
1234__234___24___________
1234__234___23___________
1234__234___234__________
1234__234___34____4______
1234__234___34____3______
1234__234___34____34_____
1234__234___34____4______
1234__234___34____4_____4
1234__234___34____3______
1234__234___34____3_____3
1234__234___34____34____4
1234__234___34____34____3
1234__234___24____4______
1234__234___24____2______
1234__234___24____24_____
1234__234___24____4______
1234__234___24____4_____4
1234__234___24____2______
1234__234___24____2_____2
1234__234___24____24____4
1234__234___24____24____2
1234__234___23____3______
1234__234___23____2______
1234__234___23____23_____
1234__234___23____3______
1234__234___23____3_____3
1234__234___23____2______
1234__234___23____2_____2
1234__234___23____23____3
1234__234___23____23____2
1234__234___234___34_____
1234__234___234___24_____
1234__234___234___23_____
1234__234___234___34____4
1234__234___234___34____3
1234__234___234___24____4
1234__234___234___24____2
1234__234___234___23____3
1234__234___234___23____2
1234__134___34___________
1234__134___14___________
1234__134___13___________
1234__134___134__________
1234__134___34____4______
1234__134___34____3______
1234__134___34____34_____
1234__134___34____4______
1234__134___34____4_____4
1234__134___34____3______
1234__134___34____3_____3
1234__134___34____34____4
1234__134___34____34____3
1234__134___14____4______
1234__134___14____1______
1234__134___14____14_____
1234__134___14____4______
1234__134___14____4_____4
1234__134___14____1______
1234__134___14____1_____1
1234__134___14____14____4
1234__134___14____14____1
1234__134___13____3______
1234__134___13____1______
1234__134___13____13_____
1234__134___13____3______
1234__134___13____3_____3
1234__134___13____1______
1234__134___13____1_____1
1234__134___13____13____3
1234__134___13____13____1
1234__134___134___34_____
1234__134___134___14_____
1234__134___134___13_____
1234__134___134___34____4
1234__134___134___34____3
1234__134___134___14____4
1234__134___134___14____1
1234__134___134___13____3
1234__134___134___13____1
1234__124___24___________
1234__124___14___________
1234__124___12___________
1234__124___124__________
1234__124___24____4______
1234__124___24____2______
1234__124___24____24_____
1234__124___24____4______
1234__124___24____4_____4
1234__124___24____2______
1234__124___24____2_____2
1234__124___24____24____4
1234__124___24____24____2
1234__124___14____4______
1234__124___14____1______
1234__124___14____14_____
1234__124___14____4______
1234__124___14____4_____4
1234__124___14____1______
1234__124___14____1_____1
1234__124___14____14____4
1234__124___14____14____1
1234__124___12____2______
1234__124___12____1______
1234__124___12____12_____
1234__124___12____2______
1234__124___12____2_____2
1234__124___12____1______
1234__124___12____1_____1
1234__124___12____12____2
1234__124___12____12____1
1234__124___124___24_____
1234__124___124___14_____
1234__124___124___12_____
1234__124___124___24____4
1234__124___124___24____2
1234__124___124___14____4
1234__124___124___14____1
1234__124___124___12____2
1234__124___124___12____1
1234__123___23___________
1234__123___13___________
1234__123___12___________
1234__123___123__________
1234__123___23____3______
1234__123___23____2______
1234__123___23____23_____
1234__123___23____3______
1234__123___23____3_____3
1234__123___23____2______
1234__123___23____2_____2
1234__123___23____23____3
1234__123___23____23____2
1234__123___13____3______
1234__123___13____1______
1234__123___13____13_____
1234__123___13____3______
1234__123___13____3_____3
1234__123___13____1______
1234__123___13____1_____1
1234__123___13____13____3
1234__123___13____13____1
1234__123___12____2______
1234__123___12____1______
1234__123___12____12_____
1234__123___12____2______
1234__123___12____2_____2
1234__123___12____1______
1234__123___12____1_____1
1234__123___12____12____2
1234__123___12____12____1
1234__123___123___23_____
1234__123___123___13_____
1234__123___123___12_____
1234__123___123___23____3
1234__123___123___23____2
1234__123___123___13____3
1234__123___123___13____1
1234__123___123___12____2
1234__123___123___12____1
1234__1234__234__________
1234__1234__134__________
1234__1234__124__________
1234__1234__123__________
1234__1234__234___34_____
1234__1234__234___24_____
1234__1234__234___23_____
1234__1234__234___34____4
1234__1234__234___34____3
1234__1234__234___24____4
1234__1234__234___24____2
1234__1234__234___23____3
1234__1234__234___23____2
1234__1234__134___34_____
1234__1234__134___14_____
1234__1234__134___13_____
1234__1234__134___34____4
1234__1234__134___34____3
1234__1234__134___14____4
1234__1234__134___14____1
1234__1234__134___13____3
1234__1234__134___13____1
1234__1234__124___24_____
1234__1234__124___14_____
1234__1234__124___12_____
1234__1234__124___24____4
1234__1234__124___24____2
1234__1234__124___14____4
1234__1234__124___14____1
1234__1234__124___12____2
1234__1234__124___12____1
1234__1234__123___23_____
1234__1234__123___13_____
1234__1234__123___12_____
1234__1234__123___23____3
1234__1234__123___23____2
1234__1234__123___13____3
1234__1234__123___13____1
1234__1234__123___12____2
1234__1234__123___12____1
========= ERROR SUMMARY: 0 errors
$
The answer given by Robert Crovella is correct; as his 5th point says, the mistake was in using init in every recursive call. But I want to clarify something that can be useful for other beginners with CUDA.
I used that variable because when I tried to launch a child kernel passing a local variable I always got the exception: Error: a pointer to local memory cannot be passed to a launch as an argument.
As I'm a C# expert developer I'm not used to working with pointers (ref does the low-level work for that), so I thought there was no way to do it in CUDA/C programming.
As Robert shows in his code, it is possible to allocate a buffer with malloc and copy the data into it, so that pointer can be passed as an argument to the child kernel.
Here is a kernel simplified as an example of deep recursion.
__device__ char init[6] = { "12345" };

__global__ void Recursive(int depth, const char* route) {
    // up to depth 6
    if (depth == 5) return;

    // declaration for a referable argument (point 6)
    char* newroute = (char*)malloc(6);
    memcpy(newroute, route, 5);

    int o = 0;
    int newlen = 0;
    for (int i = 0; i < (6 - depth); ++i)
    {
        if (i != threadIdx.x)
        {
            newroute[i - o] = route[i];
            newlen++;
        }
        else
        {
            o = 1;
        }
    }

    printf("%s\n", newroute);
    Recursive <<<1, newlen>>>(depth + 1, newroute);
}

__global__ void RecursiveCount() {
    Recursive <<<1, 5>>>(0, init);
}
I don't include the main call because I'm using ManagedCUDA for C#, but as Robert says, the call to RecursiveCount can be figured out.
About terminating char arrays with \0: sorry, but I don't know exactly what the benefit is; this code works fine without them.

Gather AVX2/AVX-512 intrinsic for 16-bit integers?

Imagine this piece of code:
void Function(int16_t *src, int *indices, float *dst, int cnt, float mul)
{
    for (int i = 0; i < cnt; i++) dst[i] = float(src[indices[i]]) * mul;
}
This really asks for gather intrinsics e.g. _mm_i32gather_epi32. I got great success with these when loading floats, but are there any for 16-bit ints? Another problem here is that I need to transition from 16-bits on the input to 32-bits (float) on the output.
There is indeed no instruction to gather 16bit integers, but (assuming there is no risk of memory-access violation) you can just load 32bit integers starting at the corresponding addresses, and mask out the upper halves of each value.
For uint16_t this would be a simple bit-and; for signed integers you can shift the values to the left so that the sign bit ends up in the most-significant position. You can then (arithmetically) shift the values back before converting them to float, or, since you multiply them anyway, just scale the multiplication factor accordingly.
Alternatively, you could load from two bytes earlier and arithmetically shift to the right. Either way, your bottleneck will likely be the load ports (vpgatherdd requires 8 load-uops; together with the load for the indices you have 9 loads distributed over two ports, which should result in 4.5 cycles per 8 elements).
Untested possible AVX2 implementation (it does not handle the last elements; if cnt is not a multiple of 8, just execute your original loop at the end):
#include <immintrin.h>
#include <cstdint>
#include <cstddef>

void Function(int16_t const *src, int const *indices, float *dst, size_t cnt, float mul_)
{
    __m256 mul = _mm256_set1_ps(mul_ * float(1.0f / 0x10000));
    for (size_t i = 0; i + 8 <= cnt; i += 8) {  // todo: handle last elements
        // load indices:
        __m256i idx = _mm256_loadu_si256(reinterpret_cast<__m256i const*>(indices + i));
        // load 16-bit integers into the lower halves + garbage in the upper halves:
        __m256i values = _mm256_i32gather_epi32(reinterpret_cast<int const*>(src), idx, 2);
        // shift each value into the upper half (removes the garbage, puts the sign bit in the right place);
        // values are now too large by a factor of 0x10000, which the pre-scaled multiplier undoes
        values = _mm256_slli_epi32(values, 16);
        // convert to float, scale and multiply:
        __m256 fvalues = _mm256_mul_ps(_mm256_cvtepi32_ps(values), mul);
        // store result (note the + i, so each iteration writes the next 8 floats)
        _mm256_storeu_ps(dst + i, fvalues);
    }
}
Porting this to AVX-512 should be straightforward.
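If it helps, here is an untested sketch (my own code, not from the original answer) of what that AVX-512 port might look like, with the same caveats as above: the 32-bit gathers may read two bytes past the last source element, and the remainder when cnt is not a multiple of 16 is not handled. Note that _mm512_i32gather_epi32 takes the index vector first, unlike the AVX2 intrinsic:

#include <immintrin.h>
#include <cstdint>
#include <cstddef>

void Function512(int16_t const *src, int const *indices, float *dst, size_t cnt, float mul_)
{
    __m512 mul = _mm512_set1_ps(mul_ * float(1.0f / 0x10000));
    for (size_t i = 0; i + 16 <= cnt; i += 16) {   // todo: handle last elements
        // load 16 indices:
        __m512i idx = _mm512_loadu_si512(indices + i);
        // gather a 32-bit word starting at each 16-bit element (scale = 2):
        __m512i values = _mm512_i32gather_epi32(idx, src, 2);
        // move each 16-bit value (and its sign bit) into the upper half:
        values = _mm512_slli_epi32(values, 16);
        // convert, undo the 0x10000 factor via the pre-scaled multiplier, store:
        __m512 fvalues = _mm512_mul_ps(_mm512_cvtepi32_ps(values), mul);
        _mm512_storeu_ps(dst + i, fvalues);
    }
}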

Ambiguous process calcChecksum

CONTEXT
I'm using code written to work with a GPS module that connects to the Arduino through serial communication. The module starts each packet with a header (0xb5, 0x62), continues with the information you requested, and ends with two bytes of checksum, CK_A and CK_B. I don't understand the code that calculates that checksum. More info about the checksum algorithm (the 8-bit Fletcher algorithm) is in the module protocol (https://www.u-blox.com/sites/default/files/products/documents/u-blox7-V14_ReceiverDescriptionProtocolSpec_%28GPS.G7-SW-12001%29_Public.pdf), page 74 (87 with index).
MORE INFO
Just wanted to understand the code; it works fine. In the UBX protocol I mentioned there is also a piece of code that explains how it works (it isn't written in C++).
struct NAV_POSLLH {
    // Here goes the struct
};

NAV_POSLLH posllh;

void calcChecksum(unsigned char* CK) {
    memset(CK, 0, 2);
    for (int i = 0; i < (int)sizeof(NAV_POSLLH); i++) {
        CK[0] += ((unsigned char*)(&posllh))[i];
        CK[1] += CK[0];
    }
}
In the link you provide, you can find a link to RFC 1145, which contains that 8-bit Fletcher algorithm as well and explains:
It can be shown that at the end of the loop A will contain the 8-bit
1's complement sum of all octets in the datagram, and that B will
contain (n)*D[0] + (n-1)*D[1] + ... + D[n-1].
n = sizeof byte D[];
Quote adjusted to C syntax
Try it with a couple of bytes, pen and paper, and you'll see :)
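If pen and paper feel tedious, here is a tiny standalone sketch (my own example, with a made-up 3-byte payload rather than the real NAV_POSLLH struct) showing the two running sums in action:

#include <stdio.h>

int main(void)
{
    unsigned char data[] = { 0x01, 0x02, 0x03 };   // hypothetical payload
    unsigned char ck_a = 0, ck_b = 0;
    for (unsigned i = 0; i < sizeof data; ++i) {
        ck_a += data[i];   // A: plain sum of the bytes (mod 256)
        ck_b += ck_a;      // B: weighted sum 3*D[0] + 2*D[1] + 1*D[2] (mod 256)
    }
    // prints CK_A=0x06 CK_B=0x0a
    printf("CK_A=0x%02x CK_B=0x%02x\n", ck_a, ck_b);
    return 0;
}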

Optimizing a Bit-Wise Manipulation Kernel

I have the following code which progressively goes through a string of bits and rearranges them into blocks of 20 bytes. I'm using 32*8 blocks with 40 threads per block. However, the process takes something like 36 ms on my GT630M. Are there any further optimizations I can do, especially with regard to removing the if-else in the innermost loop?
__global__ void test(unsigned char *data)
{
    __shared__ unsigned char dataBlock[20];
    __shared__ int count;
    count = 0;
    unsigned char temp = 0x00;

    for (count = 0; count < (streamSize / 8); count++)
    {
        for (int i = 0; i < 8; i++)
        {
            if (blockIdx.y >= i)
                temp |= (*(data + threadIdx.x * (blockIdx.x + gridDim.x * (i + count))) & (0x01 << blockIdx.y)) >> (blockIdx.y - i);
            else
                temp |= (*(data + threadIdx.x * (blockIdx.x + gridDim.x * (i + count))) & (0x01 << blockIdx.y)) << (i - blockIdx.y);
        }
        dataBlock[threadIdx.x] = temp;
        // do something
    }
}
It's not clear what your code is trying to accomplish, but a couple obvious opportunities are:
1) if possible, use 32-bit words instead of unsigned char.
2) use block sizes that are multiples of 32.
3) The conditional code may not be costing you as much as you expect. You can check by compiling with --cubin --gpu-architecture sm_xx (where xx is the SM version of your target hardware), and using cuobjdump --dump-sass on the resulting cubin file to look at the generated assembly. You may have to modify the source code to hoist the common subexpression into a separate variable, and/or use the ternary operator ? : to hint to the compiler to use predication (a sketch follows below).
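For illustration, here is a sketch of what point 3 might look like applied to the inner loop (my own rewrite, reusing the variable names from the question; whether it actually helps has to be verified in the generated SASS):

// hoist the repeated load + mask into one variable so only the shift differs,
// and use ?: so the compiler can predicate the two variants
__device__ unsigned char gather_bit(const unsigned char *data, int i, int count)
{
    unsigned char bit = *(data + threadIdx.x * (blockIdx.x + gridDim.x * (i + count)))
                        & (0x01 << blockIdx.y);
    return (blockIdx.y >= (unsigned)i) ? (bit >> (blockIdx.y - i))
                                       : (bit << (i - blockIdx.y));
}

// in the kernel's inner loop:  temp |= gather_bit(data, i, count);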

What compilers can detect pure mathematical functions and optimize them (without telling you so)?

I have seen that GCC is not able to detect pure mathematical functions and it needs you to provide the attribute "const" to indicate that.
What compilers can detect pure mathematical functions and optimize them (without telling you so)?
Doing so is inherently risky in languages that have pointers and lack whole-program compilation and analysis. So, if an operation is not declared const, the compiler must assume it could have side effects.
Example:
// getx.cpp
int GetX(int input)
{
    int* pData = (int*) input;
    *pData = 50;
    return 0;
}

// gety.cpp
int GetY(int input)
{
    return GetX(input + 4);
}

// main.cpp
int main()
{
    int arg[] { 0, 4 };
    return GetY((int)arg);
}
The compiler, while compiling GetY, can't tell that GetX treats its argument as a pointer and dereferences and modifies data in a non-functional, side-effect-prone manner. That information is only available during linking, so you'd have to re-invent the concept of linking to include a lot of code generation and analysis to support such a feature.
It's not really (AFAIK) the compiler that does this, but when writing C# in Visual Studio with the ReSharper plugin, you can get compile-time hints indicating that it is possible to declare something as const. On the other hand, that doesn't fall under the category "without telling you so", so it might not be what you're looking for...
It seems that gcc now does: running "gcc -O2 -S" on the following code and reading the assembly, the call to foo() from within test() is identified as pure and moved outside of the loop:
#include <stdio.h>

double __attribute__((noinline)) foo(double x)
{
    x = x + 1;
    x = x * x;
    if (x > 20)
        x -= 1;
    x -= x * x;
    return x;
}

void test(int iters, double x)
{
    int i;
    for (i = 0; i < iters; ++i) {
        printf("%g\n", foo(x));
    }
}
This is Fedora 22, gcc 5.1.1, x86_64. I haven't tried, but with -flto, I would expect this to work across compilation units.
Also, it is worth noting that today gcc has the command line options -Wsuggest-attribute=pure and -Wsuggest-attribute=const.
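As a small illustration of those options (a hypothetical file suggest.c, my own example): compiling the function below with gcc -O2 -c -Wsuggest-attribute=const should make gcc report that square is a candidate for the 'const' attribute, since it is externally visible and provably has no side effects:

/* suggest.c -- compile with: gcc -O2 -c -Wsuggest-attribute=const suggest.c
   expected: a warning that square "might be candidate for attribute 'const'" */
int square(int x)
{
    return x * x;
}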