Why does my textbook say that the space complexity for this algorithm is O(1)? I feel like it is O(n) for the size of the Linked List that it is holding.
public static LinkedListNode nthToLast(LinkedListNode head, int k) {
    LinkedListNode p1 = head;
    LinkedListNode p2 = head;

    /* Move p1 k nodes into the list. */
    for (int i = 0; i < k; i++) {
        if (p1 == null) return null; // Out of bounds
        p1 = p1.next;
    }

    /* Move them at the same pace. When p1 hits the end,
     * p2 will be at the right element. */
    while (p1 != null) {
        p1 = p1.next;
        p2 = p2.next;
    }
    return p2;
}
When you think about the space complexity of an algorithm, consider the additional space that the algorithm explicitly allocates. In the example above, the algorithm to find the nth-to-last LinkedListNode in the list simply creates two additional LinkedListNode pointers, p1 and p2, as well as one counter i for the for loop. The amount of additional memory the algorithm allocates has nothing to do with the length of the linked list passed in: you would still use two LinkedListNode pointers and one integer counter whether the linked list passed to nthToLast() was 10 nodes long or 10 million nodes long. Therefore, this algorithm's space complexity is O(1).
The space complexity is how much memory the algorithm uses as a function of the size of the input. In other words: if the size of the input changed, how much more memory would the algorithm use?
Whether or not the linked list input has 1 node, 10 nodes, or 1000000 nodes, the algorithm uses the same amount of memory. It uses a constant amount because it only allocates 3 variables (a constant number) -- int i, LinkedListNode p1, and LinkedListNode p2.
UPDATE: It's important to note that p1 and p2 each reference just a single node. They are initialized to reference head, which is just the first node in the list. Those two variables don't contain the whole list itself.
head
  |
  v
[node1] --> [node 2] --> [node 3] --> ..... --> [node n]
   ^           ^
   |           |
   p1          p2
Notice in the drawing above that whether we had 1 node or 20 nodes, you'd still only have a single p1 and a single p2. They may reference different nodes at different times in the algorithm, but each only ever references a single node.
The algorithm does O(n) iterations, but it does not allocate any memory for the elements in the list; it only walks the items that already exist. The only memory used is for the local variables p1 and p2, which designate those items.
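For contrast, here is a rough sketch of an approach that really would be O(n) in space, because it stores one pointer per node before answering (written in C++ for brevity; the Node type is hypothetical and only stands in for the LinkedListNode above):

#include <cstddef>
#include <vector>

struct Node {               // hypothetical stand-in for LinkedListNode
    int value;
    Node* next;
};

// O(n) extra space: the vector grows with the length of the list.
Node* nthToLastWithVector(Node* head, std::size_t k) {
    std::vector<Node*> nodes;
    for (Node* cur = head; cur != nullptr; cur = cur->next)
        nodes.push_back(cur);                        // one stored pointer per node
    if (k == 0 || k > nodes.size()) return nullptr;  // out of bounds
    return nodes[nodes.size() - k];                  // k-th node from the end
}

Both versions run in O(n) time, but only the two-pointer version keeps its extra memory constant as the list grows.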
I was reading a book about competitive programming and encountered a problem where we have to count all possible paths in an n*n matrix.
Now the conditions are:
1. All cells must be visited exactly once (cells must not be left unvisited or visited more than once)
2. Path should start from (1,1) and end at (n,n)
3. Possible moves are right, left, up, down from current cell
4. You cannot go out of the grid
Now this is my code for the problem:
typedef long long ll;

ll path_count(ll n, vector<vector<bool>>& done, ll r, ll c) {
    ll count = 0;
    done[r][c] = true;
    if (r == (n-1) && c == (n-1)) {
        for (ll i = 0; i < n; i++) {
            for (ll j = 0; j < n; j++) if (!done[i][j]) {
                done[r][c] = false;
                return 0;
            }
        }
        count++;
    }
    else {
        if ((r+1) < n  && !done[r+1][c]) count += path_count(n, done, r+1, c);
        if ((r-1) >= 0 && !done[r-1][c]) count += path_count(n, done, r-1, c);
        if ((c+1) < n  && !done[r][c+1]) count += path_count(n, done, r, c+1);
        if ((c-1) >= 0 && !done[r][c-1]) count += path_count(n, done, r, c-1);
    }
    done[r][c] = false;
    return count;
}
Here, if we define a recurrence relation, it could be: T(n) = 4T(n-1) + n^2
Is this recurrence relation true? I don't think so, because using the master theorem it would give a result of O(4^n * n^2), and I don't think it can be of this order.
The reason I say this is that when I run it for a 7*7 matrix it takes around 110.09 seconds, and I don't think an O(4^n * n^2) algorithm should take that much time for n=7.
If we calculate it for n=7, the approximate number of operations would be 4^7 * 7^2 = 802816 ~ 10^6. For that many operations it should not take that much time, so I conclude that my recurrence relation is false.
This code outputs 111712 for n=7, which is the same as the book's output, so the code is right.
So what is the correct time complexity??
No, the complexity is not O(4^n * n^2).
Consider the 4^n in your notation. This means going to a depth of at most n - or 7 in your case - and having 4 choices at each level. But this is not the case: at the 8th level you still have multiple choices of where to go next. In fact, you keep branching until you find the path, which has depth n^2.
So a non-tight bound gives us O(4^(n^2) * n^2). This bound, however, is far from tight, as it assumes you have 4 valid choices at each of your recursive calls. This is not the case.
I am not sure how much tighter it can be, but a first attempt drops it to O(3^(n^2) * n^2), since you cannot go back to the node you came from. This bound is still far from optimal.
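If you want a feel for how fast the real call tree grows (and how loose these bounds are), one option is to instrument the recursion with a call counter and run it for small n. A rough sketch, using the same logic as your path_count with a global counter added and the all-visited check folded into a flag:

#include <cstdio>
#include <vector>
using namespace std;
typedef long long ll;

ll calls = 0;  // number of recursive invocations

ll path_count(ll n, vector<vector<bool>>& done, ll r, ll c) {
    ++calls;
    ll count = 0;
    done[r][c] = true;
    if (r == n - 1 && c == n - 1) {
        bool all = true;                       // accept only full coverage
        for (ll i = 0; i < n && all; i++)
            for (ll j = 0; j < n && all; j++)
                if (!done[i][j]) all = false;
        if (all) count = 1;
    } else {
        if (r + 1 < n  && !done[r + 1][c]) count += path_count(n, done, r + 1, c);
        if (r - 1 >= 0 && !done[r - 1][c]) count += path_count(n, done, r - 1, c);
        if (c + 1 < n  && !done[r][c + 1]) count += path_count(n, done, r, c + 1);
        if (c - 1 >= 0 && !done[r][c - 1]) count += path_count(n, done, r, c - 1);
    }
    done[r][c] = false;
    return count;
}

int main() {
    for (ll n = 2; n <= 5; n++) {
        calls = 0;
        vector<vector<bool>> done(n, vector<bool>(n, false));
        ll paths = path_count(n, done, 0, 0);
        printf("n=%lld paths=%lld calls=%lld\n", n, paths, calls);
    }
}

Comparing calls for successive n gives a rough empirical sense of where between 3^(n^2) and 4^(n^2) the true growth lies.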
I was recently asked an interview question about testing the validity of a Sudoku board. A basic answer involves for loops. Essentially:
for(int x = 0; x != 9; ++x)
    for(int y = 0; y != 9; ++y)
        // ...
Use these nested for loops to check the rows. Do it again to check the columns. Do one more pass for the sub-squares, but that one is more involved because we're dividing the sudoku board into sub-boards, so we end up with more than two nested loops, maybe three or four.
I was later asked the complexity of this code. Frankly, as far as I'm concerned, all the cells of the board are visited exactly three times, so O(3n). To me, the fact that we have nested loops doesn't mean this code is automatically O(n^2) or even O(n^highest-nesting-level-of-loops). But I suspect that's the answer the interviewer expected...
Posed another way, what is the complexity of these two pieces of code:
for(int i = 0; i != n; ++i)
    // ...
and:
for(int i = 0; i != sqrt(n); ++i)
    for(int j = 0; j != sqrt(n); ++j)
        // ...
Your general intuition is correct. Let's clarify a bit about Big-O notation:
Big-O gives you an upper bound for the worst-case (time) complexity for your algorithm, in relation to n - the size of your input. In essence, it is a measurement of how the amount of work changes in relation to the size of the input.
When you say something like
all the cells of the board are visited exactly three times so O(3n).
you are implying that n (the size of your input) is the number of cells on the board, and therefore visiting all cells three times would indeed be an O(3n) (which is O(n)) operation. If this is the case, you would be correct.
However usually when referring to Sudoku problems (or problems involving a grid in general), n is taken to be the number of cells in each row/column (an n x n board). In this case, the runtime complexity would be O(3n²) (which is indeed equal to O(n²)).
In the future, it is perfectly valid to ask your interviewer what n is.
As for the question in the title (Is a nested for loop automatically O(n^2)?) the short answer is no.
Consider this example:
for(int i = 0 ; i < n ; i++) {
    for(int j = 1 ; j < n ; j *= 2) {
        ... // some constant time operation
    }
}
The outer loop makes n iterations while the inner loop makes log2(n) iterations - therefore the time complexity is O(nlogn).
In your examples, in the first one you have a single for-loop making n iterations, therefore a complexity of (at least) O(n) (the operation is performed an order of n times).
In the second one you have two nested for loops, each making sqrt(n) iterations, so the total runtime complexity is (at least) O(n) as well. The second function isn't automatically O(n^2) simply because it contains a nested loop. The number of operations being made is still of the same order (n), so these two examples have the same complexity - since we assume n means the same thing for both examples.
This is the most crucial point to drive home. To compare the performance of two algorithms, you must use the same definition of the input size for both. In your sudoku problem you could have defined n in a few different ways, and the way you define it directly affects the complexity calculation of the problem - even though the amount of work is the same.
*NOTE - this is unrelated to your question, but in the future avoid using != in loop conditions. In your second example, if sqrt(n) is not a whole number, the loop could run forever, depending on the language and how it is defined. It is therefore recommended to use < instead.
It depends on how you define the so-called N.
If the size of the board is N-by-N, then yes, the complexity is O(N^2).
But if you say the total number of cells is N (i.e., the board is sqrt(N)-by-sqrt(N)), then the complexity is O(N), or O(3N) if you mind the constant.
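For concreteness, here is a sketch of the "each cell is visited exactly three times" structure for a 9x9 board - one pass for rows, one for columns, and one four-deep loop nest for the sub-squares. The names (N, B, board) and the bitset duplicate check are just one possible way to write it, and it assumes entries are 0 (empty) or the digits 1..9:

#include <bitset>

const int N = 9;   // side length of the board
const int B = 3;   // side length of a sub-square

bool isValid(const int board[N][N]) {
    // Pass 1: rows (each cell read once).
    for (int r = 0; r < N; ++r) {
        std::bitset<N + 1> seen;
        for (int c = 0; c < N; ++c) {
            int v = board[r][c];
            if (v != 0 && seen[v]) return false;
            if (v != 0) seen[v] = true;
        }
    }
    // Pass 2: columns (each cell read a second time).
    for (int c = 0; c < N; ++c) {
        std::bitset<N + 1> seen;
        for (int r = 0; r < N; ++r) {
            int v = board[r][c];
            if (v != 0 && seen[v]) return false;
            if (v != 0) seen[v] = true;
        }
    }
    // Pass 3: sub-squares (four nested loops, but still one read per cell).
    for (int br = 0; br < N; br += B)
        for (int bc = 0; bc < N; bc += B) {
            std::bitset<N + 1> seen;
            for (int r = br; r < br + B; ++r)
                for (int c = bc; c < bc + B; ++c) {
                    int v = board[r][c];
                    if (v != 0 && seen[v]) return false;
                    if (v != 0) seen[v] = true;
                }
        }
    return true;
}

Whether you call that O(3N^2) with N = 9 the side length, or O(3N) with N = 81 the number of cells, the work per cell is the same.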
I tried to write a program to construct a binary search tree using the pre-order sequence. I know there are many solutions: the min/max algorithm, the classical (or "obvious" recursion) or even iteration rather than recursion.
I tried to implement the classical recursion: the first element of the pre-order traversal is the root. Then I search for all elements which are less than the root. All these elements will be part of the left subtree, and the other values will be part of the right subtree. I repeat that until I have constructed all subtrees. It's a very classical approach.
Here is my code:
public static TreeNode constructInOrderTree(int[] inorder) {
    return constructInOrderTree(inorder, 0, inorder.length - 1);
}

private static TreeNode constructInOrderTree(int[] inorder, int start, int end) {
    if (start > end) {
        return null;
    }
    int rootValue = inorder[start];
    TreeNode root = new TreeNode(rootValue);
    int k = 0;
    for (int i = 0; i < inorder.length; i++) {
        if (inorder[i] <= rootValue) {
            k = i;
        }
    }
    root.left = constructInOrderTree(inorder, start + 1, k);
    root.right = constructInOrderTree(inorder, k + 1, end);
    return root;
}
My question is: What is the time complexity of this algorithm? Is it O(n^2) or O(n*log(n))?
I searched here on Stack Overflow but I found many contradictory answers: sometimes it is said to be O(n^2), sometimes O(n*log(n)), and I got really confused.
Can we apply the master theorem here? If yes, "perhaps" we can consider that each time we divide the tree into two subtrees (of equal parts), so we would have the relation (O(n) being the complexity of searching the array):
T(n) = 2 * T(n/2) + O(n)
Which would give us a complexity of O(n*log(n)). But I don't think that's really true: we don't divide the tree into equal parts, because we search in the array until we find the adequate element, no?
Is it possible to apply the master theorem here?
Forethoughts:
No, it is neither O(n^2) nor O(nlogn) in the worst case, because of the nature of trees and the fact that you don't perform any complex action on each element - all you do is output it, in contrast to sorting it with some comparison algorithm.
The worst case would then be O(n).
That is when the tree is skewed, i.e. one of the root's subtrees is empty. Then you essentially have a simple linked list, and to output it you must visit each element at least once, giving O(n).
Proof:
Let's assume the right subtree is empty and the per-call effort is constant (only the print-out). Then
T(n) = T(n-1) + T(0) + c
T(n) = T(n-2) + 2T(0) + 2c
...
T(n) = nT(0) + nc = n(T(0) + c)
Since T(0) and c are constants, you end up in O(n).
For the problem of producing a bit-pattern with exactly n set bits, I know of two practical methods, but they both have limitations I'm not happy with.
First, you can enumerate all of the possible word values which have that many bits set in a pre-computed table, and then generate a random index into that table to pick out a possible result. This has the problem that as the output size grows the list of candidate outputs eventually becomes impractically large.
Alternatively, you can pick n non-overlapping bit positions at random (for example, by using a partial Fisher-Yates shuffle) and set those bits only. This approach, however, computes a random state in a much larger space than the number of possible results. For example, it may choose the first and second bits out of three, or it might, separately, choose the second and first bits.
This second approach must consume more bits from the random number source than are strictly required. Since it is choosing n bits in a specific order when their order is unimportant, this means that it is making an arbitrary distinction between n! different ways of producing the same result, and consuming at least floor(log_2(n!)) more bits than are necessary.
Can this be avoided?
There is obviously a third approach of iteratively computing and counting off the legal permutations until a random index is reached, but that's simply a space-for-time trade-off on the first approach, and isn't directly helpful unless there is an efficient way to count off those n permutations.
Clarification
The first approach requires picking a single random number between zero and w! / ((w-n)! * n!) (where w is the output size), as this is the number of possible solutions.
The second approach requires picking n random values between zero and w-1, zero and w-2, etc., and these have a product of w! / (w-n)!, which is n! times larger than the first approach.
This means that the random number source has been forced to produce about log_2(n!) extra bits to distinguish n! different results which are all equivalent. I'd like to know if there's an efficient method to avoid relying on this superfluous randomness. Perhaps by using an algorithm which produces an un-ordered list of bit positions, or by directly computing the nth unique permutation of bits.
Seems like you want a variant of Floyd's algorithm:
Algorithm to select a single, random combination of values?
Should be especially useful in your case, because the containment test is a simple bitmask operation. This will require only k calls to the RNG. In the code below, I assume you have randint(limit) which produces a uniform random from 0 to limit-1, and that you want k bits set in a 32-bit int:
mask = 0;
for (j = 32 - k; j < 32; ++j) {
    r = randint(j+1);
    b = 1 << r;
    if (mask & b) mask |= (1 << j);
    else mask |= b;
}
How many bits of entropy you need here depends on how randint() is implemented. If k > 16, set it to 32 - k and negate the result.
Your alternative suggestion of generating a single random number representing one combination among the set (mathematicians would call this a rank of the combination) is simpler if you use colex order rather than lexicographic rank. This code, for example:
for (i = k; i >= 1; --i) {
    while ((b = binomial(n, i)) > r) --n;
    buf[i-1] = n;
    r -= b;
}
will fill the array buf[] with indices from 0 to n-1 for the k-combination at colex rank r. In your case, you'd replace buf[i-1] = n with mask |= (1 << n). The binomial() function is binomial coefficient, which I do with a lookup table (see this). That would make the most efficient use of entropy, but I still think Floyd's algorithm would be a better compromise.
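If it helps, the binomial() lookup table mentioned above can be filled once with Pascal's rule. A minimal sketch (the table size is chosen for 32-bit masks and the code relies on the global array being zero-initialized):

#include <cstdint>

const int MAXN = 33;        // enough for 32-bit masks
uint64_t C[MAXN][MAXN];     // C[n][k] = n choose k, zero elsewhere

void build_binomial_table() {
    for (int n = 0; n < MAXN; ++n) {
        C[n][0] = 1;
        for (int k = 1; k <= n; ++k)
            C[n][k] = C[n - 1][k - 1] + C[n - 1][k];  // Pascal's rule; C[n-1][n] is 0
    }
}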
[Expanding my comment:] If you only have a little raw entropy available, then use a PRNG to stretch it further. You only need enough raw entropy to seed a PRNG. Use the PRNG to do the actual shuffle, not the raw entropy. For the next shuffle reseed the PRNG with some more raw entropy. That spreads out the raw entropy and makes less of a demand on your entropy source.
If you know exactly the range of numbers you need out of the PRNG, then you can, carefully, set up your own LCG PRNG to cover the appropriate range while needing the minimum entropy to seed it.
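As an illustration only, a hand-rolled LCG along those lines might look like the sketch below. The constants are Knuth's MMIX parameters; the range reduction is a plain modulo, which is slightly biased and uses the LCG's weaker low bits, so treat this as a sketch rather than something to rely on:

#include <cstdint>

struct Lcg {
    uint64_t state;
    explicit Lcg(uint64_t seed) : state(seed) {}   // seed once from raw entropy

    uint64_t next() {
        // Knuth's MMIX multiplier and increment; full 2^64 period.
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        return state;
    }

    // Value in [0, limit): simplest possible reduction, slightly biased.
    uint64_t randint(uint64_t limit) { return next() % limit; }
};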
ETA: In C++ there is a next_permutation() function. Try using that. See std::next_permutation Implementation Explanation for more.
Is this a theory problem or a practical problem?
You could still do the partial shuffle, but keep track of the order of the ones and forget the zeroes. There are log(k!) bits of unused entropy in their final order for your future consumption.
You could also just use the recurrence (n choose k) = (n-1 choose k-1) + (n-1 choose k) directly. Generate a random number between 0 and (n choose k)-1. Call it r. Iterate over all of the bits from the nth to the first. If we have to set j of the i remaining bits, set the ith if r < (i-1 choose j-1) and clear it, subtracting (i-1 choose j-1), otherwise.
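A sketch of that recurrence-based unranking in C++ (the binomial() helper below uses the multiplicative formula and is only one way to compute it; r is assumed to be uniform in [0, C(n,k)) and n is assumed to be at most 32):

#include <cstdint>

// n choose k via the multiplicative formula; exact at every step because
// the running value times (n-k+i) is always a multiple of i.
uint64_t binomial(int n, int k) {
    if (k < 0 || k > n) return 0;
    uint64_t result = 1;
    for (int i = 1; i <= k; ++i)
        result = result * (uint64_t)(n - k + i) / i;
    return result;
}

// Mask with exactly k of the low n bits set, selected by rank r.
uint32_t unrank_combination(int n, int k, uint64_t r) {
    uint32_t mask = 0;
    int j = k;                                       // bits still to set
    for (int i = n; i >= 1 && j > 0; --i) {          // consider bit i-1
        uint64_t with_bit = binomial(i - 1, j - 1);  // combos that set this bit
        if (r < with_bit) {
            mask |= 1u << (i - 1);                   // set it
            --j;
        } else {
            r -= with_bit;                           // clear it, skip those combos
        }
    }
    return mask;
}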
Practically, I wouldn't worry about the couple of words of wasted entropy from the partial shuffle; generating a random 32-bit word with 16 bits set costs somewhere between 64 and 80 bits of entropy, and that's entirely acceptable. The growth rate of the required entropy is asymptotically worse than the theoretical bound, so I'd do something different for really big words.
For really big words, you might generate n independent bits that are 1 with probability k/n. This immediately blows your entropy budget (and then some), but it only uses linearly many bits. The number of set bits is tightly concentrated around k, though. For a further expected linear entropy cost, I can fix it up. This approach has much better memory locality than the partial shuffle approach, so I'd probably prefer it in practice.
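A sketch of that last idea for a w-bit word. The fix-up shown here - repeatedly picking a uniform position and flipping it only if it is of the "wrong" kind until the count is right - is just one reading of "fix it up", but it does keep the result uniform over k-subsets, since a uniform m-subset minus (or plus) a uniformly chosen element is a uniform smaller (or larger) subset:

#include <cstdint>
#include <random>

// w-bit mask (w <= 64) with exactly k bits set, chosen uniformly.
uint64_t random_k_bits(int w, int k, std::mt19937_64& rng) {
    std::bernoulli_distribution bit((double)k / w);
    uint64_t mask = 0;
    int set = 0;
    for (int i = 0; i < w; ++i)
        if (bit(rng)) { mask |= 1ULL << i; ++set; }   // each bit set with probability k/w

    std::uniform_int_distribution<int> pos(0, w - 1);
    while (set > k) {                                 // too many: clear a random set bit
        int p = pos(rng);
        if (mask & (1ULL << p)) { mask &= ~(1ULL << p); --set; }
    }
    while (set < k) {                                 // too few: set a random clear bit
        int p = pos(rng);
        if (!(mask & (1ULL << p))) { mask |= 1ULL << p; ++set; }
    }
    return mask;
}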
I would use solution number 3: generate the i-th permutation.
But do you need to generate the first i-1 ones?
You can do it a bit faster than that with a kind of divide-and-conquer method proposed here: Returning i-th combination of a bit array - and maybe you can improve the solution a bit.
Background
From the formula you have given - w! / ((w-n)! * n!) - it looks like your problem has to do with the binomial coefficient, which counts the number of unique combinations, not permutations, which would treat the same positions chosen in different orders as distinct.
You said:
"There is obviously a third approach of iteratively computing and counting off the legal permutations until a random index is reached, but that's simply a space-for-time trade-off on the first approach, and isn't directly helpful unless there is an efficient way to count off those n permutations.
...
This means that the random number source has been forced to produce about log_2(n!) extra bits to distinguish n! different results which are all equivalent. I'd like to know if there's an efficient method to avoid relying on this superfluous randomness. Perhaps by using an algorithm which produces an un-ordered list of bit positions, or by directly computing the nth unique permutation of bits."
So, there is a way to efficiently compute the nth unique combination, or rank, from the k-indexes. The k-indexes refer to a unique combination. For example, let's say that the n choose k case of 4 choose 3 is taken. This means that there are a total of 4 numbers that can be selected (0, 1, 2, 3), which is represented by n, and they are taken in groups of 3, which is represented by k. The total number of unique combinations can be calculated as n! / (k! * (n-k)!). The rank of zero corresponds to the k-index of (2, 1, 0). Rank one is represented by the k-index group of (3, 1, 0), and so forth.
Solution
There is a formula that can be used to very efficiently translate between a k-index group and the corresponding rank without iteration. Likewise, there is a formula for translating between the rank and corresponding k-index group.
I have written a paper on this formula and how it can be seen from Pascal's Triangle. The paper is called Tablizing The Binomial Coefficient.
I have written a C# class which is in the public domain that implements the formula described in the paper. It uses very little memory and can be downloaded from the site. It performs the following tasks:
Outputs all the k-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters.
Converts the k-index to the proper lexicographic index or rank of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle and is very efficient compared to iterating over the entire set.
Converts the index in a sorted binomial coefficient table to the corresponding k-index. The technique used is also much faster than older iterative solutions.
Uses Mark Dominus method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers. This version returns a long value. There is at least one other method that returns an int. Make sure that you use the method that returns a long value.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to use the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with at least 2 cases and there are no known bugs.
The following tested example code demonstrates how to use the class and will iterate through each unique combination:
public void Test10Choose5()
{
    String S;
    int Loop;
    int N = 10;  // Total number of elements in the set.
    int K = 5;   // Total number of elements in each group.
    // Create the bin coeff object required to get all
    // the combos for this N choose K combination.
    BinCoeff<int> BC = new BinCoeff<int>(N, K, false);
    int NumCombos = BinCoeff<int>.GetBinCoeff(N, K);
    // The KIndexes array specifies the indexes for a lexicographic element.
    int[] KIndexes = new int[K];
    StringBuilder SB = new StringBuilder();
    // Loop thru all the combinations for this N choose K case.
    for (int Combo = 0; Combo < NumCombos; Combo++)
    {
        // Get the k-indexes for this combination.
        BC.GetKIndexes(Combo, KIndexes);
        // Verify that the KIndexes returned can be used to retrieve the
        // rank or lexicographic order of the KIndexes in the table.
        int Val = BC.GetIndex(true, KIndexes);
        if (Val != Combo)
        {
            S = "Val of " + Val.ToString() + " != Combo Value of " + Combo.ToString();
            Console.WriteLine(S);
        }
        SB.Remove(0, SB.Length);
        for (Loop = 0; Loop < K; Loop++)
        {
            SB.Append(KIndexes[Loop].ToString());
            if (Loop < K - 1)
                SB.Append(" ");
        }
        S = "KIndexes = " + SB.ToString();
        Console.WriteLine(S);
    }
}
So, the way to apply the class to your problem is to consider each bit in the word size as one of the total number of items. This would be n in the n! / (k! * (n-k)!) formula. To obtain k, or the group size, simply count the number of bits set to 1. You would have to create a list or array of the class objects for each possible k, which in this case would be 32. Note that the class does not handle N choose N, N choose 0, or N choose 1, so the code would have to check for those cases and return 1 for both the 32 choose 0 case and the 32 choose 32 case. For 32 choose 1, it would need to return 32.
If you need to use values not much larger than 32 choose 16 (the worst case for 32 items - it yields 601,080,390 unique combinations), then you can use 32-bit integers, which is how the class is currently implemented. If you need to use 64-bit integers, then you will have to convert the class to use 64-bit longs. The largest value a signed 64-bit long can hold is 9,223,372,036,854,775,807, which is 2^63 - 1. The worst case for n choose k when n is 64 is 64 choose 32, which is 1,832,624,140,942,590,534 - so a long value will work for all 64 choose k cases. If you need numbers bigger than that, then you will probably want to look into using some sort of big-integer class. In C#, the .NET framework has a BigInteger class. If you are working in a different language, it should not be hard to port.
If you are looking for a very good PRNG, one of the fastest, most lightweight, high-quality options is the Tiny Mersenne Twister, or TinyMT for short. I ported the code over to C++ and C#. It can be found here, along with a link to the original author's C code.
Rather than using a shuffling algorithm like Fisher-Yates, you might consider doing something like the following example instead:
// Get 7 random cards.
ulong Card;
ulong SevenCardHand = 0;
for (int CardLoop = 0; CardLoop < 7; CardLoop++)
{
    do
    {
        // The card has a value of between 0 and 51. So, get a random value and
        // left shift it into the proper bit position.
        Card = (1UL << RandObj.Next(CardsInDeck));
    } while ((SevenCardHand & Card) != 0);
    SevenCardHand |= Card;
}
The above code is faster than any shuffling algorithm (at least for obtaining a subset of random cards) since it only works on 7 cards instead of 52. It also packs the cards into individual bits within a single 64 bit word. It makes evaluating poker hands much more efficient as well.
As a side note, the best binomial coefficient calculator I have found that works with very large numbers (it accurately calculated a case that yielded over 15,000 digits in the result) can be found here.
This kernel gives the correct result as written. My problem is more with the correctness of the while loop when I try to improve the performance: I tried several configurations of blocks and threads, but if I change them, the while loop no longer gives the correct result.
The result I get when changing the kernel configuration is that firstArray and secondArray are not filled completely (some cells are left with 0 inside). Both arrays should be filled with the curValue obtained from the if statement.
Any advice is welcome :)
Thank you in advance
#define N 65536

__global__ void whileLoop(int* firstArray_device, int* secondArray_device)
{
    int curValue = 0;
    int curIndex = 1;
    int i = (threadIdx.x) + 2;
    while (i < N) {
        if (i % curIndex == 0) {
            curValue = curValue + curIndex;
            curIndex *= 2;
        }
        firstArray_device[i] = curValue;
        secondArray_device[i] = curValue;
        i += blockDim.x * gridDim.x;
    }
}

int main() {
    firstArray_host[0] = 0;
    firstArray_host[1] = 1;
    secondArray_host[0] = 0;
    secondArray_host[1] = 1;

    // memory allocation + copy on GPU

    // definition number of blocks and threads
    dim3 dimBlock(1, 1);
    dim3 dimGrid(1, 1);

    whileLoop<<<dimGrid, dimBlock>>>(firstArray_device, secondArray_device);

    // copy back to CPU + free memory
}
You have a data-dependency issue here which prevents any meaningful optimization. The variables curValue and curIndex are changed within the while loop and feed forward into the next iteration. As soon as you try to optimize the loop you will find yourself in a situation where these variables have different states and the result changes.
I do not really know what you are trying to achieve, but try to make the while loop independent of the values of a former run of the loop, to avoid the dependencies. Try to separate the data into threads and data chunks in such a way that the indices and values are calculated from the environment state alone, i.e. threadIdx, blockDim, gridDim...
Also try to avoid conditional loops. It is better to use for loops with a constant number of iterations; they are also easier to optimize.
A few things:
1. You left out the code you used to declare your global arrays on the device. It would be helpful to have this info.
2. Your algorithm is not thread-safe when multiple blocks are used. In other words, if you run multiple blocks, not only would they be doing redundant work (thus giving you no gains), but they would also likely at some point try to write to the same global memory locations, creating errors.
3. Your code is thus correct when only one block is used, but this makes it rather pointless: you're running a serial, or lightly-threaded, operation on a parallel device. You cannot run on all your available resources (multiple blocks on multiple SMPs) without memory conflicts (see below).
Currently there are two main issues with this code from a parallel standpoint:
1. int i = (threadIdx.x)+2; yields a starting index of 2 for a single thread, 2 and 3 for two threads in a single block, and so on. I doubt this is what you want, as the first two positions (0, 1) never get addressed. (Remember, arrays start at index 0 in C.) Further, if you include multiple blocks (say 2 blocks each with one thread) then you would have multiple duplicate indices (e.g. for 2 blocks x 1 thread --> indices b1t1: 2, b2t1: 2), and when you use those indices to write to global memory you create conflicts and errors. Doing something like int i = threadIdx.x + blockDim.x * blockIdx.x; would be the typical way to correctly calculate your indices so as to avoid this issue.
2. Your final expression i += blockDim.x * gridDim.x; is okay, because it adds a number equivalent to the total number of threads to i and thus does not create additional clashing or overlap.
Why use the GPU to shuffle memory and do a trivial computation? You may not see much speedup versus a fast CPU, when you factor in the time to take your arrays onto and off of the device.
Work on problems 1 and 2 if you wish, but beyond that, consider your overall goal and what exactly kind of algorithm you are trying to optimize, and come up with a more parallel-friendly solution - or consider whether GPU computing really makes sense for your problem.
To parallelize this algorithm, you need to come up with a formula that can directly calculate the value for a given index in the array. So, pick a random index within the range of the array, then consider which factors go into determining what the value will be for that location. After finding a formula, test it by comparing output values for random indexes with the calculated values from your serial algorithm. When that is correct, create a kernel that starts out by selecting a unique index based on its thread and block indexes. Then calculate the value for that index and store it in the corresponding index of the array.
A trivial example:
Serial:
__global__ void serial(int* array)
{
    int j(0);
    for (int i(0); i < 1024; ++i) {
        array[i] = j;
        j += 5;
    }
}

int main() {
    dim3 dimBlock(1);
    dim3 dimGrid(1);
    serial<<<dimGrid, dimBlock>>>(array);
}
Parallel:
__global__ void parallel(int* array)
{
    int i(threadIdx.x + blockDim.x * blockIdx.x);
    int j(i * 5);
    array[i] = j;
}

int main() {
    dim3 dimBlock(256);
    dim3 dimGrid(1024 / 256);
    parallel<<<dimGrid, dimBlock>>>(array);
}
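Applied to the kernel in the question: with a single thread visiting i = 2, 3, 4, ... in order, curValue only changes when i reaches a power of two, so the stored value for index i >= 2 appears to be 2^floor(log2(i)) - 1. That closed form is my own reading of the loop, not something stated above, so verify it against the serial loop first, for example with a quick host-side check like this:

#include <cstdio>
#define N 65536

int main() {
    static int serial[N];
    // Reproduce the single-thread behaviour of the kernel on the host.
    serial[0] = 0;
    serial[1] = 1;
    int curValue = 0, curIndex = 1;
    for (int i = 2; i < N; ++i) {
        if (i % curIndex == 0) { curValue += curIndex; curIndex *= 2; }
        serial[i] = curValue;
    }
    // Candidate closed form: value(i) = 2^floor(log2(i)) - 1 for i >= 2.
    for (int i = 2; i < N; ++i) {
        int p = 1;
        while (p * 2 <= i) p *= 2;   // p = 2^floor(log2(i))
        if (serial[i] != p - 1) { printf("mismatch at i=%d\n", i); return 1; }
    }
    printf("closed form matches the serial loop for all i < %d\n", N);
    return 0;
}

If the check holds, each array element depends only on its own index, which is exactly the property the answers above say you need before the work can be split across threads and blocks without carrying curValue and curIndex between iterations.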