I need to generate all the possible combinations of N numbers including repetitions.
Problem input: I have N cells, and in each cell I can put one number in the interval 0 to 9.
Wrong solution (with N = 4):
(0 to: 3) permutationsDo: [ : each | Transcript cr; show: each printString].
It does not include #(0 0 0 0), #(1 1 1 1), #(2 2 2 2), etc.
Expected output (with N = 2, and range 1-4 for the sake of brevity):
#(1 1)
#(2 2)
#(3 3)
#(4 4)
#(2 1)
#(3 2)
#(4 3)
#(1 4)
#(3 1)
#(4 2)
#(1 3)
#(2 4)
#(4 1)
#(1 2)
#(2 3)
#(3 4)
Here are a couple of selectors with which you could extend SequenceableCollection. That is the class where permutationsDo: is defined, and it is ultimately inherited by the Interval class.
Category "enumerating":
enumerationsDo: aBlock
| anArray |
anArray := Array new: self size.
self enumerateWithSize: (self size) in: anArray do: [ :each | aBlock value: each ]
Category "private":
enumerateWithSize: aSize in: anArray do: aBlock
(aSize = 1)
ifTrue: [
self do: [ :each |
aBlock value: (anArray at: (self size - aSize + 1) put: each ; yourself) ] ]
ifFalse: [
self do: [ :each |
self enumerateWithSize: (aSize - 1) in: anArray do: [ :eachArray |
aBlock value: (eachArray at: (self size - aSize + 1) put: each ; yourself) ] ] ]
So now you can do:
(0 to: 2) enumerationsDo: [ :each | Transcript show: each printString ; cr ]
Which yields:
#(0 0 0)
#(0 0 1)
#(0 0 2)
#(0 1 0)
#(0 1 1)
#(0 1 2)
#(0 2 0)
#(0 2 1)
#(0 2 2)
#(1 0 0)
#(1 0 1)
#(1 0 2)
#(1 1 0)
#(1 1 1)
#(1 1 2)
#(1 2 0)
#(1 2 1)
#(1 2 2)
#(2 0 0)
#(2 0 1)
#(2 0 2)
#(2 1 0)
#(2 1 1)
#(2 1 2)
#(2 2 0)
#(2 2 1)
#(2 2 2)
This selector operates "symmetrically", just like the existing permutationsDo: does: the number of elements in each resulting array (the number of choices) is the same as the number of values in the collection.
You can easily go from that to a more general solution:
Under "enumerating":
enumerationsDo: aBlock
self enumerationsOfSize: (self size) do: aBlock
enumerationsOfSize: aSize do: aBlock
| anArray |
anArray := Array new: aSize.
self enumerateWithSize: aSize subSize: aSize in: anArray do: [ :each | aBlock value: each ]
Under "private":
enumerateWithSize: aSize subSize: sSize in: anArray do: aBlock
(aSize < sSize)
ifTrue: [ ^self error: 'subSize cannot exceed array size' ].
(sSize < 1)
ifTrue: [ ^self error: 'sizes must be positive' ].
(sSize = 1)
ifTrue: [
self do: [ :each |
aBlock value: (anArray at: (aSize - sSize + 1) put: each ; yourself) ] ]
ifFalse: [
self do: [ :each |
self enumerateWithSize: aSize subSize: (sSize - 1) in: anArray do: [ :eachArray |
aBlock value: (eachArray at: (aSize - sSize + 1) put: each ; yourself) ] ] ]
Here's an example:
(1 to: 3) enumerationsOfSize: 2 do: [ :e | Transcript show: e printString ; cr ]
Which yields:
#(1 1)
#(1 2)
#(1 3)
#(2 1)
#(2 2)
#(2 3)
#(3 1)
#(3 2)
#(3 3)
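So, for the original question (N cells, each holding a digit from 0 to 9), you would evaluate, e.g. with N = 4:
(0 to: 9) enumerationsOfSize: 4 do: [ :each | Transcript show: each printString ; cr ]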
Let me implement this in SequenceableCollection for the sake of simplicity:
nextCombination09
| j |
j := self findLast: [:ai | ai < 9] ifAbsent: [^nil].
j + 1 to: self size do: [:i | self at: i put: 0].
self at: j put: (self at: j) + 1
The idea is the following: Use the lexicographic order to sort all combinations. In other words:
(a1, ..., an) < (b1, ..., bn)
if there is an index j such that aj < bj and ai = bi for all i < j.
With this order the first combination is (0, ..., 0) and the last one (9, ..., 9).
Moreover, given a combination (a1, ..., an), the next one in this order is obtained by incrementing the last index j where aj < 9 and resetting every position after j to 0. For example, the next after (2, 3, 8, 9) is (2, 3, 9, 0), as there can't be anything in between.
Once we get to (9, ..., 9) we are done and the method answers nil.
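For instance, a quick check of the successor rule (a sketch, assuming nextCombination09 has been installed on SequenceableCollection as shown above):
| a |
a := #(2 3 8 9) copy.
a nextCombination09.
Transcript show: a printString; cr.  "prints #(2 3 9 0)"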
Be aware that the method above modifies the receiver, which is why we have to copy in the script below.
Here is the script that produces all combinations (n is your N)
n := <whatever>
array := Array new: n withAll: 0.
combinations := OrderedCollection new: (10 raisedTo: n).
[
combinations add: array copy.
array nextCombination09 notNil] whileTrue.
^combinations
ADDENDUM
The same technique can be used for other problems of similar nature.
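For example, here is a minimal sketch of the same successor idea generalized to an arbitrary upper bound. The selector nextCombinationUpTo: is hypothetical; like the method above it would sit in SequenceableCollection, and it relies on findLast: answering 0 when no element satisfies the block:
nextCombinationUpTo: max
	"Advance the receiver to the next combination whose entries stay in 0..max.
	Answer nil when the receiver is already (max, ..., max)."
	| j |
	j := self findLast: [:ai | ai < max].
	j = 0 ifTrue: [^nil].
	j + 1 to: self size do: [:i | self at: i put: 0].
	self at: j put: (self at: j) + 1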
Related
I need to populate the matrix (stored as an array of arrays) with some values. The matrix is a Jacobian for a simple diffusion problem and looks like this:
J(1,1) = 1, J(N,N)=0
and for 1<n<N:
J(n,n) = -2k/dx^2 - 2*c(n)
J(n,n-1)=J(n,n+1) = k/dx^2
the rest of the matrix entries are zeros.
So far I have this monstrosity:
(1 to: c size) collect: [ :n |
(1 to: c size) collect: [ :m |
n = 1 | (n = c size)
ifTrue: [ m = n ifTrue: [ 1.0 ] ifFalse: [ 0.0 ] ]
ifFalse: [ m = n
ifTrue: [ -2.0 * k / dx squared - (2.0 * (c at: n)) ]
ifFalse: [ m = (n-1) | (m = (n+1))
ifTrue: [ k / dx squared ]
ifFalse: [ 0.0 ] ] ]
] ]
Notice the nested "if-statements" (Smalltalk equivalents). This works. But, is there, perhaps, a more elegant way of doing the same thing? As it stands now, it is rather unreadable.
n := c size.
Matrix
new: n
tabulate: [:i :j | self jacobianAtRow: i column: j]
where
jacobianAtRow: i column: j
| n |
n := c size.
(i = 1 or: [i = n]) ifTrue: [^j = i ifTrue: [1.0] ifFalse: [0.0]].
j = i ifTrue: [^-2.0 * k / dx squared - (2.0 * (c at: i))].
(j = (i - 1) or: [j = (i + 1)]) ifTrue: [^k / dx squared].
^0.0
Basically, the general idea is this: whenever you find nested ifs, factor out that piece of code to a method by itself and transform the nesting into a cases-like enumeration that returns a value at every possibility.
For readability's sake I would consider accepting the extra O(n) work and avoiding the ifs altogether (which might even make it faster...).
J(N,N) = 0
J(1,1) = 1
and for 1 < n < N:
J(n,n) = Y(n)
J(n,m-1) = J(n,m+1) = X
What this tells me is that the whole matrix looks something like this
( 1 X 0 0 0 )
( X Y X 0 0 )
( 0 X Y X 0 )
( 0 0 X Y X )
( 0 0 0 X 0 )
Which means that I can create the whole matrix with just zeros, and then change the diagonal and neighboring diagonals.
jNM := [ k / dx squared ].
jNN := [ :n | -2.0 * k / dx squared - (2.0 * (c at: n)) ].
n := c size.
m := Matrix
new: n
tabulate: [:i :j | 0 ].
(1 to: n - 1) do: [ :i |
m at: i at: i put: (jNN value: i).
m at: i + 1 at: i put: jNM value.
m at: i at: i + 1 put: jNM value.
].
m at: 1 at: 1 put: 1.
Note: I'm not familiar with the math behind this but the value for J(n,m-1) seems like a constant to me.
Note 2: I'm putting the values at the i + 1 indexes because I start at position (1,1); you could instead start from the opposite end and use i - 1.
Long story short, it is a Vandermonde matrix, and I have a problem running a loop over the second dimension of the array.
'enter M and then N (press Enter between them) (the 1st row holds the powers of 1)' displayNl.
M := stdin nextLine asInteger.
N := stdin nextLine asInteger.
|tomb|
tomb := Array new: M.
x := 1.
y := 1.
a := M + 1.
b := N + 1.
x to: a do: [ :i|
tomb at:x put: (Array new: N) y to: b do: [ :j |
x at: y put: (x raisedTo: y - 1) ] ].
tomb printNl.
Here is a good way to create a matrix for which we have an expression of the generic entry aij:
Matrix class >> fromBlock: aBlock rows: n columns: m
| matrix |
matrix := self rows: n columns: m.
matrix indicesDo: [:i :j | | aij |
aij := aBlock value: i value: j.
matrix at: i at: j put: aij].
^matrix
With the above method you can now implement
Matrix class >> vandermonde: anArray degree: anInteger
^self
fromBlock: [:i :j | (anArray at: i) raisedTo: j - 1]
rows: anArray size
columns: anInteger + 1
EDIT
I just realized that in Pharo there is a way to create a matrix from the expression of its aij; it is named rows:columns:tabulate:, so my answer reduces to:
Matrix class >> vandermonde: anArray degree: anInteger
^self
rows: anArray size
columns: anInteger + 1
tabulate: [:i :j | (anArray at: i) raisedTo: j - 1]
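For example, once vandermonde:degree: is installed on the class side of Matrix, a quick check could look like this (sample values chosen arbitrarily):
(Matrix vandermonde: #(2 3 5) degree: 2)
	"answers the 3x3 matrix whose rows are
	 1 2 4
	 1 3 9
	 1 5 25"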
I am running a sorting algorithm in a kernel, and the sorting part uses about 36 VGPR, thus resulting in 12.5% occupancy and awful performance.
The code segment is as follows:
typedef struct {
    float record[8];
    float dis;
    int t_class;
} node;

for (int i = 0; i < num_record; i++) {
    in_data[i].dis = Dist(in_data[i].record, new_point, num_feature);
}

node tmp;
int i;
int j;
#pragma unroll 1
for (i = 0; i < num_record - 1; i++)
    for (j = 0; j < num_record - i - 1; j++)
    {
        if (in_data[j].dis > in_data[j + 1].dis)
        {
            tmp = in_data[j];
            in_data[j] = in_data[j + 1];
            in_data[j + 1] = tmp;
        }
    }
Is there any way to reduce the register usage without big modifications to the algorithm itself? I guess it would be best to get register usage under 16.
Update:
Basically, the kernel is trying to implement an exhaustive kNN method.
float tmp;
tmp = in_data[j].x;
in_data[j].x = in_data[j + 1].x;
in_data[j + 1].x = tmp;
tmp = in_data[j].y;
in_data[j].y = in_data[j + 1].y;
in_data[j + 1].y = tmp;
tmp = in_data[j].z;
in_data[j].z = in_data[j + 1].z;
in_data[j + 1].z = tmp;
should use about a third of the registers of the original code, since it only needs a third of the storage at a time.
You could also do global -> local -> global instead of global -> private -> global to reduce private register usage.
Now that I think of it, AMD compilers are known for using lots of registers when unrolling loops. If your num_record is a fixed value known to the compiler, it may do some unrolling and create copies of tmp as well.
Can you try setting #pragma unroll explicitly, or declaring tmp inside the loop?
Possibility 1:
node tmp;
#pragma unroll
for (int i = 0; i < num_record - 1; i++)
    #pragma unroll
    for (int j = 0; j < num_record - i - 1; j++)
    {
        if (in_data[j].dis > in_data[j + 1].dis)
        {
            tmp = in_data[j];
            in_data[j] = in_data[j + 1];
            in_data[j + 1] = tmp;
        }
    }
Possibility 2:
for (int i = 0; i < num_record - 1; i++)
    for (int j = 0; j < num_record - i - 1; j++)
    {
        if (in_data[j].dis > in_data[j + 1].dis)
        {
            node tmp = in_data[j];
            in_data[j] = in_data[j + 1];
            in_data[j + 1] = tmp;
        }
    }
According to your description, your node structure looks more or less like this (I'm assuming dis is an int, but it doesn't matter whether it is really an int or a float):
struct node
{
int dis;
float f1;
...
float f9;
};
If simple changes won't help ... how about adding one more field index to node which would make it possible to uniquely identify the node (unless this can be done with one of the existing fields) and introducing node2 structure which would contain only index and dis?
Something like this:
struct node
{
int index; // add index for uniqueness unless one of f1 ... f9 could do
int dis;
float f1;
...
float f9;
};
struct node2
{
int index; // add index for uniqueness unless one of f1 ... f9 could do
int dis;
};
Then use node2 for sorting and afterwards, in another kernel or on the host, reorder the node array using the index fields of the already sorted node2 array.
This should reduce the register pressure in the sorting kernel, but it will require more changes: as said, introducing the new node2 struct and possibly splitting the kernel.
I have a list of integers, a = [0, ..., n]. I want to generate all possible combinations of k elements from a; i.e., the Cartesian product of a with itself k times. Note that n and k are both changeable at runtime, so this needs to be at least a somewhat adjustable function.
So if n was 3, and k was 2:
a = [0, 1, 2, 3]
k = 2
desired = [(0,0), (0, 1), (0, 2), ..., (2,3), (3,0), ..., (3,3)]
In python I would use the itertools.product() function:
for p in itertools.product(a, repeat=2):
print p
What's an idiomatic way to do this in Go?
My initial guess is a closure that returns a slice of integers, but it doesn't feel very clean.
For example,
package main
import "fmt"
func nextProduct(a []int, r int) func() []int {
p := make([]int, r)
x := make([]int, len(p))
return func() []int {
p := p[:len(x)]
for i, xi := range x {
p[i] = a[xi]
}
for i := len(x) - 1; i >= 0; i-- {
x[i]++
if x[i] < len(a) {
break
}
x[i] = 0
if i <= 0 {
x = x[0:0]
break
}
}
return p
}
}
func main() {
a := []int{0, 1, 2, 3}
k := 2
np := nextProduct(a, k)
for {
product := np()
if len(product) == 0 {
break
}
fmt.Println(product)
}
}
Output:
[0 0]
[0 1]
[0 2]
[0 3]
[1 0]
[1 1]
[1 2]
[1 3]
[2 0]
[2 1]
[2 2]
[2 3]
[3 0]
[3 1]
[3 2]
[3 3]
The code to find the next product in lexicographic order is simple: starting from the right, find the first value that won't roll over when you increment it, increment that and zero the values to the right.
package main
import "fmt"
func main() {
n, k := 5, 2
ix := make([]int, k)
for {
fmt.Println(ix)
j := k - 1
for ; j >= 0 && ix[j] == n-1; j-- {
ix[j] = 0
}
if j < 0 {
return
}
ix[j]++
}
}
I've changed "n" to mean the set is [0, 1, ..., n-1] rather than [0, 1, ..., n] as given in the question, because the latter is confusing: it has n+1 elements.
Just follow the answer to Implement Ruby style Cartesian product in Go; you can play with it at http://play.golang.org/p/NR1_3Fsq8F
package main
import "fmt"
// NextIndex sets ix to the lexicographically next value,
// such that for each i>0, 0 <= ix[i] < lens.
func NextIndex(ix []int, lens int) {
for j := len(ix) - 1; j >= 0; j-- {
ix[j]++
if j == 0 || ix[j] < lens {
return
}
ix[j] = 0
}
}
func main() {
a := []int{0, 1, 2, 3}
k := 2
lens := len(a)
r := make([]int, k)
for ix := make([]int, k); ix[0] < lens; NextIndex(ix, lens) {
for i, j := range ix {
r[i] = a[j]
}
fmt.Println(r)
}
}
I'm working on a simple board game in Pharo, and I've got a method on my Board that adds objects to a cell. Cells are simply a Dictionary mapping Points to objects.
As part of the method, I wanted to enforce that a Point should be greater than zero but less than the width and height of the board; in other words, that it should actually be on the board. What is the best way to do this?
My current attempt looks like this:
at: aPoint put: aCell
((((aPoint x > self numberOfRows)
or: [aPoint x <= 0])
or: [aPoint y > self numberOfColumns ])
or: [aPoint y <= 0])
ifTrue: [ self error:'The point must be inside the grid.' ].
self cells at: aPoint put: aCell .
Kind of lisp-y with all those parens! But I can't use the short-circuiting or: without closing off each expression so it evaluates as a boolean and not a block (or as the or:or:or:or: message). I could use the binary operator | instead and forgo short-circuiting, but that doesn't seem right.
So what is the properly Smalltalk-ish way to handle this?
Typically the or: are nested like this:
(aPoint x > self numberOfRows
or: [ aPoint x <= 0
or: [ aPoint y > self numberOfColumns
or: [ aPoint y <= 0 ] ] ])
ifTrue: [ self error: 'The point must be inside the grid.' ].
Your nesting is short-circuiting too, but less efficient because of repeated tests of the first argument (check the bytecode to see the difference).
Alternatively, you can use assert: or assert:description:, which are defined on Object:
self
assert: (aPoint x > self numberOfRows
or: [ aPoint x <= 0
or: [ aPoint y > self numberOfColumns
or: [ aPoint y <= 0 ] ] ])
description: 'The point must be inside the grid.'
Any time things are that heavily nested, it's time to call another method.
isValidPoint: aPoint
aPoint x > self numberOfRows ifTrue: [^ false].
aPoint x <= 0 ifTrue: [^ false].
aPoint y > self numberOfColumns ifTrue: [^ false].
aPoint y <= 0 ifTrue: [^ false].
^ true.
In general, your methods should be relatively flat. If not, time to refactor.
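With that helper in place, the original method reduces to something like this (a sketch reusing the error message from the question):
at: aPoint put: aCell
	(self isValidPoint: aPoint)
		ifFalse: [ self error: 'The point must be inside the grid.' ].
	self cells at: aPoint put: aCell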
You can simply prefill the 'cells' dictionary with all the points that are valid within the range, i.e. somewhere in initialization you put:
1 to: numberOfRows do: [:y |
1 to: numberOfCols do: [:x |
cells at: x @ y put: dummy "or nil" ] ]
then your method for adding a cell at given point will look as simple as:
at: aPoint put: aCell
self cells at: aPoint ifAbsent: [ self error: 'The point must be inside the grid.' ].
self cells at: aPoint put: aCell .
There's also a helper method #between:and:, which you can use to minimize the code clutter:
((aPoint x between: 1 and: self numCols) and: [
aPoint y between: 1 and: self numRows ]) ifFalse: [ ... bummer ... ]
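Putting that together, a sketch of the original method using between:and: (keeping the question's numberOfRows / numberOfColumns accessors and its mapping of x to rows):
at: aPoint put: aCell
	((aPoint x between: 1 and: self numberOfRows)
		and: [ aPoint y between: 1 and: self numberOfColumns ])
			ifFalse: [ self error: 'The point must be inside the grid.' ].
	self cells at: aPoint put: aCell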