Write executes after read - input

I'm writing a Prolog program to play 'the game of Nim', and the program is working fine. I am trying to make a user interface to play the game, where the user first has to say what he wants the initial state of the game to be. However, when I execute the code shown in the snippet below, the write only gets executed after I type in some input.
Does anyone know how to solve this?
init(List) :-
    write('Enter number of matches in heaps [N1, N2, N3]:'),
    read(List).
I want the output to be:
?- init(X).
Enter number of matches in heaps [N1, N2, N3]: [1, 3, 5].
X = [1, 3, 5].
But I'm getting:
?- init(X).
[1, 3, 5].
Enter number of matches in heaps [N1, N2, N3]:
X = [1, 3, 5].

Sampling non-repeating integers with given probability distribution

I need to sample n different values taken from a set of integers.
These integers should have different occurrence probabilities, e.g. the larger the value, the likelier it should be.
Using the random package I can sample a set of different values from the set by means of the method
random.sample
However, it doesn't seem to provide a way to associate a probability distribution with the values.
On the other hand, the numpy package allows associating a distribution, but it returns a sample with repetitions. This can be done with the method
numpy.random.choice
I am looking for a method (or a way around) to do what the two methods do, but together.
You can actually use numpy.random.choice, as it has the replace parameter. If set to False, the sampling will be done without replacement.
Here's a random example:
>>> np.random.choice([1, 2, 4, 6, 9], 3, replace=False, p=[1/2, 1/8, 1/8, 1/8, 1/8])
array([1, 9, 4])
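If, as in the question, larger integers should be likelier, one way to build the p vector is to normalize the values themselves. A minimal sketch, assuming the weights are simply proportional to the integers (this proportional weighting is my own illustration, not part of the original answer):

import numpy as np

values = np.array([1, 2, 4, 6, 9])
p = values / values.sum()                      # larger values get higher probability
sample = np.random.choice(values, 3, replace=False, p=p)
print(sample)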

How to do 2D Convolution only at a specific location?

This question has been asked multiple times, but I still could not find what I was looking for. Imagine
data=np.random.rand(N,N) #shape N x N
kernel=np.random.rand(3,3) #shape M x M
I know convolution typically means placing the kernel all over the data. But in my case N and M are of the order of 10000, so I wish to get the value of the convolution at a specific location in the data, say at (10, 37), without doing unnecessary calculations at all other locations. The output will then be just a number. The main goal is to reduce the computation and memory expense. Is there any built-in function that does this with minimal adjustments?
Indeed, evaluating the convolution at a particular position reduces to the sum over the entries of the (pointwise) product of the corresponding submatrix of data and the flipped kernel. Here is a reproducible example.
Code
import numpy as np

N = 1000
M = 3
np.random.seed(777)
data = np.random.rand(N, N)     # shape N x N
kernel = np.random.rand(M, M)   # shape M x M

# Pointwise convolution = pointwise product of the submatrix and the flipped kernel
data[10:10+M, 37:37+M] * kernel[::-1, ::-1]
> array([[0.70980514, 0.37426475, 0.02392947],
         [0.24387766, 0.1985901 , 0.01103323],
         [0.06321042, 0.57352696, 0.25606805]])

Summing these entries gives the convolution value at (10, 37):
conv = np.sum(data[10:10+M, 37:37+M] * kernel[::-1, ::-1])
conv
> 2.45430578
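As a sanity check, the same number should show up at index (10, 37) of a 'valid'-mode convolution over the whole array. A small sketch of that check, assuming scipy is available (this verification is my own addition, and it computes the full convolution, so it defeats the purpose for very large N):

from scipy.signal import convolve2d

# convolve2d flips the kernel internally; with mode='valid' the entry at (10, 37)
# corresponds to the window whose upper-left corner is (10, 37) in data.
full_conv = convolve2d(data, kernel, mode='valid')
print(np.isclose(full_conv[10, 37], conv))   # expected: True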
The kernel is flipped by the definition of convolution, as explained here, and this was kindly pointed out by Warren Weckesser. Thanks!
The key is to make sense of the index you provided. I assumed it refers to the upper-left corner of the submatrix in data; however, it could just as well refer to the midpoint of the window when M is odd.
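For the centred interpretation, a minimal sketch of a helper (the name conv_at and the centred indexing are my own illustration, assuming a square kernel with odd side length and an index far enough from the boundary):

def conv_at(data, kernel, i, j):
    # Convolution value at (i, j), treating (i, j) as the centre of the window.
    M = kernel.shape[0]          # kernel assumed square with odd M
    r = M // 2
    window = data[i - r:i + r + 1, j - r:j + r + 1]
    return np.sum(window * kernel[::-1, ::-1])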
Concept
A different example with N=7 and M=3 exemplifies the idea and is presented here for the kernel
kernel = np.array([[3, 0, -1], [2, 0, 1], [4, 4, 3]])
which, when flipped, yields
kernel[::-1, ::-1]
> array([[ 3,  4,  4],
         [ 1,  0,  2],
         [-1,  0,  3]])
EDIT 1:
Please note that the lecturer in this video does not explicitly mention that flipping the kernel is required before the pointwise multiplication to adhere to the mathematically proper definition of convolution.
EDIT 2:
For large M and a target index close to the boundary of data, a ValueError: operands could not be broadcast together with shapes ... might be thrown. Padding the matrix data with zeros prevents this (although it blows up the memory requirement), i.e.
data = np.pad(data, pad_width=M, mode='constant')
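Note (my own addition, not part of the original answer): padding with pad_width=M shifts every original index by M, so after data = np.pad(data, pad_width=M, mode='constant') the window that started at (10, 37) in the original array now starts at (10 + M, 37 + M):

conv = np.sum(data[10 + M:10 + 2*M, 37 + M:37 + 2*M] * kernel[::-1, ::-1])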

numpy: Broadcasting a vector horizontally

I have a 1-D numpy array v. I'd like to copy it to make a matrix with each row being a copy of v. That's easy: np.broadcast_to(v, desired_shape).
However, if I'd like to treat v as a column vector, and copy it to make a matrix with each column being a copy of v, I can't find a simple way to do it. Through trial and error, I'm able to do this:
np.broadcast_to(v.reshape(v.shape[0], 1), desired_shape)
While that works, I can't claim to understand it (even though I wrote it!).
Part of the problem is that numpy doesn't seem to have a concept of a column vector (hence the reshape hack instead of what in math would just be .T).
But, a deeper part of the problem seems to be that broadcasting only works vertically, not horizontally. Or perhaps a more correct way to say it would be: broadcasting only works on the higher dimensions, not the lower dimensions. I'm not even sure if that's correct.
In short, while I understand the concept of broadcasting in general, when I try to use it for particular applications, like copying the col vector to make a matrix, I get lost.
Can you help me understand or improve the readability of this code?
https://en.wikipedia.org/wiki/Transpose - this article on Transpose talks only of transposing a matrix.
https://en.wikipedia.org/wiki/Row_and_column_vectors -
a column vector or column matrix is an m × 1 matrix
a row vector or row matrix is a 1 × m matrix
You can easily create row or column vectors (matrices):
In [464]: np.array([[1],[2],[3]]) # column vector
Out[464]:
array([[1],
       [2],
       [3]])
In [465]: _.shape
Out[465]: (3, 1)
In [466]: np.array([[1,2,3]]) # row vector
Out[466]: array([[1, 2, 3]])
In [467]: _.shape
Out[467]: (1, 3)
But in numpy the basic structure is an array, not a vector or matrix.
[Array in Computer Science] - Generally, a collection of data items that can be selected by indices computed at run-time
A numpy array can have 0 or more dimensions. In contrast, a MATLAB matrix has 2 or more dimensions; originally a 2d matrix was all that MATLAB had.
To talk meaningfully about a transpose you have to have at least 2 dimensions. One of them may have size one and map onto a 1d vector, but it is still a matrix, a 2d object.
So adding a dimension to a 1d array, whether done with reshape or [:,None], is NOT a hack. It is a perfectly valid and normal numpy operation.
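A tiny illustration (my own addition) that both spellings produce the same (n, 1) column shape:

import numpy as np

v = np.arange(5)
print(v.reshape(-1, 1).shape)   # (5, 1)
print(v[:, None].shape)         # (5, 1)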
The basic broadcasting rules are:
a dimension of size 1 can be changed to match the corresponding dimension of the other array
a dimension of size 1 can be added automatically on the left (front) to match the number of dimensions.
In this example, both steps apply: (5,)=>(1,5)=>(3,5)
In [458]: np.broadcast_to(np.arange(5), (3,5))
Out[458]:
array([[0, 1, 2, 3, 4],
       [0, 1, 2, 3, 4],
       [0, 1, 2, 3, 4]])
In this, we have to explicitly add the size one dimension on the right (end):
In [459]: np.broadcast_to(np.arange(5)[:,None], (5,3))
Out[459]:
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4]])
np.broadcast_arrays(np.arange(5)[:,None], np.arange(3)) produces two (5,3) arrays.
np.broadcast_arrays(np.arange(5), np.arange(3)[:,None]) makes (3,5).
np.broadcast_arrays(np.arange(5), np.arange(3)) produces an error because it has no way of determining whether you want (5,3) or (3,5) or something else.
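A quick check of those shapes (my own illustration):

import numpy as np

a, b = np.broadcast_arrays(np.arange(5)[:, None], np.arange(3))
print(a.shape, b.shape)   # (5, 3) (5, 3)

c, d = np.broadcast_arrays(np.arange(5), np.arange(3)[:, None])
print(c.shape, d.shape)   # (3, 5) (3, 5)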
Broadcasting always adds new dimensions to the left because it'd be ambiguous and bug-prone to try to guess when you want new dimensions on the right. You can make a function to broadcast to the right by reversing the axes, broadcasting, and reversing back:
def broadcast_rightward(arr, shape):
    return np.broadcast_to(arr.T, shape[::-1]).T
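A usage sketch of that helper (my own, assuming v is 1-D), reproducing the column-wise copy from above:

import numpy as np

v = np.arange(5)
print(broadcast_rightward(v, (5, 3)))
# [[0 0 0]
#  [1 1 1]
#  [2 2 2]
#  [3 3 3]
#  [4 4 4]]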

Redux state: action or selector?

The state gets hydrated by a function that randomly creates an array of [number, 0] pairs as follows:
[[3, 0], [6, 0], [8, 0], [2, 0].....]
and that's the app's state at the moment. [3, 0] is an example of a tile containing the number 3 and being invisible (0). Once I click on a tile, it dispatches an action, the 0 of the corresponding element changes to 1, and based on that the tile uncovers, displaying the number (3).
So if I clicked on the first tile, the state would change to:
[[3, 1], [6, 0], [8, 0], [2, 0].....]
Now, I want to add the following:
track how many tiles are uncovered (i.e. have 1 as the second element) at any time.
limit the number of uncovered tiles to 2 (if the 2nd tile's number does not match the first one, both get covered again - many memory games have similar functionality)
if I uncover one tile and then the second and the numbers match, I'd like them to remain permanently uncovered, and we add +1 to the score.
Should I design it as a different part of main state (using a different reducer and then combineReducers)? or should I redesign the first part of the state to include it as follows:
initialState = {
  grid: [[3, 0], [4, 0], ...],
  score: 0,
  number_of_uncovered_tiles: 0
}
Now, to change the values of score and number_of_uncovered_tiles: am I correct that I should not be using actions but selectors, as both will just be automatically calculated based on the values of the grid array elements?
Generally it is suggested to keep your state as flat as possible, avoiding deeply nested hierarchies operated on by a single reducer.
In your case I would split the state into a grid reducer and an uncoveredTiles reducer. This will give you better control over uncovering tiles without the need to run over the array of tiles again and again.
{
  grid: [3, 4, 9, ...],
  score: 0,
  uncoveredTiles: [0, 2],
}
This way, closing the tiles when two are open is just a matter of updating grid[uncoveredTiles[0]] and grid[uncoveredTiles[1]] and resetting uncoveredTiles to [].
In case your tile data gets more complex (e.g. you swap numbers for images), only the grid reducer will have to change, while uncoveredTiles will stay the same.
Also, I would consider introducing a separate reducer for the score, so as not to mix different logical concerns in a single place.

How to get real prediction from TensorFlow

I'm really new to TensorFlow and ML in general. I've been reading a lot and searching for days, but haven't really found anything useful, so..
My main problem is that every single tutorial/example uses images/words/etc., and the outcome is just a vector of numbers between 0 and 1 (yeah, that's the probability). Like that beginner tutorial, where they want to identify numbers in images, so the result set is just a "map" of every single digit's (0-9) probability (kind of). Here comes my problem: I have no idea what the result could be; it could be 1, 2, 999, anything.
So basically:
input: [1, 2, 3, 4, 5]
output: [2, 4, 6, 8, 10]
This really is just a simplified example; I would always have the same number of inputs and outputs. But how can I get the predicted values back from TensorFlow, not just something like [0.1, 0.1, 0.2, 0.2, 0.4]? I'm not really sure how clear my question is; if it's not, please ask.