tensorflow tf.gather_nd to handle unknown dimension

Is there any simple way to implement the code below, especially handling the unknown dimension? I want to add this code to a loss function. Thanks.
result = []
for i in range(0, x.shape[0]):
    tmp2 = tf.gather_nd(x[i], y[i])
    result.append(tmp2)
finalResult = tf.stack(result)
Example:
x shape = (?, 3, 2)
y shape = (?, 1)
x:
[[[ 0  1]
  [ 2  3]
  [ 4  5]]
 [[ 6  7]
  [ 8  9]
  [10 11]]
 [[12 13]
  [14 15]
  [16 17]]...]
y:
[[1]
 [0]
 [2]...]
finalResult:
[[ 2  3]
 [ 6  7]
 [16 17]...]

jdehesa's reply is helpful. Thanks so much. I had to add the indices of the first dimension to the query. (By the way, I made a mistake in the loss function: it has to be differentiable. But that's another issue.) Anyway, thanks again.
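For reference, a minimal sketch of that batched-indexing approach, assuming TF 1.x graph code and the shapes from the example above (the helper name batched_gather is hypothetical):

import tensorflow as tf

def batched_gather(x, y):
    # Hypothetical helper: x has shape (?, 3, 2), y has shape (?, 1).
    batch_size = tf.shape(x)[0]  # dynamic shape, works when the static dim is None
    batch_idx = tf.expand_dims(tf.range(batch_size), axis=1)  # (?, 1) column [0, 1, 2, ...]
    # Prepend the first-dimension index so a single gather_nd handles all rows:
    # the indices become [[0, y0], [1, y1], [2, y2], ...] with shape (?, 2).
    indices = tf.concat([batch_idx, tf.cast(y, tf.int32)], axis=1)
    return tf.gather_nd(x, indices)  # shape (?, 2), matching finalResult above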

Related

Replace element in numpy array depending on a different element's value

I have the following numpy array (as an example):
my_array = np.array([[ 3,  7, 0],
                     [20,  4, 0],
                     [ 7, 54, 0]])
I want to replace the 0 in the third column of each row with 5, but only if the row's first element is odd.
So the expected outcome would be:
my_array = np.array([[ 3,  7, 5],
                     [20,  4, 0],
                     [ 7, 54, 5]])
I tried numpy.where and numpy.place, but couldn't get the expected results.
Is there an elegant way to do this with numpy functions?
You can do this with boolean indexing:
my_array[my_array[:, 0] % 2 != 0, 2] = 5
# my_array[:, 0] % 2 != 0 is a boolean mask selecting the rows to modify --> [ True False True]
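Since the asker mentioned trying numpy.where, the same update can also be written with it; a minimal sketch:

import numpy as np

my_array = np.array([[3, 7, 0],
                     [20, 4, 0],
                     [7, 54, 0]])

# np.where picks 5 where the row's first element is odd, otherwise
# keeps the existing value in column 2.
my_array[:, 2] = np.where(my_array[:, 0] % 2 != 0, 5, my_array[:, 2])
print(my_array)
# [[ 3  7  5]
#  [20  4  0]
#  [ 7 54  5]]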

Numpy 3D Array dimensions and slicing

I have trouble understanding the Numpy representation of 3D arrays. I'm used to it being (rows, columns, depth), but with Numpy it seems to be (depth, rows, columns).
E.g.:
import numpy as np

c = np.arange(12)
c = c.reshape(2, 3, 2)
d = c
d = d.reshape(2, 2, 3)
print(c)
print(d)
c:
[[[ 0  1]
  [ 2  3]
  [ 4  5]]

 [[ 6  7]
  [ 8  9]
  [10 11]]]
d:
[[[ 0  1  2]
  [ 3  4  5]]

 [[ 6  7  8]
  [ 9 10 11]]]
d is the representation I wish for. Now if I want to access the second 2D array I can write:
print(d[1, :, :])
[[ 6  7  8]
 [ 9 10 11]]
So why is the representation so unintuitive in Numpy? And how would I access all odd-indexed (the 1st, 3rd, 5th, ...) 2D arrays of a 3D array, given both representations?
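No answer is quoted here, but the slicing half of the question is a one-liner; a sketch assuming the d array above:

import numpy as np

d = np.arange(12).reshape(2, 2, 3)
# The first axis indexes the 2D sub-arrays in both layouts, so taking
# every second block (the 1st, 3rd, 5th, ... in one-based counting)
# is a step-2 slice along axis 0:
odd_blocks = d[::2]  # equivalent to d[::2, :, :]
print(odd_blocks)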

Error message - "closing bracket expected" when initializing a list

I have defined a turtles-own list called color-affinity. Each of the 14 entries in this list is composed of a named NetLogo color and a corresponding random number up to but not including 5.
I am trying to initialize this list in the setup procedure by calling the function: setup-turtle-color-affinity.
I'm working on Netlogo 6.1 (the latest version). The code is below.
turtles-own [
  color-affinity
]
...
..
.
to setup
  clear-all
  create-turtles population
  setup-turtle-color-affinity
  setup-patches
  reset-ticks
end
...
..
.
to setup-turtle-color-affinity
  ask turtles
  [ setup-color-affinity ]
end

to setup-color-affinity
  [
    ; Here, I want to set up the list so that each turtle gets a random named
    ; NetLogo color and a corresponding random "affinity" score of up to 5.
    ; However, whenever I try this (and I've tried various combinations of
    ; syntax) it gives me an error saying "closing bracket expected".
  ]
end
This may need a little more detail to get a useful answer: for example, how is your color list set up? In NetLogo, the color names read simply as numbers (gray is 5, red is 15, etc.). What kind of format are you after for color-affinity?
If you're after a list of list pairs for each turtle, where each pair is a color value and the affinity value, maybe something like this could work for you:
turtles-own [
  color-affinity
]

to setup
  ca
  let color-values ( range 5 145 10 )
  crt 5 [
    set color-affinity map [ c -> list c ( random 4 + 1 ) ] color-values
    show color-affinity
  ]
  reset-ticks
end
Output:
(turtle 1): [[5 4] [15 3] [25 2] [35 4] [45 2] [55 1] [65 2] [75 1] [85 2] [95 3] [105 3] [115 1] [125 3]]
(turtle 3): [[5 2] [15 2] [25 2] [35 1] [45 2] [55 4] [65 4] [75 4] [85 3] [95 2] [105 1] [115 2] [125 2]]
(turtle 2): [[5 2] [15 4] [25 1] [35 1] [45 1] [55 4] [65 3] [75 2] [85 4] [95 1] [105 4] [115 4] [125 2]]
(turtle 0): [[5 1] [15 1] [25 3] [35 4] [45 4] [55 1] [65 4] [75 2] [85 1] [95 4] [105 1] [115 1] [125 1]]
(turtle 4): [[5 3] [15 3] [25 4] [35 4] [45 2] [55 2] [65 4] [75 1] [85 2] [95 3] [105 1] [115 4] [125 3]]
Edit:
I don't know of a way to automatically pull the color names (not to say there isn't one!), but you may be able to do something like this table extension approach:
extensions [ table ]
globals [ color-table ]

to setup-color-table
  set color-table table:make
  let color-names [
    "gray" "red" "orange" "brown" "yellow"
    "green" "lime" "turquoise" "cyan" "sky"
    "blue" "violet" "magenta" "pink"
  ]
  let color-values ( range 5 145 10 )
  ( foreach color-values color-names [
    [ cv cn ] ->
    table:put color-table cv cn
  ] )
  show table:get color-table 15
  show table:get color-table 65
  show table:get color-table 115
end
Output:
observer: "red"
observer: "lime"
observer: "violet"

The shape of the predicted_ids in the outputs of `tf.contrib.seq2seq.BeamSearchDecoder`

What is the shape of the contents in the outputs of tf.contrib.seq2seq.BeamSearchDecoder? I know that it is an instance of class BeamSearchDecoderOutput(scores, predicted_ids, parent_ids), but what are the shapes of scores, predicted_ids, and parent_ids?
I wrote the following toy code to explore it a little bit myself.
import tensorflow as tf

tgt_vocab_size = 20
embedding_decoder = tf.one_hot(list(range(0, tgt_vocab_size)), tgt_vocab_size)
batch_size = 2
start_tokens = tf.fill([batch_size], 0)
end_token = 1
beam_width = 3
num_units = 18

decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
encoder_outputs = decoder_cell.zero_state(batch_size, dtype=tf.float32)
tiled_encoder_outputs = tf.contrib.seq2seq.tile_batch(encoder_outputs,
                                                      multiplier=beam_width)
my_decoder = tf.contrib.seq2seq.BeamSearchDecoder(cell=decoder_cell,
                                                  embedding=embedding_decoder,
                                                  start_tokens=start_tokens,
                                                  end_token=end_token,
                                                  initial_state=tiled_encoder_outputs,
                                                  beam_width=beam_width)

# dynamic decoding
outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(
    my_decoder, maximum_iterations=4, output_time_major=True)

final_predicted_ids = outputs.predicted_ids
scores = outputs.beam_search_decoder_output.scores
predicted_ids = outputs.beam_search_decoder_output.predicted_ids
parent_ids = outputs.beam_search_decoder_output.parent_ids

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    final_predicted_ids_vals = sess.run(final_predicted_ids)
    print("final_predicted_ids shape:")
    print(final_predicted_ids_vals.shape)
    print("final_predicted_ids_vals: \n%s" % final_predicted_ids_vals)
    print("scores shape:")
    print(sess.run(scores).shape)
    print("scores values: \n %s" % sess.run(scores))
    print("predicted_ids shape: ")
    print(sess.run(predicted_ids).shape)
    print("predicted_ids values: \n %s" % sess.run(predicted_ids))
    print("parent_ids shape:")
    print(sess.run(parent_ids).shape)
    print("parent_ids values: \n %s" % sess.run(parent_ids))
The output is as follows:
final_predicted_ids shape:
(4, 2, 3)
final_predicted_ids_vals:
[[[ 1  8  8]
  [ 1  8  8]]

 [[ 1 13 13]
  [ 1 13 13]]

 [[ 1 13 13]
  [ 1 13 13]]

 [[ 1 13  2]
  [ 1 13  2]]]
scores shape:
(4, 2, 3)
scores values:
[[[ -2.8376358  -2.843168   -2.8478816]
  [ -2.8376358  -2.843168   -2.8478816]]

 [[ -2.8478816  -5.655898   -5.6810265]
  [ -2.8478816  -5.655898   -5.6810265]]

 [[ -2.8478816  -8.478384   -8.495466 ]
  [ -2.8478816  -8.478384   -8.495466 ]]

 [[ -2.8478816 -11.292251  -11.307263 ]
  [ -2.8478816 -11.292251  -11.307263 ]]]
predicted_ids shape:
(4, 2, 3)
predicted_ids values:
[[[ 8 13  1]
  [ 8 13  1]]

 [[ 1 13 13]
  [ 1 13 13]]

 [[ 1 13 12]
  [ 1 13 12]]

 [[ 1 13  2]
  [ 1 13  2]]]
parent_ids shape:
(4, 2, 3)
parent_ids values:
[[[0 0 0]
  [0 0 0]]

 [[2 0 1]
  [2 0 1]]

 [[0 1 1]
  [0 1 1]]

 [[0 1 1]
  [0 1 1]]]
The output of tf.contrib.seq2seq.dynamic_decode(BeamSearchDecoder) is actually an instance of class FinalBeamSearchDecoderOutput, which consists of:
predicted_ids: Final outputs returned by the beam search after all decoding is finished. A tensor of shape [batch_size, num_steps, beam_width] (or [num_steps, batch_size, beam_width] if output_time_major is True). Beams are ordered from best to worst.
beam_search_decoder_output: An instance of BeamSearchDecoderOutput that describes the state of the beam search.
So, to get final predictions/translations of shape [beam_width, batch_size, num_steps], transpose with perm [2, 0, 1] in the batch-major case, or simply tf.transpose(final_predicted_ids) if output_time_major=True.
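A minimal sketch of those two reorderings, continuing the toy code above:

# Time-major output has shape (num_steps, batch_size, beam_width), as printed
# above; a plain transpose reverses all axes to (beam_width, batch_size, num_steps).
beams = tf.transpose(final_predicted_ids)

# A batch-major output (batch_size, num_steps, beam_width) would instead need
# an explicit permutation:
# beams = tf.transpose(final_predicted_ids, perm=[2, 0, 1])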

Remapping numpy arrays according to a dictionary

I want to remap a numpy array according to a dictionary.
Let us assume I have a numpy array with N rows and 3 columns. Now I want to remap the values according to their indices, which are given as tuples in a dictionary.
This works fine:
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.zeros(6).reshape(2, 3)
dictt = {(0, 0): (0, 2), (0, 1): (0, 1), (0, 2): (0, 0),
         (1, 0): (1, 2), (1, 1): (1, 1), (1, 2): (1, 0)}
for key in dictt:
    b[key] = a[dictt[key]]
print(a)
print(b)
a = [[0 1 2]
     [3 4 5]]
b = [[2. 1. 0.]
     [5. 4. 3.]]
Let us assume I have N rows, where N is an even number. Now I want to apply the same mapping (which is valid for the two rows in the example above) to all the other rows.
Hence I want to go from:
[[ 0  1  2]
 [ 3  4  5]
 [ 6  7  8]
 [ 9 10 11]]
to:
b = [[ 2.  1.  0.]
     [ 5.  4.  3.]
     [ 8.  7.  6.]
     [11. 10.  9.]]
Any ideas? I would like to do it fast, since there are 192000 entries in each array that need to be remapped.
For simplicity I would just use [::-1]:
a = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
b = [item[::-1] for item in a]
>>> b
[[2, 1, 0], [5, 4, 3], [8, 7, 6]]
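Given that speed matters here (192000 entries), a hedged note: the same row reversal can also be done as a single vectorized numpy slice, avoiding the Python-level loop:

import numpy as np

a = np.arange(12).reshape(4, 3)
# Reversing every row with a strided slice returns a view, so it is
# O(1) to create regardless of the array size.
b = a[:, ::-1]
print(b)
# [[ 2  1  0]
#  [ 5  4  3]
#  [ 8  7  6]
#  [11 10  9]]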