I have defined a turtles-own list called color-affinity. Each of the 14 entries in this list pairs a named NetLogo color with a corresponding random number up to but not including 5.
I am trying to initialize this list in the setup procedure by calling the procedure setup-turtle-color-affinity.
I'm working in NetLogo 6.1 (the latest version). The code is below.
turtles-own [
color-affinity
]
...
to setup
  clear-all
  create-turtles population
  setup-turtle-color-affinity
  setup-patches
  reset-ticks
end
...
to setup-turtle-color-affinity
  ask turtles
  [ setup-color-affinity ]
end

to setup-color-affinity
  [
    ; Here, I want to set up the list so that each turtle gets a random named NetLogo color and a corresponding random "affinity" score of up to 5. However, whenever I try this (and I've tried various combinations of syntax) it gives me an error saying "closing bracket expected".
  ]
end
This may need a little more detail to get a useful answer; for example, how is your color list set up? In NetLogo, the color names read simply as numbers: grey is 5, red is 15, etc. What kind of format are you after for color-affinity?
If you're after a list of list pairs for each turtle, where each pair is a color value and the affinity value, maybe something like this could work for you:
turtles-own [
  color-affinity
]

to setup
  ca
  let color-values ( range 5 145 10 )   ; base shades of the named NetLogo colors: 5, 15, 25, ...
  crt 5 [
    ; pair each color value with a random affinity from 1 to 4
    set color-affinity map [ c -> list c ( random 4 + 1 ) ] color-values
    show color-affinity
  ]
  reset-ticks
end
Output:
(turtle 1): [[5 4] [15 3] [25 2] [35 4] [45 2] [55 1] [65 2] [75 1] [85 2] [95 3] [105 3] [115 1] [125 3]]
(turtle 3): [[5 2] [15 2] [25 2] [35 1] [45 2] [55 4] [65 4] [75 4] [85 3] [95 2] [105 1] [115 2] [125 2]]
(turtle 2): [[5 2] [15 4] [25 1] [35 1] [45 1] [55 4] [65 3] [75 2] [85 4] [95 1] [105 4] [115 4] [125 2]]
(turtle 0): [[5 1] [15 1] [25 3] [35 4] [45 4] [55 1] [65 4] [75 2] [85 1] [95 4] [105 1] [115 1] [125 1]]
(turtle 4): [[5 3] [15 3] [25 4] [35 4] [45 2] [55 2] [65 4] [75 1] [85 2] [95 3] [105 1] [115 4] [125 3]]
Edit:
I don't know of a way to automatically pull the color names (not to say there isn't one!), but you may have to do something like this table extension approach:
extensions [ table ]

globals [ color-table ]

to setup-color-table
  set color-table table:make
  let color-names [
    "gray" "red" "orange" "brown" "yellow"
    "green" "lime" "turquoise" "cyan" "sky"
    "blue" "violet" "magenta" "pink"
  ]
  let color-values ( range 5 145 10 )
  ( foreach color-values color-names [
    [ cv cn ] ->
    table:put color-table cv cn
  ] )
  show table:get color-table 15
  show table:get color-table 65
  show table:get color-table 115
end
Output:
observer: "red"
observer: "lime"
observer: "violet"
Related
I'm trying to sum my DataFrame's rows as follows.
Let's say I have the DataFrame below (each cell in a row contains a vector/list of the same size!).
In the real problem I have a large number of columns, and it can vary, but I do have a list that contains the names of those columns.
import pandas as pd

df = pd.DataFrame([
[[1,2,3],[1,2,3],[1,2,3]],
[[1,1,1],[1,1,1],[1,1,1]],
[[2,2,2],[2,2,2],[2,2,2]]
], columns=['a','b','c'])
I'm trying to create a new column that will contain the element-wise sum of all the vectors in every row (as summing np.arrays would do), and get the following vectors as a result:
[3,6,9]
[3,3,3]
[6,6,6]
and not what .sum(axis=1) does, which is concatenation:
[1,2,3,1,2,3,1,2,3]
[1,1,1,1,1,1,1,1,1]
[2,2,2,2,2,2,2,2,2]
Can anyone think of an idea? Thanks in advance :)
If the lists all have the same length, create a NumPy array from them and sum for better performance:
import numpy as np

df['Sum'] = np.array(df.to_numpy().tolist()).sum(axis=1).tolist()
print (df)
a b c Sum
0 [1, 2, 3] [1, 2, 3] [1, 2, 3] [3, 6, 9]
1 [1, 1, 1] [1, 1, 1] [1, 1, 1] [3, 3, 3]
2 [2, 2, 2] [2, 2, 2] [2, 2, 2] [6, 6, 6]
Another way using pd.Series.explode:
df['sum'] = df.apply(pd.Series.explode).sum(axis=1).groupby(level=0).agg(list)
Output:
a b c sum
0 [1, 2, 3] [1, 2, 3] [1, 2, 3] [3.0, 6.0, 9.0]
1 [1, 1, 1] [1, 1, 1] [1, 1, 1] [3.0, 3.0, 3.0]
2 [2, 2, 2] [2, 2, 2] [2, 2, 2] [6.0, 6.0, 6.0]
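For comparison, a plain row-wise alternative (just a sketch, slower than the vectorized versions above, and it assumes every cell holds an equal-length list):
import numpy as np
import pandas as pd

df = pd.DataFrame([
    [[1,2,3],[1,2,3],[1,2,3]],
    [[1,1,1],[1,1,1],[1,1,1]],
    [[2,2,2],[2,2,2],[2,2,2]]
], columns=['a','b','c'])

# np.sum over axis=0 adds the per-column lists element-wise within each row
df['Sum'] = df.apply(lambda row: np.sum(row.tolist(), axis=0).tolist(), axis=1)
print(df)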
What is the shape of the contents in the outputs of tf.contrib.seq2seq.BeamSearchDecoder? I know that it is an instance of the class BeamSearchDecoderOutput(scores, predicted_ids, parent_ids), but what are the shapes of scores, predicted_ids and parent_ids?
I wrote the following toy code to explore it a little bit myself.
import tensorflow as tf

tgt_vocab_size = 20
embedding_decoder = tf.one_hot(list(range(0, tgt_vocab_size)), tgt_vocab_size)
batch_size = 2
start_tokens = tf.fill([batch_size], 0)
end_token = 1
beam_width = 3
num_units=18
decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
encoder_outputs = decoder_cell.zero_state(batch_size, dtype=tf.float32)
tiled_encoder_outputs = tf.contrib.seq2seq.tile_batch(encoder_outputs, multiplier=beam_width)
my_decoder = tf.contrib.seq2seq.BeamSearchDecoder(cell=decoder_cell,
                                                  embedding=embedding_decoder,
                                                  start_tokens=start_tokens,
                                                  end_token=end_token,
                                                  initial_state=tiled_encoder_outputs,
                                                  beam_width=beam_width)

# dynamic decoding
outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(my_decoder,
                                                                    maximum_iterations=4,
                                                                    output_time_major=True)
final_predicted_ids = outputs.predicted_ids
scores = outputs.beam_search_decoder_output.scores
predicted_ids = outputs.beam_search_decoder_output.predicted_ids
parent_ids = outputs.beam_search_decoder_output.parent_ids
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    final_predicted_ids_vals = sess.run(final_predicted_ids)
    print("final_predicted_ids shape:")
    print(final_predicted_ids_vals.shape)
    print("final_predicted_ids_vals: \n%s" % final_predicted_ids_vals)
    print("scores shape:")
    print(sess.run(scores).shape)
    print("scores values: \n %s" % sess.run(scores))
    print("predicted_ids shape: ")
    print(sess.run(predicted_ids).shape)
    print("predicted_ids values: \n %s" % sess.run(predicted_ids))
    print("parent_ids shape:")
    print(sess.run(parent_ids).shape)
    print("parent_ids values: \n %s" % sess.run(parent_ids))
The printed output is as follows:
final_predicted_ids shape:
(4, 2, 3)
final_predicted_ids_vals:
[[[ 1 8 8]
[ 1 8 8]]
[[ 1 13 13]
[ 1 13 13]]
[[ 1 13 13]
[ 1 13 13]]
[[ 1 13 2]
[ 1 13 2]]]
scores shape:
(4, 2, 3)
scores values:
[[[ -2.8376358 -2.843168 -2.8478816]
[ -2.8376358 -2.843168 -2.8478816]]
[[ -2.8478816 -5.655898 -5.6810265]
[ -2.8478816 -5.655898 -5.6810265]]
[[ -2.8478816 -8.478384 -8.495466 ]
[ -2.8478816 -8.478384 -8.495466 ]]
[[ -2.8478816 -11.292251 -11.307263 ]
[ -2.8478816 -11.292251 -11.307263 ]]]
predicted_ids shape:
(4, 2, 3)
predicted_ids values:
[[[ 8 13 1]
[ 8 13 1]]
[[ 1 13 13]
[ 1 13 13]]
[[ 1 13 12]
[ 1 13 12]]
[[ 1 13 2]
[ 1 13 2]]]
parent_ids shape:
(4, 2, 3)
parent_ids values:
[[[0 0 0]
[0 0 0]]
[[2 0 1]
[2 0 1]]
[[0 1 1]
[0 1 1]]
[[0 1 1]
[0 1 1]]]
The outputs value returned by tf.contrib.seq2seq.dynamic_decode(BeamSearchDecoder) is actually an instance of the class FinalBeamSearchDecoderOutput, which consists of:
predicted_ids: Final outputs returned by the beam search after all decoding is finished. A tensor of shape [batch_size, num_steps, beam_width] (or [num_steps, batch_size, beam_width] if output_time_major is True). Beams are ordered from best to worst.
beam_search_decoder_output: An instance of BeamSearchDecoderOutput that describes the state of the beam search.
So, to get the final predictions/translations into shape [beam_width, batch_size, num_steps], transpose with perm [2, 0, 1], or simply use tf.transpose(final_predicted_ids) (which reverses the dimensions) when output_time_major=True.
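For illustration, a minimal sketch reusing the names from the toy code above (which sets output_time_major=True, so the final ids come back time-major):
final_predicted_ids = outputs.predicted_ids     # [num_steps, batch_size, beam_width]
beam_major = tf.transpose(final_predicted_ids)  # default perm reverses the axes -> [beam_width, batch_size, num_steps]
best_beam = beam_major[0]                       # beams are ordered best to worst, so index 0 is the best hypothesis per batch entry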
Is there any simple way to implement the code below, and especially to handle the unknown dimension? I want to add this code to a loss function. Thanks.
result = []
for i in range(0, x.shape[0]):
    tmp2 = tf.gather_nd(x[i], y[i])
    result.append(tmp2)
finalResult = tf.stack(result)
Example:
x shape = (?, 3, 2)
y shape = (?, 1)
x :
[[[ 0 1]
[ 2 3]
[ 4 5]]
[[ 6 7]
[ 8 9]
[10 11]]
[[12 13]
[14 15]
[16 17]]...]
y :
[[1]
[0]
[2]...]
finalResult :
[[ 2 3]
[ 6 7]
[16 17]...]
jdehesa's reply is helpful, thanks so much. You have to add the indices of the first dimension to the query. (By the way, I made a mistake in the loss function: it has to be differentiable. But that's another issue.) Anyway, thanks again.
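For reference, here is one way to do that batched lookup (my own sketch of the idea, not necessarily jdehesa's exact code): build the first-dimension indices with tf.range and pair them with y, so a single tf.gather_nd handles the unknown batch dimension.
import tensorflow as tf

x = tf.constant([[[ 0,  1], [ 2,  3], [ 4,  5]],
                 [[ 6,  7], [ 8,  9], [10, 11]],
                 [[12, 13], [14, 15], [16, 17]]])
y = tf.constant([[1], [0], [2]])

batch_size = tf.shape(x)[0]                        # dynamic, so an unknown first dimension is fine
row_idx = tf.expand_dims(tf.range(batch_size), 1)  # [[0], [1], [2]]
indices = tf.concat([row_idx, y], axis=1)          # [[0, 1], [1, 0], [2, 2]]
finalResult = tf.gather_nd(x, indices)             # picks x[i, y[i]] for every i

with tf.Session() as sess:
    print(sess.run(finalResult))                   # [[ 2  3] [ 6  7] [16 17]]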
import numpy as np

a = np.arange(12).reshape(2,3,2)
[[[ 0 1]
[ 2 3]
[ 4 5]]
[[ 6 7]
[ 8 9]
[10 11]]]
How can I exchange the positions of [4 5] and [10 11] using numpy? Thanks.
Those rows can be sliced with:
In [1418]: a[:,2,:]
Out[1418]:
array([[ 4, 5],
[10, 11]])
viewed in reverse order with:
In [1419]: a[::-1,2,:]
Out[1419]:
array([[10, 11],
[ 4, 5]])
and replaced with:
In [1420]: a[:,2,:] = a[::-1,2,:]
In [1421]: a
Out[1421]:
array([[[ 0, 1],
[ 2, 3],
[10, 11]],
[[ 6, 7],
[ 8, 9],
[ 4, 5]]])
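Putting those steps together as one self-contained snippet (the same operations as above, with the import added):
import numpy as np

a = np.arange(12).reshape(2, 3, 2)
a[:, 2, :] = a[::-1, 2, :]   # overwrite the selected rows with the reversed slice
print(a)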
I want to remap a numpy array according to a dictionary.
Let us assume I have a numpy array with N rows and 3 columns. Now I want to remap the values according to their indices, which are written as tuples in a dictionary.
This works fine:
import numpy as np

a = np.arange(6).reshape(2,3)
b = np.zeros(6).reshape(2,3)

print a

dictt = { (0,0):(0,2), (0,1):(0,1), (0,2):(0,0), (1,0):(1,2), (1,1):(1,1), (1,2):(1,0) }
for key in dictt:
    b[key] = a[dictt[key]]

print b
a = [[0 1 2]
[3 4 5]]
b = [[ 2. 1. 0.]
[ 5. 4. 3.]]
Let us assume I have N rows, where N is an even number. Now I want to apply the same mapping (which is valid for those 2 rows in the example above) to all the other rows.
Hence I want to go from this array:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
to:
b = [[ 2. 1. 0.]
[ 5. 4. 3.]
[ 8. 7. 6.]
[ 11. 10. 9.]]
Any ideas? I would like to do this fast, since there are 192000 entries in each array that should be remapped.
For simplicity I would just use [::-1].
a = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
b = [item[::-1] for item in a]
>>> b
[[2, 1, 0], [5, 4, 3], [8, 7, 6]]
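If the mapping is more complicated than a plain reversal, a vectorized sketch along these lines (my own illustration, not part of the answer above) turns the 2-row dictionary into index arrays once and then remaps every 2-row block with a single fancy-indexing call:
import numpy as np

a = np.arange(12).reshape(4, 3)
dictt = { (0,0):(0,2), (0,1):(0,1), (0,2):(0,0), (1,0):(1,2), (1,1):(1,1), (1,2):(1,0) }

# Turn the dictionary into source-index arrays for one 2x3 block.
src_rows = np.empty((2, 3), dtype=int)
src_cols = np.empty((2, 3), dtype=int)
for (r, c), (sr, sc) in dictt.items():
    src_rows[r, c] = sr
    src_cols[r, c] = sc

# View the array as blocks of 2 rows and remap all blocks at once.
blocks = a.reshape(-1, 2, 3)
b = blocks[:, src_rows, src_cols].reshape(a.shape)
print(b)
# [[ 2  1  0]
#  [ 5  4  3]
#  [ 8  7  6]
#  [11 10  9]]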