In the original code the flags were set like tf.app.flags.DEFINE_string('master', '', 'The address of the TensorFlow master to use.'). I then changed tf.app.flags to tf.flags, and similarly changed the original FLAGS = tf.app.flags.FLAGS to FLAGS = tf.flags.FLAGS.
But the error in the tf.constant call is there in both cases. How do I fix it?
I feel like this error has something to do with the Python version, but I can't figure it out. The failing line is:
replica_id=tf.constant(FLAGS.task, dtype=tf.int32, shape=()),
Try this; for me it works just fine:
import tensorflow as tf
FLAGS = tf.flags.FLAGS
tf.flags.DEFINE_integer('task', 10, "my value for the constant")
# now define your constant
replica_id = tf.constant(value=FLAGS.task, dtype=tf.float32)
# see if it works:
with tf.Session() as sess:
    print(sess.run(replica_id))
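If you want to keep the integer dtype and scalar shape from the question, the same pattern should work once the flag is defined as an integer, as above (a minimal sketch, not tested against the asker's full script):

# Assuming the integer 'task' flag defined above, the asker's original
# form with dtype=tf.int32 and shape=() should also evaluate fine.
replica_id_int = tf.constant(FLAGS.task, dtype=tf.int32, shape=())
with tf.Session() as sess:
    print(sess.run(replica_id_int))  # -> 10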
Here's my beginning code:
import numpy as np
# Create predict() function accepting a prediction probabilities list and threshold value
def predict(predict_prob, thresh):
    pred = []
    for i in range(len(predict_prob)):
        if predict_prob[i] >= thresh:
            pred.append(1)
        else:
            pred.append(0)
    return pred
Here's what I'm trying to do:
invoke the predict() function to calculate the model predictions using those variables. Save this output as preds and print it out.
# prediction probabilities
probs = [0.886,0.375,0.174,0.817,0.574,0.319,0.812,0.314,0.098,0.741,
         0.847,0.202,0.31,0.073,0.179,0.917,0.64,0.388,0.116,0.72]
# threshold value
thresh = 0.5
# prediction values
preds = predict(predict_prob, thresh)  # calling predict ERROR
Here's the error:
NameError: name 'predict_prob' is not defined
But it is defined earlier, when I created the function, so I'm not sure why it's giving an error now when the earlier code ran without any problem.
You are calling the function the wrong way. In this case, it should be:
preds = predict(probs, thresh)
You can't use predict_prob outside of the function: it is a parameter, so it only exists as a local variable inside predict(). You already have probs and thresh perfectly prepared, so pass those instead.
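For completeness, here is the corrected call run against the data above, with the resulting predictions:

# Corrected call: pass the prepared list `probs`, not the parameter name.
preds = predict(probs, thresh)
print(preds)
# [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1]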
I hope that helps.
First some of my code:
...
fc_1 = layers.Dense(256, activation='relu')(drop_reshape)
bi_LSTM_2 = layers.Lambda(buildGruLayer)(fc_1)
...
def buildGruLayer(inputs):
    gru_cells = []
    gru_cells.append(tf.contrib.rnn.GRUCell(256))
    gru_cells.append(tf.contrib.rnn.GRUCell(128))
    gru_layers = tf.keras.layers.StackedRNNCells(gru_cells)
    inputs = tf.unstack(inputs, axis=1)
    outputs, _ = tf.contrib.rnn.static_rnn(
        gru_layers,
        inputs,
        dtype='float32')
    return outputs
The error I am getting when running static_rnn is:
raise TypeError("Cannot convert value %r to a TensorFlow DType." % type_value)
TypeError: Cannot convert value None to a TensorFlow DType.
The tensor that comes into the layer has shape (64, 238, 256).
Does anyone have a clue what the problem could be? I already googled the error but couldn't find anything. Any help is much appreciated.
If anyone still needs a solution to this: it's because you need to specify the dtype for the GRUCell, e.g. tf.float32.
Its default is None, which in the documentation defaults to the first dimension of your input data (i.e. the batch dimension, which in TensorFlow is ? or None).
Check the dtype argument here:
https://www.tensorflow.org/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell
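A minimal sketch of that fix, keeping the original buildGruLayer structure (assuming a TF 1.x environment where tf.contrib is still available):

def buildGruLayer(inputs):
    # Pass dtype explicitly so the cell variables are created as float32
    # instead of being left as the default None.
    gru_cells = [
        tf.contrib.rnn.GRUCell(256, dtype=tf.float32),
        tf.contrib.rnn.GRUCell(128, dtype=tf.float32),
    ]
    gru_layers = tf.keras.layers.StackedRNNCells(gru_cells)
    inputs = tf.unstack(inputs, axis=1)
    outputs, _ = tf.contrib.rnn.static_rnn(gru_layers, inputs, dtype='float32')
    return outputs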
When I run my small Keras model I get this error:
FailedPreconditionError: Attempting to use uninitialized value bn6/beta
[[{{node bn6/beta/read}} = IdentityT=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
(full traceback omitted)
Code:
"input layer"
command_input = keras.layers.Input(shape=(1,1))
image_measurements_features = keras.layers.Input(shape=(1, 640))
"command module"
command_module_layer1=keras.layers.Dense(128,activation='relu')(command_input)
command_module_layer2=keras.layers.Dense(128,activation='relu')(command_module_layer1)
"concatenation layer"
j=keras.layers.concatenate([command_module_layer2,image_measurements_features])
"desicion module"
desicion_module_layer1=keras.layers.Dense(512,activation='relu')(j)
desicion_module_layer2=keras.layers.Dense(256,activation='relu')(desicion_module_layer1)
desicion_module_layer3=keras.layers.Dense(128,activation='relu')(desicion_module_layer2)
desicion_module_layer4=keras.layers.Dense(3,activation='relu')(desicion_module_layer3)
initt = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(initt)
big_hero_4=keras.models.Model(inputs=[command_input, image_measurements_features], outputs=desicion_module_layer4)
big_hero_4.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
"train the model"
historyy=big_hero_4.fit([x, y],z,batch_size=None, epochs=1,steps_per_epoch=1000)
Do you have any solutions for this error?
Why doesn't Keras initialize the layers automatically, without the global variables initializer? (The error exists both before and after adding the global initializer.)
You initialize your model and then build and compile it. That's the wrong order: first define your model, then compile it, and only then initialize. Same code, just a different order.
I got this to work. Forget about the session when using Keras; it only complicates things.
import keras
import tensorflow as tf
import numpy as np
command_input = keras.layers.Input(shape=(1,1))
image_measurements_features = keras.layers.Input(shape=(1, 640))
command_module_layer1 = keras.layers.Dense(128 ,activation='relu')(command_input)
command_module_layer2 = keras.layers.Dense(128 ,activation='relu')(command_module_layer1)
j = keras.layers.concatenate([command_module_layer2, image_measurements_features])
desicion_module_layer1 = keras.layers.Dense(512,activation='relu')(j)
desicion_module_layer2 = keras.layers.Dense(256,activation='relu')(desicion_module_layer1)
desicion_module_layer3 = keras.layers.Dense(128,activation='relu')(desicion_module_layer2)
desicion_module_layer4 = keras.layers.Dense(3,activation='relu')(desicion_module_layer3)
big_hero_4 = keras.models.Model(inputs=[command_input, image_measurements_features], outputs=desicion_module_layer4)
big_hero_4.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
# Mock data
x = np.zeros((1, 1, 1))
y = np.zeros((1, 1, 640))
z = np.zeros((1, 1, 3))
historyy = big_hero_4.fit([x, y], z, batch_size=None, epochs=1, steps_per_epoch=1000)
This code should start training with no issues. If you still get the same error, it might be due to some other part of your code, if there is more.
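Once it trains, you can also get predictions without any explicit session (a small usage sketch based on the mock data above):

# Keras manages the session internally, so inference needs no tf.Session either.
preds = big_hero_4.predict([x, y])
print(preds.shape)  # expected (1, 1, 3) given the mock input shapes above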
I want to project my network's updated weights (after the optimization step) onto a special space, which means I need the value of that tensor to be passed to the projection. The function that applies the projection takes a NumPy array as input. Is there a way I can do this?
I tried tf.assign() as a solution, but since my function accepts arrays and not tensors, it failed.
Here is a sketch of what I want to do:
W = tf.Variable(...)
...
opt = tf.train.AdamOptimizer(learning_rate).minimize(loss, var_list=['W'])
W = my_function(W)
It seems that tf.control_dependencies is what you need.
One simple example:
import tensorflow as tf

var = tf.get_variable('var', initializer=0.0)

# replace `tf.add` with your custom function
addop = tf.add(var, 1)
with tf.control_dependencies([addop]):
    updateop = tf.assign(var, addop)

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # pylint: disable=no-member
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    updateop.eval()
    print(var.eval())
    updateop.eval()
    print(var.eval())
    updateop.eval()
    print(var.eval())
output:
1.0
2.0
3.0
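If the projection itself has to run as NumPy code, as in the question, one option worth noting is to wrap it with tf.py_func and assign the result back. A minimal sketch, where project() is a hypothetical stand-in for the real projection function:

import numpy as np
import tensorflow as tf

def project(w):
    # hypothetical projection: clip the weights into [0, 1]
    return np.clip(w, 0.0, 1.0).astype(np.float32)

W = tf.get_variable('W', initializer=tf.constant([2.0, -1.0, 0.5]))
# tf.py_func feeds W's current value to the NumPy function and returns a tensor
projected = tf.py_func(project, [W], tf.float32)
projected.set_shape(W.get_shape())
project_op = tf.assign(W, projected)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(project_op)
    print(sess.run(W))  # -> values clipped into [0, 1]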
I'm getting a strange error when trying to compute the intersection over union using TensorFlow's tf.contrib.metrics.streaming_mean_iou.
This is the code I was using before, which works perfectly fine:
import tensorflow as tf
label = tf.image.decode_png(tf.read_file('/path/to/label.png'),channels=1)
label_lin = tf.reshape(label, [-1,])
weights = tf.cast(tf.less_equal(label_lin, 10), tf.int32)
mIoU, update_op = tf.contrib.metrics.streaming_mean_iou(label_lin, label_lin,num_classes = 11,weights = weights)
init = tf.local_variables_initializer()
sess.run(init)
sess.run([update_op])
However, when I use a mask like this:
mask = tf.image.decode_png(tf.read_file('/path/to/mask_file.png'),channels=1)
mask_lin = tf.reshape(mask, [-1,])
mask_lin = tf.cast(mask_lin,tf.int32)
mIoU, update_op = tf.contrib.metrics.streaming_mean_iou(label_lin, label_lin,num_classes = 11,weights = mask_lin)
init = tf.local_variables_initializer()
sess.run(init)
sess.run([update_op])
It keeps failing after an irregular number of iterations, showing this error:
*** Error in `/usr/bin/python': corrupted double-linked list: 0x00007f29d0022fd0 ***
I checked the shape and data type of both mask_lin and weights. They are the same, so I cannot really see what is going wrong here.
Also, the fact that the error comes after calling update_op an irregular number of times is strange. Maybe TF empties the mask_lin object after several sess.run() calls?
Or is this some TF bug? But then again, why would it work with weights...