FedProx in TensorFlow Federated: TypeError: cannot unpack non-iterable LearningProcessOutput object

iterative_process = tff.learning.algorithms.build_unweighted_fed_prox(
    model_fn,
    proximal_strength=0.5,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.01),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))

state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics))
Executing round 1 throws TypeError: cannot unpack non-iterable LearningProcessOutput object.
The same code worked fine with FedAvg, but not with FedProx.

iterative_process.next returns a LearningProcessOutput, which is not iterable, as the error says.
You can replace the unpacking with
output = iterative_process.next(...)
state = output.state
metrics = output.metrics
or just use the output directly.
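Putting it together, a minimal sketch of the corrected round loop, assuming model_fn and federated_train_data are defined as in the question:

state = iterative_process.initialize()
for round_num in range(1, 11):
    output = iterative_process.next(state, federated_train_data)
    state = output.state  # LearningProcessOutput exposes .state and .metrics
    print('round {}, metrics={}'.format(round_num, output.metrics))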

Related

Initialise tensors with ones in tensorflow 1

I want to initialise a variable declared using the get_variable function with ones.
I tried the following methods:
1. tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32, initializer=tf.ones())
Error received -> TypeError: ones() takes at least 1 argument (0 given)
2. tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32, initializer=tf.ones(shape=shape_t))
Error received -> ValueError: If initializer is a constant, do not specify shape.
What is the best way to initialise a variable with ones?
tf.zeros_initializer can be used to initialise with zeros, but is there an equivalent for ones in TF 1?
You need to use tf.ones_initializer:
tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32,
                initializer=tf.ones_initializer())
Alternatively, as the second error message says, you can use a constant value, but then do not pass a shape:
tf.get_variable(name='yd1', initializer=tf.ones(shape=shape_t, dtype=tf.float32))
If tf.ones_initializer is not available in your TF 1.x version, tf.constant_initializer does the job:
tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32, initializer=tf.constant_initializer(1))
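A minimal runnable sketch in TF 1.x graph mode, using a hypothetical shape shape_t = [2, 3] for illustration:

import tensorflow as tf  # TF 1.x

shape_t = [2, 3]  # hypothetical shape, just for the demo
yd1 = tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32,
                      initializer=tf.ones_initializer())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(yd1))  # prints a 2x3 array of ones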

Why Keras layers initialization doesn't work

When I run my small Keras model I get this error:
FailedPreconditionError: Attempting to use uninitialized value bn6/beta
[[{{node bn6/beta/read}} = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
(full traceback omitted)
Code:
"input layer"
command_input = keras.layers.Input(shape=(1,1))
image_measurements_features = keras.layers.Input(shape=(1, 640))
"command module"
command_module_layer1=keras.layers.Dense(128,activation='relu')(command_input)
command_module_layer2=keras.layers.Dense(128,activation='relu')(command_module_layer1)
"concatenation layer"
j=keras.layers.concatenate([command_module_layer2,image_measurements_features])
"desicion module"
desicion_module_layer1=keras.layers.Dense(512,activation='relu')(j)
desicion_module_layer2=keras.layers.Dense(256,activation='relu')(desicion_module_layer1)
desicion_module_layer3=keras.layers.Dense(128,activation='relu')(desicion_module_layer2)
desicion_module_layer4=keras.layers.Dense(3,activation='relu')(desicion_module_layer3)
initt = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(initt)
big_hero_4=keras.models.Model(inputs=[command_input, image_measurements_features], outputs=desicion_module_layer4)
big_hero_4.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
"train the model"
historyy=big_hero_4.fit([x, y],z,batch_size=None, epochs=1,steps_per_epoch=1000)
Do you have any solutions for this error?
Why doesn't Keras initialize the layers automatically, without a global variables initializer? (The error exists both before and after adding the global initializer.)
You initialize your model and then build and compile it. That's the wrong order: first define your model, then compile it, and only then initialize. Same code, just a different order.
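For illustration, a minimal sketch of that order with an explicit session, in case you really want one (Keras normally handles initialization itself); the tiny model here is purely hypothetical:

import tensorflow as tf
import keras
from keras import backend as K

sess = tf.Session()
K.set_session(sess)  # make Keras use this session

# define the model first
inp = keras.layers.Input(shape=(1, 1))
out = keras.layers.Dense(3, activation='relu')(inp)
model = keras.models.Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='mean_squared_error')

# initialize only after the model (and its variables) exist
sess.run(tf.global_variables_initializer())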
I got this to work. Forget about the session when using Keras; it only complicates things.
import keras
import tensorflow as tf
import numpy as np

command_input = keras.layers.Input(shape=(1, 1))
image_measurements_features = keras.layers.Input(shape=(1, 640))
command_module_layer1 = keras.layers.Dense(128, activation='relu')(command_input)
command_module_layer2 = keras.layers.Dense(128, activation='relu')(command_module_layer1)
j = keras.layers.concatenate([command_module_layer2, image_measurements_features])
desicion_module_layer1 = keras.layers.Dense(512, activation='relu')(j)
desicion_module_layer2 = keras.layers.Dense(256, activation='relu')(desicion_module_layer1)
desicion_module_layer3 = keras.layers.Dense(128, activation='relu')(desicion_module_layer2)
desicion_module_layer4 = keras.layers.Dense(3, activation='relu')(desicion_module_layer3)
big_hero_4 = keras.models.Model(inputs=[command_input, image_measurements_features], outputs=desicion_module_layer4)
big_hero_4.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

# Mock data
x = np.zeros((1, 1, 1))
y = np.zeros((1, 1, 640))
z = np.zeros((1, 1, 3))

historyy = big_hero_4.fit([x, y], z, batch_size=None, epochs=1, steps_per_epoch=1000)
This code should start training with no issues. If you still get the same error, it is probably caused by some other part of your code, if there is more.

How to initialize tf.metrics members in TensorFlow?

Below is a part of my project code.
with tf.name_scope("test_accuracy"):
    test_mean_abs_err, test_mean_abs_err_op = tf.metrics.mean_absolute_error(labels=label_pl, predictions=test_eval_predict)
    test_accuracy, test_accuracy_op = tf.metrics.accuracy(labels=label_pl, predictions=test_eval_predict)
    test_precision, test_precision_op = tf.metrics.precision(labels=label_pl, predictions=test_eval_predict)
    test_recall, test_recall_op = tf.metrics.recall(labels=label_pl, predictions=test_eval_predict)
    test_f1_measure = 2 * test_precision * test_recall / (test_precision + test_recall)

    tf.summary.scalar('test_mean_abs_err', test_mean_abs_err)
    tf.summary.scalar('test_accuracy', test_accuracy)
    tf.summary.scalar('test_precision', test_precision)
    tf.summary.scalar('test_recall', test_recall)
    tf.summary.scalar('test_f1_measure', test_f1_measure)

    # validation metric init op
    validation_metrics_init_op = tf.variables_initializer(
        var_list=[test_mean_abs_err_op, test_accuracy_op, test_precision_op, test_recall_op],
        name='validation_metrics_init')
However, when I run it, errors occur like this:
Traceback (most recent call last):
  File "./run_dnn.py", line 285, in <module>
    train(wnd_conf)
  File "./run_dnn.py", line 89, in train
    name='validation_metrics_init')
  File "/export/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 1176, in variables_initializer
    return control_flow_ops.group(*[v.initializer for v in var_list], name=name)
AttributeError: 'Tensor' object has no attribute 'initializer'
I realize that I cannot create a validation initializer like that. I want to re-calculate the corresponding metrics each time I save a new checkpoint and start a new round of validation, so I have to re-initialize the metrics to zero.
But how do I reset all these metrics to zero? Many thanks for your help!
I solved the problem in the following way after referring to the blog post Avoiding headaches with tf.metrics.
# validation metrics
validation_metrics_var_scope = "validation_metrics"
test_mean_abs_err, test_mean_abs_err_op = tf.metrics.mean_absolute_error(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_accuracy, test_accuracy_op = tf.metrics.accuracy(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_precision, test_precision_op = tf.metrics.precision(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_recall, test_recall_op = tf.metrics.recall(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_f1_measure = 2 * test_precision * test_recall / (test_precision + test_recall)
tf.summary.scalar('test_mean_abs_err', test_mean_abs_err)
tf.summary.scalar('test_accuracy', test_accuracy)
tf.summary.scalar('test_precision', test_precision)
tf.summary.scalar('test_recall', test_recall)
tf.summary.scalar('test_f1_measure', test_f1_measure)
# validation metric init op
validation_metrics_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope=validation_metrics_var_scope)
validation_metrics_init_op = tf.variables_initializer(var_list=validation_metrics_vars, name='validation_metrics_init')
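With this in place, run the init op before each validation pass so the accumulators start from zero. A hypothetical usage sketch (validation_batches and make_feed are assumed helpers, not from the original code):

# reset metric accumulators so values from the previous checkpoint's
# validation don't leak into this round
sess.run(validation_metrics_init_op)
for batch in validation_batches:  # assumed iterable of validation batches
    sess.run([test_mean_abs_err_op, test_accuracy_op,
              test_precision_op, test_recall_op],
             feed_dict=make_feed(batch))  # assumed feed-dict builder
print(sess.run([test_accuracy, test_precision, test_recall]))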
A minimal working example that can be run line by line in a Python terminal:
import tensorflow as tf
s = tf.Session()
acc = tf.metrics.accuracy([0,1,0], [0.1, 0.9, 0.8])
ini = tf.variables_initializer(tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES))
s.run([ini])
s.run([acc])

Tensorflow 0.12.0rc tf.summary.scalar() error using placeholders

Prior to TF 0.12.0rc I used summary placeholders of the form:
tag_ph = tf.placeholder(tf.string)
val_ph = tf.placeholder(tf.float32)
sum_op = tf.scalar_summary(tag_ph, val_ph)
...
feed_dict = {tag_ph: [some string], val_ph: [some val]}
sum_str = sess.run(sum_op, feed_dict)
writer.add_summary(sum_str)
After upgrading to 0.12.0 and changing tf.scalar_summary() to tf.summary.scalar(), using a placeholder for the name parameter gives the following error:
TypeError: expected string or bytes-like object
There is no error if I use a static string for name, but I'd like to change the string as the evaluation progresses. How can I do that?
Minimal example:
tag = 'test'
val = 1.234
tag_ph = tf.placeholder(tf.string, [])
val_ph = tf.placeholder(tf.float32, [])
scalar_op = tf.summary.scalar(tag_ph, val_ph)
with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/summary_placeholders', sess.graph)
    feed_dict = {tag_ph: tag, val_ph: val}
    sum_str = sess.run(scalar_op, feed_dict)
    writer.add_summary(sum_str)
    writer.flush()
This same code (after reverting tf.summary names) works in TF 0.11.0
If the question is how to write non-Tensorflow data as summaries in version >=0.12, here's an example:
import tensorflow as tf

summary_writer = tf.summary.FileWriter('custom_summaries')
summary = tf.Summary()
mydata = {"a": 1, "b": 2}
for name, data in mydata.items():
    summary.value.add(tag=name, simple_value=data)
summary_writer.add_summary(summary, global_step=1)
summary_writer.flush()
TensorBoard merges summaries from all files in logdir and displays them, i.e., if you run tensorboard --logdir=. you will see the values a and b plotted under their tags.

Computing Edit Distance (feed_dict error)

I've written some code in TensorFlow to compute the edit distance between one string and a set of strings, but I can't figure out the error below.
import tensorflow as tf

sess = tf.Session()

# Create input data
test_string = ['foo']
ref_strings = ['food', 'bar']

def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensor(indices, chars, [num_words, 1, 1])

test_string_sparse = create_sparse_vec(test_string * len(ref_strings))
ref_string_sparse = create_sparse_vec(ref_strings)

sess.run(tf.edit_distance(test_string_sparse, ref_string_sparse, normalize=True))
This code works and, when run, produces the output:
array([[ 0.25],
       [ 1.  ]], dtype=float32)
But when I attempt to do this by feeding the sparse tensors in through sparse placeholders, I get an error.
test_input = tf.sparse_placeholder(dtype=tf.string)
ref_input = tf.sparse_placeholder(dtype=tf.string)
edit_distances = tf.edit_distance(test_input, ref_input, normalize=True)
feed_dict = {test_input: test_string_sparse,
             ref_input: ref_string_sparse}
sess.run(edit_distances, feed_dict=feed_dict)
Here is the error traceback:
Traceback (most recent call last):
  File "<ipython-input-29-4e06de0b7af3>", line 1, in <module>
    sess.run(edit_distances, feed_dict=feed_dict)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 597, in _run
    for subfeed, subfeed_val in _feed_fn(feed, feed_val):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 558, in _feed_fn
    return feed_fn(feed, feed_val)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 268, in <lambda>
    [feed.indices, feed.values, feed.shape], feed_val)),
TypeError: zip argument #2 must support iteration
Any idea what is going on here?
TL;DR: For the return type of create_sparse_vec(), use tf.SparseTensorValue instead of tf.SparseTensor.
The problem here comes from the return type of create_sparse_vec(), which is tf.SparseTensor and is not understood as a feed value in the call to sess.run().
When you feed a (dense) tf.Tensor, the expected value type is a NumPy array (or certain objects that can be converted to an array). When you feed a tf.SparseTensor, the expected value type is a tf.SparseTensorValue, which is similar to a tf.SparseTensor but whose indices, values, and shape properties are NumPy arrays (or certain objects that can be converted to arrays, like the lists in your example).
The following code should work:
def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensorValue(indices, chars, [num_words, 1, 1])
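For completeness, a quick sketch checking that the placeholder version from the question now runs, assuming sess, test_string, and ref_strings are defined as above:

# with create_sparse_vec() returning tf.SparseTensorValue, feeding works
test_input = tf.sparse_placeholder(dtype=tf.string)
ref_input = tf.sparse_placeholder(dtype=tf.string)
edit_distances = tf.edit_distance(test_input, ref_input, normalize=True)

feed_dict = {test_input: create_sparse_vec(test_string * len(ref_strings)),
             ref_input: create_sparse_vec(ref_strings)}
print(sess.run(edit_distances, feed_dict=feed_dict))  # expected: [[0.25], [1.]]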