there is no graph with tensorboard - tensorflow

I am reading a book on Tensorflow and I find this code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
const1 = tf.constant(2)
const2 = tf.constant(3)
add_opp = tf.add(const1, const2)
mul_opp = tf.mul(add_opp, const2)  # tf.mul was renamed tf.multiply in TF 1.0
with tf.Session() as sess:
    result, result2 = sess.run([mul_opp, add_opp])
    print(result)
    print(result2)
    tf.train.SummaryWriter('./', sess.graph)
So it is very simple, nothing fancy, and it is supposed to produce some output that can be visualized with TensorBoard.
I run the script; it prints the results, but apparently SummaryWriter produces nothing.
I run tensorboard --logdir='./' and of course there is no graph.
What could I be doing wrong?
And also, how do you terminate TensorBoard? I tried Ctrl-C and Ctrl-Z and neither works. (Also, I am on a Japanese keyboard, so there is no backslash, just in case.)

The tf.train.SummaryWriter must be closed (or flushed) in order to ensure that data, including the graph, have been written to it. The following modification to your program should work:
writer = tf.train.SummaryWriter('./', sess.graph)
writer.close()
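For reference, the writer API changed in TF 2.x; a rough equivalent that exports the graph of a tf.function looks like this (a minimal sketch; the ./logs directory is a placeholder):
import tensorflow as tf

logdir = './logs'  # placeholder log directory
writer = tf.summary.create_file_writer(logdir)

@tf.function
def compute():
    const1 = tf.constant(2)
    const2 = tf.constant(3)
    return tf.multiply(tf.add(const1, const2), const2)

tf.summary.trace_on(graph=True)   # start recording the graph
result = compute()                # run once so the function gets traced
with writer.as_default():
    tf.summary.trace_export(name='graph', step=0)
writer.close()                    # closing flushes everything to disk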

A very weird thing happened to me.
I am learning to work with TensorFlow:
import tensorflow as tf
a = tf.constant(3)
b = tf.constant(4)
c = a+b
with tf.Session() as sess:
    File_Writer = tf.summary.FileWriter('/home/theredcap/Documents/CS/Personal/Projects/Test/tensorflow/tensorboard/', sess.graph)
    print(sess.run(c))
In order to see the graph on TensorBoard, I typed:
tensorboard --logdir = "the above mentioned path"
But nothing was displayed in TensorBoard.
Then I went to the GitHub README page:
https://github.com/tensorflow/tensorboard/blob/master/README.md
And it said to run the command in this manner:
tensorboard --logdir path/to/logs
I did the same, and finally I was able to view my graph. (Apparently the spaces around = in my original command kept the flag from being parsed.)


Convert Tensor to numpy array in TF 2.x

I am trying to load Universal Sentence Encoder and this is my code snippet:
import tensorflow as tf
import tensorflow_hub as hub
import os, requests, tarfile
def extractUSEEmbeddings(words):
    # Extracts USE embeddings
    # Replace `USE_folder` with any directory in your machine, where you want USE to be downloaded
    try:
        embed = hub.KerasLayer(USE_folder)
    except Exception as e:
        print("Downloading USE embeddings...")
        r = requests.get("https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed")
        open("USE.tar.gz", "wb").write(r.content)
        tar = tarfile.open("USE.tar.gz", "r:gz")
        tar.extractall(path=USE_folder)
        tar.close()
        os.remove("USE.tar.gz")
        embed = hub.KerasLayer(USE_folder)
    word_embeddings = embed(words)
    return word_embeddings.numpy()
I get the error 'Tensor' object has no attribute 'numpy'. When I run the same code on Jupyter notebook, with the same versions of tensorflow (2.2.0) and tensorflow-hub (0.9.0), I do not get any error and it works perfectly fine.
I printed the type of Tensor in both cases, and realized that this is because I get an Eager Tensor (tensorflow.python.framework.ops.EagerTensor) in Jupyter, which has a numpy method whereas in my script, the Tensor is of type tensorflow.python.framework.ops.Tensor. However, I am now unable to figure out how to switch on Eager Execution in my script, since in TF 2.x it is supposed to be enabled by default.
I have tried all the solutions given in this thread, but none of them work for me.
Why am I not getting an Eager Tensor when run through the terminal, but get it through Jupyter? Does my problem have anything to do with the fact that I am using tensorflow-hub here, and is that why none of the solutions are working for me? Most importantly, how do I convert Tensor in tf 2.x to a numpy array?
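One quick check that may narrow this down (a minimal diagnostic sketch, not a full answer): confirm whether eager execution is actually active in the script, since .numpy() only exists on eager tensors:
import tensorflow as tf

# Eager execution is on by default in TF 2.x, but a library in the import
# chain can disable it; .numpy() only exists on EagerTensor, so check first.
print(tf.executing_eagerly())

# If this prints False, eager mode was disabled somewhere; it can be
# re-enabled (at program start, before any graph work) via the compat API:
# tf.compat.v1.enable_eager_execution()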

How to properly save a loaded h5 model to pb with TF2

I load a saved h5 model and want to save the model as pb.
The model is saved during training with the tf.keras.callbacks.ModelCheckpoint callback function.
TF version: 2.0.0a
edit: same issue also with 2.0.0-beta1
My steps to save a pb:
I first set K.set_learning_phase(0)
then I load the model with tf.keras.models.load_model
Then I define the freeze_session() function.
(Optionally, I compile the model.)
Then I call freeze_session() with tf.keras.backend.get_session.
The error I get, with and without compiling:
AttributeError: module 'tensorflow.python.keras.api._v2.keras.backend'
has no attribute 'get_session'
My Question:
Does TF2 not have the get_session anymore?
(I know that tf.contrib.saved_model.save_keras_model does not exist anymore, and I also tried tf.saved_model.save, which did not really work.)
Or does get_session only work when I actually train the model, and just loading the h5 does not work?
Edit: Also with a freshly trained session, no get_session is available.
If so, how would I go about converting the h5 to pb without training? Is there a good tutorial?
Thank you for your help
update:
Since the official release of TF 2.x, the graph/session concept has changed, and the SavedModel API should be used.
You can use tf.compat.v1.disable_eager_execution() with TF 2.x and it will result in a pb file. However, I am not sure what kind of pb file it is, as the SavedModel composition changed from TF1 to TF2. I will keep digging.
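For reference, a minimal sketch of the TF 2.x-native SavedModel route (paths here are placeholders):
import tensorflow as tf
from tensorflow import keras

model = keras.models.load_model('/path/to/model.h5')
# Writes saved_model.pb plus a variables/ directory under the target path
tf.saved_model.save(model, '/path/to/saved_model')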
I do save the model to pb from the h5 model:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
# necessary !!! (freezing requires graph mode, so eager execution must be disabled)
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # tf.io.write_graph takes the directory and file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
    logging.info("save pb successfully!")
I use TF2 to convert a model like this:
pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit and save a checkpoint while training;
after training, load the checkpoint with self.model.load_weights(self.checkpoint_path);
save it as h5 with self.model.save(h5_path, overwrite=True, include_optimizer=False);
convert the h5 to pb just as above (see the runnable sketch below).
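A rough runnable sketch of those steps, with a tiny stand-in model and placeholder paths (all names here are illustrative, not from the original):
import numpy as np
from tensorflow import keras

# Tiny stand-in model and data, just to make the workflow concrete
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(32, 4), np.random.rand(32, 1)

# 1. Save weights-only checkpoints while training
ckpt_cb = keras.callbacks.ModelCheckpoint('ckpt/weights', save_weights_only=True)
model.fit(x, y, epochs=1, callbacks=[ckpt_cb], verbose=0)

# 2. Load the checkpoint after training
model.load_weights('ckpt/weights')

# 3. Save as h5 without the optimizer state
model.save('model.h5', overwrite=True, include_optimizer=False)
# 4. Convert model.h5 to pb as in the snippet above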
I'm wondering the same thing, as I'm trying to use get_session() and set_session() to free up GPU memory. These functions seem to be missing and aren't in the TF2.0 Keras documentation. I imagine it has something to do with Tensorflow's switch to eager execution, as direct session access is no longer required.
Use
from tensorflow.compat.v1.keras.backend import get_session
in Keras 2 and TensorFlow 2.2, then call:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session
# necessary !!! (freezing requires graph mode, so eager execution must be disabled)
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # tf.io.write_graph takes the directory and file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
    logging.info("save pb successfully!")

How to print full (not truncated) tensor in tensorflow?

Whenever I try printing I always get truncated results
import tensorflow as tf
import numpy as np
np.set_printoptions(threshold=np.nan)
tensor = tf.constant(np.ones(999))
tensor = tf.Print(tensor, [tensor])
sess = tf.Session()
sess.run(tensor)
As you can see, I've followed a guide I found: Print full value of tensor into console or write to file in tensorflow.
But the output is simply
...\core\kernels\logging_ops.cc:79] [1 1 1...]
I want to see the full tensor, thanks.
This is solved easily by checking the TensorFlow API docs for tf.Print: pass summarize=n, where n is the number of elements you want displayed.
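For example (a TF 1.x sketch; summarize=999 matches the tensor size above):
import numpy as np
import tensorflow as tf

tensor = tf.constant(np.ones(999))
# summarize controls how many elements per dimension tf.Print emits
tensor = tf.Print(tensor, [tensor], summarize=999)
with tf.Session() as sess:
    sess.run(tensor)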
You can do it as follows in TensorFlow 2.x:
import numpy as np
import tensorflow as tf

tensor = tf.constant(np.ones(999))
tf.print(tensor, summarize=-1)
From TensorFlow docs -> summarize: The first and last summarize elements within each dimension are recursively printed per Tensor. If set to -1, it will print all elements of every tensor.
https://www.tensorflow.org/api_docs/python/tf/print
To print all tensors without truncation in TensorFlow 2.x:
import numpy as np
import sys
np.set_printoptions(threshold=sys.maxsize)

Unable to plot tensorflow graph

Here's the code I've written in Python 3.6. When I try to plot using TensorBoard, I see two graphs, namely main and auxiliary, but they do not seem to correspond to my code:
import tensorflow as tf
a = tf.constant(1.0)
b = tf.constant(2.0)
c = a * b
sess = tf.Session()
writer = tf.summary.FileWriter("E:/python_prog", sess.graph)
print(sess.run(c))
writer.close()
sess.close()
I run the code in the Anaconda (Windows) prompt:
(tfp3.6) E:\python_prog>tensorboard --logdir="E:\python_prog"
Starting TensorBoard b'54' at http://DESKTOP-31KSN08:6006
(Press CTRL+C to quit)

Reproducible results with keras

How can I get reproducible results with keras? I followed these steps but I am still getting different results every time I run the Jupyter notebook. I also tried setting shuffle=False when calling model.fit().
My configuration:
conda 4.3.25
keras 2.0.6 with tensorflow backend
tensorflow-gpu 1.2.1
python 3.5
windows 10
See the answer I posted on another question. The main idea is: first, disable the GPU; then seed the libraries (numpy, random, etc.). To sum up, including the code below at the beginning of your code may help solve your problem.
import os

# Disable the GPU so CUDA nondeterminism cannot affect the results
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import numpy as np
import tensorflow as tf
import random as rn
from keras import backend as K

# Seed every source of randomness: NumPy, Python's random module, and hashing
# (note that PYTHONHASHSEED normally has to be set before the interpreter starts)
sd = 1
np.random.seed(sd)
rn.seed(sd)
os.environ['PYTHONHASHSEED'] = str(sd)

# Run single-threaded: thread scheduling is another source of nondeterminism
config = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
tf.set_random_seed(sd)
sess = tf.Session(graph=tf.get_default_graph(), config=config)
K.set_session(sess)
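On newer TF 2.x releases there are one-call helpers for most of this (a sketch, assuming TF 2.9 or later; these utilities did not exist in the versions listed above):
import tensorflow as tf

# Seeds Python's random, NumPy, and TensorFlow in one call (TF >= 2.7)
tf.keras.utils.set_random_seed(1)
# Makes TF ops run deterministically, usually at some speed cost (TF >= 2.9)
tf.config.experimental.enable_op_determinism()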