I'm a TensorFlow beginner. I've tried to print "Hello World" using the TensorFlow 1.15.0 code below.
import tensorflow as tf
h = tf.constant("Hello")
w = tf.constant(" World!")
hw = h + w
with tf.Session() as sess:
    ans = sess.run(hw)
    print(ans)
When I run the code in a Jupyter notebook, b'Hello World!' comes out. What I expected is just 'Hello World!'. Why does the b appear in front of my output? Many thanks.
The b prefix indicates that it is a byte string, not a Unicode string. You can use tf.print() to print it properly.
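For instance, decoding the bytes result on the Python side also gives plain text; a minimal sketch against the TF 1.15 code above (decode is an alternative to tf.print):
import tensorflow as tf

h = tf.constant("Hello")
w = tf.constant(" World!")
hw = h + w
with tf.Session() as sess:
    ans = sess.run(hw)          # ans is a Python bytes object: b'Hello World!'
    print(ans.decode('utf-8'))  # decoded to str: Hello World!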
This question has already been answered here: The print of string constant is always attached with 'b' in TensorFlow
This should be a simple task: download a model saved in tensorflow_hub format, load it using tensorflow_hub, and use it.
This is the model I am trying to use (simCLR stored in Google Cloud): https://console.cloud.google.com/storage/browser/simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0;tab=objects?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false
I downloaded the /hub folder as they say, using
gsutil -m cp -r \
"gs://simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0/hub" \
.
The /hub folder contains the files:
/saved_model.pb
/tfhub_module.pb
/variables/variables.index
/variables/variables.data-00000-of-00001
So far so good.
Now, in Python 3 with TensorFlow 2 and tensorflow_hub 0.12, I run the following code:
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
path_to_hub = '/home/my_name/my_path/simclr/hub'
# Attempt 1
m = tf.keras.models.Sequential([hub.KerasLayer(path_to_hub, input_shape=(224,224,3))])
# Attempt 2
m = tf.keras.models.Sequential(hub.KerasLayer(path_to_hub))
m.build(input_shape=[None,224,224,3])
# Attempt 3
m = hub.KerasLayer(hub.load(path_to_hub))
# Toy Data Test
X = np.random.random((1,244,244,3)).astype(np.float32)
y = m.predict(X)
None of these 3 options for loading the hub model works; they fail with the following errors:
Attempt 1 :
ValueError: Error when checking input: expected keras_layer_2_input to have shape (224, 224, 3) but got array with shape (244, 244, 3)
Attempt 2:
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node sequential_3/keras_layer_3/StatefulPartitionedCall/base_model/conv2d/Conv2D}}]] [Op:__inference_keras_scratch_graph_46402]
Function call stack:
keras_scratch_graph
Attempt 3:
ValueError: Expected a string, got <tensorflow.python.training.tracking.tracking.AutoTrackable object at 0x7fa71c7a2dd0>
These 3 attempts are all code taken from tensorflow_hub tutorials and repeated in other Stack Overflow answers, but none works, and I don't know how to continue from these error messages.
Appreciate any help, thanks.
Update 1:
The same issues happen if I try with this ResNet50 /hub folder:
https://storage.cloud.google.com/simclr-gcs/checkpoints/ResNet50_1x.zip
As @Frightera pointed out, there was an error with the input shapes. The error in "Attempt 2" was solved by allowing memory growth on the selected GPU; set_memory_growth makes TensorFlow allocate GPU memory on demand rather than reserving it all upfront, which is a common workaround for the "Failed to get convolution algorithm" cuDNN error. "Attempt 3" still does not work, but at least there are two working methods for loading and using a model saved in /hub format:
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
hubmod = 'https://tfhub.dev/google/imagenet/mobilenet_v2_035_96/feature_vector/5'
# Alternative 1 - Works!
m = tf.keras.models.Sequential([hub.KerasLayer(hubmod, input_shape=(96,96,3))])
print(m.summary())
# Alternative 2 - Works!
m = tf.keras.models.Sequential(hub.KerasLayer(hubmod))
m.build(input_shape=[None, 96,96,3])
print(m.summary())
# Alternative 3 - Doesn't work
#m = hub.KerasLayer(hub.load(hubmod))
#m.build(input_shape=[None, 96,96,3])
#print(m.summary())
# Test
X = np.random.random((1,96,96,3)).astype(np.float32)
y = m.predict(X)
print(y.shape)
When I try to roll two concatenated tensors, it crashes my notebook.
Does anyone know what is incorrect here? Thank you very much.
x = tf.convert_to_tensor([1,2],dtype="int32")
y = tf.zeros(shape=(2),dtype="int32")
z = tf.concat([x,y],axis=0)
tf.roll(z,1,axis=0)
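For what it's worth, the ops themselves look valid; a minimal eager-mode check (assuming TF 2.x) on a working install gives:
import tensorflow as tf

x = tf.convert_to_tensor([1, 2], dtype="int32")
y = tf.zeros(shape=(2,), dtype="int32")
z = tf.concat([x, y], axis=0)        # [1, 2, 0, 0]
print(tf.roll(z, shift=1, axis=0))   # tf.Tensor([0 1 2 0], shape=(4,), dtype=int32)
so the crash is more likely an environment issue than a problem with this snippet.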
I am trying to read text from a file Shakespear.txt line by line using TensorFlow's TextLineDataset, split the words in each line, and write the words to another file txt.txt, one word per line. Here is my code:
import tensorflow as tf
tf.enable_eager_execution()
BATCH_SIZE=2
#from tensorflow.keras.model import Sequential
dataset_in_lines=tf.data.TextLineDataset("Shakespear.txt")
dataset=dataset_in_lines.map(lambda string: tf.string_split([string]).values)
with open("txt.txt","w") as f:
for k in dataset.take(2):
for x in k:
f.write("\n".join(x))
When I run it, it gives the error Cannot iterate over a scalar tensor at the f.write line. Please help me figure out the issue.
It would be helpful if you could share the Shakespear.txt file, but based on your error, it seems like it is receiving the tensor, not the actual value.
So, you first need to get the value from tensor k, you can use k.numpy().
Replace for x in k: with for x in k.numpy():
Let us know if it works.
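Putting that together, a minimal sketch of the fixed loop (note that k.numpy() yields an array of bytes objects, so each word also needs decoding before joining):
import tensorflow as tf
tf.enable_eager_execution()

dataset_in_lines = tf.data.TextLineDataset("Shakespear.txt")
dataset = dataset_in_lines.map(lambda string: tf.string_split([string]).values)

with open("txt.txt", "w") as f:
    for k in dataset.take(2):
        # k.numpy() is an array of bytes; decode each word to str
        words = [w.decode("utf-8") for w in k.numpy()]
        f.write("\n".join(words) + "\n")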
I found a better way: replace dataset=dataset_in_lines.map(lambda string: tf.string_split([string]).values) with tokenizer.tokenize. The following code achieves the objective (see https://www.tensorflow.org/tutorials/load_data/text for more details):
import tensorflow as tf
tf.enable_eager_execution()
import tensorflow_datasets as tfds
tokenizer = tfds.features.text.Tokenizer()
dataset_in_lines=tf.data.TextLineDataset("Shakespear.txt")
vocabulary_set = set()
for x in dataset_in_lines:
    k = tokenizer.tokenize(x.numpy())
    vocabulary_set.update(k)
with open("txt.txt","w") as f:
    for x in vocabulary_set:
        f.write(x + "\n")
Referring to this link, I am trying to practice using tf.contrib.factorization.KMeansClustering for clustering. The simple code below works okay:
import numpy as np
import tensorflow as tf
# ---- Create Data Sample -----
k = 5
n = 100
variables = 5
points = np.random.uniform(0, 1000, [n, variables])
# ---- Clustering -----
input_fn=lambda: tf.train.limit_epochs(tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1)
kmeans=tf.contrib.factorization.KMeansClustering(num_clusters=6)
kmeans.train(input_fn=input_fn)
centers = kmeans.cluster_centers()
# ---- Print out -----
cluster_indices = list(kmeans.predict_cluster_index(input_fn))
for i, point in enumerate(points):
    cluster_index = cluster_indices[i]
    print('point:', point, 'is in cluster', cluster_index, 'centered at', centers[cluster_index])
My question is: why does this "input_fn" code do the trick? If I change the code to the following, it runs into an infinite loop. Why?
input_fn=lambda:tf.convert_to_tensor(points, dtype=tf.float32)
From the documentation (here), it seems that train() expects an input_fn argument, which is simply a 'tf.data.Dataset' object, like Tensor(X). So why do I have to do all these tricky things with lambda: tf.train.limit_epochs()?
Can anyone who is familiar with the fundamentals of TensorFlow Estimators help explain? Many thanks!
My question is: why does this "input_fn" code do the trick? If I change the code to the following, it runs into an infinite loop. Why?
The documentation states that training runs until the input pipeline raises a tf.errors.OutOfRangeError. Wrapping your tensor in tf.train.limit_epochs ensures that the error is eventually raised, which signals to KMeans that it should stop training. A bare tf.convert_to_tensor never runs out of data, so training loops forever.
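For example, an equivalent way to bound the input is an input_fn that returns a tf.data.Dataset ending after one epoch; a sketch under the same TF 1.x setup, reusing points, n, and kmeans from the question:
def input_fn():
    # One batch of all points, for exactly one epoch; exhausting the dataset
    # raises OutOfRangeError internally, which ends kmeans.train().
    dataset = tf.data.Dataset.from_tensor_slices(points.astype(np.float32))
    return dataset.batch(n).repeat(1)

kmeans.train(input_fn=input_fn)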
I'm trying debug statements in Python/TensorFlow 1.0 using Jupyter, but I do not get any output printed from tf.Print.
I thought sess.run (during training, in the code below) should have evaluated the db1 tensor and printed the output, which did not happen.
However, db1.eval in the evaluate phase prints the entire tensor X, without the 'X:' message.
def combine_inputs(X):
    db1 = tf.Print(X, [X], message='X:')
    return (tf.matmul(X, W) + b, db1)
<<training code>>
_,summary=sess.run([train_op,merged_summaries])
## merged_summaries tensor triggers combine_inputs function. There are
## other tensor functions/coding in between , not giving entire code to keep
## it simple; code works as expected except tf.Print
<<evaluate code>>
print(db1.eval())
I'm confused about the following:
a) Why is tf.Print not printing during sess.run during training?
b) Why is an explicit db1.eval necessary? I expected tf.Print to trigger with sess.run. If eval is required, I could copy tensor X to db1 in my code and evaluate it without tf.Print. Correct?
I tried going through other questions (like the one below), which suggested implementing memory_util or a predefined function. As a learner, I could not understand why tf.Print does not work in my scenario.
If anyone has encountered similar issues, please assist. Thanks!
Similar question in stackoverflow
According to the documentation, tf.Print prints to standard error (as of version 1.1), and that is not compatible with Jupyter notebooks. That's why you can't see any output.
Check here:
https://www.tensorflow.org/api_docs/python/tf/Print
You can check the terminal where you launched the jupyter notebook to see the message.
import tensorflow as tf
tf.InteractiveSession()
a = tf.constant(1)
b = tf.constant(2)
opt = a + b
opt = tf.Print(opt, [opt], message="1 + 2 = ")
opt.eval()
In the terminal, I can see:
2018-01-02 23:38:07.691808: I tensorflow/core/kernels/logging_ops.cc:79] 1 + 2 = [3]
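As a side note, newer TensorFlow versions (1.13+) offer tf.print, which can redirect its output to stdout so the message shows up in the notebook cell itself; a minimal sketch, assuming TF 2.x eager mode:
import sys
import tensorflow as tf

a = tf.constant(1)
b = tf.constant(2)
# output_stream=sys.stdout makes the message appear in the notebook output
tf.print("1 + 2 =", a + b, output_stream=sys.stdout)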