Any new version of tf.placeholder? - tensorflow

I have a problem with tf.placeholder: it has been removed in TensorFlow 2.0.
What should I do now to get the same functionality?

You just pass the data directly to the layer as input. For example:
import tensorflow as tf
import numpy as np
x_train = np.random.normal(size=(3, 2))
astensor = tf.convert_to_tensor(x_train)
logits = tf.keras.layers.Dense(2)(astensor)
print(logits.numpy())
# [[ 0.21247671 1.97068912]
# [-0.17184766 -1.61471399]
# [-0.03291694 -0.71419362]]
The TF1.x equivalent of the code above would be:
import tensorflow as tf
import numpy as np
input_ = np.random.normal(size=(3, 2))
x = tf.placeholder(tf.float32, shape=(None, 2))
logits = tf.keras.layers.Dense(2)(x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits, feed_dict={x: input_}))
# [[-0.17604277 1.8991518 ]
# [-1.5802367 -0.7124136 ]
# [-0.5170298 3.2034855 ]]
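If you still want a symbolic, placeholder-like input in TF 2.x, the usual replacement is tf.keras.Input in a functional model; the concrete data is then supplied when the model is called. A minimal sketch (not part of the original answer):
import numpy as np
import tensorflow as tf
# A symbolic input plays the role of tf.placeholder(tf.float32, shape=(None, 2)).
inputs = tf.keras.Input(shape=(2,))
logits = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=logits)
# Data is passed at call time instead of through feed_dict.
x_train = np.random.normal(size=(3, 2)).astype(np.float32)
print(model(x_train).numpy())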

Related

Why can't I get the internal output of a trained model?

import tensorflow.keras as keras
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
if __name__ == '__main__':
    model = keras.models.load_model('model/model_test_0.99408.h5', custom_objects={'leaky_relu': tf.nn.leaky_relu})
    model.summary()
    inputs = keras.layers.Input(shape=(28, 28, 1))
    y = model(inputs)
    feature = model.get_layer('conv2d_4').output
    model = keras.Model(inputs=inputs, outputs=[y, feature])
    model.summary()
Why can't I get the output of 'conv2d_4', which is an internal layer of the model? I get the following error:
Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(None, 28, 28, 1), dtype=float32) at layer "conv2d". The following previous layers were accessed without issue: []
The error occurs because model.get_layer('conv2d_4').output belongs to the loaded model's original graph, which is not connected to the new inputs tensor. We can try re-stacking the model, capturing feature at the required layer:
import tensorflow.keras as keras
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
if __name__ == '__main__':
    model = keras.models.load_model('model/model_test_0.99408.h5', custom_objects={'leaky_relu': tf.nn.leaky_relu})
    model.summary()
    inputs = keras.layers.Input(shape=(28, 28, 1))
    y = inputs
    for layer in model.layers:
        y = layer(y)
        if layer.name == 'conv2d_4':
            feature = y  # output of the 'conv2d_4' layer
    model = keras.Model(inputs=inputs, outputs=[y, feature])
    model.summary()
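A simpler alternative (a sketch, assuming the loaded model is a single-input Keras model; this is not taken from the original answer) is to reuse the loaded model's own input tensor, so every layer stays connected to the same graph:
import tensorflow.keras as keras
import tensorflow as tf
model = keras.models.load_model('model/model_test_0.99408.h5', custom_objects={'leaky_relu': tf.nn.leaky_relu})
# Reuse the model's own input; no new Input layer is created, so the graph stays connected.
feature_model = keras.Model(inputs=model.input,
                            outputs=[model.output, model.get_layer('conv2d_4').output])
feature_model.summary()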

Print a tensor object of shape (None,)

I am running in Google Colab; the TensorFlow version is 2.2.0 and the Keras version is 2.3.0-tf.
Question:
How can I print the value of african_elephant_output? I tried print(african_elephant_output), but this prints only the following:
Tensor("Mul_1:0", shape=(None,), dtype=float32)
Location of the code: see the code at In [31].
Relevant code is:
from keras.applications import VGG16
from keras import backend as K
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
import numpy as np
import tensorflow as tf
print (tf.__version__)
print (tf.keras.__version__)
# The local path to our target image
img_path = '/content/pic.jpg'
# `img` is a PIL image of size 224x224
img = image.load_img(img_path, target_size=(224, 224))
# `x` is a float32 Numpy array of shape (224, 224, 3)
x = image.img_to_array(img)
# We add a dimension to transform our array into a "batch"
# of size (1, 224, 224, 3)
x = np.expand_dims(x, axis=0)
# Finally we preprocess the batch
# (this does channel-wise color normalization)
x = preprocess_input(x)
K.clear_session()
# Note that we are including the densely-connected classifier on top;
# all previous times, we were discarding it.
model = VGG16(weights='imagenet')
preds = model.predict(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
np.argmax(preds[0])
# this prints 386
# This is the "african elephant" entry in the prediction vector
african_elephant_output = model.output[:, 386]
Based on the TensorFlow documentation and my experience, you should be able to get the contents of a tensor using the .numpy() API, i.e. african_elephant_output.numpy().
You can also check the tutorial from TensorFlow here for reference.
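As a quick illustration (a sketch, not from the original answer): .numpy() works on eager tensors, whereas model.output is a symbolic tensor; for the concrete score in the question's example, the NumPy array returned by model.predict can also be indexed directly.
import tensorflow as tf
# .numpy() on an eager tensor
t = tf.constant([1.0, 2.0, 3.0]) * 2.0
print(t.numpy())  # [2. 4. 6.]
# For the concrete class score, index the array returned by model.predict:
# african_elephant_score = preds[0, 386]   # preds comes from the question's code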

TensorFlow-Keras reproducibility problem on Google Colab

I have a simple piece of code that I run on Google Colab (in CPU mode):
import numpy as np
import pandas as pd
## LOAD DATASET
datatrain = pd.read_csv("gdrive/My Drive/iris_train.csv").values
xtrain = datatrain[:,:-1]
ytrain = datatrain[:,-1]
datatest = pd.read_csv("gdrive/My Drive/iris_test.csv").values
xtest = datatest[:,:-1]
ytest = datatest[:,-1]
import tensorflow as tf
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.utils import to_categorical
## SET ALL SEED
import os
os.environ['PYTHONHASHSEED']=str(66)
import random
random.seed(66)
np.random.seed(66)
tf.set_random_seed(66)
from tensorflow.keras import backend as K
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
## MAIN PROGRAM
ycat = to_categorical(ytrain)
# build model
model = tf.keras.Sequential()
model.add(Dense(10, input_shape=(4,)))
model.add(Activation("sigmoid"))
model.add(Dense(3))
model.add(Activation("softmax"))
#choose optimizer and loss function
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# train
model.fit(xtrain, ycat, epochs=15, batch_size=32)
#get prediction
classes = model.predict_classes(xtest)
#get accuration
accuration = np.sum(classes == ytest)/len(ytest) * 100
I have read the setup for creating reproducible code here: Reproducible results using Keras with TensorFlow backend, and I put all the code in the same cell. But the result (e.g. the loss) is different every time I run that cell (running the cell with Shift+Enter).
In my case, the result from the code above can only be reproduced if:
I run it using "Runtime" > "Restart and run all", or
I put the code in a single file and run it from the command line (python3 file.py).
Is there something I am missing that would make the result reproducible without restarting the runtime?
You should also fix the seed for the kernel_initializer in your Dense layers. So your model would look like:
model = tf.keras.Sequential()
model.add(Dense(10, kernel_initializer=tf.keras.initializers.glorot_uniform(seed=66), input_shape=(4,)))
model.add(Activation("sigmoid"))
model.add(Dense(3, kernel_initializer=tf.keras.initializers.glorot_uniform(seed=66)))
model.add(Activation("softmax"))
I tried most of the solutions on the web, and only the following code worked for me:
seed=0
import os
os.environ['PYTHONHASHSEED'] = str(seed)
# For deterministic GPU ops, from "TensorFlow Determinism"
os.environ["TF_DETERMINISTIC_OPS"] = "1"
import numpy as np
np.random.seed(seed)
import random
random.seed(seed)
import tensorflow as tf
tf.random.set_seed(seed)
Note that you should run this code before every run (at least in my case).
If you want to run your code on the CPU:
seed=0
import os
os.environ['PYTHONHASHSEED'] = str(seed)
# Hide GPUs so the code runs on the CPU
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import numpy as np
np.random.seed(seed)
import random
random.seed(seed)
import tensorflow as tf
tf.random.set_seed(seed)
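As a side note (not part of the original answers): more recent TensorFlow releases bundle the seeding into a single helper, which may simplify the snippets above.
import tensorflow as tf
# Available since roughly TF 2.7: sets the Python, NumPy and TensorFlow seeds in one call.
tf.keras.utils.set_random_seed(0)
# Since roughly TF 2.8, deterministic op implementations can also be requested:
tf.config.experimental.enable_op_determinism()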
I've tried to get TensorFlow 2.0 working reproducibly using Keras and Google Colab (CPU), with Iris dataset processing similar to that described above by #malioboro. This seems to work; it might be useful:
# Install TensorFlow
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
# Setup repro section from Keras FAQ with TF1 to TF2 adjustments
import numpy as np
import tensorflow as tf
import random as rn
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1,
                                        inter_op_parallelism_threads=1)
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.compat.v1.set_random_seed(1234)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# Rest of code follows ...
# Some adopted from: https://janakiev.com/notebooks/keras-iris/
# Some adopted from the question.
#
# Load Data
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
iris = load_iris()
X = iris['data']
y = iris['target']
names = iris['target_names']
feature_names = iris['feature_names']
# One hot encoding
enc = OneHotEncoder()
Y = enc.fit_transform(y[:, np.newaxis]).toarray()
# Scale data to have mean 0 and variance 1
# which is important for convergence of the neural network
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
# Split the data set into training and testing
X_train, X_test, Y_train, Y_test = train_test_split(
    X_scaled, Y, test_size=0.5, random_state=2)
n_features = X.shape[1]
n_classes = Y.shape[1]
## MAIN PROGRAM
from tensorflow.keras.layers import Dense, Activation
# build model
model = tf.keras.Sequential()
model.add(Dense(10, input_shape=(4,)))
model.add(Activation("sigmoid"))
model.add(Dense(3))
model.add(Activation("softmax"))
#choose optimizer and loss function
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# train
model.fit(X_train, Y_train, epochs=20, batch_size=32)
#get prediction
classes = model.predict_classes(X_test)
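One caveat (an assumption about newer versions, not from the original answer): Sequential.predict_classes has been removed in recent TF/Keras releases, so the last line may need the equivalent argmax form:
import numpy as np
# Equivalent of predict_classes for a softmax output in newer TF/Keras versions.
classes = np.argmax(model.predict(X_test), axis=-1)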

How can I pass multiple parameters to a DistributionLambda layer from TensorFlow Probability?

I am building a model using Keras and TensorFlow Probability that should output the parameters of a Gamma distribution (alpha and beta) instead of a single parameter, as shown in the example below (t is passed to a Normal distribution).
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
# Build model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1),
    tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference. negloglik is the negative log-likelihood loss from the TFP regression example.
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=negloglik)
model.fit(x, y, epochs=500, verbose=False)
# Make predictions.
yhat = model(x_tst)
Instead of this, I would like to output alpha and beta from two Dense layers and then pass these parameters to a Gamma distribution.
Something like this?
import numpy as np
import tensorflow as tf
tf.enable_eager_execution()
print(tf.__version__) # 1.14.1-dev20190503
import tensorflow_probability as tfp
tfd = tfp.distributions
X = np.random.rand(4, 1).astype(np.float32)
d0 = tf.keras.layers.Dense(2)(X)
s0, s1 = tf.split(d0, 2)
dist = tfp.layers.DistributionLambda(lambda t: tfd.Gamma(t[0], t[1]))(s0, s1)
dist.sample()
# <tf.Tensor: id=10580, shape=(2,), dtype=float32, numpy=array([1.1754944e-38, 1.3052921e-01], dtype=float32)>
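Another possible layout (a sketch under my own assumptions, not from the original answer) keeps everything in one Keras model: a single Dense(2) produces the two raw parameters, the lambda splits them, and softplus keeps both Gamma parameters positive.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
# Hypothetical toy data, just for illustration.
x = np.random.rand(32, 1).astype(np.float32)
y = np.random.gamma(shape=2.0, scale=1.0, size=(32, 1)).astype(np.float32)
inputs = tf.keras.Input(shape=(1,))
params = tf.keras.layers.Dense(2)(inputs)  # raw alpha and beta
dist = tfp.layers.DistributionLambda(
    lambda t: tfd.Gamma(concentration=tf.math.softplus(t[..., 0:1]),
                        rate=tf.math.softplus(t[..., 1:2])))(params)
model = tf.keras.Model(inputs=inputs, outputs=dist)
negloglik = lambda y_true, rv_y: -rv_y.log_prob(y_true)
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=negloglik)
model.fit(x, y, epochs=5, verbose=False)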

ResNet50 from Keras gives different results for predict and output

I want to fine-tune ResNet50 from Keras, but first I found that, given the same input, the prediction from ResNet50 differs from the output of the model. Actually, the value of the output seems to be 'random'. What am I doing wrong?
Thanks in advance!
Here it is my code:
import tensorflow as tf
from resnet50 import ResNet50
from keras.preprocessing import image
from imagenet_utils import preprocess_input
import numpy as np
from keras import backend as K
img_path = 'images/tennis_ball.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x_image = preprocess_input(x)
#Basic prediction
model_basic = ResNet50(weights='imagenet', include_top=False)
x_prediction = model_basic.predict(x_image)
#Using tensorflow to obtain the output
input_tensor = tf.placeholder(tf.float32, shape=[None, 224,224, 3], name='input_tensor')
model = ResNet50(weights='imagenet', include_top=False, input_tensor=input_tensor)
x = model.output
# Tensorflow session
session = tf.Session()
session.run(tf.global_variables_initializer())
K.set_session(session)
feed_dict = {input_tensor: x_image, K.learning_phase(): 0}
# Obtain the output given the same input
x_output = session.run(x, feed_dict=feed_dict)
# Different results
print('Value of the prediction: {}'.format(x_prediction))
print('Value of the output: {}'.format(x_output))
Here it is an example of the logs:
Value of the prediction: [[[[ 1.26408589e+00 3.91489342e-02 8.43058806e-03 ...,
5.63185453e+00 4.49339962e+00 5.13037841e-04]]]]
Value of the output: [[[[ 2.62883282 2.20199227 9.46755123 ..., 1.24660134 1.98682189
0.63490123]]]]
The problem was that session.run(tf.global_variables_initializer()) re-initializes the model's parameters to random values, overwriting the pretrained ImageNet weights.
The problem was solved by using:
session = K.get_session()
instead of:
session = tf.Session()
session.run(tf.global_variables_initializer())
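Putting it together (a sketch based on the fix above, reusing the variable names from the question's code):
from keras import backend as K
# Reuse the session that already holds the pretrained ImageNet weights.
session = K.get_session()
x_output = session.run(x, feed_dict={input_tensor: x_image, K.learning_phase(): 0})
# x_output should now match model_basic.predict(x_image)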