TensorFlow - how to suppress printing in scientific notation?

How can I suppress TensorFlow printing in scientific notation? I'm using TensorFlow 2.6.
Example:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
print(x)
Example output:
tf.Tensor([1.e-04 2.e-04 3.e-04], shape=(3,), dtype=float32)
Would prefer:
tf.Tensor([0.0001, 0.0002, 0.0003], shape=(3,), dtype=float32)
I realize I could add the line np.set_printoptions(suppress=True) and then convert to NumPy when printing, like this:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
import numpy as np
np.set_printoptions(suppress=True)
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
print(x.numpy())
But I would prefer the option to suppress scientific notation directly in TensorFlow if possible.

You can use tf.print(), which prints the specified inputs to a desired output stream or logging level.
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
tf.print(x)
Output:
[0.0001 0.0002 0.0003]
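One thing to be aware of: tf.print writes to sys.stderr by default, so its output may not appear where print() output does (for example, in some notebook setups). A minimal sketch redirecting it to stdout:
import sys
import tensorflow as tf
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
# tf.print defaults to sys.stderr; pass output_stream to redirect it
tf.print(x, output_stream=sys.stdout)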

Related

Tensorflow 2 /Google Colab / EfficientNet Training - AttributeError: 'Node' object has no attribute 'output_masks'

I am trying to train EfficientNetB1 on Google Colab and constantly running into different issues with the correct import statements from Keras or tensorflow.keras. Currently, this is how my imports look:
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers.pooling import AveragePooling2D
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import pickle
import cv2
import os
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import efficientnet.keras as enet
from tensorflow.keras.layers import Dense, Dropout, Activation, BatchNormalization, Flatten, Input
and this is what my model looks like:
# load the EfficientNetB1 network, ensuring the head FC layer sets are left off
baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')
# Adding 2 fully-connected layers to B0.
x = baseModel.output
x = BatchNormalization()(x)
x = Dropout(0.7)(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# Output layer
predictions = Dense(len(lb.classes_), activation="softmax")(x)
model = Model(inputs = baseModel.input, outputs = predictions)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the training process
for layer in baseModel.layers:
    layer.trainable = False
But for the life of me, I can't figure out why I am getting the error below:
AttributeError Traceback (most recent call last)
<ipython-input-19-269fe6fc6f99> in <module>()
----> 1 baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')
2
3 # Adding 2 fully-connected layers to B0.
4 x = baseModel.output
5 x = BatchNormalization()(x)
... 5 intermediate frames omitted ...
/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in _collect_previous_mask(input_tensors)
1439 inbound_layer, node_index, tensor_index = x._keras_history
1440 node = inbound_layer._inbound_nodes[node_index]
-> 1441 mask = node.output_masks[tensor_index]
1442 masks.append(mask)
1443 else:
AttributeError: 'Node' object has no attribute 'output_masks'
The problem is the way you import efficientnet: you import it from the keras package and not from the tensorflow.keras package. Change your efficientnet import to:
import efficientnet.tfkeras as enet
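With that change, the rest of your code should work as-is; a minimal sketch of the corrected load, based on the code in the question:
import efficientnet.tfkeras as enet
from tensorflow.keras.layers import Input
# build the EfficientNetB1 base against tf.keras instead of standalone keras
baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')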
Not sure, but this error may be caused by the wrong TF version. Google Colab currently comes with TF 1.x by default. Try this to change the TF version and see if it resolves the issue:
try:
    %tensorflow_version 2.x
except:
    print("Failed to load")

Any new version of tf.placeholder?

I have a problem using tf.placeholder, as it has been removed in the new version of TensorFlow, 2.0.
What should I do now to use this functionality?
You just pass the data directly as input to the layer. For example:
import tensorflow as tf
import numpy as np
x_train = np.random.normal(size=(3, 2))
astensor = tf.convert_to_tensor(x_train)
logits = tf.keras.layers.Dense(2)(astensor)
print(logits.numpy())
# [[ 0.21247671 1.97068912]
# [-0.17184766 -1.61471399]
# [-0.03291694 -0.71419362]]
The TF1.x equivalent of the code above would be:
import tensorflow as tf
import numpy as np
input_ = np.random.normal(size=(3, 2))
x = tf.placeholder(tf.float32, shape=(None, 2))
logits = tf.keras.layers.Dense(2)(x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits, feed_dict={x: input_}))
# [[-0.17604277 1.8991518 ]
# [-1.5802367 -0.7124136 ]
# [-0.5170298 3.2034855 ]]
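If you are migrating old code and still need placeholder semantics under TF 2.x, a minimal sketch using the compatibility module (assuming you are willing to run in graph mode):
import tensorflow as tf
import numpy as np
tf.compat.v1.disable_eager_execution()  # placeholders only work in graph mode
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 2))
logits = tf.keras.layers.Dense(2)(x)
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(logits, feed_dict={x: np.random.normal(size=(3, 2))}))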

TensorFlow distribution creates probability greater than 1

I am using the TensorFlow distributions API for sampling. The following is the sample code I am using, but I found that the probability is greater than 1 and the log probability is then smaller than 0. I have tried both CPU and GPU; both produce this weird result. The TensorFlow version is 1.3.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from sklearn.datasets import load_boston
from sklearn.preprocessing import scale
from matplotlib import pyplot as plt
import numpy as np
learning_rate = 0.01
total_features, total_prices = load_boston(True)
# Keep 300 samples for training
train_features = scale(total_features[:300])
train_prices = total_prices[:300]
x = tf.placeholder(tf.float32, [None, 13])
l1 = tf.layers.dense(inputs=x, units=20, activation=tf.nn.elu)
l2 = tf.layers.dense(inputs=l1, units=20, activation=tf.nn.elu)
mu = tf.squeeze(tf.layers.dense(inputs=l2, units=1))
sigma = tf.squeeze(tf.layers.dense(inputs=l2, units=1))
sigma = tf.nn.softplus(sigma) + 1e-5
normal_dist = tf.contrib.distributions.Normal(mu, sigma)
samples = tf.squeeze(normal_dist._sample_n(1))
log_prob = -normal_dist.log_prob(samples)
prob = normal_dist.prob(samples)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
avg_cost = 0.0
feed_dict = {x: train_features}
p = sess.run(prob, feed_dict)
lp = sess.run(log_prob, feed_dict)
Here p is my probability output and lp is the log probability. Thank you!
The functions .prob and .log_prob are the PDF and log PDF of the normal distribution: https://en.wikipedia.org/wiki/Probability_density_function. Note that the PDF does not have to evaluate to a value between 0 and 1; its integral over a range (which is related to the CDF) has to be between 0 and 1.
Consider the case where mu = 0 and sigma = 1e-4. If we use the PDF of the normal distribution: https://en.wikipedia.org/wiki/Normal_distribution, then PDF(0) ~= 4000! However, if we were to integrate the PDF to get the CDF (or use the CDF directly), we would always get a value between 0 and 1.
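You can verify this with a quick calculation; a short sketch evaluating the normal PDF at its mean for mu = 0, sigma = 1e-4:
import numpy as np
mu, sigma = 0.0, 1e-4
# PDF of a normal distribution evaluated at its mean: 1 / (sigma * sqrt(2 * pi))
print(1.0 / (sigma * np.sqrt(2.0 * np.pi)))  # ~3989.42, far greater than 1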

Possible compatibility issue with Keras, TensorFlow and scikit (tf.global_variables())

I'm trying to do a small test with my dataset on KerasRegressor (using TensorFlow), but I'm having a small issue. The error seems to be in the function cross_val_score from scikit-learn. It starts there, and the last error message is:
File "/usr/local/lib/python2.7/dist-packages/Keras-2.0.2-py2.7.egg/keras/backend/tensorflow_backend.py", line 298, in _initialize_variables
variables = tf.global_variables()
AttributeError: 'module' object has no attribute 'global_variables'
My full code is basically the example found in http://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/ with small changes.
I've looked up the "'module' object has no attribute 'global_variables'" error and it seems to be about the TensorFlow version, but I'm using the most recent one (1.0), and there is no function in the code that works directly with tf that I can change. Below is my full code; is there any way I can change it so it works? Thanks for the help.
import numpy
import pandas
import sys
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_svmlight_file
# define base mode
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(68, activation="relu", kernel_initializer="normal", input_dim=68))
    model.add(Dense(1, kernel_initializer="normal"))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
X, y, query_id = load_svmlight_file(str(sys.argv[1]), query_id=True)
scaler = StandardScaler()
X = scaler.fit_transform(X.toarray())
# fix random seed for reproducibility
seed = 1
numpy.random.seed(seed)
# evaluate model with standardized dataset
estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=100, batch_size=5, verbose=0)
kfold = KFold(n_splits=5, random_state=seed)
results = cross_val_score(estimator, X, y, cv=kfold)
print("Results: %.2f (%.2f) MSE" % (results.mean(), results.std()))
You are probably using an older TensorFlow version. Install TensorFlow 1.2.0rc2 and you should be fine.
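For example, with pip (assuming a CPU-only install):
pip install tensorflow==1.2.0rc2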

how to send values to function using feed_dict in tensorflow?

I'm a beginner in TensorFlow and I'm working on a project in which I want to send values to a placeholder and use that placeholder inside a function, so I will simplify what I want.
This is a simple code
import tensorflow as tf
import glob
from PIL import Image
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
x = tf.placeholder(dtype=tf.float32,shape=[1])
def fun():
    print(x)
    return x

with tf.Session() as sess:
    sess.run(fun(), feed_dict={x: [5.]})
I want to use the value of x inside the function, but when I print it I get only the shape. However, I used sess.run to run the function, so I expected it to print the value, not the shape. Also, when I use print(sess.run(x)) it gives me an error saying I must feed x with a value. What am I missing?
You should write it like this:
x = tf.placeholder(dtype=tf.float32, shape=[1])

def fun():
    return x

with tf.Session() as sess:
    y = sess.run(fun(), feed_dict={x: [5.]})
    print(y)
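If you want the value printed from inside the graph itself rather than fetched and printed in Python, a sketch using the TF1 tf.Print op (note it writes to standard error, and it was later deprecated in favor of tf.print):
import tensorflow as tf

x = tf.placeholder(dtype=tf.float32, shape=[1])

def fun():
    # tf.Print returns a tensor identical to x that logs [x] every time it is evaluated
    return tf.Print(x, [x], message="value of x: ")

with tf.Session() as sess:
    y = sess.run(fun(), feed_dict={x: [5.]})
    print(y)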