TensorFlow: has no attribute 'numpy_input_fn'

I am using Eclipse's PyDev with TensorFlow version 0.12.1.
I copied the sample code directly from the TensorFlow documentation, but an attribute is not found and the following is raised:
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
AttributeError: module 'tensorflow.contrib.learn.python.learn.learn_io' has no attribute 'numpy_input_fn'
I tried re-downloading PyDev and TensorFlow, but neither helped.
The source code:
import tensorflow as tf
import numpy as np
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,num_epochs=1000)
estimator.fit(input_fn=input_fn, steps=1000)
estimator.evaluate(input_fn=input_fn)

I encountered the same problem and fixed it by upgrading:

pip install --upgrade tensorflow

This upgrades TensorFlow to 1.1.0, where numpy_input_fn is available.
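If you upgrade further, note that this helper was also added to core (outside contrib) as tf.estimator.inputs.numpy_input_fn in later 1.x releases; a minimal sketch, assuming such a release:
import numpy as np
import tensorflow as tf
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
# Same helper, non-contrib location; shuffle must be set explicitly here.
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x}, y, batch_size=4, num_epochs=1000, shuffle=True)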

Related

TensorFlow - how to suppress printing in scientific notation?

How can I suppress TensorFlow printing in scientific notation? I'm using TensorFlow 2.6.
Example:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
print(x)
Example output:
tf.Tensor([1.e-04 2.e-04 3.e-04], shape=(3,), dtype=float32)
Would prefer:
tf.Tensor([0.0001, 0.0002, 0.0003], shape=(3,), dtype=float32)
I realize I could add the line np.set_printoptions(suppress=True) and then convert to NumPy when printing, like this:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
import numpy as np
np.set_printoptions(suppress=True)
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
print(x.numpy())
But I would prefer the option to suppress scientific notation directly in TensorFlow if possible.
You can use tf.print(), which prints the specified inputs to a desired output stream or logging level.
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
tf.print(x)
Output:
[0.0001 0.0002 0.0003]
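tf.print also accepts output_stream and summarize parameters, which helps with large tensors; a small usage sketch:
import sys
import tensorflow as tf
x = tf.constant([0.0001, 0.0002, 0.0003], dtype=tf.float32)
# summarize=-1 prints every element rather than an abbreviated view;
# output_stream redirects the output (tf.print writes to stderr by default).
tf.print(x, output_stream=sys.stdout, summarize=-1)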

Any new version of tf.placeholder?

I have a problem using tf.placeholder, as it has been removed in the new version of TensorFlow, 2.0.
What should I do now to use this functionality?
You just pass the data directly as input to the layer. For example:
import tensorflow as tf
import numpy as np
x_train = np.random.normal(size=(3, 2))
astensor = tf.convert_to_tensor(x_train)
logits = tf.keras.layers.Dense(2)(astensor)
print(logits.numpy())
# [[ 0.21247671 1.97068912]
# [-0.17184766 -1.61471399]
# [-0.03291694 -0.71419362]]
The TF1.x equivalent of the code above would be:
import tensorflow as tf
import numpy as np
input_ = np.random.normal(size=(3, 2))
x = tf.placeholder(tf.float32, shape=(None, 2))
logits = tf.keras.layers.Dense(2)(x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits, feed_dict={x: input_}))
# [[-0.17604277  1.8991518 ]
#  [-1.5802367  -0.7124136 ]
#  [-0.5170298   3.2034855 ]]
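If you specifically want a symbolic "input slot" rather than immediate data, the closest TF2 analogue is tf.keras.Input in the functional API; a minimal sketch:
import numpy as np
import tensorflow as tf
# tf.keras.Input creates a symbolic tensor that stands in for future data,
# much like a placeholder; shape excludes the batch dimension.
inputs = tf.keras.Input(shape=(2,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)
# Concrete data is passed when calling the model, replacing feed_dict.
print(model(np.random.normal(size=(3, 2))).numpy())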

tflearn to_categorical type error

I keep getting a TypeError when I try to use to_categorical from tflearn. The output error is:
trainY = to_categorical(y = trainY, nb_classes=2)
File "C:\Users\saleh\Anaconda3\lib\site-packages\tflearn\data_utils.py", line 46, in to_categorical
return (y[:, None] == np.unique(y)).astype(np.float32)
TypeError: list indices must be integers or slices, not tuple
This is the reproducible code that I am trying to run:
import tflearn
from tflearn.data_utils import to_categorical
from tflearn.datasets import imdb
#IMDB dataset loading
train, test, _ = imdb.load_data(path = 'imdb.pkl', n_words = 10000, valid_portion = 0.1)
trainX, trainY = train
testX, testY = test
#converting labels to binary vectors
trainY = to_categorical(y = trainY, nb_classes=2) # This is where I get the error
testY = to_categorical(y = testY, nb_classes=2)
Cannot reproduce your error:
import tflearn
from tflearn.data_utils import to_categorical
from tflearn.datasets import imdb
train, test, _ = imdb.load_data(path = 'imdb.pkl', n_words = 10000, valid_portion = 0.1)
trainX, trainY = train
testX, testY = test
trainY[0:5]
# [0, 0, 0, 1, 0]
trainY = to_categorical(y = trainY, nb_classes=2)
trainY[0:5]
# array([[ 1., 0.],
# [ 1., 0.],
# [ 1., 0.],
# [ 0., 1.],
# [ 1., 0.]])
System configuration:
Python 2.7.12
Tensorflow 1.3.0
TFLearn 0.3.2
Ubuntu 16.04
UPDATE: It seems that some recent TFLearn commit has broken to_categorical - see here and here. I suggest uninstalling your current version and installing the latest stable one with pip install tflearn (this is actually what I have done myself above).
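If you cannot switch TFLearn versions, a minimal workaround based on the traceback above is to convert the label list to a NumPy array yourself and build the one-hot encoding directly (mirroring what the broken to_categorical tries to do):
import numpy as np
# imdb.load_data returns plain Python lists; the y[:, None] indexing in
# the broken to_categorical only works on NumPy arrays.
trainY = np.array(trainY)
trainY = (trainY[:, None] == np.unique(trainY)).astype(np.float32)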

How to dump tensorflow XLA LLVM IR?

I used to use the following command in TensorFlow 1.2:
export TF_XLA_FLAGS='--dump_ir_before_passes=true --dump_temp_products_to=./tmp'
to dump LLVM IR in TensorFlow. However, the file that defined this flag was removed in TensorFlow 1.3, so how can I get an LLVM IR dump now?
Here's a test file for your convenience:
import tensorflow as tf
import numpy as np
import os
from tensorflow.python.client import timeline
import json

run_metadata = tf.RunMetadata()
sess = tf.Session()
jit_scope = tf.contrib.compiler.jit.experimental_jit_scope

x = tf.placeholder(np.float32, shape=[1000000])
y = tf.placeholder(np.float32, shape=[1000000])
c = tf.constant(0.1)

with jit_scope():
    z = tf.add(tf.scalar_mul(0.1, x), y)

ix = np.ones((1000000), dtype=np.float32)
iy = np.ones((1000000), dtype=np.float32)

sess.run(z,
         feed_dict={x: ix, y: iy},
         options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
         run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.ctf.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format())
I found the flag --xla_dump_ir_to here: https://github.com/tensorflow/tensorflow/issues/11462. It was added in TensorFlow 1.3.
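A sketch of how to use it, assuming the flag is passed through the same environment variable as the old ones (an assumption; check the linked issue for the exact invocation). On recent TensorFlow 2.x releases the dump options moved again to XLA_FLAGS, where --xla_dump_to writes the compiler dumps (including LLVM IR for the CPU/GPU backends) into the given directory:
export TF_XLA_FLAGS='--xla_dump_ir_to=./tmp'   # TF 1.3-era flag; env var is an assumption
export XLA_FLAGS='--xla_dump_to=./tmp'         # modern TF 2.x equivalent
python test.py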

AttributeError: 'tensorflow.python.ops.rnn' has no attribute 'rnn'

I am following this tutorial on Recurrent Neural Networks.
These are the imports:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.python.ops import rnn
from tensorflow.contrib.rnn import core_rnn_cell
This is the input-processing code:
x = tf.transpose(x, [1,0,2])
x = tf.reshape(x, [-1, chunk_size])
x = tf.split(x, n_chunks, 0)
lstm_cell = core_rnn_cell.BasicLSTMCell(rnn_size)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
I am getting the following error for the outputs, states line:
AttributeError: module 'tensorflow.python.ops.rnn' has no attribute 'rnn'
TensorFlow was updated recently, so what should the new code for the offending line be?
For people using a newer version of TensorFlow, use this in your code:
from tensorflow.contrib import rnn
lstm_cell = rnn.BasicLSTMCell(rnn_size)
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
instead of
from tensorflow.python.ops import rnn, rnn_cell
lstm_cell = rnn_cell.BasicLSTMCell(rnn_size,state_is_tuple=True)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
PS: @BrendanA suggested using tf.nn.rnn_cell.LSTMCell instead of rnn_cell.BasicLSTMCell.
Thanks @suku
I got the following error: ImportError: No module named 'tensorflow.contrib.rnn.python.ops.core_rnn'
To solve it, I replaced:
from tensorflow.contrib.rnn.python.ops import core_rnn
with:
from tensorflow.python.ops import rnn, rnn_cell
In my code I had used core_rnn.static_rnn:
outputs, _ = core_rnn.static_rnn(cell, input_list, dtype=tf.float32)
which then raised:
NameError: name 'core_rnn' is not defined
This is solved by replacing the line with:
outputs, _ = rnn.static_rnn(cell, input_list, dtype=tf.float32)
Python: 3.6 (64-bit)
TensorFlow: 1.10.0
Use the static_rnn method instead of rnn:
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
instead of:
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
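For completeness: tf.contrib was removed entirely in TensorFlow 2.x, so none of these imports exist there. A minimal sketch of the Keras equivalent, assuming TF 2.x and hypothetical sizes:
import tensorflow as tf
# The LSTM layer consumes the whole (batch, time, features) sequence directly,
# so the transpose/reshape/split preprocessing from the question is not needed.
rnn_size, n_chunks, chunk_size = 128, 28, 28      # hypothetical sizes
x = tf.random.normal([32, n_chunks, chunk_size])  # (batch, time, features)
outputs = tf.keras.layers.LSTM(rnn_size)(x)       # final hidden state, shape (32, 128)
print(outputs.shape)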