I am using TensorFlow version 1.3, but the tutorial I am following was written for version 1.0, and I am quite new to TensorFlow. The error I get is:
'module' object has no attribute 'prepare_attention'
And the code is:
tf.contrib.seq2seq.prepare_attention(attention_states, attention_option = "bahdanau", num_units = decoder_cell.output_size)
I couldn't figure out what to use instead of the tf.contrib.seq2seq.prepare_attention() function. Can anyone help?
Downgrade your TensorFlow and it'll work. The problem is that prepare_attention was removed in later versions, hence we use an older version of TF to work with it.
Okay, all you need to do is create a new environment with Python 3.5.4 and then install TensorFlow 1.0.0. That's it, everything will work fine.
tf.contrib.seq2seq.prepare_attention works only when the TensorFlow version is 1.0; I have version 2.3.1.
My solution:
tf.contrib.seq2seq.prepare_attention = tf.compat.v1.nn.rnn_cell.prepare_attention
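If you prefer to stay on a newer 1.x release instead of downgrading, the contrib seq2seq API replaced prepare_attention with an attention mechanism plus a wrapper around the decoder cell. A minimal sketch for TF 1.2+, reusing attention_states and decoder_cell from the question (an assumption-based sketch, not the tutorial's exact code):
import tensorflow as tf

# prepare_attention was replaced by an attention mechanism object
# plus AttentionWrapper around the decoder cell (TF 1.2+ contrib API).
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=decoder_cell.output_size,  # depth of the attention layer
    memory=attention_states)             # encoder outputs, [batch, time, depth]

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell,
    attention_mechanism,
    attention_layer_size=decoder_cell.output_size)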
I'm trying to train a RetinaNet on my dataset with this command line:
retinanet-train --batch-size 4 --steps 349 --epochs 50 --weights logos/resnet50_coco_best_v2.1.0.h5 --snapshot-path logos/snapshots csv logos/retinanet_train.csv logos/retinanet_classes.csv
And I get this error:
AttributeError: module 'tensorflow' has no attribute 'ConfigProto'
I know this is related to the TensorFlow version: ConfigProto disappeared in the new version, but I want to fix it without re-installing the old version (1.14), because otherwise it will be a mess.
Any suggestion would be greatly appreciated, thank you.
Since tf.ConfigProto is no longer available at the top level in TF 2.0, use tf.compat.v1.ConfigProto() instead by replacing the occurrences of tf.ConfigProto() in the retinanet-train code (assuming that's where tf.ConfigProto() is being called). Link to the TensorFlow doc here.
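For reference, a minimal sketch of that replacement, assuming the session setup in retinanet-train follows the usual TF 1.x pattern:
import tensorflow as tf

# TF 1.x style (raises AttributeError on TF 2.x):
#   config = tf.ConfigProto()
#   session = tf.Session(config=config)

# TF 2.x-compatible replacement via the compat.v1 namespace:
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True        # optional: allocate GPU memory as needed
session = tf.compat.v1.Session(config=config)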
When I use mxnet-cu101mkl = {version = "==1.5.0", sys_platform = "== 'linux'"}, I get an error that I can no longer import ndarray or nd:
ImportError: cannot import name 'ndarray'
I have no problem with this when using the same code with mxnet-cu101 (no mkl).
Is this just a bug or is this subpackage no longer supported?
I can confirm that mxnet-cu100mkl works fine (version 1.5.0). That's a very slight CUDA version difference from yours, but the package structure shouldn't change. I think you might be importing a different mxnet here, possibly a local folder called mxnet, for example. Check the following:
import mxnet as mx
print(mx.__file__)
It should show the path to mxnet within site-packages for your Python environment, e.g.
/home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/__init__.py
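As a quick follow-up check (just a sketch, not specific to the mkl build): once that path points into site-packages, the ndarray import should work again:
import mxnet as mx

print(mx.__file__)     # should point into .../site-packages/mxnet/__init__.py
print(mx.__version__)  # e.g. 1.5.0

# This is the import that fails with "ImportError: cannot import name 'ndarray'"
# when a local folder named mxnet shadows the installed package.
from mxnet import nd
print(nd.array([1, 2, 3]))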
I'm running this example on ML Engine using Cloud Composer but am receiving the following error:
AttributeError: 'module' object has no attribute 'estimator'
Even though I am importing tensorflow as tf, it exits on the following line:
estimator = tf.estimator.Estimator(model_fn = image_classifier,
The runtime version is 1.8, the same as the version used by the repo.
t3 = MLEngineTrainingOperator(
task_id='ml_engine_training_op',
project_id=PROJECT_ID,
job_id=job_id,
package_uris=["gs://us-central1-ml/trainer-0.1.tar.gz"],
training_python_module=MODULE_NAME,
training_args=training_args,
region=REGION,
scale_tier='BASIC_GPU',
runtimeVersion='1.8',
dag=dag
)
Please check your setup.py and make sure you put tensorflow in it as
REQUIRED_PACKAGES = ['tensorflow==1.8.0'] (or some other version). Then don't forget to re-generate the tar and upload it.
Also, in my case, MLEngineTrainingOperator doesn't seem to pass runtime_version or python_version through to ML Engine at all.
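For illustration, a minimal setup.py along those lines (the package name and description are assumptions based on the trainer-0.1.tar.gz in the question):
from setuptools import find_packages, setup

# Pin TensorFlow so the trainer package installs its own copy on the workers,
# regardless of which runtime ML Engine ends up using.
REQUIRED_PACKAGES = ['tensorflow==1.8.0']

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),
    install_requires=REQUIRED_PACKAGES,
    include_package_data=True,
    description='Trainer package for ML Engine')
Then re-generate the tarball with python setup.py sdist and upload it to the gs:// path used in package_uris.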
System information
custom code: no, it is the one in https://www.tensorflow.org/get_started/estimator
system: Apple
OS: Mac OsX 10.13
TensorFlow version: 1.3.0
Python version: 3.6.3
GPU model: AMD FirePro D700 (actually, two such GPUs)
Describe the problem
Dear all,
I am running the simple iris program:
https://www.tensorflow.org/get_started/estimator
under python 3.6.3 and tensorflow 1.3.0.
The program executes correctly, apart from the very last part, i.e. the one related to the confusion matrix.
In fact, the result I get for the confusion matrix is:
New Samples, Class Predictions: [array([b'1'], dtype=object), array([b'2'], dtype=object)]
rather than the expected output:
New Samples, Class Predictions: [1 2]
Has anything about confusion matrix changed in the latest release?
If so, how should I modify that part of the code?
Thank you very much for your help!
Best regards
Ivan
Source code / logs
https://www.tensorflow.org/get_started/estimator
This looks like a numpy issue: array([b'1'], dtype=object) is how numpy displays an object array holding the byte string b'1'.
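If you just want the [1 2] output shown in the tutorial, a small sketch that converts the byte strings back to ints (predictions here stands in for whatever the estimator's predict call returned):
import numpy as np

# Example of what the predictions look like in this case:
predictions = [np.array([b'1'], dtype=object), np.array([b'2'], dtype=object)]

# Each entry is a single-element object array holding a byte string; unwrap and cast.
class_ids = [int(p[0]) for p in predictions]
print("New Samples, Class Predictions: {}".format(class_ids))  # -> [1, 2]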
When I invoke tf.contrib.layers.convolution2d, the TensorFlow execution terminates with an error about one of the parameters used:
got an unexpected keyword argument 'weight_init'
The parameters passed are the following:
layer_one = tf.contrib.layers.convolution2d(
float_image_batch,
num_output_channels=32,
kernel_size=(5,5),
activation_fn=tf.nn.relu,
weight_init=tf.random_normal,
stride=(2, 2),
trainable=True)
That is exactly as described in the book that I'm reading. I suspect a possible syntax problem with weight_init=tf.random_normal written directly inside the call, but I don't know how to fix it. I'm using TensorFlow 0.12.0.
The book that you are reading (you didn't mention which one) is probably based on an older version of TensorFlow, where the initial values for the weight tensor were passed through the weight_init argument. In the TensorFlow version you are using (0.12.0), that argument has been replaced with weights_initializer. The latest (TensorFlow v0.12.0) documentation for tf.contrib.layers.convolution2d is here.
To fix your problem, you can change the following line in your code:
weight_init=tf.random_normal
to
weights_initializer=tf.random_normal_initializer()
According to the documentation, by default tf.random_normal_initializer uses a mean of 0.0, a standard deviation of 1.0, and tf.float32 as the datatype. You may change the arguments as per your need using this line instead:
weights_initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)
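Putting it together, a sketch of the full call against the TF 0.12 contrib API (float_image_batch is from your code; note that num_output_channels also appears to have been renamed to num_outputs in this version):
import tensorflow as tf

layer_one = tf.contrib.layers.convolution2d(
    float_image_batch,
    num_outputs=32,                                       # was num_output_channels
    kernel_size=(5, 5),
    stride=(2, 2),
    activation_fn=tf.nn.relu,
    weights_initializer=tf.random_normal_initializer(),   # was weight_init
    trainable=True)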