What kind of tensorflow placeholder shape to use to contain such sequence?

This is the sequence I intend to input:
sequence = [[[113, 162, 159], [3, 163, 417], [393, 77, 333], [420, 214, 382], [308, 441, 175], [152, 80, 477], [184, 101, 54], [417, 277, 487], [494, 329, 315], [413, 386, 319]],
            [425, 132, 407],
            [405]]
However, I am unable to determine what shape of placeholder to use for it.
x = tf.placeholder(tf.float32, shape=[None, None, 3], name='probable_solutions')
sess = tf.Session()
init_op = tf.global_variables_initializer()
sess.run(init_op)
sess.run(x, feed_dict={x: [sequence[0], sequence[1], sequence[2]]})
This gives me the following error:
ValueError: setting an array element with a sequence.
Here's the full code-
https://pastebin.com/cq44wcir
(I've also marked a few questions in the pastebin code - you can find them by searching for '#~~#', no quotes, in the text)

First of all, before being passed to feed_dict, sequence should be a NumPy array.
Of course you can convert it to a NumPy array easily, but that isn't the solution: the error means you are trying to create an array from a list which isn't shaped like a multi-dimensional array.
Any list which isn't "generalized" to a uniform, rectangular shape cannot be fed through feed_dict.
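For illustration, here is one hedged way to make the feed rectangular, assuming the intent is that every top-level element of sequence is a list of [x, y, z] triples (the shortened rows and the zero padding value are choices made up for this sketch):

import numpy as np
import tensorflow as tf

# Regularize the ragged data: every top-level element becomes a list of
# [x, y, z] triples, then shorter lists are zero-padded to a common length.
rows = [
    [[113, 162, 159], [3, 163, 417], [393, 77, 333]],  # first row truncated for brevity
    [[425, 132, 407]],
    [[405, 0, 0]],  # the lone 405 padded out to a triple
]
max_len = max(len(r) for r in rows)
padded = np.zeros((len(rows), max_len, 3), dtype=np.float32)
for i, r in enumerate(rows):
    padded[i, :len(r), :] = r  # rectangular [3, max_len, 3] array

x = tf.placeholder(tf.float32, shape=[None, None, 3], name='probable_solutions')
with tf.Session() as sess:
    print(sess.run(x, feed_dict={x: padded}))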

Related

Doing .to() on a tensorflow EagerTensor

I am taking in a batch of images and splitting every image into patches using tf.image.extract_patches. I then want to pass those patches through a model to get embeddings/feats. The problem is that I get this error:
File "/home/fingerprint_firstpart.py", line 152, in <module>
main(args)
File "/home/fingerprint_firstpart.py", line 117, in main
patches_embs=get_patches_and_embs(image,fprinter)
File "/home/fingerprint_firstpart.py", line 55, in get_patches_and_embs
feat = fprinter(patches)
File "/home/fingerprinter.py", line 165, in __call__
x = x.to(self.device)
File "/home/kar/anaconda3/envs/styleanalysis/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 401, in __getattr__
self.__getattribute__(name)
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to'
Here's the code in question:
def get_patches_and_embs(images, fprinter):
    patches = tf.image.extract_patches(
        images=images,
        sizes=[1, 384, 384, 1],
        strides=[1, 384, 384, 1],
        rates=[1, 1, 1, 1],
        padding="VALID",
    )
    patches = tf.reshape(patches, [-1, 384, 384, 3])
    patches = tf.transpose(patches, perm=[0, 3, 1, 2])  # now a tensor of size batch x 3 x 384 x 384, as the model needs as input
    feat = fprinter(patches)
    return feat
Now, if I take the patches TensorFlow tensor, convert it to NumPy, and then to PyTorch, the program works just fine. However, I'd like to avoid that if at all possible, so is there any way to do .to() on a TensorFlow tensor?
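As a hedged sketch, the hand-off the question describes looks like the following; the DLPack variant at the end is an assumption that both installed versions support the protocol, and can avoid the explicit NumPy copy:

import tensorflow as tf
import torch
from torch.utils.dlpack import from_dlpack

tf_patches = tf.random.uniform([4, 3, 384, 384])  # stand-in for the real patches

# The route described above: TensorFlow -> NumPy -> PyTorch (copies via host).
torch_patches = torch.from_numpy(tf_patches.numpy())
torch_patches = torch_patches.to('cpu')  # .to() is available once it is a torch.Tensor

# Possible copy-avoiding alternative (assumes DLPack support on both sides).
torch_patches_dl = from_dlpack(tf.experimental.dlpack.to_dlpack(tf_patches))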

Concat ragged arrays in Keras

I have several RaggedTensors that I want to concatenate; I am using Keras. Vanilla Tensorflow will happily concatenate them, so I tried the code:
card_feature = layers.concatenate([ragged1, ragged2, ragged3])
but it gave the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 925, in __call__
return self._functional_construction_call(inputs, args, kwargs,
File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1084, in _functional_construction_call
base_layer_utils.create_keras_history(inputs)
File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 191, in create_keras_history
_, created_layers = _create_keras_history_helper(tensors, set(), [])
File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 222, in _create_keras_history_helper
raise ValueError('Tensorflow ops that generate ragged or sparse tensor '
ValueError: Tensorflow ops that generate ragged or sparse tensor outputs are currently not supported by Keras automatic op wrapping. Please wrap these ops in a Lambda layer:
```
weights_mult = lambda x: tf.sparse.sparse_dense_matmul(x, weights)
output = tf.keras.layers.Lambda(weights_mult)(input)
```
so then I tried:
concat_lambda = lambda xs: tf.concat(xs, axis=2)
card_feature = layers.Lambda(concat_lambda)([ragged1, ragged2, ragged3])
but it gave the exact same error, even though I had wrapped it. Is this a bug / is there a workaround?
Code to concatenate 3 Ragged Tensors is shown below:
import tensorflow as tf
print(tf.__version__)
Ragged_Tensor1 = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
Ragged_Tensor2 = tf.ragged.constant([[5, 3]])
Ragged_Tensor3 = tf.ragged.constant([[6, 7, 8], [9, 10]])
print(tf.concat([Ragged_Tensor1, Ragged_Tensor2, Ragged_Tensor3], axis=0))
Output is shown below:
2.3.0
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], [], [5, 3], [6, 7, 8], [9, 10]]>
But it looks like you are trying to concatenate the symbolic outputs of ragged tensor operations inside a Keras model, not plain RaggedTensor values. Please share your complete code so that we can try to help you.
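One workaround sometimes suggested, offered here only as a sketch (the layer and input names are illustrative, not from the question): move the concat into a subclassed Layer, which bypasses the automatic op wrapping that raises this error for ragged outputs in some TF 2.x versions.

import tensorflow as tf
from tensorflow.keras import layers

class RaggedConcat(layers.Layer):
    # A subclassed layer runs tf.concat directly instead of going through
    # Keras automatic op wrapping, which rejects ragged outputs.
    def call(self, inputs):
        return tf.concat(inputs, axis=1)

inp1 = tf.keras.Input(shape=(None,), ragged=True)
inp2 = tf.keras.Input(shape=(None,), ragged=True)
merged = RaggedConcat()([inp1, inp2])
model = tf.keras.Model(inputs=[inp1, inp2], outputs=merged)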

How to use tensorflow sequence_numeric_column with an RNNClassifier?

I was looking through the tensorflow contrib API and I wanted to use the RNNClassifier available in TensorFlow 1.13. Unlike non-sequence estimators, this one accepts only sequence feature columns. However, I was not able to make it work on a toy dataset: I keep getting an error while using sequence_numeric_column.
Here is the structure of my toy dataset:
idSeq,kind,label,size
0,0,dwarf,117.6
0,0,dwarf,134.4
0,0,dwarf,119.0
0,1,human,168.0
0,1,human,145.25
0,2,elve,153.9
0,2,elve,218.49999999999997
0,2,elve,210.9
1,0,dwarf,166.6
1,0,dwarf,168.0
1,0,dwarf,131.6
1,1,human,150.5
1,1,human,208.25
1,1,human,210.0
1,2,elve,199.5
1,2,elve,161.5
1,2,elve,197.6
where idSeq allows us to see which rows belong to which sequence.
I want to predict the "kind" column from the "size" column.
Below is the code that trains my RNN on this dataset.
import os

import numpy as np
import pandas as pd
import tensorflow as tf

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
tf.logging.set_verbosity(tf.logging.INFO)

dataframe = pd.read_csv("data_rnn.csv")
dataframe_test = pd.read_csv("data_rnn_test.csv")
train_x = dataframe
train_y = dataframe.loc[:, (["kind"])]

size_feature_col = tf.contrib.feature_column.sequence_numeric_column('size ')

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[size_feature_col],
    num_units=[32, 16],
    cell_type='lstm',
    model_dir=None,
    n_classes=3,
    optimizer='Adagrad'
)


def make_dataset(
        batch_size,
        x,
        y=None,
        shuffle=False,
        shuffle_buffer_size=1000,
        shuffle_seed=1):
    """
    An input function for training, evaluation or prediction.

    Parameters
    ----------------------
    batch_size: integer
        the size of the batch to use for the training of the neural network
    x: pandas dataframe
        dataframe that contains the features of the samples to study
    y: pandas dataframe or array (Default: None)
        dataframe or array that contains the values to predict of the samples
        to study. If None, we want a dataset for evaluation or prediction.
    shuffle: boolean (Default: False)
        if True, we shuffle the elements of the dataset
    shuffle_buffer_size: integer (Default: 1000)
        if we shuffle the elements of the dataset, it is the size of the
        buffer used for it.
    shuffle_seed: integer
        the random seed for the shuffling

    Returns
    ---------------------
    dataset.make_one_shot_iterator().get_next(): Tensor
        a nested structure of tf.Tensors containing the next element of the
        dataset to study
    """
    def input_fn():
        if y is not None:
            dataset = tf.data.Dataset.from_tensor_slices((dict(x), y))
        else:
            dataset = tf.data.Dataset.from_tensor_slices(dict(x))
        if shuffle:
            dataset = dataset.shuffle(
                buffer_size=shuffle_buffer_size,
                seed=shuffle_seed).batch(batch_size).repeat()
        else:
            dataset = dataset.batch(batch_size)
        return dataset.make_one_shot_iterator().get_next()
    return input_fn


batch_size = 50
random_seed = 1
input_fn_train = make_dataset(
    batch_size=batch_size,
    x=train_x,
    y=train_y,
    shuffle=True,
    shuffle_buffer_size=len(train_x),
    shuffle_seed=random_seed)

estimator.train(input_fn=input_fn_train, steps=5000)
But I only get the following error:
INFO:tensorflow:Calling model_fn.
Traceback (most recent call last):
File "main.py", line 125, in <module>
estimator.train(input_fn=input_fn_train, steps=5000)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1154, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/contrib/estimator/python/estimator/rnn.py", line 512, in _model_fn
config=config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/contrib/estimator/python/estimator/rnn.py", line 332, in _rnn_model_fn
logits, sequence_length_mask = logit_fn(features=features, mode=mode)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/contrib/estimator/python/estimator/rnn.py", line 226, in rnn_logit_fn
features=features, feature_columns=sequence_feature_columns)
File "/root/.local/lib/python3.5/site-packages/tensorflow/contrib/feature_column/python/feature_column/sequence_feature_column.py", line 120, in sequence_input_layer
trainable=trainable)
File "/root/.local/lib/python3.5/site-packages/tensorflow/contrib/feature_column/python/feature_column/sequence_feature_column.py", line 496, in _get_sequence_dense_tensor
sp_tensor, default_value=self.default_value)
File "/root/.local/lib/python3.5/site-packages/tensorflow/python/ops/sparse_ops.py", line 1432, in sparse_tensor_to_dense
sp_input = _convert_to_sparse_tensor(sp_input)
File "/root/.local/lib/python3.5/site-packages/tensorflow/python/ops/sparse_ops.py", line 68, in _convert_to_sparse_tensor
raise TypeError("Input must be a SparseTensor.")
TypeError: Input must be a SparseTensor.
So I don't understand what I've done wrong: the documentation says we have to give sequence feature columns to the RNNClassifier, and it says nothing about providing a SparseTensor.
Thanks in advance for your help and advice.
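For what it is worth, a hedged sketch (not a verified fix for the code above) of the input shape a sequence feature column expects: each example is an entire sequence, delivered as a tf.SparseTensor keyed by the column name, rather than one scalar per CSV row. Grouping the toy data from idSeq 0 by kind could look like this:

import tensorflow as tf

def toy_input_fn():
    # Three variable-length sequences of 'size' values (dwarf, human, elve),
    # expressed as the SparseTensor a sequence column can consume.
    sizes = tf.SparseTensor(
        indices=[[0, 0], [0, 1], [0, 2],
                 [1, 0], [1, 1],
                 [2, 0], [2, 1], [2, 2]],
        values=[117.6, 134.4, 119.0, 168.0, 145.25, 153.9, 218.5, 210.9],
        dense_shape=[3, 3])
    labels = tf.constant([0, 1, 2])
    return {'size': sizes}, labels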

Tensorflow tutorial estimator Failed to convert object of type <type 'dict'> to Tensor

I am running the tutorial code from A Guide to TF Layers: Building a Convolutional Neural Network, on API r1.3:
https://www.tensorflow.org/tutorials/layers
My code is here.
https://gist.github.com/Po-Hsuan-Huang/91e31d59fd3aa07f40272b75fe2a924d
The error shows:
runfile('/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py', wdir='/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST')
Extracting MNIST-data/train-images-idx3-ubyte.gz
Extracting MNIST-data/train-labels-idx1-ubyte.gz
Extracting MNIST-data/t10k-images-idx3-ubyte.gz
Extracting MNIST-data/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_tf_random_seed': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_save_checkpoints_steps': None, '_model_dir': '/tmp/mnist_convnet_model', '_save_summary_steps': 100}
Traceback (most recent call last):
File "<ipython-input-1-c9b70e26f791>", line 1, in <module>
runfile('/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py', wdir='/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST')
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
builtins.execfile(filename, *where)
File "/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 129, in <module>
main(None)
File "/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 117, in main
hooks=[logging_hook])
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 241, in train
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 630, in _train_model
model_fn_lib.ModeKeys.TRAIN)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 615, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 24, in cnn_model_fn
input_layer = tf.reshape(features, [-1, 28, 28, 1])
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2619, in reshape
name=name)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 493, in apply_op
raise err
TypeError: Failed to convert object of type <type 'dict'> to Tensor. Contents: {'x': <tf.Tensor 'random_shuffle_queue_DequeueMany:1' shape=(100, 784) dtype=float32>}. Consider casting elements to a supported type.
I traced it down a little and found that the function estimator._call_input_fn() does not use the parameter 'mode' at all, and is thus unable to create a tuple comprising features and labels. Does the tutorial need to be modified, or is there some problem with this function? I don't understand why mode is unused here.
Thanks!
def _call_input_fn(self, input_fn, mode):
    """Calls the input function.

    Args:
      input_fn: The input function.
      mode: ModeKeys

    Returns:
      Either features or (features, labels) where features and labels are:
        features - `Tensor` or dictionary of string feature name to `Tensor`.
        labels - `Tensor` or dictionary of `Tensor` with labels.

    Raises:
      ValueError: if input_fn takes invalid arguments.
    """
    del mode  # unused
    input_fn_args = util.fn_args(input_fn)
    kwargs = {}
    if 'params' in input_fn_args:
        kwargs['params'] = self.params
    if 'config' in input_fn_args:
        kwargs['config'] = self.config
    with ops.device('/cpu:0'):
        return input_fn(**kwargs)
Your gist doesn't actually contain any of your code... Either way, from your error message I think you have just mistranscribed a bit of code from the tutorial.
Your error log indicates you have
"/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 24, in cnn_model_fn
input_layer = tf.reshape(features, [-1, 28, 28, 1])
Whereas the tutorial has:
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
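For context, a minimal sketch of why the dict appears at all (train_data and train_labels stand in for the MNIST arrays the tutorial loads): the tutorial's input function packs the features into a dict keyed by 'x', so the model_fn has to index into it before reshaping.

# numpy_input_fn delivers features as {'x': <Tensor>}, which is exactly the
# dict shown in the error message; hence features["x"] in cnn_model_fn.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": train_data},
    y=train_labels,
    batch_size=100,
    num_epochs=None,
    shuffle=True)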

Running LinearClassifier.fit with SparseTensor

I'm trying to create a LinearClassifier with a sparse binary NumPy COO matrix (reports) using a SparseTensor. This is with TensorFlow 0.9.0.
I do this as follows:
reports_indices = list()
rows, cols = reports.nonzero()
for row, col in zip(rows, cols):
    reports_indices.append([row, col])
x_sparsetensor = tf.SparseTensor(
    indices=reports_indices,
    values=[1] * len(reports_indices),
    shape=[reports.shape[0], reports.shape[1]])
The dimensions of reports are 10K by 1.5K.
I then setup the LinearClassifier as follows:
m = tf.contrib.learn.LinearClassifier()
m.fit(x=x_sparsetensor,y=response_vector.todense(),input_fn=None)
The response vector is binary and has a length of 10K. This results in the following error:
Traceback (most recent call last):
File "ddi_prr.py", line 38, in <module>
m.fit(x=x_sparsetensor,y=response_vector.todense(),input_fn=None)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 173, in fit
input_fn, feed_fn = _get_input_fn(x, y, batch_size)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 67, in _get_input_fn
x, y, n_classes=None, batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/io/data_feeder.py", line 117, in setup_train_data_feeder
X, y, n_classes, batch_size, shuffle=shuffle, epochs=epochs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/io/data_feeder.py", line 240, in __init__
batch_size)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/io/data_feeder.py", line 44, in _get_in_out_shape
x_shape = list(x_shape[1:]) if len(x_shape) > 1 else [1]
TypeError: object of type 'Tensor' has no len()
Is my construction incorrect for some reason? It seems that LinearClassifier.fit can't be called with a SparseTensor for x; is that true? Thanks in advance for any help.
As far as I know, passing SparseTensors as x or y arguments to .fit is not supported:
x: matrix or tensor of shape [n_samples, n_features...]. Can be
iterator that returns arrays of features. The training input samples
for fitting the model. If set, input_fn must be None.
Also, SparseTensor is a sparse equivalent of Tensor -- an object representing symbolic computation to be executed. I believe what you would like to use as x is SparseTensorValue.
You can try passing the data another way, through the Estimator's input_fn argument:
def get_input_fn(sparse_x, y):
    def input_fn():
        return sparse_x, y
    return input_fn

m.fit(input_fn=get_input_fn(x_sparsetensor, response_vector.todense()))
If that doesn't work, you may try producing the SparseTensor inside the input_fn function.
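A fuller sketch of that suggestion, under the question's TF 0.9-era API (the shape= keyword later became dense_shape=); the 'reports' feature key is made up for illustration:

import tensorflow as tf

def get_input_fn(indices, values, shape, labels):
    def input_fn():
        # Build the SparseTensor here so it is created inside the graph
        # that the Estimator actually runs.
        x = tf.SparseTensor(indices=indices, values=values, shape=shape)
        y = tf.constant(labels)
        return {'reports': x}, y
    return input_fn

m = tf.contrib.learn.LinearClassifier()
m.fit(input_fn=get_input_fn(reports_indices,
                            [1.0] * len(reports_indices),
                            [reports.shape[0], reports.shape[1]],
                            response_vector.todense()),
      steps=100)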