I am using TensorFlow to test my trained model on test images. I am feeding the images to TensorFlow as follows:
image_ab, image_aba = sess.run(fetches, feed_dict={self.image_a: image_a,
self.is_train: False})
I printed image_a and image_ab and observed that image_a is not in the same order as the input images I provide.
I need the output to be in the same order as the input images.
Does TensorFlow usually take input in the same order as it is given?
I assume you mean that image_ab is not in the same order, because image_a is the input that you feed to TensorFlow. If that input is not ordered correctly, the problem is in your preprocessing, not in TensorFlow.
TensorFlow usually works on batches of data. For images, the convention for the batch dimensions is:
[batch, height, width, channels]
The operations that tensorflow performs are parallelized along the batch. If you simply plug convolutional layers together, the order of the batch should be preserved.
However, it is certainly possible to reorder things in TensorFlow:
import numpy as np
import tensorflow as tf

x = tf.placeholder(shape=(2, 1), dtype="float32")
# concatenating the second entry before the first reverses the batch order
y = tf.concat([x[1], x[0]], axis=0)

sess = tf.Session()
sess.run([x, y], feed_dict={x: np.random.rand(2, 1)})
This code reads in x, changes the order of its entries, and produces y. So TensorFlow can reorder your images. You could search your code for a pattern like the one in my example.
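If you want to rule this out, here is a small sanity check (a sketch in the same TF 1.x style; the shapes are hypothetical): run a convolution on the whole batch and on the first image alone, and compare the outputs. Matching rows indicate that the batch order is preserved.
import numpy as np
import tensorflow as tf

# a variable batch dimension lets us feed the full batch or a single image
imgs = tf.placeholder(shape=(None, 8, 8, 3), dtype="float32")
conv = tf.layers.conv2d(imgs, filters=4, kernel_size=3, padding="same")

sess = tf.Session()
sess.run(tf.global_variables_initializer())

batch = np.random.rand(2, 8, 8, 3).astype("float32")
full = sess.run(conv, feed_dict={imgs: batch})       # whole batch
first = sess.run(conv, feed_dict={imgs: batch[:1]})  # first image alone
print(np.allclose(full[0], first[0]))                # True if the order is preserved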
I have an autoencoder defined using tf.keras in TensorFlow 1.15. I cannot upgrade to TensorFlow 2.0 for specific reasons.
This particular autoencoder is used for anomaly detection. I currently compute the AUC score of the autoencoder as follows:
All anomalous inputs are labelled 1 and all normal inputs are labelled 0. This is y_true
I feed the autoencoder with unseen inputs and then measure the reconstruction error, like so: errors = np.mean(np.square(data - model.predict(data)), axis=-1)
These per-sample errors are then used as the predicted scores, y_pred.
I then compute the AUC using auc = metrics.roc_auc_score(y_true, y_pred).
This approach works well. I now need to move towards using tf.data.Dataset to feed in my data; previously it was numpy arrays. The issue is that I am unable to convert a tf.data.Dataset to a numpy array, and hence unable to compute the mean squared error as in step 2.
Once I have a tf.data.Dataset, I feed it for prediction like so: results = model.predict(x_test)
This yields a numpy array, results. I want to compute the mean squared error of results against x_test. However, x_test is of type tf.data.Dataset. So the question is: how can I convert a tf.data.Dataset to a numpy array in TensorFlow 1.15, or what is an alternative method to do this?
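One approach that should work in TF 1.15 (a sketch; it assumes x_test yields plain feature batches and reuses model from the question) is to drain the dataset into numpy inside a session via a one-shot iterator:
import numpy as np
import tensorflow as tf

iterator = x_test.make_one_shot_iterator()
next_batch = iterator.get_next()

# collect every batch until the dataset is exhausted
batches = []
with tf.Session() as sess:
    try:
        while True:
            batches.append(sess.run(next_batch))
    except tf.errors.OutOfRangeError:
        pass

data = np.concatenate(batches, axis=0)
errors = np.mean(np.square(data - model.predict(data)), axis=-1)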
I'm using TensorFlow with Keras to train a char-RNN on Google Colab. I train my model for 10 epochs and save it using model.save(), as shown in the documentation for saving models. Immediately afterwards, I load it again just to check, and when I call model.fit() on the loaded model I get a "Dimensions must be equal" error, using the exact same training set. The training data is in a tensorflow dataset organised in batches, as shown in the documentation for tf datasets. Here is a minimal working example:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

X = np.random.randint(0, 50, (10000,))
seq_len = 150
batch_size = 20

# build (input, target) pairs of consecutive characters, then shuffle and batch
dataset = tf.data.Dataset.from_tensor_slices(X)
dataset = dataset.batch(seq_len + 1, drop_remainder=True)
dataset = dataset.map(lambda x: (x[:-1], x[1:]))
dataset = dataset.shuffle(20).batch(batch_size, drop_remainder=True)
def make_model(vocabulary_size, embedding_dimension, rnn_units, batch_size, stateful):
    model = Sequential()
    model.add(Embedding(vocabulary_size, embedding_dimension,
                        batch_input_shape=[batch_size, None]))
    model.add(LSTM(rnn_units, return_sequences=True, stateful=stateful))
    model.add(Dense(vocabulary_size))
    model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model
vocab_size = 51
emb_dim = 20
rnn_units = 10

model = make_model(vocab_size, emb_dim, rnn_units, batch_size, False)
model.fit(dataset, epochs=10)
model.save('/content/test_model')

model2 = tf.keras.models.load_model('/content/test_model')
model2.fit(dataset, epochs=10)
The first training line, model.fit(), runs fine, but the last line returns the error:
ValueError: Dimensions must be equal, but are 20 and 150 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [20], [20,150].
I want to be able to resume training later, as my real dataset is much larger. Therefore, saving only the weights is not an ideal option.
Any advice?
Thanks!
If you have saved checkpoints, then you can resume from those checkpoints, even with a reduced dataset. Your neural network layers and dimensions must stay the same.
The problem is the 'accuracy' metric. For some reason, there is some mishandling of dimensions on the predictions when the model is loaded with this metric, as I found in this thread (see last comment). Running model.compile() on the loaded model with the same metric allows training to continue. However, it shouldn't be necessary to compile the model again. Moreover, this means that the optimiser state is lost, as explained in this answer; thus, it is not very useful for resuming training.
On the other hand, using 'sparse_categorical_accuracy' from the start works just fine. I am able to load the model and continue training without having to recompile. In hindsight, this choice is more appropriate given that the outputs of my last layer are logits over the distribution of characters; this is not a binary but a multiclass classification problem. Nonetheless, I verified that both 'accuracy' and 'sparse_categorical_accuracy' returned the same values in my specific example. Thus, I believe that Keras is internally converting 'accuracy' to a categorical accuracy, but something goes wrong when doing this on a model that has just been loaded, which forces the need to recompile.
I also verified that if the saved model was compiled with 'accuracy', loading the model and recompiling with 'sparse_categorical_accuracy' will allow resuming training. However, as mentioned before, this would discard the state of the optimiser and I suspect that it would be no better than just making a new model and loading only the weights from the saved one.
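For concreteness, here is a sketch of the two options described above, reusing the names from the minimal example (note that recompiling discards the optimiser state):
# option 1: use 'sparse_categorical_accuracy' from the start; loading then just works
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer='adam', metrics=['sparse_categorical_accuracy'])

# option 2: if the saved model was compiled with 'accuracy', recompile after loading
model2 = tf.keras.models.load_model('/content/test_model')
model2.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
               optimizer='adam', metrics=['sparse_categorical_accuracy'])
model2.fit(dataset, epochs=10)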
Reposting my original question since, even after significant improvements to clarity, it was not revived by the community.
I am looking for a way to split feature and corresponding label data into train and test using TensorFlow inbuilt methods. My data is already in two tensors (i.e. tf.Tensor objects), named features and labels.
I know how to do this easily for numpy arrays using sklearn.model_selection as shown in this post. Additionally, I was pointed to this method which requires the data to be in a single tensor. Also, I need the train and test sets to be disjoint, unlike in this method (meaning they can't have common data points after the split).
I am looking for a way to do the same using built-in methods in TensorFlow. There may be too many conditions in my requirement, but basically what is needed is an equivalent of sklearn.model_selection.train_test_split() in TensorFlow, such as the below:
import tensorflow as tf

X_train, X_test, y_train, y_test = tf.train_test_split(features,
                                                       labels,
                                                       test_size=0.1,
                                                       random_state=123)
You can achieve this by using TF in the following way:
from typing import Tuple
import tensorflow as tf

def split_train_test(features: tf.Tensor,
                     labels: tf.Tensor,
                     test_size: float,
                     random_state: int = 1729) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
    # Generate random masks
    random = tf.random.uniform(shape=(tf.shape(features)[0],), seed=random_state)
    train_mask = random >= test_size
    test_mask = random < test_size
    # Gather values
    train_features, train_labels = tf.boolean_mask(features, mask=train_mask), tf.boolean_mask(labels, mask=train_mask)
    test_features, test_labels = tf.boolean_mask(features, mask=test_mask), tf.boolean_mask(labels, mask=test_mask)
    return train_features, test_features, train_labels, test_labels
What we are doing here is first creating a random uniform tensor with the same length as the data. Then we create boolean masks according to the ratio given by test_size, and finally extract the relevant parts for train/test using tf.boolean_mask. Note that because the mask is random, the split ratio is only approximately test_size, but the two sets are always disjoint.
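A quick usage sketch with hypothetical toy tensors:
import tensorflow as tf

features = tf.random.normal((100, 4))  # hypothetical feature tensor
labels = tf.range(100)                 # hypothetical label tensor
X_train, X_test, y_train, y_test = split_train_test(features, labels, test_size=0.1)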
I have time series data (ECG) with annotations for blocks of 30 seconds. Each block has 1000 data points, and we have 500 of those data blocks. The targets (annotations) are in the range 1 to 5 (see the figure for clarity).
About the X data:
How do I translate that into the Keras notation for input data, [samples, timesteps, features]?
My guess:
samples = blocks (500)
timesteps = values (1000)
features = the ECG signal itself (1)
resulting in [500, 1000, 1].
About the y data (target):
My target (y) data would result in [500, 1, 1]; after one-hot encoding it would be [500, 5, 1].
The problem is that Keras expects the X and y data to be of the same dimensions, but increasing my y data to 1000 per timestep would not make sense to me.
Thanks for your help.
P.S. I cannot answer directly as I am with my parents-in-law. Thanks in advance.
I think you're thinking about y incorrectly. From my understanding, based on your graph, y is actually (500, 5) after one-hot encoding. That is, for every block there is a single outcome.
Also, there is no need for X and y to have the same dimensions in Keras (unless you have a seq2seq requirement, which is not the case here). What we do want is for the model to give us a probability distribution over the possible labels for each block, and we'll achieve that using a softmax on the last (Dense) layer.
Here is how I simulated your problem:
import numpy as np
from keras.models import Model
from keras.layers import Dense, LSTM, Input

# using eye doesn't capture one-hot but works for the example
series = np.random.rand(500, 1000, 1)
labels = np.eye(500, 5)

inp = Input(shape=(1000, 1))
lstm = LSTM(128)(inp)
out = Dense(5, activation='softmax')(lstm)

model = Model(inputs=[inp], outputs=[out])
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(series, labels)
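Since np.eye is only a stand-in, here is a sketch of building proper one-hot labels from integer annotations (the integer labels below are hypothetical):
import numpy as np
from keras.utils import to_categorical

y_int = np.random.randint(0, 5, 500)           # hypothetical integer label per block
labels = to_categorical(y_int, num_classes=5)  # shape (500, 5)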
Suppose I have a data set with: number of observations = 1000, each observation a sequence of fixed length = 10 (let's say), and each point in the sequence having 2 numerical features. How can we input such data to an RNN in TensorFlow?
Any small suggestions are also accepted. Thanks!
According to your description, your dataset is 1000x10x2, which looks something like this:
import numpy as np
data = np.random.randint(0, 10, [1000, 10, 2])
Now, as you said, your sequences are of fixed size, so you don't need padding. You just have to decide the batch_size and the number of iterations. Suppose the batch size is 5:
batch_size = 5
iterations = len(data) // batch_size
Now feed your input to the TensorFlow LSTM cell. Your model would look something like this. Here is an example without batching:
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

data = np.random.randint(0, 10, [1000, 10, 2])
input_x = tf.placeholder(tf.float32, [1000, 10, 2])

with tf.variable_scope('encoder') as scope:
    cell = rnn.LSTMCell(150)
    # dynamic_rnn returns (outputs, final_state); for an LSTM the state is a (c, h) tuple
    model = tf.nn.dynamic_rnn(cell, inputs=input_x, dtype=tf.float32)
    output_, (fs, fc) = model

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output = sess.run(model, feed_dict={input_x: data})
    print(output)
If you want to use batches, define the placeholder with a variable batch dimension ([None, 10, 2]) and feed one batch of data at a time; the LSTM still expects rank-3 input [batch, time, features].
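A minimal sketch of that batched variant, assuming the same TF 1.x setup as above:
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

data = np.random.randint(0, 10, [1000, 10, 2])
batch_size = 5

# None lets the same placeholder accept any batch size
input_x = tf.placeholder(tf.float32, [None, 10, 2])
with tf.variable_scope('encoder'):
    cell = rnn.LSTMCell(150)
    outputs, state = tf.nn.dynamic_rnn(cell, inputs=input_x, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(len(data) // batch_size):
        batch = data[i * batch_size:(i + 1) * batch_size]
        out = sess.run(outputs, feed_dict={input_x: batch})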