I have an issue with tf.data datasets and tf.keras's predict(). I don't know why the length of the output array of predict() is larger than the original length of the data used. Here is a sketch:
Before, I used arrays, and if I applied predict() to an array of length x I got an output of length x. That is my expected behaviour.
I have a CSV of test data with some length (10,000 rows). Now I use
LABEL_COLUMN = 'label'
LABELS = [0, 1]
def get_dataset(file_path, **kwargs):
    dataset = tf.data.experimental.make_csv_dataset(
        file_path,
        batch_size=1,  # Artificially small to make examples easier to show.
        label_name=LABEL_COLUMN,
        na_value="?",
        num_epochs=1,
        ignore_errors=True,
        **kwargs)
    return dataset
to convert this to a tf.data.Dataset.
val = 'data/test.csv'
val_data = get_dataset(val)
Now using
scores=bert_model.predict(val_data)
gives an output array which is much larger than the length of the original CSV file (10,000).
I am really confused. I also ask myself how Keras knows which "keys" of the tf.data.Dataset to use for predictions.
The structure of the first element of the dataset ("val[0]") looks like this:
({'input_ids': <tf.Tensor: shape=(15,), dtype=int32, numpy=
array([ 3, 2019, 479, 1169, 4013, 26918, 259, 4, 14576,
3984, 889, 648, 1610, 26918, 4])>, 'token_type_ids': <tf.Tensor: shape=(15,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])>, 'attention_mask': <tf.Tensor: shape=(15,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])>}, <tf.Tensor: shape=(), dtype=int64, numpy=0>)
Why does my label column have no key named "label"? The first three keys all have their names, and the model was trained with these three columns.
I use the above structure, with the label column, as input for predict().
Any idea? Is it due to the function that makes a dataset from a CSV?
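For reference, a quick way to double-check how many examples the dataset actually yields (a rough sketch, assuming the whole dataset fits in memory; with batch_size=1 and num_epochs=1 each element is one row):
n_examples = sum(1 for _ in val_data)
print(n_examples)  # I would expect this to match the 10,000 CSV rows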
I am using TensorFlow 2.1 and trying to use variable-length input sequences for a recurrent neural network after conversion to TFLite. I first converted my Keras model to a TFLite model using the TFLite Converter. I built the model with input shapes [(None, 40), (1, 6, 2, 32)], but after conversion to TFLite the model only accepts the input shape [(1, 40), (1, 6, 2, 32)]. I want my TFLite model to accept variable values for None. I tried to resize my tensor input using resize_tensor_input, but the shape of the input is still not changing. I have pasted my code snippet and its output below.
interpreter = tf.lite.Interpreter(model_path=my_model_path)
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(0, [3, 40], strict=False)
interpreter.allocate_tensors()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]["index"], np.random.uniform(size=[3, 40]).astype(np.float32))
interpreter.set_tensor(input_details[1]["index"], np.zeros(shape=[1, 6, 2, 32]).astype(np.float32))
tf.print("Input details : ", input_details)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
final_state = interpreter.get_tensor(output_details[1]["index"])
I printed my input details inside the code pasted above and the output is pasted below. Here I am getting my first input shape as [1, 40] instead of [3, 40].
Input details : [{'dtype': <class 'numpy.float32'>,
'index': 0,
'name': 'input_1',
'quantization': (0.0, 0),
'shape': array([ 1, 40], dtype=int32)},
{'dtype': <class 'numpy.float32'>,
'index': 1,
'name': 'input_2',
'quantization': (0.0, 0),
'shape': array([ 1, 6, 2, 32], dtype=int32)}]
Output shape : (1, 1, 32)
Final state shape : (1, 6, 2, 32)
What am I doing wrong in the above code? Or, if my approach to achieving the desired result is wrong, please help me find the right method, or any other workaround.
object_for_each_prior = tf.constant([1 for i in range(8732)])
-><tf.Tensor: shape=(8732,), dtype=int32, numpy=array([1, 1, 1, ..., 1, 1, 1], dtype=int32)>
Then, if I want to get the values at positions 1148 and 1149:
prior_for_each_object = tf.constant([1148,1149])
object_for_each_prior[prior_for_each_object]
Then I got the following error
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Tensor: shape=(2,), dtype=int32, numpy=array([1148, 1149], dtype=int32)>
If I want to get a tensor's values by index, how should I approach it?
Use the tf.gather_nd function to index tensors.
Here's an example:
>>> object_for_each_prior = tf.constant([1 for i in range(8732)])
>>> prior_for_each_object = tf.gather_nd(object_for_each_prior, indices=[[1148], [1149]])
>>> prior_for_each_object
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 1])>
>>> prior_for_each_object.numpy()
array([1, 1])
Refer to the tf.gather_nd documentation to learn more.
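For simple indexing along a single axis, tf.gather with a plain list of indices should also work (a small sketch using the same tensor as above):
>>> tf.gather(object_for_each_prior, [1148, 1149])
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 1], dtype=int32)>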
I'm having a bit of trouble with this. To start off, here is what my data looks like:
test_data, test_labels, train_data, train_labels
train_data[0]
[1, 5, 5, 0, 0, 1, 1, 1, 25, 1, 1, 10, 0, 1, 1, 1, 0, 1, 39, 2, 0, 1, 1, 12, 3]
train_labels[0]
0
It's exactly the same for test_data and test_labels (it's just a 50/50 split of the input data). The array size for each array in test_data will always be 25 elements. The label is either 0 for good or 1 for bad.
Now, I've tried lots of things so far and can't figure out how to reshape these arrays. I'm essentially trying to do this:
model.add(keras.layers.LSTM(256, input_shape=unknown, return_sequences=False, return_state=False, dropout=0.2))
model.add(keras.layers.Dense(256))
model.add(keras.layers.Dropout(0.3))
model.add(keras.layers.Dense(2, activation=tf.nn.softmax))
history = self.model.fit(self.train_data,
                         self.train_labels,
                         epochs=50,
                         batch_size=64,
                         verbose=1,
                         validation_split=0.2)
Another question, is 2 correct for the last dense layer, or should it be 1 in this case?
I am learning TensorFlow and building a multilayer perceptron model. I am looking at some examples like the one at: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/multilayer_perceptron.ipynb
I have some questions about the code below:
def multilayer_perceptron(x, weights, biases):
    :
    :
pred = multilayer_perceptron(x, weights, biases)
:
:
with tf.Session() as sess:
    sess.run(init)
    :
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: X_test, y: y_test_onehot}))
I am wondering what tf.argmax(pred, 1) and tf.argmax(y, 1) mean and return (type and value), exactly. And is correct_prediction a variable rather than real values?
Finally, how do we get the y_test_prediction array (the prediction result when the input data is X_test) from the tf session? Thanks a lot!
tf.argmax(input, axis=None, name=None, dimension=None)
Returns the index with the largest value across axis of a tensor.
input is a Tensor and axis describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
For your specific case, let's use two arrays to demonstrate this:
pred = np.array([[31, 23, 4, 24, 27, 34],
[18, 3, 25, 0, 6, 35],
[28, 14, 33, 22, 20, 8],
[13, 30, 21, 19, 7, 9],
[16, 1, 26, 32, 2, 29],
[17, 12, 5, 11, 10, 15]])
y = np.array([[31, 23, 4, 24, 27, 34],
[18, 3, 25, 0, 6, 35],
[28, 14, 33, 22, 20, 8],
[13, 30, 21, 19, 7, 9],
[16, 1, 26, 32, 2, 29],
[17, 12, 5, 11, 10, 15]])
Evaluating tf.argmax(pred, 1) gives a tensor whose evaluation will give array([5, 5, 2, 1, 3, 0])
Evaluating tf.argmax(y, 1) gives a tensor whose evaluation will give array([5, 5, 2, 1, 3, 0])
tf.equal(x, y, name=None) takes two tensors(x and y) as inputs and returns the truth value of (x == y) element-wise.
Following our example, tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) returns a tensor whose evaluation will give array([True, True, True, True, True, True]).
correct_prediction is a tensor whose evaluation will give a 1-D array of 0's and 1's
y_test_prediction can be obtained by evaluating tf.argmax(pred, 1), i.e. the index of the highest logit for each test example.
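A minimal sketch of this, reusing the names from the question (sess, pred, x, and X_test are assumed to be defined as in your code):
y_test_prediction = sess.run(tf.argmax(pred, 1), feed_dict={x: X_test})
# or, equivalently, inside the `with tf.Session() as sess:` block:
# y_test_prediction = tf.argmax(pred, 1).eval({x: X_test})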
The documentation for tf.argmax and tf.equal can be accessed by following the links below.
tf.argmax() https://www.tensorflow.org/api_docs/python/math_ops/sequence_comparison_and_indexing#argmax
tf.equal() https://www.tensorflow.org/versions/master/api_docs/python/control_flow_ops/comparison_operators#equal
Reading the documentation:
tf.argmax
Returns the index with the largest value across axes of a tensor.
tf.equal
Returns the truth value of (x == y) element-wise.
tf.cast
Casts a tensor to a new type.
tf.reduce_mean
Computes the mean of elements across dimensions of a tensor.
Now you can easily explain what the code does. Your y is one-hot encoded, so it has a single 1 and all other entries are zero. Your pred represents the class probabilities (or logits). So argmax finds the position of the best prediction and of the correct value. After that, you check whether they are the same.
So now your correct_prediction is a vector of True/False values with size equal to the number of instances you want to predict. You convert it to floats and take the average.
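As a tiny standalone illustration of the cast-and-average step (the values here are made up, not from the question):
correct_prediction = tf.constant([True, False, True, True])
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))  # evaluates to (1 + 0 + 1 + 1) / 4 = 0.75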
Actually, this part is nicely explained in the TF tutorial, in the "Evaluate the Model" part.
tf.argmax(input, axis=None, name=None, dimension=None)
Returns the index with the largest value across axis of a tensor.
For this specific case, it receives pred as its input argument and 1 as the axis. The axis describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
Example: given the list [2.11, 1.0021, 3.99, 4.32], argmax will return 3, which is the index of the highest value.
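As a quick check in code (assuming eager execution; in a TF 1.x session you would evaluate the tensor instead):
tf.argmax([2.11, 1.0021, 3.99, 4.32])  # <tf.Tensor: ... numpy=3>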
correct_prediction is a tensor that will be evaluated later. It is not a regular python variable. It contains the necessary information to compute the value later.
For this specific case, it will be part of another tensor, accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")), and will be evaluated by calling accuracy.eval({x: X_test, y: y_test_onehot}).
y_test_prediction should be your correct_prediction tensor.
For those who do not have much time to understand tf.argmax:
x = np.array([[1, 9, 3],[4, 5, 6]])
tf.argmax(x, axis = 0)
Output:
[array([1, 0, 1], dtype=int64)]
tf.argmax(x, axis = 1)
Output:
[array([1, 2], dtype=int64)]
I get the following error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 6 arrays but instead got the following list of 3 arrays: [array([[ 0, 0, 0, ..., 18, 12, 1],
[ 0, 0, 0, ..., 18, 11, 1],
[ 0, 0, 0, ..., 18, 9, 1],
...,
[ 0, 0, 0, ..., 18, 15, 1],
[ 0, 0, 0, ..., 18, 9, ...
in my keras model.
I think the model is misinterpreting something?
This happens when I feed input to my model. The same input works perfectly well in another program.
It's impossible to diagnose your exact problem without more information.
I usually specify the input_shape parameter of the first layer based on my training data X.
e.g.
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(32, input_shape=X.shape[1:]))  # input_shape is the per-sample shape, without the batch dimension
I think you'll want X to look something like this:
[
[[ 0, 0, 0, ..., 18, 11, 1]],
[[ 0, 0, 0, ..., 18, 9, 1]],
....
]
So you could try reshaping it with the following line:
X = np.array([[sample] for sample in X])
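A quick way to sanity-check the result (num_samples and seq_len here stand in for your actual dimensions):
print(X.shape)  # expected: (num_samples, 1, seq_len)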
The problem really comes from giving the wrong input to the network.
In my case, the problem was that my custom image generator was passing the entire dataset as input rather than a single image-label batch. This was because I thought that Keras's generator.flow(x, y, batch_size) already had a yield structure inside; however, the correct generator structure should be as follows (with a separate yield):
def generator(batch_size):
    (images, labels) = utils.get_data(1000)  # gets 1000 samples from the dataset
    labels = to_categorical(labels, 2)
    generator = ImageDataGenerator(featurewise_center=True,
                                   featurewise_std_normalization=True,
                                   rotation_range=90.,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   zoom_range=0.2)
    generator.fit(images)
    gen = generator.flow(images, labels, batch_size=32)
    while 1:
        x_batch, y_batch = gen.next()
        yield ([x_batch, y_batch])
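A rough sketch of how this generator could then be consumed (model, steps_per_epoch and epochs here are placeholders, not from the original post):
model.fit_generator(generator(batch_size=32),
                    steps_per_epoch=1000 // 32,
                    epochs=10)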
I realize the question is old but it might save some time for someone to find the issue.