Trying to understand shuffle within mini-batch in tensorflow Dataset - tensorflow

From here I understand what shuffle, batch and repeat do. I'm working on Medical image data where each mini-batch has slices from one patient record. I'm looking for a way to shuffle within the minibatch while training. I cannot increase the buffer size because I don't want slices from different records to get mixed up. Could someone please explain how this can be done?
dataset = tf.data.Dataset.from_tensor_slices(tf.range(1, 20))
data = dataset.batch(5).shuffle(5).repeat(1)
for element in data.as_numpy_iterator():
    print(element)
Current Output :
[ 6 7 8 9 10]
[1 2 3 4 5]
[11 12 13 14 15]
[16 17 18 19]
Expected Output :
[ 6 8 9 7 10]
[3 4 1 5 2]
[15 12 11 14 13]
[18 16 19 17]

I just realized there is no need to shuffle within the mini-batch, as shuffling within a mini-batch doesn't contribute to improving training in any way. I'd appreciate it if anyone has other views on this.
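For completeness, if within-batch shuffling is still wanted, one way to get it (a minimal sketch, not necessarily the only approach) is to batch first and then map tf.random.shuffle over each mini-batch; tf.random.shuffle permutes along the first axis, so slices from different records are never mixed:

import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(tf.range(1, 20))
data = (dataset.batch(5)                             # keep each record's slices together
               .shuffle(5)                           # shuffle the order of the mini-batches
               .map(lambda b: tf.random.shuffle(b))  # shuffle elements within each mini-batch
               .repeat(1))

for element in data.as_numpy_iterator():
    print(element)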

Related

Tensorflow LSTM: Input data is of wrong dimension

My tensor training data set structure is as follows:
[[1 3 0 99 4 ... 9 0 9],
...
[3 2 3 1 3 ... 9 9 8]]
In which there are 798 "rows" or individual arrays and each array is of length 9000, or has 9000 "columns".
The label data is like this: [ [1 0 1 0 0 ... 0 0 0 1]]. It is one array with 798 columns.
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(100,batch_input_shape=(798,1000,9000)))
model.compile(loss='mean_absolute_error',optimizer='adam',metrics=['accuracy'])
model.fit(training_data,training_labels,batch_size=150,epochs=10,validation_data=(validation_data,validation_labels))
But I kept getting the error:
ValueError: Input 0 of layer sequential_2 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 9000]
I'm not sure what I'm doing wrong. Should I specify the batch_input_shape? Should I specify the input_shape? What do samples, batch size, time step, and features mean in relation to my data?
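For reference, a minimal sketch of the shapes Keras expects here, assuming each of the 9000 values is one time step with a single feature; the placeholder arrays, the Dense head, and the binary cross-entropy loss are illustrative assumptions, not the data described above:

import numpy as np
import tensorflow as tf

# Assumed layout: 798 samples, each a 1D sequence of 9000 values.
training_data = np.random.rand(798, 9000).astype(np.float32)  # placeholder data
training_labels = np.random.randint(0, 2, size=(798,))        # placeholder labels

# LSTM layers expect 3D input: (samples, time_steps, features).
# Here: 798 samples, 9000 time steps, 1 feature per step.
training_data = training_data[..., np.newaxis]                # -> (798, 9000, 1)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(100, input_shape=(9000, 1)))   # no batch size baked into the shape
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))     # one prediction per sample
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(training_data, training_labels, batch_size=150, epochs=10)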

Reading text file with multiple blocks separated by #

I have a text file with multiple blocks separated by #. The number of rows in each block is different. I would like to integrate a variable over each block. The text file looks like the following:
# a b c
### grid 1
1 2 3
2 3 4
3 4 5
### grid 2
11 12 13
12 13 14
13 14 15
### grid 3
21 22 23
22 23 24
23 24 25
24 25 26
I would like to integrate a*c for each block. Using block one as an example, the result should be 1*3 + 2*4 + 3*5. Any ideas to implement it using numpy or pandas?
Once you have loaded a block into memory, you'll get an array like:
In [115]: arr = np.arange(1,4)+np.arange(0,3)[:,None]
In [116]: arr
Out[116]:
array([[1, 2, 3],
       [2, 3, 4],
       [3, 4, 5]])
the sum of products is then easy:
In [117]: np.dot(arr[:,0], arr[:,2])
Out[117]: 26
In [118]: 1*3+2*4+3*5
Out[118]: 26
I found an answer from @Fred Foo, which reads the file quite nicely.
import numpy as np
from itertools import groupby

def contains_data(ln):
    # just an example; there are smarter ways to do this
    return ln[0] not in "#\n"

with open("example") as f:
    datasets = [[ln.split() for ln in group]
                for has_data, group in groupby(f, contains_data)
                if has_data]

dim1 = len(datasets)
cooling_intgrl = np.zeros(dim1)
for i in range(dim1):
    block = np.array(datasets[i]).astype(float)
    length = block[:, 0]
    cooling = block[:, 2]
    result = np.dot(length, cooling)
    cooling_intgrl[i] = result
This works very well for me.
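For the sample file above this gives cooling_intgrl = [26., 506., 2210.]; the first entry matches the hand-computed 1*3 + 2*4 + 3*5 = 26.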

Converting .tflite to .pb

Problem: How can I convert a .tflite (serialised flat buffer) to .pb (frozen model)? The documentation only talks about one-way conversion.
Use-case is: I have a model that was trained and converted to .tflite, but unfortunately I do not have the details of the model and I would like to inspect the graph. How can I do that?
I found the answer here
We can use the Interpreter to analyze the model, and the code looks like the following:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
Netron is the best analysis/visualising tool I have found; it can understand a lot of formats, including .tflite.
I don't think there is a way to restore a tflite back to a pb, as some information is lost after the conversion. An indirect way to get a glimpse of what is inside a tflite model is to read back each of the tensors.
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# try some arbitrary numbers to find out the number of tensors
num_layer = 89
for i in range(num_layer):
    detail = interpreter._get_tensor_details(i)
    print(i, detail['name'], detail['shape'])
and you would see something like the output below. As only a limited set of operations is currently supported, it is not too difficult to reverse engineer the network architecture. I have put some tutorials on my GitHub too.
0 MobilenetV1/Logits/AvgPool_1a/AvgPool [ 1 1 1 1024]
1 MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd [ 1 1 1 1001]
2 MobilenetV1/Logits/Conv2d_1c_1x1/Conv2D_bias [1001]
3 MobilenetV1/Logits/Conv2d_1c_1x1/weights_quant/FakeQuantWithMinMaxVars [1001 1 1 1024]
4 MobilenetV1/Logits/SpatialSqueeze [ 1 1001]
5 MobilenetV1/Logits/SpatialSqueeze_shape [2]
6 MobilenetV1/MobilenetV1/Conv2d_0/Conv2D_Fold_bias [32]
7 MobilenetV1/MobilenetV1/Conv2d_0/Relu6 [ 1 112 112 32]
8 MobilenetV1/MobilenetV1/Conv2d_0/weights_quant/FakeQuantWithMinMaxVars [32 3 3 3]
9 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6 [ 1 14 14 512]
10 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/depthwise_Fold_bias [512]
11 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/weights_quant/FakeQuantWithMinMaxVars [ 1 3 3 512]
12 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Conv2D_Fold_bias [512]
13 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6 [ 1 14 14 512]
14 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/weights_quant/FakeQuantWithMinMaxVars [512 1 1 512]
15 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/Relu6 [ 1 14 14 512]
16 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/depthwise_Fold_bias [512]
17 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/weights_quant/FakeQuantWithMinMaxVars [ 1 3 3 512]
18 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Conv2D_Fold_bias [512]
19 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Relu6 [ 1 14 14 512]
20 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/weights_quant/FakeQuantWithMinMaxVars [512 1 1 512]
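As a side note, recent versions of the TF Lite Interpreter also expose get_tensor_details(), which returns the metadata for every tensor at once, so the tensor count does not have to be guessed. A small sketch, assuming a model file named converted_model.tflite:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# list every tensor's index, name and shape without hard-coding the count
for detail in interpreter.get_tensor_details():
    print(detail['index'], detail['name'], detail['shape'])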
I have done this with TOCO, using tf 1.12:
tensorflow_1.12/tensorflow/bazel-bin/tensorflow/contrib/lite/toco/toco \
  --output_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_format=TFLITE \
  --input_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_array="" \
  --output_array="" \
  --input_shape=1,450,450,3 \
  --dump_graphviz=./
(you can remove the dump_graphviz option)

Keras (tf backend) memory allocation problems

I am using Keras with Tensorflow backend.
I am facing a batch size limitation due to high memory usage.
My data is composed of 4 1D signals treated with a sample size of 801 for each channel, so the global sample size is 3204.
Input data:
4 channels of N 1D signals of length 7003
Input generated by applying a sliding window on 1D signals
Giving input data of shape (N*6203, 801, 4)
N is the number of signals used to build one batch
My Model:
Input 801 x 4
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
Flatten
Dense 2000
Dense 5
With my GPU (Quadro K6000, 12189 MiB) I can fit only N=2 without warnings
With N=3 I get a ran-out-of-memory warning
With N=4 I get a ran-out-of-memory error
It sounds like batch_size is limited by the space used by all the tensors.
Input 801 x 4 x 1
Conv 797 x 4 x 20
MaxPooling 398 x 4 x 20
Conv 394 x 4 x 20
MaxPooling 197 x 4 x 20
Conv 193 x 4 x 20
MaxPooling 96 x 4 x 20
Conv 92 x 4 x 20
Dense 2000
Dense 5
With a 1D signal of length 7001 with 4 channels -> 6201 samples
Total = N*4224 MiB.
N=2 -> 8448 MiB, fits in the GPU
N=3 -> 12672 MiB, works but with warnings: failed to allocate 1.10 GiB, then 3.00 GiB
N=4 -> 16896 MiB, fails with only one message: failed to allocate 5.89 GiB
Does it work like that? Is there any way to reduce the memory usage?
To give a time estimate: 34 batches run in 40 s, and the total N is 10^6.
Thank you for your help :)
Example with python2.7: https://drive.google.com/open?id=1N7K_bxblC97FejozL4g7J_rl6-b9ScCn
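One way to cut the memory usage (a hedged sketch, not a drop-in fix: the class name, the window length of 801, and the array layouts are assumptions for illustration) is to build the sliding windows lazily with a keras.utils.Sequence instead of materialising the whole (N*6203, 801, 4) array up front, so only one batch of windows lives in memory at a time:

import numpy as np
from tensorflow import keras

class WindowSequence(keras.utils.Sequence):
    """Cuts (batch, window, channels, 1) slices on the fly from long 1D signals."""
    def __init__(self, signals, labels, window=801, batch_size=150):
        # signals: (num_signals, signal_length, 4); labels: one label per window (assumption)
        self.signals, self.labels = signals, labels
        self.window, self.batch_size = window, batch_size
        self.per_signal = signals.shape[1] - window + 1
        self.total = signals.shape[0] * self.per_signal

    def __len__(self):
        return int(np.ceil(self.total / self.batch_size))

    def __getitem__(self, idx):
        start = idx * self.batch_size
        stop = min(start + self.batch_size, self.total)
        batch = np.empty((stop - start, self.window, self.signals.shape[2]),
                         dtype=self.signals.dtype)
        for out_i, flat_i in enumerate(range(start, stop)):
            sig, off = divmod(flat_i, self.per_signal)
            batch[out_i] = self.signals[sig, off:off + self.window, :]
        # trailing channel axis to match the 801 x 4 x 1 input of the model
        return batch[..., np.newaxis], self.labels[start:stop]

# model.fit(WindowSequence(signals, labels), epochs=10)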

Multiple 3D arrays to one 2D array

I have many 3D arrays in different files. I want to turn them into 2D arrays and then join them into one array.
I managed to get a 2D array, but not in the format I want.
Ex:
Original 3D array of (4x2x2):
[[[ 0  1]
  [ 2  3]]
 [[ 4  5]
  [ 6  7]]
 [[ 8  9]
  [10 11]]
 [[12 13]
  [14 15]]]
I want it to become 2D (2x8):
[[ 0  1  4  5  8  9 12 13]
 [ 2  3  6  7 10 11 14 15]]
This is my code:
import numpy as np
x=np.arange(16).reshape((4,2,2)) #Depth, Row, Column
y=x.reshape((x.shape[1], -1), order='F')
If there is a better way to do this, please feel free to improve my code.
You can use np.swapaxes to swap the first two axes and then reshape, like so:
y = x.swapaxes(0,1).reshape(x.shape[1],-1)
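As a quick check, and a small sketch of joining several converted arrays (an assumption: a column-wise join is what is wanted, and the two copies of x below just stand in for arrays loaded from different files):

import numpy as np

x = np.arange(16).reshape((4, 2, 2))
y = x.swapaxes(0, 1).reshape(x.shape[1], -1)
print(y)
# [[ 0  1  4  5  8  9 12 13]
#  [ 2  3  6  7 10 11 14 15]]

arrays = [x, x]  # stand-ins for arrays read from different files
joined = np.hstack([a.swapaxes(0, 1).reshape(a.shape[1], -1) for a in arrays])
print(joined.shape)  # (2, 16)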