Keras (tf backend) memory allocation problems - tensorflow

I am using Keras with the TensorFlow backend.
I am facing a batch size limitation due to high memory usage.
My data is composed of 4 1D signals, each treated with a sample size of 801 per channel; the global sample size is 3204.
Input data:
4 channels of N 1D signals of length 7003
Input generated by applying a sliding window on the 1D signals
This gives input data of shape (N*6203, 801, 4)
N is the number of signals used to build one batch
My model (a Keras sketch of this stack follows the layer list):
Input 801 x 4
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
Flatten
Dense 2000
Dense 5
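As a minimal sketch (the ReLU/softmax activations and 'valid' padding are my assumptions; the layer shapes match the ones listed further down):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(20, (5, 1), activation='relu', input_shape=(801, 4, 1)),
    MaxPooling2D(pool_size=(2, 1)),
    Conv2D(20, (5, 1), activation='relu'),
    MaxPooling2D(pool_size=(2, 1)),
    Conv2D(20, (5, 1), activation='relu'),
    MaxPooling2D(pool_size=(2, 1)),
    Conv2D(20, (5, 1), activation='relu'),
    Flatten(),
    Dense(2000, activation='relu'),
    Dense(5, activation='softmax'),
])
model.summary()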
With my GPU (Quadro K6000, 12189 MiB) I can fit only N=2 without warnings.
With N=3 I get an out-of-memory warning.
With N=4 I get an out-of-memory error.
It sounds like batch_size is limited by the space used by all the tensors:
Input 801 x 4 x 1
Conv 797 x 4 x 20
MaxPooling 398 x 4 x 20
Conv 394 x 4 x 20
MaxPooling 197 x 4 x 20
Conv 193 x 4 x 20
MaxPooling 96 x 4 x 20
Conv 92 x 4 x 20
Dense 2000
Dense 5
With a 1D signal of length 7001 with 4 channels -> 6201 samples
Total = N*4224 MiB.
N=2 -> 8448 MiB: fits in the GPU
N=3 -> 12672 MiB: works, but with warnings: failed to allocate 1.10 GiB, then 3.00 GiB
N=4 -> 16896 MiB: fails, with only one message: failed to allocate 5.89 GiB
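The N*4224 MiB figure is consistent with summing the activation sizes listed above, assuming float32 activations and one copy kept per sliding window (a rough sketch, not an exact account of what TensorFlow allocates):
import numpy as np

# Activation shapes per window, as listed above
shapes = [(801, 4, 1), (797, 4, 20), (398, 4, 20), (394, 4, 20),
          (197, 4, 20), (193, 4, 20), (96, 4, 20), (92, 4, 20),
          (2000,), (5,)]
floats_per_window = sum(int(np.prod(s)) for s in shapes)    # 178,569 values
bytes_per_window = 4 * floats_per_window                    # float32 -> ~714 kB
windows_per_signal = 6201                                    # from one signal of length 7001
print(bytes_per_window * windows_per_signal / 2**20)         # ~4224 MiB per signal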
Does it work like that? Is there any way to reduce the memory usage?
To give a time estimate: 34 batches run in 40 s, and my total N = 10^6.
Thank you for your help :)
Example with python2.7: https://drive.google.com/open?id=1N7K_bxblC97FejozL4g7J_rl6-b9ScCn

Related

TensorFlow linear regression task - very high loss problem

I'm trying to build a linear regression model on my own.
import numpy as np
import tensorflow as tf

# Create features
X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
# Create labels
y = np.array([3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="elu", input_shape=[1]),
    tf.keras.layers.Dense(1)
])
model.compile(loss="mae",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              metrics=["mae"])
model.fit(X, y, epochs=150)
When I train with the above X and y data, the loss value starts from a normal value.
experience salary
0 0 2250
1 1 2750
2 5 8000
3 8 9000
4 4 6900
5 15 20000
6 7 8500
7 3 6000
8 2 3500
9 12 15000
10 10 13000
11 14 18000
12 6 7500
13 11 14500
14 12 14900
15 3 5800
16 2 4000
But when I use a dataset like this one, the initial loss value starts at around 800 (with the same model as above, by the way).
What could be the reason for this?
Your learning rate is significantly high. You should opt for a much lower initial learning rate, such as 0.0001 or 0.00001.
Apart from that, you are already using the 'linear' activation on the last layer (the default) and the correct loss function and metric. Also note that the default batch_size, when not set explicitly, is 32.
UPDATE: as determined by the author of the question, underfitting was also fundamental to the problem. Adding several more layers helped solve it.
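A minimal sketch of what that suggestion could look like (the layer widths are my own illustrative choice, and X, y are assumed to hold the experience and salary columns as NumPy arrays):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=[1]),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1)                      # linear output for regression
])
model.compile(loss="mae",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              metrics=["mae"])
model.fit(X, y, epochs=150)                       # X, y: experience and salary arrays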

Yolov4 Darknet Training Error on Macbook Pro M1

I am creating a custom Yolov4 to detect characters & digits in an image. I have installed Darknet on my Macbook M1 referring to this repo: https://github.com/AlexeyAB/darknet
The annotated dataset is ready for training. However, when the training begins, an error is shown saying that the GPU and OpenCV are not being used, and the training stops abruptly.
Here is the error in the terminal:
GPU isn't used
OpenCV isn't used - data augmentation will be slow
valid: Using default 'data/train.txt'
yolov4-obj
mini_batch = 4, batch = 64, time_steps = 1, train = 1
layer filters size/strd(dil) input output
0 conv 32 3 x 3/ 1 416 x 416 x 3 -> 416 x 416 x 32 0.299 BF
1 conv 64 3 x 3/ 2 416 x 416 x 32 -> 208 x 208 x 64 1.595 BF
2 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF
3 route 1 -> 208 x 208 x 64
4 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF
5 conv 32 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 32 0.177 BF
6 conv 64 3 x 3/ 1 208 x 208 x 32 -> 208 x 208 x 64 1.595 BF
......
......
Total BFLOPS 59.817
avg_outputs = 494379
Loading weights from yolov4.conv.137...
seen 64, trained: 0 K-images (0 Kilo-batches_64)
Done! Loaded 137 layers from weights-file
Learning Rate: 0.001, Momentum: 0.949, Decay: 0.0005
Detection layer: 139 - type = 28
Detection layer: 150 - type = 28
Detection layer: 161 - type = 28
Resizing, random_coef = 1.40
608 x 608
Create 64 permanent cpu-threads
mosaic=1 - compile Darknet with OpenCV for using mosaic=1
mosaic=1 - compile Darknet with OpenCV for using mosaic=1
OpenCV was installed using brew install opencv@2, and the M1 GPU works fine with TensorFlow. But for some reason it is simply not working here.
Any help on this issue will be greatly appreciated.
Thank you in advance!

Internal node predictions of xgboost model

Is it possible to calculate the internal node predictions of an xgboost model? The R package gbm provides a prediction for the internal nodes of each tree.
The xgboost output, however, only shows predictions for the final leaves of the model.
xgboost output:
Notice that the Quality column has the final prediction for the leaf node in row 6. I would like that value for each of the internal nodes as well.
Tree Node ID Feature Split Yes No Missing Quality Cover
1: 0 0 0-0 Sex=female 0.50000 0-1 0-2 0-1 246.6042790 222.75
2: 0 1 0-1 Age 13.00000 0-3 0-4 0-4 22.3424225 144.25
3: 0 2 0-2 Pclass=3 0.50000 0-5 0-6 0-5 60.1275253 78.50
4: 0 3 0-3 SibSp 2.50000 0-7 0-8 0-7 23.6302433 9.25
5: 0 4 0-4 Fare 26.26875 0-9 0-10 0-9 21.4425507 135.00
6: 0 5 0-5 Leaf NA <NA> <NA> <NA> 0.1747126 42.50
R gbm output:
In the R gbm package output, the Prediction column contains values for both the leaf nodes (SplitVar == -1) and the internal nodes. I would like access to these values from the xgboost model.
SplitVar SplitCodePred LeftNode RightNode MissingNode ErrorReduction Weight Prediction
0 1 0.000000000 1 8 15 32.564591 445 0.001132514
1 2 9.500000000 2 3 7 3.844470 282 -0.085827382
2 -1 0.119585850 -1 -1 -1 0.000000 15 0.119585850
3 0 1.000000000 4 5 6 3.047926 207 -0.092846157
4 -1 -0.118731665 -1 -1 -1 0.000000 165 -0.118731665
5 -1 0.008846912 -1 -1 -1 0.000000 42 0.008846912
6 -1 -0.092846157 -1 -1 -1 0.000000 207 -0.092846157
Question:
How do I access or calculate predictions for the internal nodes of an xgboost model? I would like to use them for a greedy, poor man's version of SHAP scores.
The solution to this problem is to dump the xgboost model to JSON with with_stats=True. That adds the cover statistic to the output, which can be used to distribute the leaf values up through the internal nodes:
def _calculate_contribution(node: AnyNode) -> float32:
    # Leaf nodes already carry their own contribution.
    if isinstance(node, Leaf):
        return node.contrib
    # Internal nodes: cover-weighted average of the child contributions.
    return (
        node.left.cover * _calculate_contribution(node.left)
        + node.right.cover * _calculate_contribution(node.right)
    ) / node.cover
The internal contribution is the weighted average of the child contributions. Using this method, the generated results exactly match those returned when calling the predict method with pred_contribs=True and approx_contribs=True.
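For reference, a minimal sketch of the same idea applied directly to the JSON dump (bst is assumed to be an already-trained xgboost.Booster, and internal_value is a hypothetical helper name):
import json

tree_dumps = bst.get_dump(dump_format="json", with_stats=True)   # bst: trained Booster
trees = [json.loads(t) for t in tree_dumps]

def internal_value(node):
    # Leaves carry their prediction in "leaf"; internal nodes get the
    # cover-weighted average of their children's values.
    if "leaf" in node:
        return node["leaf"]
    left, right = node["children"]
    return (left["cover"] * internal_value(left)
            + right["cover"] * internal_value(right)) / node["cover"]

print(internal_value(trees[0]))   # value at the root of the first tree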

Converting .tflite to .pb

Problem: How can I convert a .tflite (serialised flat buffer) to .pb (frozen model)? The documentation only talks about one-way conversion.
My use case is: I have a model that has already been converted to .tflite, but unfortunately I do not have the details of the model and I would like to inspect the graph. How can I do that?
I found the answer here
We can use the Interpreter to analyse the model, and the code looks like the following:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
Netron is the best analysis/visualising tool I have found; it can understand a lot of formats, including .tflite.
I don't think there is a way to restore a tflite model back to pb, as some information is lost after the conversion. I found an indirect way to get a glimpse of what is inside a tflite model: read back each of its tensors.
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# trial some arbitrary numbers to find out the num of tensors
num_layer = 89
for i in range(num_layer):
    detail = interpreter._get_tensor_details(i)
    print(i, detail['name'], detail['shape'])
and you would see something like the output below. As only a limited set of operations is currently supported, it is not too difficult to reverse engineer the network architecture. I have also put some tutorials on my GitHub.
0 MobilenetV1/Logits/AvgPool_1a/AvgPool [ 1 1 1 1024]
1 MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd [ 1 1 1 1001]
2 MobilenetV1/Logits/Conv2d_1c_1x1/Conv2D_bias [1001]
3 MobilenetV1/Logits/Conv2d_1c_1x1/weights_quant/FakeQuantWithMinMaxVars [1001 1 1 1024]
4 MobilenetV1/Logits/SpatialSqueeze [ 1 1001]
5 MobilenetV1/Logits/SpatialSqueeze_shape [2]
6 MobilenetV1/MobilenetV1/Conv2d_0/Conv2D_Fold_bias [32]
7 MobilenetV1/MobilenetV1/Conv2d_0/Relu6 [ 1 112 112 32]
8 MobilenetV1/MobilenetV1/Conv2d_0/weights_quant/FakeQuantWithMinMaxVars [32 3 3 3]
9 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6 [ 1 14 14 512]
10 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/depthwise_Fold_bias [512]
11 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/weights_quant/FakeQuantWithMinMaxVars [ 1 3 3 512]
12 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Conv2D_Fold_bias [512]
13 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6 [ 1 14 14 512]
14 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/weights_quant/FakeQuantWithMinMaxVars [512 1 1 512]
15 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/Relu6 [ 1 14 14 512]
16 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/depthwise_Fold_bias [512]
17 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/weights_quant/FakeQuantWithMinMaxVars [ 1 3 3 512]
18 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Conv2D_Fold_bias [512]
19 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Relu6 [ 1 14 14 512]
20 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/weights_quant/FakeQuantWithMinMaxVars [512 1 1 512]
I have done this with TOCO, using tf 1.12:
tensorflow_1.12/tensorflow/bazel-bin/tensorflow/contrib/lite/toco/toco \
  --output_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_format=TFLITE \
  --input_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_array="" \
  --output_array="" \
  --input_shape=1,450,450,3 \
  --dump_graphviz=./
(you can remove the dump_graphviz option)

How do I compute the capacity of a hard disk?

I have a worked example of how to compute the capacity of a hard disk; could anyone explain where the figures in bold came from?
RPM: 7200
number of sectors: 400
number of platters: 6
number of heads: 12
cylinders: 17000
average seek time: 10 ms
time to move between adjacent cylinders: 1 ms
The first line of the answer given to me is:
12 x 17 x 4 x 512 x 10^5
I just want to know where the parts in bold came from. The 512 I don't know; the 10 I assumed comes from the seek time, but why is it raised to the power 5?
The answer is
heads x cylinders x sectors x 512 (the typical size of one sector, in bytes)
so this is
12 x 17000 x 400 x 512
which is the same as
12 x 17 x 1000 x 4 x 100 x 512
and
100 = 10^2
1000 = 10^3
10^2 x 10^3 = 10^5
As you want the capacity, you don't need any of the timing figures here.
A reference for the 512 bytes can be found at Wikipedia, for example (and it also has a similar example with the same formula a bit below).
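As a quick sanity check, multiplying the numbers out (assuming the 400 sectors are per track, as the formula implies, and 512-byte sectors):
heads = 12
cylinders = 17000
sectors_per_track = 400
bytes_per_sector = 512            # typical sector size

capacity = heads * cylinders * sectors_per_track * bytes_per_sector
print(capacity)                   # 41779200000 bytes
print(capacity / 10**9)           # ~41.8 GB (decimal)
print(capacity / 2**30)           # ~38.9 GiB (binary)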