Per-tensor post-training quantization for tf2 - tensorflow2.0

A model trained with TF2 is going to run on hardware that has performance benefits when the model is quantized 'per-tensor'. But there is no option in TFLiteConverter to make it 'per-tensor' for TF2. As far as I can tell, TF2 can only do 'per-axis' quantization.
I'm sure the model is converted 'per-axis' because interpreter.get_tensor_details() shows the following for the conv2 tensors:
{'scales': array([0.00020375, 0.00016656, 0.00019717, 0.00022204, 0.00015516,
                  0.00018166, 0.00017471, 0.00016947, 0.00020444, 0.00017661,
                  ...], dtype=float32),
 'zero_points': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...], dtype=int32),
 'quantized_dimension': 0}
In the 'per-tensor' case, scales and zero_points would be scalars. The model and training code are built for TF2, and converting everything to TF1 would be a nightmare.
Can I quantize a TF2-trained model in a 'per-tensor' manner?
Is it possible to convert a tflite model from 'per-axis' to 'per-tensor'? It looks like I would only need to take the mean of scales and zero_points.
Is it possible to train a TF2 model in 'tf1 mode' without big changes?
Thanks!
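One possible direction, sketched below: some TF 2.x releases expose an experimental, underscore-prefixed converter attribute that has been reported to force per-tensor weight quantization during full-integer post-training quantization. The flag is private and version-dependent, so verify it exists in your TF release; the saved-model path, input shape, and random calibration data here are hypothetical placeholders.

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration data; replace with real input samples.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Experimental, may change between releases; reported to disable
# per-channel (per-axis) weight quantization in some TF 2.x versions:
converter._experimental_disable_per_channel = True
tflite_model = converter.convert()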

Related

How can I maximize XGBoost GPU memory use? (XGBoost does not use the full GPU)

I am trying to use XGBoost.
However, it behaves differently from Keras, which uses the full GPU memory when training an ANN model.
As shown below, XGBoost uses only a small amount of the GPU.
How can I make XGBoost use the full GPU?
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

grid = {'eta': [0.01, 0.1, 0.2],
        'min_child_weight': [1, 2, 3, 4],
        'max_depth': [3, 4, 5, 6],
        'subsample': [0.5, 0.6, 0.7, 0.8],
        'nthread': [8],
        'colsample_bytree': [0.5, 0.6, 0.7, 0.8]}

XGBoost_model = XGBRegressor(n_estimators=300, tree_method='gpu_hist', gpu_id=0)
XGBoost_211220 = GridSearchCV(estimator=XGBoost_model,
                              param_grid=grid,
                              scoring='neg_mean_absolute_error',
                              cv=10,
                              n_jobs=-1,
                              verbose=10)
XGBoost_211220.fit(sc_x_train_211220, sc_y_train_211220)
Above is the code I use to train XGBoost.
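A hedged note on what may be happening here: TensorFlow/Keras preallocates most GPU memory by default, so its reported usage looks "full", whereas XGBoost's gpu_hist allocates only the memory the histogram builder actually needs, so low reported usage is normal rather than a fault. Separately, GridSearchCV with n_jobs=-1 spawns many worker processes that all contend for the same GPU. A sketch, reusing grid and XGBoost_model from above, of keeping one fit on the GPU at a time:

# Sketch, not a definitive fix: a single GridSearchCV worker, so that
# one gpu_hist fit owns gpu_id=0 at a time instead of many contending.
search = GridSearchCV(estimator=XGBoost_model,
                      param_grid=grid,
                      scoring='neg_mean_absolute_error',
                      cv=10,
                      n_jobs=1,
                      verbose=10)
search.fit(sc_x_train_211220, sc_y_train_211220)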

Can we train a specific part of a tensor with tensorflow?

I am trying to make an adversarial image for the InceptionV3 model with tensorflow. For that I use a specific loss on the pixels of my input image. This works well:
from keras import backend as K

model_input_layer = model.layers[0].input
model_output_layer = model.layers[-1].output

cost_function = model_output_layer[0, object_type_to_fake]
gradient_function = K.gradients(cost_function, model_input_layer)[0]
grab_cost_and_gradients_from_model = K.function(
    [model_input_layer, K.learning_phase()],
    [cost_function, gradient_function])
Now I would like to make only certain pixels trainable, to create a patch on a certain square rather than on the whole input image.
I have tried variable = tf.slice(model_input_layer, [0, 100, 100, 0], [-1, 100, 100, -1]), but it does not work.
Has anyone already done this?
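One common workaround, sketched below under assumptions: instead of slicing the input tensor, compute the full gradient and zero it outside the patch with a binary mask, so only the patch pixels are ever updated. The patch coordinates, step count, learning rate, and original_image are hypothetical; grab_cost_and_gradients_from_model is the function defined in the question.

import numpy as np

# Binary mask with ones only over the trainable patch (coordinates are
# hypothetical); InceptionV3 expects 299x299x3 inputs.
mask = np.zeros((1, 299, 299, 3), dtype=np.float32)
mask[:, 100:200, 100:200, :] = 1.0

hacked_image = np.copy(original_image)  # original_image: your preprocessed input
learning_rate = 0.1
for _ in range(100):
    cost, gradients = grab_cost_and_gradients_from_model([hacked_image, 0])
    hacked_image += gradients * mask * learning_rate  # only patch pixels change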

Tensorflow tf.metrics.accuracy multi-label always zero

My label looks like this:
label = [0, 1, 0, 0, 1, 1, 0]
In other words, classes 1, 4, 5 are present at the corresponding sample. I believe this is called a soft class.
I'm calculating my loss with:
logits = tf.layers.dense(encoding, 7, activation=None)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(
    labels=labels,
    logits=logits)
loss = tf.reduce_mean(cross_entropy)
According to TensorBoard, the loss is decreasing over time, as expected. However, the accuracy is flat at zero:
eval_metric_ops = {
    'accuracy': tf.metrics.accuracy(labels=labels, predictions=logits),
}
tf.summary.scalar('accuracy', eval_metric_ops['accuracy'][1])
How do I calculate the accuracy of my model when using soft classes?
Did you solve this? I think the comment about softmax_cross_entropy_with_logits is incorrect, because you have a multi-label problem in which each label is binary.
Partial solution:
labels = tf.constant([1, 1, 1, 0, 0, 0])       # example
predictions = tf.constant([0, 1, 0, 0, 1, 0])  # example
is_equal = tf.equal(labels, predictions)
accuracy = tf.reduce_mean(tf.cast(is_equal, tf.float32))
This gives a number, but it still needs to be converted into a tf metric.
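A sketch of a fuller fix, under the assumption that the zero accuracy comes from comparing raw real-valued logits against 0/1 labels (they are never exactly equal): threshold the sigmoid outputs first, then feed binary predictions to tf.metrics.accuracy.

# Sketch: binarize predictions before computing accuracy. The 0.5
# threshold is a conventional choice, not prescribed by the question.
probabilities = tf.sigmoid(logits)
predictions = tf.cast(probabilities > 0.5, tf.float32)
eval_metric_ops = {
    'accuracy': tf.metrics.accuracy(labels=labels, predictions=predictions),
}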

Tensorflow, is there a way to specify padding along axes?

I want to perform a convolution over an image in tensorflow. I want the kernel to be as tall as the image and very thin. For example:
kernel_size = [200, 24]
image_size = [200, 400]
If I use padding "SAME", instead of getting a vector out, I get a [200, 400] image back, since tensorflow pads the image at the top and bottom and convolves with the kernel over the padded image.
If, on the other hand, I use padding "VALID", the problem at the top and bottom disappears, but the kernel also does not fully cover the horizontal direction of the image: if the horizontal dimension is not divisible by the kernel dimension, part of it is lost.
Is there a way to perform "VALID" padding at the top and bottom and "SAME" padding left and right? Or is there another way of doing this?
With the default padding options of tensorflow's convolution functions there is no way to do this; you will have to pad the tensor manually to make the horizontal dimension divisible by the kernel dimension.
After padding manually, use VALID padding so that the manually padded pixels are treated like any other padding.
To pad the tensor manually, you could use tf.concat with a constant tensor of the shape and value you want. If your images are always the same size, this is not difficult to figure out.
For any non-standard padding, TF has a dedicated padding op: tf.pad(). So just figure out the appropriate padding and call something like:

import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])   # rank-2 tensor
paddings = tf.constant([[1, 1], [2, 2]])  # [[top, bottom], [left, right]]
tf.pad(t, paddings, "CONSTANT")
# => [[0, 0, 0, 0, 0, 0, 0],
#     [0, 0, 1, 2, 3, 0, 0],
#     [0, 0, 4, 5, 6, 0, 0],
#     [0, 0, 0, 0, 0, 0, 0]]
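Putting the two answers together for the shapes in the question, a sketch (the stride of 1 and the random tensors are assumptions): emulate SAME padding by hand on the width axis only, then run a VALID convolution so the height collapses to 1.

import tensorflow as tf

image = tf.random.normal([1, 200, 400, 1])  # [batch, height, width, channels]
kernel = tf.random.normal([200, 24, 1, 1])  # [kh, kw, in_ch, out_ch]

# SAME-style padding on the width axis for stride 1: total pad = kw - 1.
kw = 24
pad_left = (kw - 1) // 2
pad_right = (kw - 1) - pad_left
padded = tf.pad(image, [[0, 0], [0, 0], [pad_left, pad_right], [0, 0]])

out = tf.nn.conv2d(padded, kernel, strides=[1, 1, 1, 1], padding="VALID")
# out.shape == [1, 1, 400, 1]: VALID behavior vertically, SAME horizontally.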

Keras Array Input Error

I get the following error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 6 arrays but instead got the following list of 3 arrays: [array([[ 0, 0, 0, ..., 18, 12, 1],
[ 0, 0, 0, ..., 18, 11, 1],
[ 0, 0, 0, ..., 18, 9, 1],
...,
[ 0, 0, 0, ..., 18, 15, 1],
[ 0, 0, 0, ..., 18, 9, ...
in my keras model.
I think the model is misinterpreting my input somehow?
This happens when I feed input to my model. The same input works perfectly well in another program.
It's impossible to diagnose your exact problem without more information.
I usually specify the input_shape parameter of the first layer based on my training data X, e.g.:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, input_shape=X.shape[1:]))
I think you'll want X to look something like this:
[
  [[ 0, 0, 0, ..., 18, 11, 1]],
  [[ 0, 0, 0, ..., 18, 9, 1]],
  ...
]
So you could try reshaping it with the following line:
X = np.array([[sample] for sample in X])
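A hedged aside on the error text itself: "Expected to see 6 arrays" usually means the model was built with six Input layers, so fit() and predict() need a list with one array per input. The variable names below are hypothetical.

# Sketch: a six-input model must be fed six arrays, one per Input layer.
model.fit([x1, x2, x3, x4, x5, x6], y, epochs=10)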
The problem really comes from giving the wrong input to the network.
In my case the problem was that my custom image generator was passing the entire dataset as input rather than one image-label pair at a time. I had assumed that Keras's generator.flow(x, y, batch_size) already has a yield structure inside; however, the correct generator structure should be as follows (with a separate yield):
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import to_categorical

def generator(batch_size):
    (images, labels) = utils.get_data(1000)  # gets 1000 samples from the dataset
    labels = to_categorical(labels, 2)
    datagen = ImageDataGenerator(featurewise_center=True,
                                 featurewise_std_normalization=True,
                                 rotation_range=90.,
                                 width_shift_range=0.1,
                                 height_shift_range=0.1,
                                 zoom_range=0.2)
    datagen.fit(images)
    gen = datagen.flow(images, labels, batch_size=batch_size)
    while 1:
        x_batch, y_batch = next(gen)
        yield (x_batch, y_batch)
I realize the question is old, but this might save someone some time in finding the issue.
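For completeness, a hypothetical usage of that generator with the classic Keras API (the model, step count, and epoch count are assumptions):

# Sketch: train from the generator; steps_per_epoch controls how many
# batches count as one epoch, since the generator loops forever.
model.fit_generator(generator(batch_size=32),
                    steps_per_epoch=100,
                    epochs=10)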