I am running this example on ML Engine using Cloud Composer, but I receive the following error:
AttributeError: 'module' object has no attribute 'estimator'
even though I do import tensorflow as tf. It fails on the following line:
estimator = tf.estimator.Estimator(model_fn=image_classifier,
The runtime version is 1.8, the same as the version used by the repo.
t3 = MLEngineTrainingOperator(
    task_id='ml_engine_training_op',
    project_id=PROJECT_ID,
    job_id=job_id,
    package_uris=["gs://us-central1-ml/trainer-0.1.tar.gz"],
    training_python_module=MODULE_NAME,
    training_args=training_args,
    region=REGION,
    scale_tier='BASIC_GPU',
    runtime_version='1.8',  # the operator's argument is runtime_version (snake_case)
    dag=dag,
)
Please check your setup.py and make sure you list tensorflow in it, as in REQUIRED_PACKAGES = ['tensorflow==1.8.0'] (or some other version). Then don't forget to regenerate the tarball and upload it again.
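For reference, here is a minimal setup.py sketch; the package name and version are taken from the trainer-0.1.tar.gz above, and the rest is an assumed skeleton, not your actual file:

from setuptools import find_packages, setup

# Pin TensorFlow so ML Engine installs it alongside the trainer package.
REQUIRED_PACKAGES = ['tensorflow==1.8.0']

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),
    install_requires=REQUIRED_PACKAGES,
)

Rebuild the archive with python setup.py sdist and upload the resulting tarball to the GCS path referenced in package_uris.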
Also, in my case, MLEngineTrainingOperator doesn't seem to pass runtime_version or python_version through to ML Engine at all.
python mo_tf.py ^
    --saved_model_dir C:\DATASETS\mask50000\exports\saved_model ^
    --output_dir C:\DATASETS\mask50000 ^
    --reverse_input_channels ^
    --tensorflow_custom_operations_config extensions\front\tf\mask_rcnn_support_api_v2.0.json ^
    --tensorflow_object_detection_api_pipeline_config C:\DATASETS\mask50000\exports\pipeline.config ^
    --log_level=DEBUG
I have been trying to convert the model using the above command, but every time I get the error:
"Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.front.tf.tensorflow_custom_operations_config_update.TensorflowCustomOperationsConfigUpdate'>)": The function 'update_custom_layer_attributes' must be implemented in the sub-class."
I have exported the graph using exporter_main_v2.py. If more information is needed please inform me.
EDIT:
I was able to convert the model by editing the file mask_rcnn_support_api_v2.4.json.
First change:
"custom_attributes": {
"operation_to_add": "Proposal",
"clip_before_nms": false,
"clip_after_nms": true
}
Second change:
"start_points": [
    "StatefulPartitionedCall/concat/concat",
    "StatefulPartitionedCall/concat_1/concat",
    "StatefulPartitionedCall/GridAnchorGenerator/Identity",
    "StatefulPartitionedCall/Cast",
    "StatefulPartitionedCall/Cast_1",
    "StatefulPartitionedCall/Shape"
]
That solved the problem.
OpenVINO 2020.4 is not compatible with TensorFlow 2. Support for TF 2.0 Object Detection API models was fully enabled only in OpenVINO 2021.3.
I've successfully converted the mask_rcnn_inception_resnet_v2_1024x1024_coco17 model to IR using the latest OpenVINO release (2021.4.752).
Here is the MO conversion command I used:
python mo_tf.py --saved_model_dir <model_dir>\saved_model --tensorflow_object_detection_api_pipeline_config <pipeline_dir>\pipeline.config --transformations_config <installed_dir>\extensions\front\tf\mask_rcnn_support_api_v2.0.json
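As a quick sanity check that the generated IR loads, here is a small sketch using the OpenVINO 2021.4 Python API; the .xml/.bin file names are placeholders for whatever MO wrote to your output directory:

from openvino.inference_engine import IECore

# Placeholder paths: MO writes <name>.xml and <name>.bin into the output dir.
ie = IECore()
net = ie.read_network(model='saved_model.xml', weights='saved_model.bin')
exec_net = ie.load_network(network=net, device_name='CPU')
print(list(net.input_info.keys()))  # the IR's input layer names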
I need to run a custom GluonCV object detection module on Android.
I already fine-tuned the model (ssd_512_mobilenet1.0_custom) on a custom dataset. I tried running inference with it (loading the .params file produced during training), and everything works perfectly on my computer. Now I need to export this to Android.
I was referring to this answer to figure out the procedure; it suggests 3 options:
You can use ONNX to convert models to other runtimes, for example [...] NNAPI for Android
You can use TVM
You can use SageMaker Neo + DLR runtime [...]
Regarding the first one, I converted my model to ONNX.
However, in order to use it with NNAPI, it is necessary to convert it to the daq format. The repository provides a precompiled AppImage of onnx2daq to perform the conversion, but the script returns an error. I checked the issues section, and they report that "It actually fails for all onnx object detection models".
Then I gave DLR a try, since it's suggested to be the easiest way.
As I understand it, in order to use my custom model with DLR, I would first need to compile it with TVM (which also covers the second option mentioned in the linked post). The repo provides a Docker image with some conversion scripts for different frameworks.
I modified the 'compile_gluoncv.py' script, and now I have:
#!/usr/bin/env python3
from tvm import relay
import mxnet as mx
from mxnet.gluon.model_zoo.vision import get_model
from tvm_compiler_utils import tvm_compile

shape_dict = {'data': (1, 3, 300, 300)}
dtype = 'float32'
ctx = [mx.cpu(0)]
dlr_model_name = 'ssd_512_mobilenet1.0_custom'  # assumed output name for the compiled artifacts

classes_custom = ["CML_mug"]
block = get_model('ssd_512_mobilenet1.0_custom', classes=classes_custom, pretrained_base=False, ctx=ctx)
block.load_parameters("ep_035.params", ctx=ctx)  # the file produced by training on the custom dataset

for arch in ["arm64-v8a", "armeabi-v7a", "x86_64", "x86"]:
    sym, params = relay.frontend.from_mxnet(block, shape=shape_dict, dtype=dtype)
    func = sym["main"]
    func = relay.Function(func.params, relay.nn.softmax(func.body), None, func.type_params, func.attrs)
    tvm_compile(func, params, arch, dlr_model_name)
However, when I run the script it returns the error:
ValueError: Model ssd_512_mobilenet1.0_custom is not supported. Available options are
alexnet
densenet121
densenet161
densenet169
densenet201
inceptionv3
mobilenet0.25
mobilenet0.5
mobilenet0.75
mobilenet1.0
mobilenetv2_0.25
mobilenetv2_0.5
mobilenetv2_0.75
mobilenetv2_1.0
resnet101_v1
resnet101_v2
resnet152_v1
resnet152_v2
resnet18_v1
resnet18_v2
resnet34_v1
resnet34_v2
resnet50_v1
resnet50_v2
squeezenet1.0
squeezenet1.1
vgg11
vgg11_bn
vgg13
vgg13_bn
vgg16
vgg16_bn
vgg19
vgg19_bn
Am I doing something wrong? Is this thing even possible?
As a side note, after this I'd need to deploy on Android a pose detection model (simple_pose_resnet18_v1b) and an activity recognition one (i3d_nl10_resnet101_v1_kinetics400) as well.
You can actually run a GluonCV model directly on Android with the Deep Java Library (DJL).
What you need to do is:
hybridize your GluonCV model and save it as an MXNet model (see the sketch after this list)
build the MXNet engine for Android; MXNet already supports Android builds
include the MXNet shared library in your Android project
use DJL in your Android project; you can follow the DJL Android demo for PyTorch
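A minimal sketch of the first step; the model name, classes, and .params file are taken from the question and are assumptions here:

import mxnet as mx
import gluoncv

# Rebuild the fine-tuned network and load the trained weights.
net = gluoncv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
                                  classes=["CML_mug"], pretrained_base=False)
net.load_parameters('ep_035.params')

# Hybridize, run one forward pass to trace the graph, then export the
# symbolic model (-symbol.json + -0000.params) that the MXNet engine can load.
net.hybridize()
net(mx.nd.zeros((1, 3, 512, 512)))
net.export('ssd_512_mobilenet1.0_custom')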
The error message is self-explanatory: there is no model "ssd_512_mobilenet1.0_custom" supported by mxnet.gluon.model_zoo.vision.get_model. You are confusing GluonCV's get_model with MXNet Gluon's get_model.
Replace
block = get_model('ssd_512_mobilenet1.0_custom',
classes=classes_custom, pretrained_base=False, ctx=ctx)
with
import gluoncv
block = gluoncv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
classes=classes_custom, pretrained_base=False, ctx=ctx)
I'm trying to train a RetinaNet on my dataset with this command line:
retinanet-train --batch-size 4 --steps 349 --epochs 50 --weights logos/resnet50_coco_best_v2.1.0.h5 --snapshot-path logos/snapshots csv logos/retinanet_train.csv logos/retinanet_classes.csv
And I get this error:
AttributeError: module 'tensorflow' has no attribute 'ConfigProto'
I know this is related to the TensorFlow version: ConfigProto disappeared in the new version, but I want to fix it without reinstalling the old 1.14 version, because otherwise it would be a mess.
Any suggestion would be greatly appreciated, thank you.
Since tf.ConfigProto is deprecated in TF 2.0, use tf.compat.v1.ConfigProto() instead by replacing the occurrences of tf.ConfigProto() in the retinanet-train code (assuming that's where tf.ConfigProto() is being called). Link to the TensorFlow doc here.
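A minimal sketch of the change (illustrative lines, not the actual retinanet-train source):

import tensorflow as tf

# TF 1.x style, which fails on TF 2.x:
# config = tf.ConfigProto()

# TF 2.x compatible equivalent:
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)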
When I use mxnet-cu101mkl = {version = "==1.5.0", sys_platform = "== 'linux'"}, I get an error that I can no longer import ndarray or nd:
ImportError: cannot import name 'ndarray'
I have no problem when using the same code with mxnet-cu101 (no MKL).
Is this just a bug, or is this subpackage no longer supported?
I can confirm that mxnet-cu100mkl works fine (version 1.5.0). That's a very slight CUDA version difference from yours, but the package shouldn't change. I think you might be importing a different mxnet here, possibly a local folder called mxnet, for example. Check the following:
import mxnet as mx
print(mx.__file__)
It should show the path to mxnet within the site-packages of your Python environment, e.g.
/home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/__init__.py
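If the path looks right, a further sanity check (using the MXNet 1.5 runtime feature API; I'm assuming the MKL build exposes the MKLDNN flag) can confirm which build is actually loaded:

import mxnet as mx
from mxnet.runtime import Features

print(mx.__version__)                   # expect 1.5.0
print(Features().is_enabled('MKLDNN'))  # True on an MKL-DNN build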
I am using TensorFlow version 1.3, but the tutorial I am following was written for version 1.0, and I am quite new to TensorFlow. The problem I get is:
AttributeError: 'module' object has no attribute 'prepare_attention'
And the code is:
tf.contrib.seq2seq.prepare_attention(attention_states, attention_option="bahdanau", num_units=decoder_cell.output_size)
I couldn't figure out what to use instead of the tf.contrib.seq2seq.prepare_attention() function. Is there anyone who can help?
Downgrade your TensorFlow and it will work. The problem is that prepare_attention was removed in later versions, so you need an older version of tf to use it.
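If downgrading is not an option: the seq2seq API was rewritten after 1.0, and prepare_attention's role is now played by an attention mechanism plus AttentionWrapper. A hedged sketch of the TF 1.2+ equivalent (the cell and shapes are illustrative placeholders, not the tutorial's code):

import tensorflow as tf

num_units = 128
# Placeholder encoder outputs: [batch, max_time, num_units].
attention_states = tf.placeholder(tf.float32, [None, 20, num_units])
decoder_cell = tf.contrib.rnn.BasicLSTMCell(num_units)

# Attention is now attached by wrapping the decoder cell.
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=num_units, memory=attention_states)
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention_mechanism, attention_layer_size=num_units)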
Okay, all you need to do is create a new environment with Python 3.5.4 and then install TensorFlow 1.0.0. That's it. Everything will work fine.
tf.contrib.seq2seq.prepare_attention works only when the TensorFlow version is 1.0; I have version 2.3.1.
My solution:
tf.contrib.seq2seq.prepare_attention = tf.compat.v1.nn.rnn_cell.prepare_attention