Running automatic annotation in CVAT with TensorFlow results in status code 400 "No labels found for tf annotation"

I'm trying to run TensorFlow pre-annotation in CVAT.
I can start the Docker container, and the option shows up in the menu.
However, after selecting the model I get the error:
Could not infer model for the task 10
Error: Request failed with status code 400. "No labels found for tf annotation".
It seems that I need to specify some labels, but in which format do I have to configure them?
Documentation seems sparse on this one. Maybe someone on here knows something?
Also, if some Stack Overflow wizard with a lot of reputation could create the tag cvat, I would be very happy :)
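For what it's worth: as far as I understand the tf annotation component, it maps the task's labels onto the COCO classes its bundled model was trained on, so this error shows up when none of the task labels match a COCO class name. A minimal sketch of a label spec for the task's "Raw" label editor, with the label names being assumptions picked from the COCO set:
# Sketch only: label names must match COCO categories for the tf annotation model to pick them up.
labels = [
    {"name": "person", "attributes": []},
    {"name": "car", "attributes": []},
]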

Related

Profiling code not working for Odoo version 15.0

I am adding a profiler to my custom method in Odoo v15.0.
I have referred to the doc below for code profiling:
https://www.odoo.com/documentation/15.0/developer/howtos/profilecode.html
Below is the syntax I am using:
from odoo.tools.profiler import profile

@profile
@api.model
def mymethod(...):
    # My code
But on executing the code I get the below error in the terminal:
"ImportError: cannot import name 'profile' from 'odoo.tools.profiler'"
To debug the issue I dived into the base code of "/odoo/tools/profiler.py",
but I am unable to locate any wrapper or function called profile.
What is the correct way to use profiling with the "Log a method" strategy on Odoo v15.0?
Go to the path and make sure you have this file for line-by-line code profiling; it looks like you don't have this file.
From the front end, in debug mode, enable profiling; this will give you all the information per user.
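If you still want to profile from code rather than the UI: in v15 the old profile decorator was removed, and the linked documentation describes a Profiler context manager instead. A rough sketch under that assumption (constructor arguments, e.g. the database name, may need adjusting for your setup):
# Sketch only: assumes Odoo 15's odoo.tools.profiler exposes a Profiler context manager
# (the old 'profile' decorator no longer exists in this version).
from odoo.tools.profiler import Profiler

def mymethod(self):
    with Profiler(db=self.env.cr.dbname):  # assumption: pass the current database name
        return self._do_expensive_work()   # hypothetical helper doing the actual work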

How to add ModelCheckpoint as a callback when running a model on TPU?

I am trying to save my model by using tf.keras.callbacks.ModelCheckpoint with the filepath set to a folder in Drive, but I am getting this error:
File system scheme '[local]' not implemented (file: './ckpt/tensorflow/training_20220111-093004_temp/part-00000-of-00001')
Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
Does anybody know what the reason for this is, and a workaround?
It looks to me like you are trying to access the file system of your host VM from the TPU, which is not directly possible.
When using the TPU and you want to access files on e.g. Google Colab, you should place the code within:
with tf.device('/job:localhost'):
    <YOUR_CODE>
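For example, a tiny sketch of reading a file that lives on the Colab VM (the path is a placeholder):
import tensorflow as tf

with tf.device('/job:localhost'):
    raw = tf.io.read_file('/content/labels.txt')  # placeholder path on the host VM's local disk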
Now to your problem:
The local host acts as the parameter server when training on a TPU, so if you want to checkpoint your training, the local host must do so.
When you check the documentation for said callback, you can find the parameter options.
checkpoint_options = tf.train.CheckpointOptions(experimental_io_device='/job:localhost')
checkpoint = tf.keras.callbacks.ModelCheckpoint(<YOUR_PATH>, options=checkpoint_options)
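Putting it together, a rough sketch of how that plugs into training on a Colab TPU (the TPU setup, build_model helper and train_dataset are assumptions for illustration):
import tensorflow as tf

# Colab-style TPU setup (assumption: a Colab TPU runtime).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    model = build_model()  # hypothetical model-building helper

# Route checkpoint I/O through the local host, which acts as the parameter server.
checkpoint_options = tf.train.CheckpointOptions(experimental_io_device='/job:localhost')
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    './ckpt/tensorflow/model-{epoch:02d}',
    options=checkpoint_options)

model.fit(train_dataset, epochs=10, callbacks=[checkpoint_cb])  # train_dataset: your tf.data pipeline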
Hope this solves your issue!
Best,
Sascha

Unable to create labels for WebGL Feature Layer, esriGeometryPolyline is not supported

I published a service from ArcGIS Server and I am using it with the ArcGIS JS API 4.9, but with one of the feature layers I get this error:
[esri.views.2d.engine.webgl.WGLMeshFactory] ,
mapview-labeling:unsupported-geometry-type,
Unable to create labels for WebGL Feature Layer, esriGeometryPolyline is not supported"
Now I cannot show the labels of that layer. How can I solve it?
Looking at the 4.9 source code, the esriGeometryPolyline geometry type has "null" as the value for possible label placements, and this gives you the error mentioned.
You could try upgrading to 4.11, which has the "center-along" value available for esriGeometryPolyline.
I am currently researching an error where a user has gone from 4.4 to 4.11 and has gotten various display problems, which are probably related to some label settings in the web map and label placement for polylines.

Recommended way of profiling distributed TensorFlow

Currently, I am using the TensorFlow Estimator API to train my TF model. I am using distributed training with roughly 20-50 workers and 5-30 parameter servers, depending on the training data size. Since I do not have access to the session, I cannot use run metadata with full trace to look at the Chrome trace. I see there are two other approaches:
1) tf.profiler.profile
2) tf.train.ProfilerHook
I am specifically using
tf.estimator.train_and_evaluate(estimator, train_spec, test_spec)
where my estimator is a prebuilt estimator.
Can someone give me some guidance on the recommended way to profile an Estimator (concrete code samples and code pointers would be really helpful since I am very new to TensorFlow)? Do the two approaches give different information or serve the same purpose? Also, is one recommended over the other?
There are two things you can try:
ProfileContext
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/profiler/profile_context.py
Example usage:
with tf.contrib.tfprof.ProfileContext('/tmp/train_dir') as pctx:
    train_loop()
ProfilerService
https://www.tensorflow.org/tensorboard/r2/tensorboard_profiling_keras
You can start a ProfilerServer via tf.python.eager.profiler.start_profiler_server(port) on all workers and parameter servers, and use TensorBoard to capture a profile.
Note that this is a very new feature; you may want to use tf-nightly.
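As a minimal sketch of that first step (the port number is arbitrary; the module is experimental, hence the tf-nightly caveat):
# Run this on every worker and parameter server, then capture the profile from TensorBoard.
from tensorflow.python.eager import profiler

profiler.start_profiler_server(6009)  # placeholder port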
TensorFlow has recently added a way to sample multiple workers.
Please have a look at the API:
https://www.tensorflow.org/api_docs/python/tf/profiler/experimental/client/trace?version=nightly
The parameter of the above API that is important in this context is:
service_addr: A comma-delimited string of gRPC addresses of the workers to profile, e.g.
service_addr='grpc://localhost:6009'
service_addr='grpc://10.0.0.2:8466,grpc://10.0.0.3:8466'
service_addr='grpc://localhost:12345,grpc://localhost:23456'
Also, please look at the API:
https://www.tensorflow.org/api_docs/python/tf/profiler/experimental/ProfilerOptions?version=nightly
The parameter of the above API that is important in this context is:
delay_ms: Requests for all hosts to start profiling at a timestamp that is delay_ms away from the current time. delay_ms is in milliseconds. If zero, each host will start profiling immediately upon receiving the request. Default value is None, allowing the profiler to guess the best value.
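A rough end-to-end sketch of this newer approach (ports, addresses and the log directory are placeholders), where each host exposes a profiling port and a single client call samples all of them:
import tensorflow as tf

# On every worker / parameter server: expose a profiling port.
tf.profiler.experimental.server.start(6009)

# From any machine: sample all hosts for a few seconds and write the trace
# somewhere TensorBoard can read it.
options = tf.profiler.experimental.ProfilerOptions(delay_ms=2000)  # give hosts time to start together
tf.profiler.experimental.client.trace(
    service_addr='grpc://10.0.0.2:6009,grpc://10.0.0.3:6009',
    logdir='gs://my-bucket/profile-logs',  # placeholder GCS bucket
    duration_ms=3000,
    options=options)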

Error in getting feature dimension in Kaldi for voice recognition?

I have done the 'Kaldi for Dummies' example for voice recognition, but I am getting the following error. Does anyone know how to fix it?
[Screenshot: error message shown while running the Kaldi for Dummies example]