Getting ImportError: cannot import name 'fpn_pb2' when trying to run training using the TensorFlow 2 Object Detection API

I am doing deep learning research using the TensorFlow 2 Object Detection API, and I am getting this error while running model training. I followed the Gilbert Tanner and Edje Electronics tutorials for the basic installation and environment setup, and I am using the latest GitHub commit of the TensorFlow Object Detection API. I converted all the .proto files into .py files but am still facing this error. I am attaching a screenshot of the error; please check it and let me know if you can help.
Thanks in advance.

I had the same problem, and also with target_assigner.proto and center_net.proto. You have to add all three files (.\object_detection\protos\fpn.proto, .\object_detection\protos\target_assigner.proto, and .\object_detection\protos\center_net.proto) to the protoc command. So the whole command should be:
protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\fpn.proto .\object_detection\protos\target_assigner.proto .\object_detection\protos\center_net.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
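Rather than listing every file by hand (and editing the command each time the API adds a new .proto), a simpler alternative is to compile the whole protos directory with a wildcard. Run it from the directory that contains object_detection/ (usually models/research); on Windows, use PowerShell or Git Bash so the wildcard is expanded:

```shell
# Compiles every .proto in one go, so newly added files are never missed.
protoc object_detection/protos/*.proto --python_out=.
```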


Solving PyPSA model with Gurobi in Google Colab - License/Environment problem

I am trying to solve an optimization problem with PyPSA in Google Colab using Gurobi as the solver, which normally works fine on my local computer (but takes too much time). When I run it in Google Colab, I always get the size-limited-license error, although I have a valid non-size-limited academic license.
Before trying to run this in Google Colab, I followed the steps in the Gurobi post "Google Colab: Installation and Licensing" and created an environment to solve my model using the license:
import gurobipy as gp

e = gp.Env(empty=True)
e.setParam('WLSACCESSID', 'your wls accessid (string)')
e.setParam('WLSSECRET', 'your wls secret (string)')
e.setParam('LICENSEID', <your license id (integer)>)
e.start()
The problem is that the model to be optimized is created internally by PyPSA, so I cannot create it myself with a line like:
model = gp.Model(env=e)
as indicated in the aforementioned post.
So what I need is a way to make the PyPSA model run in the correct environment.
I am using this dictionary to specify some parameters for the Gurobi Solver when running the optimization:
solver_options = {'Crossover': 0,
                  'Method': 2,
                  'BarHomogeneous': 1}
network.lopf(snapshots=network.snapshots, pyomo=False, solver_name='gurobi',
             solver_options=solver_options,
             extra_functionality=extra_functionalities)
How can I make the PyPSA optimization problem run in the correct environment?
Thank you in advance for your help.
Regards,
Sebastian
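One workaround worth trying (an assumption, not a confirmed fix): Gurobi accepts the WLS credentials as ordinary solver parameters via setParam(), so it may be possible to pass them through PyPSA's solver_options dictionary and have the internally built model pick up the cloud license. A minimal sketch, with placeholder credential values:

```python
# Hypothetical workaround: merge the WLS license credentials into the same
# solver_options dict PyPSA forwards to Gurobi. All credential values below
# are placeholders and must be replaced with your own.
solver_options = {
    'WLSACCESSID': 'your-wls-access-id',   # placeholder string
    'WLSSECRET': 'your-wls-secret',        # placeholder string
    'LICENSEID': 123456,                   # placeholder integer
    'Crossover': 0,
    'Method': 2,
    'BarHomogeneous': 1,
}

# Then run the optimization exactly as before:
# network.lopf(snapshots=network.snapshots, pyomo=False,
#              solver_name='gurobi', solver_options=solver_options,
#              extra_functionality=extra_functionalities)
```

Whether PyPSA forwards these particular parameters untouched depends on the PyPSA version, so treat this as a sketch to experiment with rather than a guaranteed solution.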

CatBoost Error: bayesian bootstrap doesn't support taken fraction option

The error occurs when running GridSearchCV with CatBoostClassifier and bootstrap_type='Bayesian'. Any idea what the error means?
I had a similar issue while training my data with CatBoostClassifier: I had set the "subsample" parameter to 0.5. After removing that parameter, it worked for me.
Hope this is useful.
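For context: in CatBoost, the subsample parameter (the "taken fraction" the error refers to) only applies to bootstrap types that sample objects, such as Bernoulli; the Bayesian bootstrap weights objects instead, so the two options are mutually exclusive. A minimal sketch of the two consistent configurations (parameter values are illustrative, not recommendations):

```python
# Option 1: keep the Bayesian bootstrap and drop 'subsample' entirely.
params_bayesian = {
    'bootstrap_type': 'Bayesian',
    'bagging_temperature': 1.0,  # the knob Bayesian uses instead (illustrative)
}

# Option 2: keep 'subsample' by switching to a bootstrap type that supports it.
params_bernoulli = {
    'bootstrap_type': 'Bernoulli',
    'subsample': 0.5,
}

# 'subsample' must never appear together with the Bayesian bootstrap.
assert 'subsample' not in params_bayesian
```

Either dictionary can then be passed to CatBoostClassifier (or used as a GridSearchCV parameter grid), as long as a single grid never combines 'Bayesian' with 'subsample'.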

TensorFlow in PyCharm value error: Failed to find data adapter that can handle input

I am following the TensorFlow specialization on Coursera, where a certain piece of code works absolutely fine in Google Colab, but when I try to run it locally in PyCharm, it gives the following error:
Failed to find data adapter that can handle input
Any suggestions?
Can you share the code where the error occurred?
It should be available in the logs in your PyCharm console.
Looking at your comments, it seems that the model is expecting an array while you provided a list.
I was facing the same issue. It turned out my data was in the form of Python lists, so I had to convert the fields into NumPy arrays:
import numpy as np

training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
That's it!
Try it out and let me know if it works.

TensorBoard checkpoint: Access is denied; Input/output error

I am trying to create a TensorBoard callback in Jupyter (Anaconda) the following way. The error occurs when write_images=True; otherwise, this code works fine. Any reason why this happens?
import datetime
import tensorflow as tf

log_dir = "logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
                                                      histogram_freq=1,
                                                      write_graph=True,
                                                      write_images=True,
                                                      update_freq='epoch',
                                                      profile_batch=3,
                                                      embeddings_freq=1)
And I get
UnknownError: Failed to rename: logs\20200219-202538\train\checkpoint.tmp67d5ca45d1404cc584a86cf42d2761d3 to: logs\20200219-202538\train\checkpoint : Access is denied.
; Input/output error
It seems random which epoch it occurs on.
I had something similar; it seems the checkpoint path that TensorBoard refers to is unavailable or access to it is denied. Do you know Colab? I would suggest you copy your code and run your training up there (only if your dataset isn't too large). You can copy your dataset to your Google Drive and access it from Colab. If it works in Colab, then you probably don't have a problem with your code, but rather with your Anaconda permissions.
Mount Google Drive (Colab), Colab basics
I know I couldn't solve your problem directly, but perhaps this helps you and boosts your training speed with a free cloud GPU.
I had the same issue (running on a Windows machine). I manually gave full permissions to the folder (right-click on the folder, edit permissions, and give full access to the 'Everyone' user) and everything went fine.
If you are working on a Unix system, I think you can try the same (chmod 777 <dir_name>).
P.S. Be careful with 'full permission' and 'chmod 777': anyone with access to the system can then view or edit the contents of the folder.
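Another thing worth ruling out (an assumption, not a confirmed fix): hard-coded backslash paths on Windows occasionally cause trouble when TensorFlow renames temporary checkpoint files, so it costs nothing to build the log directory portably with os.path.join and create it up front:

```python
import datetime
import os

# Build the log directory with os.path.join instead of hard-coded backslashes,
# and create it before TensorFlow tries to write checkpoints into it.
log_dir = os.path.join(
    "logs", "fit", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
os.makedirs(log_dir, exist_ok=True)

# log_dir now exists and can be handed to tf.keras.callbacks.TensorBoard.
```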

How to create an op like conv_ops in tensorflow?

What I'm trying to do
I'm new to C++ and Bazel, and I want to make some changes to the convolution operation in TensorFlow, so I decided that my first step is to create an op just like it.
What I have done
I copied conv_ops.cc from //tensorflow/core/kernels and changed the name of the op registered in my new_conv_ops.cc. I also changed some function names in the file to avoid duplication. Here is my BUILD file.
As you can see, I copied the deps attribute of conv_ops from //tensorflow/core/kernels/BUILD. Then I used "bazel build -c opt //tensorflow/core/user_ops:new_conv_ops.so" to build the new op.
What my problem is
Then I got this error.
I tried deleting bounds_check and got the same error for the next dependency. Then I realized there is some problem with including header files from //tensorflow/core/kernels in //tensorflow/core/user_ops. So how can I properly create a new op exactly like conv_ops?
Adding a custom operation to TensorFlow is covered in the tutorial here. You can also look at actual code examples.
To address your specific problem, note that the tf_custom_op_library macro adds most of the necessary dependencies to your target. You can simply write the following:
tf_custom_op_library(
    name = "new_conv_ops.so",
    srcs = ["new_conv_ops.cc"],
)