NameError in TensorFlow object detection API real-time code

File "", line 18, in output_dict = run_inference_for_single_image(image_np, detection_graph)
NameError: name 'run_inference_for_single_image' is not defined.
This is the error I am getting when running the code from the GitHub repository linked below; I don't know how to debug that line.
object detection github link
Can someone please help me resolve this error?
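The message itself just means Python reached the call before any definition of run_inference_for_single_image existed in the session; in the tutorial notebook that function is defined in an earlier cell, which has to be run first. A minimal sketch of the same failure mode (the function name here is hypothetical, not from the tutorial):

```python
# Calling a name before it has been defined raises NameError.
try:
    run_inference(None)  # hypothetical function, not yet defined
except NameError as err:
    print(err)  # name 'run_inference' is not defined

# Once the definition has been executed, the same call works.
def run_inference(image):
    return image

print(run_inference("frame"))  # frame
```

The same applies in a notebook: execute the cell that defines run_inference_for_single_image before the cell that calls it.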


NS3 and WAF compatibility

File "/home/~/ns-allinone-3.33/ns-3.33/.waf3-2.0.21-c6c9a875365426e5928462b9b74d40b5/waflib/TaskGen.py", line 123, in post
v()
File "/home/~/ns-allinone-3.33/ns-3.33/src/wscript", line 724, in apply_ns3moduleheader
for source in sorted(ns3headers.headers):
AttributeError: 'task_gen' object has no attribute 'headers'
Is it something incompatible between ns-3.33 and waf3-2.0.21? Is there a solution to the problem?
ns-3.33 uses Waf version 2.0.21. I'm guessing the error you are seeing comes from adding some code that is not compatible with ns-3.33. Do you get this error when downloading and building a fresh copy of the ns-3.33 release? Note: most questions like this are handled in the ns-3-users@googlegroups.com forum.

spaCy's Dependency Parser

I was trying to play around with spaCy's dependency parser to extract aspects for aspect-based sentiment analysis.
I followed this link: https://remicnrd.github.io/Aspect-based-sentiment-analysis/
When I tried the following piece of the code on my data, I got an Error message.
import spacy
nlp = spacy.load('en')
dataset.review = dataset.review.str.lower()
aspect_terms = []
for review in nlp.pipe(dataset.review):
    chunks = [(chunk.root.text) for chunk in review.noun_chunks if chunk.root.pos_ == 'NOUN']
    aspect_terms.append(' '.join(chunks))
dataset['aspect_terms'] = aspect_terms
dataset.head(10)
The Error message was:
TypeError: object of type 'NoneType' has no len()
The Error was in this line:
for review in nlp.pipe(dataset.review):
Could someone please help me understand the issue here and how to resolve this. Thanks.
Writing the solution here in case it helps someone in the future.
I was getting the error because I had some empty rows in the review column.
I re-ran the code after removing the empty rows/rows with NaN values in the review column, and it works fine :)
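The cleanup step above can be sketched with pandas (the column name review is from the question; the sample data is made up):

```python
import pandas as pd

# Toy dataset with a NaN review, mimicking the problematic rows.
dataset = pd.DataFrame({"review": ["Great screen", None, "Battery dies fast"]})

# Drop rows whose review is NaN before feeding them to nlp.pipe().
dataset = dataset.dropna(subset=["review"]).reset_index(drop=True)
print(len(dataset))  # 2
```

After this, every value passed to nlp.pipe() is a real string, so spaCy no longer hits a None.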

Getting ImportError "cannot import name fpn_pb2" when trying to run training using the TensorFlow 2 Object Detection API

I am doing research in deep learning using the TensorFlow 2 Object Detection API. I am getting this error while running model training. I followed the Gilbert Tanner and Edje Electronics tutorials for the basic installation and environment setup. I am using the latest GitHub commit of the TensorFlow Object Detection API. I converted all the .proto files into .py files but am still facing this error. I am attaching a screenshot of the error; please check it and let me know if you can help.
Thanks in advance.
I had the same problem, and also with target_assigner.proto and center_net.proto. You have to add all three files (.\object_detection\protos\fpn.proto, .\object_detection\protos\target_assigner.proto and .\object_detection\protos\center_net.proto) to the protoc command. So the whole command should be:
protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\fpn.proto .\object_detection\protos\target_assigner.proto .\object_detection\protos\center_net.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
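Listing every .proto file by hand is easy to get wrong; a small Python helper can build the same command by globbing the directory instead (a sketch, assuming it is run from the directory that contains object_detection; the helper name is mine):

```python
import glob

def build_protoc_cmd(proto_files):
    """Assemble the protoc invocation for a list of .proto files."""
    return ["protoc", "--python_out=."] + sorted(proto_files)

# Pick up every .proto so none (fpn, target_assigner, center_net, ...) is missed.
protos = glob.glob("object_detection/protos/*.proto")
cmd = build_protoc_cmd(protos)
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to actually compile
print(cmd[:2])  # ['protoc', '--python_out=.']
```

This way, newly added .proto files in future commits are compiled automatically.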

TensorFlow in PyCharm ValueError: Failed to find data adapter that can handle input

I am following the TensorFlow specialization from Coursera, where a certain piece of code works absolutely fine in Google Colab, but when I try to run it locally in PyCharm, it gives the following error:
Failed to find data adapter that can handle input
Any suggestions?
Can you tell me the code where the error occurred?
It should be available in the logs under your PyCharm console.
Looking at your comments, it seems that the model is expecting an array while you provided a list.
I was facing the same issue. It turns out the data was in the form of a Python list. I had to convert the fields into NumPy arrays like:
import numpy as np

training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
That's it!
Try it out and let me know if it works.
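To see the conversion above in isolation: Keras data adapters accept NumPy arrays but not plain nested Python lists, so wrapping the lists fixes the input handling (toy data, not from the course):

```python
import numpy as np

# Padded sequences as a plain Python list of lists (what model.fit() rejects).
training_padded_list = [[1, 2, 3, 0], [4, 5, 0, 0]]
training_labels_list = [0, 1]

# Convert to NumPy arrays, which model.fit() can consume directly.
training_padded = np.array(training_padded_list)
training_labels = np.array(training_labels_list)
print(training_padded.shape)  # (2, 4)
```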

TensorBoard checkpoint: Access is denied; Input/output error

I am trying to create a TensorBoard callback in Jupyter (Anaconda) the following way. The error occurs when write_images = True; otherwise, this code works fine. Any reason why this happens?
log_dir = "logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
                                                      histogram_freq=1,
                                                      write_graph=True,
                                                      write_images=False,
                                                      update_freq='epoch',
                                                      profile_batch=3,
                                                      embeddings_freq=1)
And I get
UnknownError: Failed to rename: logs\20200219-202538\train\checkpoint.tmp67d5ca45d1404cc584a86cf42d2761d3 to: logs\20200219-202538\train\checkpoint : Access is denied.
; Input/output error
Seems to be random on which epoch it occurs.
I had something similar; it seems the path where you want to save the checkpoint, which TensorBoard refers to, is not available or access to it is denied. Do you know Colab? I would suggest you copy your code and run your training up there (only if your dataset isn't too large). You can copy your dataset into your Google Drive and access it with Colab. If it works in Colab, then you probably don't have a problem with your code, but rather with your Anaconda restrictions.
Mount Google Drive (Colab), Colab basics
I know I couldn't solve your problem, but perhaps this can help you and boost your training speed with a juicy free cloud GPU.
I had the same issue (I was running on a Windows machine). I manually gave full permissions to the folder (right-click on the folder, edit permissions, and give full access to the 'Everyone' user) and everything went fine.
If you are working on a Unix system, I think you can try to do the same (chmod 777 <dir_name>).
P.S. Be aware of 'full permission' and 'chmod 777': anyone with access to the system can then view/edit the contents of the folder.
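The Unix-side fix can also be done from Python with the standard library (a sketch; the directory here is a temporary stand-in for the real log dir, and the same security caveat about mode 777 applies):

```python
import os
import stat
import tempfile

# Example directory standing in for the TensorBoard log directory.
log_dir = tempfile.mkdtemp(prefix="tb_logs_")

# Equivalent of `chmod 777 <dir_name>`: read/write/execute for everyone.
os.chmod(log_dir, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
print(oct(os.stat(log_dir).st_mode & 0o777))  # 0o777
```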