Install TensorFlow Object Detection API - 'cp' error

I'm trying to install the TensorFlow Object Detection API, following several tutorials and the official documentation (TensorFlow Object Detection API). I've hit the same error multiple times when running the following command in the command prompt:
>cp object detection/packages/tf2/setup.py .
'cp' is not recognized as an internal or external command, operable program or batch file.
Any help would be appreciated. I've also tried using "copy" instead of "cp", but with no result.
P.S. I've installed GPU support as well.

You can use:
copy object_detection\packages\tf2\setup.py .
with backslashes instead of forward slashes. This works for me in cmd.

cp is a Unix command that the Command Prompt doesn't recognize, but PowerShell defines cp as an alias for Copy-Item. Try running it in PowerShell as
cp '.\object detection\packages\tf2\setup.py' .

I faced the same issue, but I went through these answers and figured it out, thank you. I used PowerShell with a minor change in how the path is defined. Instead of
cp object detection/packages/tf2/setup.py .
it should be object_detection, like below:
copy '.\object_detection\packages\tf2\setup.py' .
Check this website for more details.

copy "object detection\packages\tf2\setup.py" .
Try this command
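For completeness, a hedged sketch of the surrounding steps from the official install instructions, assuming you are running from the models/research directory of the TensorFlow models repository:
copy object_detection\packages\tf2\setup.py .
python -m pip install .
The second command installs the Object Detection API package described by the copied setup.py.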


TensorFlow Error: "Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory"

I am trying to use a repository I found on GitHub (https://github.com/AntonMu/TrainYourOwnYOLO) to train a YOLO model for a text detection application. However, when running one of the provided scripts to download and convert YOLO weights, I get the following errors: [screenshot of error message]
I know that the issue is that I am missing libraries, however I am not having much success finding/downloading those libraries.
So far I have tried running:
sudo apt-get update
sudo apt install nvidia-cuda-toolkit
I have also tried running:
sudo find / -name 'libcudart.so*'
However, I am getting 'Permission denied' messages.
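As an aside, "Permission denied" messages from find typically come from pseudo-filesystems such as /proc rather than from the search itself failing; a common workaround (a sketch, not specific to this setup) is to discard stderr so only real matches are printed:
sudo find / -name 'libcudart.so*' 2>/dev/null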
I was going to try the advice listed on this blog: https://dominoc925.blogspot.com/2021/08/how-to-install-cuda-11-on-ubuntu-2004.html
However, I am very new to doing projects like this and am worried about running commands that will mess with my computer. Is this a safe/correct solution?
If anybody could offer any other recommendations, I would greatly appreciate it. (OR if you know of any simpler methods of running YOLO on images tagged with VoTT) Thank you!
Check that your TensorFlow, CUDA and cuDNN versions match correctly:
https://www.tensorflow.org/install/source
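For example, you can print the versions actually installed and compare them against the table on that page (a sketch: tf.sysconfig.get_build_info() is only available from TensorFlow 2.4 onward, and the cuDNN header path is an assumption about a default install):
python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "import tensorflow as tf; print(tf.sysconfig.get_build_info()['cuda_version'])"
nvcc --version                                        # CUDA toolkit version
grep -A 2 'CUDNN_MAJOR' /usr/include/cudnn_version.h  # cuDNN version (header path assumed)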

Running python commands in a terminal in Google Colab

I've uploaded a directory of code to Google Colab. I need to run Python command lines in a terminal that I'm unable to open.
I tried each and every solution suggested in How can I run shell (terminal) in Google Colab? but to no avail.
Updated 2022-02-02
Can you try executing your Python scripts using the exclamation mark ! directly from a Colab cell?
I believe I encountered an identical issue back in my Colab days: the process would hang or freeze after some time, especially when dealing with GBs of datasets. So, ultimately I just ran everything with the exclamation mark directly from my Colab cell to resolve the issue, for example as sketched below.
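A minimal sketch of what that looks like (the script path and flag are hypothetical; substitute your own):
!python /content/my_project/train.py --epochs 10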
Have you tried this with the following syntax?
!pip install google-colab-shell           # install the helper package
from google_colab_shell import getshell   # import the shell helper
getshell()                                # spawns a terminal in the cell output
getshell(height=400)                      # optionally set the terminal height
I understand that you have tried the solution from the other post, but just in case you missed out the ! and ignored the warning message: a missing exclamation mark may be exactly what was causing the shell not to spawn.

Is it possible to save a Tensorboard session?

I'm using Tensorboard and would like to save my report and send it (by email, outside my organization) without losing its interactive abilities. I've tried saving it as a complete HTML page, but that didn't work.
Anyone encountered the same issue and found a solution?
Have you seen tensorboard.dev?
This page allows you to host your tensorboard experiment & share it with others using a link (it's still interactive) for free.
Also you can use it from the command line; try this from your CLI for more information:
$ tensorboard dev --help
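For example, uploading a log directory returns a shareable link when it finishes (a sketch assuming your event files live in ./logs; the experiment name is made up):
$ tensorboard dev upload --logdir ./logs --name "My experiment"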

Submit a Keras training job to Google cloud

I am trying to follow this tutorial:
https://medium.com/@natu.neeraj/training-a-keras-model-on-google-cloud-ml-cb831341c196
to upload and train a Keras model on Google Cloud Platform, but I can't get it to work.
Right now I have downloaded the package from GitHub, and I have created a cloud environment with AI-Platform and a bucket for storage.
I am uploading the files (with the suggested folder structure) to my Cloud Storage bucket (basically to the root of my storage), and then trying the following command in the cloud terminal:
gcloud ai-platform jobs submit training JOB1 \
    --module-name=trainer.cnn_with_keras \
    --package-path=./trainer \
    --job-dir=gs://mykerasstorage \
    --region=europe-north1 \
    --config=gs://mykerasstorage/trainer/cloudml-gpu.yaml
But I get errors. First, the cloudml-gpu.yaml file can't be found ("no such folder or file"), and when I try to just remove it, I get errors because it says the __init__.py file is missing, but it isn't, even if it is empty (which it was when I downloaded it from the tutorial's GitHub). I am guessing I haven't uploaded it the right way.
Any suggestions of how I should do this? There is really no info on this in the tutorial itself.
I have read in another guide that it is possible to let gcloud package and upload the job directly, but I am not sure how to do this, or where to write the commands: in my terminal with the gcloud command, or in the Cloud Shell in the browser? And how do I define the path where my Python files are located?
I should mention that I am working on a Mac, and am pretty new to using Keras and Python.
I was able to follow the tutorial you mentioned successfully, with some modifications along the way.
I will mention all the steps although you made it halfway as you mentioned.
First of all create a Cloud Storage Bucket for the job:
gsutil mb -l europe-north1 gs://keras-cloud-tutorial
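Optionally, you can confirm the bucket was created before uploading anything:
gsutil ls -b gs://keras-cloud-tutorial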
To answer your question on where you should write these commands: it depends on where you want to store the files that you will download from GitHub. In the tutorial you posted, the writer is using his own computer to run the commands, and that's why he initializes the gcloud command with gcloud init. However, you can submit the job from the Cloud Shell too, if you download the needed files there.
The only files we need from the repository are the trainer folder and the setup.py file. So, if we put them in a folder named keras-cloud-tutorial we will have this file structure:
keras-cloud-tutorial/
├── setup.py
└── trainer/
    ├── __init__.py
    ├── cloudml-gpu.yaml
    └── cnn_with_keras.py
Now, a possible reason for the ImportError: No module named eager error is that you might have changed the runtimeVersion inside the cloudml-gpu.yaml file. As we can read here, eager was introduced in Tensorflow 1.5, so if you have specified an earlier version, this error is expected. The structure of cloudml-gpu.yaml should be like this:
trainingInput:
  scaleTier: CUSTOM
  # standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs
  masterType: standard_gpu
  runtimeVersion: "1.5"
Note: "standard_gpu" is a legacy machine type.
Also, the setup.py file should look like this:
from setuptools import setup, find_packages

setup(name='trainer',
      version='0.1',
      packages=find_packages(),
      description='Example on how to run keras on gcloud ml-engine',
      author='Username',
      author_email='user@gmail.com',
      install_requires=[
          'keras==2.1.5',
          'h5py'
      ],
      zip_safe=False)
Attention: As you can see, I have specified version 2.1.5 of keras. This is because if I don't, the latest version is used, which has compatibility issues with versions of Tensorflow earlier than 2.0.
If everything is set, you can submit the job by running the following command inside the folder keras-cloud-tutorial:
gcloud ai-platform jobs submit training test_job \
    --module-name=trainer.cnn_with_keras \
    --package-path=./trainer \
    --job-dir=gs://keras-cloud-tutorial \
    --region=europe-west1 \
    --config=trainer/cloudml-gpu.yaml
Note: I used gcloud ai-platform instead of gcloud ml-engine command although both will work. At some point in the future though, gcloud ml-engine will be deprecated.
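Once the job is submitted, you can monitor it from the same terminal with the standard gcloud subcommands:
gcloud ai-platform jobs describe test_job       # current state of the job
gcloud ai-platform jobs stream-logs test_job    # follow the training logs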
Attention: Be careful when choosing the region in which the job will be submitted. Some regions do not support GPUs and will throw an error if chosen. For example, if in my command I set the region parameter to europe-north1 instead of europe-west1, I will receive the following error:
ERROR: (gcloud.ai-platform.jobs.submit.training) RESOURCE_EXHAUSTED:
Quota failure for project . The request for 1 K80
accelerators exceeds the allowed maximum of 0 K80, 0 P100, 0 P4, 0 T4,
0 TPU_V2, 0 TPU_V3, 0 V100. To read more about Cloud ML Engine quota,
see https://cloud.google.com/ml-engine/quotas.
- '@type': type.googleapis.com/google.rpc.QuotaFailure
  violations:
  - description: The request for 1 K80 accelerators exceeds the allowed maximum of
      0 K80, 0 P100, 0 P4, 0 T4, 0 TPU_V2, 0 TPU_V3, 0 V100.
    subject:
You can read more about the features of each region here and here.
EDIT:
After the completion of the training job, there should be 3 folders in the bucket that you specified: logs/, model/ and packages/. The model is saved in the model/ folder as an .h5 file. Keep in mind that if you set a specific folder for the destination, you should include the '/' at the end. For example, set gs://my-bucket/output/ instead of gs://my-bucket/output; if you do the latter, you will end up with folders named output, outputlogs and outputmodel, and inside output there should be packages. The link on the job page directs to the output folder, so make sure to check the rest of the bucket too!
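To see exactly what the job wrote, you can list the bucket recursively (bucket name from the example above):
gsutil ls -r gs://keras-cloud-tutorial/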
In addition, on the AI-Platform job page you should be able to see information regarding CPU, GPU and network utilization.
Also, I would like to clarify something as I saw that you posted some related questions as an answer:
Your local environment, whether it is your personal Mac or the Cloud Shell, has nothing to do with the actual training job. You don't need to install any specific package or framework locally. You just need to have the Google Cloud SDK installed (in the Cloud Shell it is, of course, already installed) to run the appropriate gcloud and gsutil commands. You can read more on how exactly training jobs on the AI-Platform work here.
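A quick sanity check that the SDK is ready locally (standard SDK commands):
gcloud --version                 # confirms the SDK is installed
gcloud auth list                 # confirms you are authenticated
gcloud config get-value project  # confirms the right project is active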
I hope that you will find my answer helpful.
I got it to work halfway now, by not uploading the files but just running the upload commands from my local terminal. However, there was an error while it was running, ending in "job failed".
It seems it was trying to import something from the TensorFlow backend, "from tensorflow.python.eager import context", but there was an ImportError: No module named eager.
I have tried "pip install tf-nightly" which was suggested at another place, but it says I don't have permission or I am loosing the connection to cloud shell(exactly when I try to run the command).
I have also tried making a virtual environment locally to match the one on gcloud (with Conda): Python 3.5, Tensorflow 1.14.0 and Keras 2.2.5, which should be supported on gcloud.
The Python program works fine in this environment locally, but I still get the ImportError: No module named eager when trying to run the job on gcloud.
I am passing the flag --python-version 3.5 when submitting the job, but when I run "python -V" in the Google Cloud Shell, it says Python 2.7. Could this be the issue? I have not found a way to update the Python version from the Cloud Shell prompt, but Google Cloud is supposed to support Python 3.5. If this is indeed the issue, any suggestions on how to upgrade the Python version on Google Cloud?
It is also possible to manually start a new job in the Google Cloud web interface. Doing this, I get a different error message: ERROR: Could not find a version that satisfies the requirement cnn_with_keras.py (from versions: none) and No matching distribution found for cnn_with_keras.py, where cnn_with_keras.py is my Python code from the tutorial, which runs fine locally.
Really don't know what to do next. Any suggestions or tips would be very helpful!
The issue with the GPU is solved now. It was something as simple as: my Google Cloud account had GPU settings disabled and needed to be upgraded.

Creating network files in SUMO using NETCONVERT

Problem when calling netconvert in sumo:
I am trying to create my own scenario for simulation purposes.
I am using OpenStreetMaps for this.
python osmWebWizard.py
opens the browser, and I select the area, which I then download.
netconvert --osm-files osm_bbox.osm.xml -o osm.net.xml
The error message I get is:
Error: Cannot import network data without PROJ-Library. Please install packages proj before building sumo
Warning: Environment variable SUMO_HOME is not set, using built in type maps.
Quitting (on error).
My attempt to fix the problem is:
sudo apt-get install libproj*
But it seems like a dead end there and I am out of options.
Thank you.
EDIT
I have a gut feeling it has to do with libproj0 not being available anymore.
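For what it's worth, a sketch of a possible fix on Debian/Ubuntu systems (the package names are assumptions about the distribution, and the SUMO_HOME path is the usual default and may differ on your machine):
sudo apt-get update
sudo apt-get install libproj-dev proj-bin   # PROJ library plus headers
export SUMO_HOME=/usr/share/sumo            # silences the SUMO_HOME warning (path assumed)
If netconvert was built without PROJ support, reinstalling or rebuilding SUMO after installing PROJ may also be necessary.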