How can I use the darknet library? - yolo

I want to use Darknet library in Python.
I've cloned the Darknet repository using this command:
git clone https://github.com/AlexeyAB/darknet.git
But when I run import darknet as dn,
it says No module named darknet.
How can I install the darknet module?
Is it possible using pip?

Check this tutorial: https://www.youtube.com/watch?v=5pYh1rFnNZs&t=408s
and its second part. Follow those steps and then try creating a Python file in which you import darknet. Make sure that the Python file from which you import darknet is in the same folder as darknet.py; otherwise you need to specify the path to darknet.py in your import statement (e.g. import f1.f2.f3.darknet).
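If darknet.py lives somewhere else entirely, another option is to add the cloned repository to sys.path before importing. A minimal sketch, assuming the repo was cloned to /home/user/darknet (replace with your actual clone path):
import sys
sys.path.append("/home/user/darknet")  # assumed path to the folder that contains darknet.py
import darknet as dn
Note that darknet.py is only a Python wrapper; it typically also needs the compiled shared library (libdarknet.so on Linux, darknet.dll on Windows), so build the repository first with make or the provided Windows build scripts.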

Related

Exported object detection model not saved in referenced directory

I am trying to run the TensorFlow object detection notebook from the Google Colab example that can be found here. I am running the example locally, and when I try to create the exported model using the model.export(export_dir) function, the model is not saved in the directory given by the export_dir option. Instead, the code seems to ignore the option and saves the model to a temporary directory in /tmp/something. I tried using both full and relative paths, but the model still goes to /tmp. My environment is Ubuntu in WSL2 and I am using a /mnt/n drive.
Any idea how to fix the issue?

How do I install TensorFlow 2 & the object_detection module?

Background
I've been trying to follow the tutorial in this video. The goal is to install TensorFlow and TensorFlow's object_detection module.
Goal
How do I install it so that I can follow the rest of the tutorial? I only want to install the CPU version.
Additional Information
Errors that I ran into
ERROR: Could not find a version that satisfies the requirement tensorflow==2.1.0 (from versions: None) ERROR: No matching distribution found for tensorflow
ERROR: tensorflow.whl is not a supported wheel on this platform.
Research
https://github.com/tensorflow/tensorflow/issues/39130
Tensorflow installation error: not a supported wheel on this platform
Prologue
I found this ridiculously complex; if anyone else has a simpler way to install this package, please let everyone else know.
Main resource is https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#set-env
Summary of Steps
Get the newest version of Python (64-bit), which you can install from https://www.python.org/downloads/
Create a virtual environment from that newest version of python
Get the latest version of TensorFlow from Google - https://www.tensorflow.org/install/pip#package-location
Install the latest version of TensorFlow using pip with the --upgrade flag and the link from the step above
Get latest version of protoc (data transfer protocol) - https://github.com/protocolbuffers/protobuf/releases
Install protoc and add its location to your Path so you can easily call it later
Get TensorFlow Garden files from here - https://github.com/tensorflow/models
Copy them to a location of your choice and add a models folder to the structure (see the layout below)
Compile Protobufs for each model from the TensorFlow Garden using protoc
Set up COCO API to connect to the COCO dataset
Copy the TensorFlow 2 setup file from the TensorFlow Garden's object_detection module
Run the installation for object_detection module & hope for the best
Detailed Descriptions
I ran into a problem when first attempting to install object_detection because my version of Python wasn't supported.
Get the latest version by going to this page - https://www.python.org/downloads/
Click "Download Python 3.9.X"
Once downloaded, run the installation file
Navigate to where python was installed and copy the path to the executable.
Open up command prompt by going Windows Key -> cmd
Navigate to where you would like to create the virtual environment by using the cd "path/to/change/directory/to"
then type "previously/copied/python/executable/path/python.exe" -m venv "name_of_your_virtual_environment"
TensorFlow seems to be distributed via the Google storage API rather than through pip; to find the link to the latest stable TensorFlow, use
this website: https://www.tensorflow.org/install/pip#package-location
Now grab the TensorFlow installation link that matches your version of python.
Since mine was version 3.9 and windows I got this link - https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.6.0-cp39-cp39-win_amd64.whl
Install TensorFlow by getting the python.exe from your virtual environment "name_of_your_virtual_environment"
"name_of_your_virtual_environment/Scripts/python.exe" -m pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.6.0-cp39-cp39-win_amd64.whl
Note that you have to use the --upgrade flag for some reason
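Once that finishes, you can quickly check that the install worked by printing the TensorFlow version from the same interpreter (same placeholder path as above):
"name_of_your_virtual_environment/Scripts/python.exe" -c "import tensorflow as tf; print(tf.__version__)"
It should print the version you installed, e.g. 2.6.0.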
Because TensorFlow is a Google project, it uses a special data interchange format called Protocol Buffers (protobuf)
Find the latest version of this tool by navigating to their website - https://github.com/protocolbuffers/protobuf/releases
Find the link under newest releases that matches your operating system aka windows and architecture x64
I chose https://github.com/protocolbuffers/protobuf/releases/download/v3.17.3/protoc-3.17.3-win64.zip
To install it, extract the .zip file and put it into "C:/Program Files/GoogleProtoc"
Get the folder location that has the protoc executable and add it to your environment variables
To edit your environment variables, press the Windows Key, search for "Environment Variables" and click on "Edit the system Environment Variables"
Then click "Environment Variables"
Navigate to the "Path" environment variable under your user, select it and click edit
Click new and paste the executable location of protoc, e.g. "C:/Program Files/GoogleProtoc/bin"
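To confirm the Path change worked, open a new command prompt and run:
protoc --version
It should print something like libprotoc 3.17.3 (whichever version you downloaded).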
Now to get the actual code for the object_detection module, which is supported by researchers and is separate from base TensorFlow
Navigate to TensorFlow Garden - https://github.com/tensorflow/models
Download or clone the repository
Copy the files to another location using the following structure
TensorFlow
-> models (You have to add this folder)
   -> community
   -> official
   -> orbit
   -> research
Restart your command prompt. It needs to be restarted to pick up changes to environment variables, in this case
Path, because you added protoc to it to make it easier to call from your command prompt
Again that is Windows Key -> Search cmd
Navigate inside the research folder with cd "TensorFlow/models/research/"
Run the following command to compile the Protobuf libraries: for /f %i in ('dir /b object_detection\protos\*.proto') do protoc object_detection\protos\%i --python_out=.
Install COCO API so that you can access the dataset. It is a requirement of TensorFlow's object_detection api
Ensure you are still in the folder "TensorFlow/models/research/"
Copy the setup python file to the folder you're in using copy object_detection\packages\tf2\setup.py .
Now use pip to perform the installation: "name_of_your_virtual_environment/Scripts/python.exe" -m pip install --use-feature=2020-resolver . (note the trailing dot, which tells pip to install from the current directory)
Move the setup python file for TensorFlow 2 into the directory which will install the object_detection module.
Go into "TensorFlow/models/research/object_detection/packages/tf2/setup.py" and move that to "TensorFlow/models/research/object_detection/setup.py"
Now run the installation process for the object_detection module
Open CMD and navigate to "TensorFlow/models/research/object_detection/" by using cd command
Using your virtual environment, run the script "name_of_your_virtual_environment/Scripts/python.exe" setup.py install
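To sanity-check the object_detection install, the TensorFlow Garden also includes a test script, at the time of writing located at object_detection/builders/model_builder_tf2_test.py, that is commonly used to verify everything was set up correctly. Run it from the research folder (same placeholder paths as above):
cd "TensorFlow/models/research"
"name_of_your_virtual_environment/Scripts/python.exe" object_detection/builders/model_builder_tf2_test.py
If everything installed correctly, the test run should finish with OK.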
Error Guides
ERROR: Could not find a version that satisfies the requirement tensorflow==2.1.0 (from versions: None) ERROR: No matching distribution found for tensorflow
This occurs because your version of Python isn't correct or the architecture is wrong (32-bit instead of 64-bit). Fix this by downloading a new version of Python and creating a new virtual environment.
ERROR: tensorflow.whl is not a supported wheel on this platform.
Similar to the above: your version of Python might be wrong, or you have selected the wrong link to the TensorFlow build on the Google Storage API. Start at the beginning: download the newest version of Python, create a new virtual environment, and then download the version of TensorFlow that matches your Python version and operating system (e.g. macOS, Linux or Windows).

libnvinfer.so.5: cannot open shared object file: No such file or directory

I'm using Ubuntu 16.04 and TensorFlow 1.13.1. Now I want to integrate TensorRT to improve my model's inference time. I downloaded and extracted the TensorRT 7 tar, installed the uff and graphsurgeon wheels from its package directory, and added its path to the system's LD_LIBRARY_PATH.
However, when I tried to import tensorflow.contrib.tensorrt, it gave me a file not found error. There is no libnvinfer.so.5 in my TensorRT 7 folder, only a libnvinfer.so.7.
Does this mean that TensorFlow 1.13.1 doesn't support TensorRT7? Should I use TensorRT5 instead?

error with importing numpy in binder beta

I am trying to share my code on GitHub using Binder beta. Binder generates an environment; however, it raises an error on importing the numpy library. The error is "ModuleNotFoundError: No module named 'numpy'".
How may I solve the problem?
Check that you created a .yml file in your repo, like they did here.
The environment.yml file should list all Python libraries on which
your notebooks depend, specified as though they were created using the
following conda commands:
source activate example-environment
conda env export --no-builds -f environment.yml
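For this numpy error specifically, a minimal environment.yml could look like the following (the environment name and Python version are just placeholder choices):
name: example-environment
dependencies:
  - python=3.9
  - numpy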

Using "Spacy package" on trained model: error "Can't locate model data"

I'm attempting to train the NER within SpaCy to recognize a new set of entities. Everything works just fine until I try to save and reload the model.
I'm attempting to follow the SpaCy doc recommendations from https://spacy.io/usage/training#saving-loading, so I have been saving with:
model.to_disk("save_this_model")
and then going to the Command Line and attempting to turn it into a package using:
python -m spacy package save_this_model saved_model_package
so I can then use
spacy.load('saved_model_package')
to pull the model back up.
However, when I'm attempting to use spacy package from the Command Line, I keep getting the error message "Can't locate model data"
I've looked in the save_this_model directory and there is a meta.json there, as well as folders for the various pipes (I've tried this with all pipes saved and with the non-NER pipes disabled; neither works).
Does anyone know what I might be doing wrong here?
I'm pretty inexperienced, so I think it's very possible that I'm attempting to make a package incorrectly or committing some other basic error. Thank you very much for your help in advance!
The spacy package command will create an installable and loadable Python package based on your model data, which you can store in a single .tar.gz file and then pip install. If you just want to load a model you've saved out, you usually don't even need to package it – you can simply pass the path to the model directory to spacy.load. For example:
nlp = spacy.load('/path/to/save_this_model')
spacy.load can take either a path to a model directory, a model package name or the name of a shortcut link, if available.
If you're new to spaCy and just experimenting with training models, loading them from a directory is usually the simplest solution. Model packages come in handy if you want to share your model with others (because you can share it as one installable file), or if you want to integrate it into your CI workflow or test suite (because the model can be a component of your application, like any other package it depends on).
So if you do want a Python package, you'll first need to build it by running the package setup from within the directory created by spacy package:
cd saved_model_package
python setup.py sdist
You can find more details here in the docs. The above command will create a .tar.gz archive in a dist/ directory, which you can then install in your environment.
pip install /path/to/en_example_model-1.0.0.tar.gz
If the model installed correctly, it should show up in the installed packages when you run pip list or pip freeze. To load it, you can call spacy.load with the package name, which is usually the language code plus the name you specified when you packaged the model. In this example, en_example_model:
nlp = spacy.load('en_example_model')