How to integrate an RTSP live stream link into the YOLO algorithm

I am using this command
!./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights https://www.youtube.com/watch?v=smZRzehsXHA -i 0
in Google Colab, but I am getting
Total BFLOPS 65.864
Allocate additional workspace_size = 18.91 MB
Loading weights from yolov3.weights...
seen 64
Done!
video file: https://www.youtube.com/watch?v=smZRzehsXHA
Video-stream stopped!
Stream closed.
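For context, darknet's demo mode expects a video file or a direct stream URL that OpenCV can open, not a YouTube page URL. A minimal sketch of one possible workaround (not from the original post, and assuming the youtube_dl package is installed in the Colab session) is to resolve the page to a direct stream URL first and then pass that URL to the darknet command:

import youtube_dl

# Hypothetical example: resolve the YouTube page to a direct stream URL.
# The exact info fields can vary with the youtube_dl version.
video_page = "https://www.youtube.com/watch?v=smZRzehsXHA"
with youtube_dl.YoutubeDL({"format": "best"}) as ydl:
    info = ydl.extract_info(video_page, download=False)
    stream_url = info["url"]  # direct URL that OpenCV (and thus darknet demo) can open
print(stream_url)
# Then run the darknet demo with stream_url in place of the YouTube page URL.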

Related

How to work with a large training set when dealing with auto-encoders on Google Colaboratory?

I am training an auto-encoder (Keras) on Google Colab. However, I have 25,000 input images and 25,000 output images. I tried to:
1- Copy the large file from Google Drive to Colab each time (takes 5-6 hours).
2- Convert the set to a numpy array, but when normalizing the images the size gets a lot bigger (from 7 GB to 24 GB, for example), and then I cannot fit it into RAM.
3- I cannot zip and unzip my data.
So please, if anyone knows how to convert it into a numpy array (and normalize it) without ending up with a large file (24 GB), let me know.
What I usually do:
Zip all the images and upload the .zip file to your Google Drive.
Unzip it in your Colab:
from zipfile import ZipFile
with ZipFile('data.zip', 'r') as zip:
    zip.extractall()
All your images are now unzipped and stored on the Colab disk, so you have faster access to them.
Use generators in Keras such as flow_from_directory, or create your own generator (for example, as sketched after this answer).
Use your generator when you fit your model:
model.fit(train_generator, steps_per_epoch=ntrain // batch_size,
          epochs=epochs, validation_data=val_generator,
          validation_steps=nval // batch_size)
where ntrain and nval are the number of images in your training and validation datasets.
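A minimal sketch of the generator approach, assuming the unzipped images are arranged in class subfolders under train/ and val/ (the folder layout, image size, and batch size are assumptions, not part of the original answer). Normalizing inside the generator means each batch is rescaled on the fly, so the full normalized array never has to fit in RAM:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 32
# rescale normalizes each batch as it is loaded, instead of up front
datagen = ImageDataGenerator(rescale=1.0 / 255)

# class_mode='input' yields (image, image) pairs, which is what an auto-encoder needs
train_generator = datagen.flow_from_directory(
    'train/', target_size=(128, 128), batch_size=batch_size, class_mode='input')
val_generator = datagen.flow_from_directory(
    'val/', target_size=(128, 128), batch_size=batch_size, class_mode='input')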

YOLO (Darknet): How to change the output file directory of a detection (predictions.jpg)?

The Darknet guide to detect objects in images using pre-trained weights is here
The command to run is:
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
The result of the detection is currently saved in the current directory.
How can I change the directory where the output file predictions.jpg is saved?
It is in detector.c; there is a function call like this: save_image(im, "predictions"); changing that path changes where the result is written.
import subprocess
command = "./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights {0} -dont_show"

def __run_shell_command(command):
    output = subprocess.check_output(command, shell=True).decode("ascii")
    return output

images = ["a.jpg", "b.jpg"]
for image in images:
    __run_shell_command(command.format(image))
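Note that darknet rewrites predictions.jpg in the working directory on every run, so a way to redirect the output without editing detector.c is to move the file after each detection. A minimal sketch reusing the loop above (the results folder name is an assumption):

import os
import shutil

output_dir = "results"  # hypothetical target directory
os.makedirs(output_dir, exist_ok=True)

for image in images:
    __run_shell_command(command.format(image))
    # move the freshly written predictions.jpg to a per-image file in the target directory
    shutil.move("predictions.jpg",
                os.path.join(output_dir, os.path.basename(image) + ".predictions.jpg"))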

Train on a Colab TPU without data from GCP, for data that can all be loaded into memory

The official TPU documentation says that training files must be on GCP:
https://cloud.google.com/tpu/docs/troubleshooting#cannot_use_local_filesystem
But I have a smaller dataset (though training would still take a very long time, since it is based on sampling/permutations) which can be loaded entirely into memory (1-2 GB). I am wondering if I can somehow just transfer the data objects to the TPU directly and have it use them for training.
If it makes a difference, I am using Keras to do my TPU training.
What I looked at so far:
It seems that you can load certain data onto individual TPU cores:
self.workers = ['/job:worker/replica:0/task:0/device:TPU:' + str(i) for i in range(num_tpu_cores)]
with tf.device(self.workers[0]):
    vecs = vectors[i]
However, I am not sure whether this would translate into coordinated training among all the TPU cores.
You can read files with Python:
with open(image_path, "rb") as local_file:
    img = local_file.read()
1-2 GB may be too big for the TPU. If you run out of memory, split your data into smaller portions.
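For the Keras case specifically, here is a minimal sketch of feeding in-memory arrays to a TPU through tf.data (assuming a TF 2.x Colab TPU runtime; the placeholder data and model are not from the original question). Wrapping the arrays with from_tensor_slices sidesteps the GCS requirement because batches are streamed from host memory:

import numpy as np
import tensorflow as tf

# Connect to the Colab TPU
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)  # tf.distribute.experimental.TPUStrategy on older TF 2.x

# Placeholder in-memory data; replace with your own arrays
x_train = np.random.rand(1024, 32).astype(np.float32)
y_train = np.random.randint(0, 2, size=(1024,)).astype(np.int32)

# No GCS bucket needed: the dataset is built from host-memory tensors
dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .shuffle(1024)
           .batch(128, drop_remainder=True))

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.fit(dataset, epochs=3)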

How to run a yolov2-lite tflite model on the Coral Edge TPU USB Accelerator?

I would like to make sure whether the following steps, which I executed to get the tflite version of the yolov2-lite model, are correct or not.
Step 1: Saving the graph and weights to a protobuf file
flow --model cfg/yolov2-tiny.cfg --load bin/yolov2-tiny.weights --savepb
This command created the build_graph folder with yolov2-tiny.pb and yolov2-tiny.meta.
Step 2: Converting pb to tflite
I executed the piece of code below to get yolov2-tiny.tflite:
import tensorflow as tf

localpb = 'yolov2-tiny.pb'
tflite_file = 'yolov2-tiny.tflite'
print("{} -> {}".format(localpb, tflite_file))
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb,
    input_arrays=['input'],
    output_arrays=['output']
)
tflite_model = converter.convert()
open(tflite_file, 'wb').write(tflite_model)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
If the above steps I followed to get this tflite are correct, then please suggest the command to run this tflite file on the Coral Edge TPU USB Accelerator.
Thank you so much :)
Unfortunately, yolo models are not supported by the edgetpu compiler as of now. I recommend using mobile_ssd models.
For future reference, your pipeline should be:
1) Train the model
2) Convert to tflite
3) Compile it for the Edge TPU (the step that actually delegates the work onto the TPU)
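For step 3, a minimal sketch of loading an Edge TPU compiled model from Python (the file name is hypothetical, and this assumes the model has already been fully integer quantized and passed through the edgetpu_compiler):

from tflite_runtime.interpreter import Interpreter, load_delegate

# Assumption: model_edgetpu.tflite is the output of the Edge TPU compiler
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # routes supported ops to the Edge TPU
)
interpreter.allocate_tensors()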

Tensorflow Lite on Raspberry Pi - Installation

In my current project I'm using machine learning on the Raspberry Pi for sensor fusion. Since I heard about the release of TensorFlow Lite I'm really interested in deploying it and using it to run Lite models on the platform.
On the TensorFlow website there are hints for Android and iOS, but I couldn't find any hints about other platforms. Is there a (WIP) installation/compile guide for bringing TF Lite to the Raspi?
TIA
#all, if you are still trying to get TensorFlow Lite running on the Raspberry Pi 3, my pull request may be useful. Please look at https://github.com/tensorflow/tensorflow/pull/24194.
Following the steps, two apps (label_image and camera) can be run on the Raspberry Pi 3.
Best,
--Jim
There is a very small section on Raspberry PI in the TFLite docs at https://www.tensorflow.org/mobile/tflite/devguide#raspberry_pi. That section links to this GitHub doc with instructions for building TFLite on Raspberry PI - tensorflow/rpi.md.
There is no official demo app yet, but the first location says one is planned. It will probably be shared at that same location when ready (that is where the Android and iOS demo apps are described).
You can install the TensorFlow pip package on Raspberry Pi with "pip install tensorflow"; however, if you want only TFLite you can build a smaller pip package that has only the tflite interpreter (you can then do the conversion on another, bigger machine).
Info on how to do it is here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/pip_package
Then, you can use it. Here is an example of how you might use it!
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_float.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
cap = cv2.VideoCapture(0)  # open 0th web camera
while True:
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # resize to the model's expected (width, height); the input shape is [1, height, width, 3]
    frame = cv2.resize(frame, (input_details["shape"][2], input_details["shape"][1]))
    frame = np.reshape(frame, input_details["shape"]).astype(np.float32) / 128.0 - 1.0
    interpreter.set_tensor(input_details["index"], frame)
    interpreter.invoke()
    labels = interpreter.get_tensor(output_details["index"])
    top_label_index = np.argmax(labels, axis=-1)
Hope this helps.
I would suggest the following links:
The lightest-weight option is to use the TensorFlow Lite interpreter only. You can find more information by following this link: Install just the TensorFlow Lite interpreter
Keep in mind that if you use the interpreter only, you have to follow slightly different logic.
# Initiate the interpreter
interpreter = tf.lite.Interpreter(PATH_TO_SAVED_TFLITE_MODEL)
# Allocate memory for tensors
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Add a batch dimension if needed (data_tensor - your data input)
input_data = tf.expand_dims(data_tensor, axis=0)
# Predict
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# Obtain results
predictions = interpreter.get_tensor(output_details[0]['index'])
Build from source for the Raspberry Pi
Install TensorFlow with pip