How to run Gazebo without a GUI for ROS2? - google-colaboratory

I want to train a reinforcement-learning model, but the Gazebo simulator always crashes and the turtle model stops moving after about ten episodes.
I tried setting the GUI flag to false, but it did not work. So I want to run the reinforcement-learning training without a GUI, or find out how to use ROS2 on Colab for reinforcement-learning training.
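For reference, running Gazebo "headless" usually means starting only the `gzserver` process (physics and plugins) without the `gzclient` GUI. A hedged sketch of the commands, assuming a typical `gazebo_ros` setup (the world file and package names are placeholders, and the exact launch arguments vary by package):

```shell
# Start only the Gazebo server (no GUI client):
gzserver --verbose my_world.world

# Or, if your launch file exposes a flag for it (argument names vary):
ros2 launch gazebo_ros gazebo.launch.py gui:=false
```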

Related

Facial Recognition on Raspberry Pi

I am trying to develop a facial recognition system on a Raspberry Pi 4 for a university project. I have to use Google AutoML, FaceNet, and TensorFlow. I have some understanding of what they are (I think); I just want some guidance on what each really does and how they affect each other's operation when it comes to facial recognition. Any guidance would be really appreciated — I just need to be shown the right path, that is all!
You can find a lot of articles on Medium, GitHub, YouTube, Instructables, and in the TensorFlow examples on deploying face recognition on a Raspberry Pi, which you can use as a blueprint to get a head start. But you will have to play with your Raspberry Pi a bit to gain some ground skills if you are unfamiliar with the hardware details and other skills like capturing frames from video, and training and evaluating data.
There is a stable TensorFlow wheel by PINTO0309 for installing TensorFlow on a Raspberry Pi. A USB accelerator is recommended to speed up the computation. You can also use TensorFlow Lite for edge devices like the Raspberry Pi.
Once the model is trained, you can convert it into a smaller TensorFlow Lite model, or send requests over a REST API to a server to get results. Post queries here on SO when you hit an obstacle.
Attaching below links for reference.
https://www.tensorflow.org/lite/examples
https://github.com/PINTO0309/Tensorflow-bin#usage
https://bhashkarkunal.medium.com/face-recognition-real-time-webcam-face-recognition-system-using-deep-learning-algorithm-and-98cf8254def7
https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/
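To make the pieces concrete: FaceNet maps a face crop to an embedding vector, and recognition then boils down to comparing embeddings. A minimal NumPy sketch of that comparison step (the threshold value is an assumption; real embeddings would come from the FaceNet/TFLite model, not from this snippet):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb1, emb2, threshold=0.7):
    """Decide whether two face embeddings belong to the same person.
    The 0.7 threshold is a placeholder; tune it on your own data."""
    return cosine_similarity(emb1, emb2) >= threshold
```

This is why the model choice (AutoML vs. FaceNet) and the deployment format (TFLite) are somewhat independent concerns: whatever produces the embeddings, the matching step looks the same.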

Run Faster-rcnn on mobile iOS

I have a Faster R-CNN model that I trained and that works on my Google Cloud instance with a GPU (trained with the Google models API).
I want to run it on mobile. I found some GitHub repos that show how to run SSD MobileNet, but I could not find one that runs Faster R-CNN.
Real time is not my concern for now.
I have an iPhone 6 on iOS 11.4.
The model can be run with Metal, Core ML, TensorFlow Lite...
but for a POC I need it to run on mobile without training a new network.
Any help?
Thanks!
Faster R-CNN requires a number of custom layers that are not available in Metal, CoreML, etc. You will have to implement these custom layers yourself (or hire someone to implement them for you, wink wink).
I'm not sure if TF-lite will work. It only supports a limited number of operations on iOS, so chances are it won't have everything that Faster R-CNN needs. But that would be the first thing to try. If that doesn't work, I would try a Core ML model with custom layers.
See here info about custom layers in Core ML: http://machinethink.net/blog/coreml-custom-layers/
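To give a feel for what "custom layers" means here: one of the ops Faster R-CNN needs in its proposal stage is non-maximum suppression over region proposals, which you would have to implement yourself as a custom layer. A pure-NumPy sketch of greedy NMS (a sketch of the op itself, not of the Core ML plumbing around it):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of kept boxes, highest score first."""
    order = scores.argsort()[::-1]   # process boxes by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_threshold]
    return keep
```

If an op like this is missing from the TF-lite op set or from Core ML's built-in layers, that is exactly the gap a custom layer has to fill.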

Train Deep learning Models with AMD

I am currently using a Lenovo IdeaPad PC with AMD Radeon graphics in it. I am trying to run an image classifier model using convolutional neural networks. The dataset contains 50000 images and it takes too long to train the model. Can someone tell me how I can use my AMD GPU to speed up the process? I think AMD graphics cards do not support CUDA. So is there any way around this?
PS: I am using Ubuntu 17.10
What you're asking for is OpenCL support, or in more grandiose terms: the democratization of accelerated devices. There seems to be tentative support for OpenCL, I see some people testing it as of early 2018, but it doesn't appear fully baked yet. The issue has been tracked for quite some time here:
https://github.com/tensorflow/tensorflow/issues/22
You should also be aware of development on XLA, an attempt to virtualize TensorFlow over an LLVM (or LLVM-like) virtualization layer, making it more portable. It is currently cited as being in alpha as of early 2018.
https://www.tensorflow.org/performance/xla/
There isn't yet a simple solution, but these are the two efforts to follow along these lines.

Real Time Object detection using TensorFlow

I have just started experimenting with Deep Learning and Computer Vision technologies. I came across this awesome tutorial. I have set up the TensorFlow environment using Docker, trained it on my own sets of objects, and it provided good accuracy when I tested it out.
Now I want to make the same more real-time. For example, instead of giving an image of an object as the input, I want to utilize a webcam and make it recognize the object with the help of TensorFlow. Can you guys guide me with the right place to start with this work?
You may want to look at TensorFlow Serving so that you can decouple compute from sensors (and distribute the computation), or our C++ API. Beyond that, TensorFlow was written emphasizing throughput rather than latency, so batch samples as much as you can. You don't need to run TensorFlow on every frame, so input from a webcam should definitely be in the realm of possibility. Making the network smaller and buying better hardware are popular options.
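The "don't run it on every frame, and batch as much as you can" advice can be sketched in plain Python. Here `detect_batch` is a hypothetical stand-in for the actual TensorFlow inference call, and `frames` would come from the webcam (e.g. via OpenCV's `VideoCapture`):

```python
def run_on_stream(frames, detect_batch, every_n=3, batch_size=4):
    """Run detection on every Nth frame, in batches of batch_size; for the
    frames in between, reuse the most recent detection result."""
    pending, last, results = [], None, []
    for i, frame in enumerate(frames):
        if i % every_n == 0:
            pending.append(frame)
            if len(pending) == batch_size:
                last = detect_batch(pending)[-1]  # keep the newest result
                pending = []
        results.append(last)
    return results
```

Tuning `every_n` trades freshness for throughput, and `batch_size` trades latency for GPU utilization; the right values depend on your model and hardware.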

Facial Feature Detection

I am currently working on a project with a hospital where I need to detect facial features through an iPhone app to determine whether any facial deformities exist.
For example, I found https://github.com/auduno/clmtrackr which shows facial feature detection points. I thought I might look at the code and port it to Objective-C. The problem is that when I tested clmtrackr on a face with a deformity, it did not work as intended.
You can check it also: http://www.auduno.com/clmtrackr/clm_image.html
I also tried another image; both were inconsistent in detecting all the feature points.
Do you know of any API that could do this? Or do you know what techniques I should look up so that I can make one myself?
Thank you
There are several libraries for facial landmark detection:
Dlib (C++/Python)
CLM-Framework (C++)
Face++ (FacePlusPlus): Web API
OpenCV. Here's a tutorial: http://www.learnopencv.com/computer-vision-for-predicting-facial-attractiveness/
You can read more at: http://www.learnopencv.com/facial-landmark-detection/
You can use dlib, since its face detection algorithm is fast and it also includes a pre-trained landmark model:
https://github.com/davisking/dlib/
https://github.com/davisking/dlib-models
For integrating it into iOS, refer to "how to build DLIB for iOS".
Alternatively, you could use OpenFace. To check it out, just download the binaries http://www.cl.cam.ac.uk/~tb346/software/OpenFace_0.2_win_x86.zip and you're ready to go with the command line https://github.com/TadasBaltrusaitis/OpenFace/wiki/Command-line-arguments
Note: I would not recommend OpenCV, since the training process and results are not so consistent.
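As for techniques to look up: once you have landmark points (e.g. from dlib's 68-point model), one crude way to quantify a deformity is left/right asymmetry of mirrored landmark pairs. A hedged NumPy sketch — the pairing scheme, midline estimate, and any decision threshold are assumptions, not a validated clinical measure:

```python
import numpy as np

def asymmetry_score(points, pairs, midline_x):
    """points: {index: (x, y)} 2D landmarks.
    pairs: list of (left_idx, right_idx) landmark indices that mirror each
    other across the face's vertical midline at x = midline_x.
    Returns the mean absolute difference of each pair's distance to the
    midline; 0.0 for a perfectly symmetric face."""
    diffs = [abs(abs(points[l][0] - midline_x) - abs(points[r][0] - midline_x))
             for l, r in pairs]
    return float(np.mean(diffs))
```

A higher score means the face is less symmetric; what counts as "deformity" would have to be calibrated against clinical data.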