Is it possible to recognize a tracked face? - android-vision

Using the Mobile Vision Face API we can detect faces. But is it possible to define an image set, compare the tracked face against the images in that set, and so recognize which person the tracked face belongs to?
If so, could you please give some hints on how to organize this, or which additional tools to add for it?

No, the Mobile Vision Face API does not support facial recognition.
Although it does support tracking faces in video, the tracking mechanism uses position/size/velocity correlation to follow faces from frame to frame; it does not use a facial similarity metric.
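Since Mobile Vision stops at detection, the recognition step has to come from a separate library or service. As a minimal sketch of the idea, here is OpenCV's LBPH face recognizer (from the opencv-contrib-python package, entirely unrelated to Mobile Vision; file names and labels below are placeholders): train it on your image set, then classify each face crop the tracker hands you.

```python
import cv2
import numpy as np

# The LBPH recognizer lives in the contrib module (pip install opencv-contrib-python).
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Hypothetical training set: cropped grayscale face images plus integer labels.
train_images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
                for p in ["alice_1.png", "alice_2.png", "bob_1.png"]]
train_labels = np.array([0, 0, 1])  # 0 = Alice, 1 = Bob
recognizer.train(train_images, train_labels)

# At runtime: crop the face region the tracker reports, convert it to
# grayscale, and ask the recognizer which person in the set it most resembles.
tracked_crop = cv2.imread("tracked_face.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(tracked_crop)
print(label, distance)  # for LBPH, a smaller distance means a closer match
```

For more than a handful of identities, an embedding-based face model will generally be far more robust than LBPH.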

Related

Using data visualization in AR with ARKit

I am new to iOS Swift programming and to building AR apps with ARKit. I find that ARKit is more powerful than I imagined, and I have been able to achieve everything in my business case except placing data dashboards or charts in AR 3D space. I found ARCharts on Google, but it seems to be unusable.
My business case is to scan an object or product, recognize it, and display data related to it in the AR world, which should also include an analytics dashboard for the product's sales trends.
How can I achieve this? Please provide some pointers.
ARKit supports image detection, object detection, and plane detection. For your business case, image detection and object detection are the relevant features.
I recommend going through the tutorial below to get some basic knowledge of image and object detection:
Building a Museum App with ARKit 2. Happy coding ;)
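In ARKit you register reference images or scanned reference objects in the session configuration and get a callback when one is recognized in the camera feed; that callback is where you would anchor the product's dashboard. Purely to illustrate the kind of feature matching such image detection relies on, here is a small OpenCV sketch in Python (this is not the ARKit API; file names and thresholds are placeholders):

```python
import cv2

# Reference image of the product, and a captured camera frame (grayscale).
reference = cv2.imread("product_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints/descriptors in both images and brute-force match them.
orb = cv2.ORB_create()
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_ref, des_frm)

# Count strong matches; both thresholds below are guesses to tune.
strong = [m for m in matches if m.distance < 50]
if len(strong) > 25:
    print("Product recognized; show its sales dashboard here.")
```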

Logitech facial feature tracking

For my application I want to track facial features. I have tried some methods, but none of them provided the required robustness.
The first method is based on Haar face detection, Canny edge detection, contour finding, and keypoint detection; with this approach the landmarks change drastically from frame to frame.
Second, I used flandmark [http://cmp.felk.cvut.cz/~uricamic/flandmark/]; with this approach the obtained landmark points are not enough (flandmark detects only 7 points).
I have seen the Logitech avatars, and their facial feature tracking is accurate and robust. Any ideas how they are doing it? It would be helpful.
Check out this example in MATLAB. It uses the Viola-Jones algorithm to detect a face, and the Kanade-Lucas-Tomasi (KLT) point-tracking algorithm to track it.
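If you are not in MATLAB, the same pipeline is easy to reproduce with OpenCV in Python: detect once with a Haar cascade (Viola-Jones), then follow corner points with pyramidal Lucas-Kanade (KLT). A minimal sketch, assuming a webcam at index 0 and at least one visible face; the parameters are starting points to tune:

```python
import cv2
import numpy as np

# Viola-Jones detection using the Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)           # default webcam; adjust for your setup
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
x, y, w, h = faces[0]               # assume at least one face was found

# Seed the KLT tracker with corner points inside the detected face region.
mask = np.zeros_like(gray)
mask[y:y + h, x:x + w] = 255
points = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                 minDistance=5, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    nxt = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: follow each point into the new frame.
    points, status, _ = cv2.calcOpticalFlowPyrLK(gray, nxt, points, None)
    points = points[status.flatten() == 1].reshape(-1, 1, 2)
    gray = nxt
    for px, py in points.reshape(-1, 2):
        cv2.circle(frame, (int(px), int(py)), 2, (0, 255, 0), -1)
    cv2.imshow("KLT face tracking", frame)
    if cv2.waitKey(1) == 27:        # Esc quits
        break
```

Like the MATLAB demo, this tracks generic corner points rather than semantic landmarks; for robust named landmarks you would add something like dlib's 68-point shape predictor on top.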

Does the Microsoft Kinect SDK provide any API that takes a depth image as input and returns a skeleton?

I need help getting skeleton data from my modified depth image using the Kinect SDK.
I have two Kinects, and I can get a depth image from each of them. I transform the two depth images into 3D coordinates and combine them using OpenGL. Finally, I reproject the 3D scene to get a new depth image. (The resolution is also 640*480, and my new depth image runs at about 20 FPS.)
Now I want to get the skeleton from this new depth image using the Kinect SDK.
Can anyone help me with this?
Thanks
Does the Microsoft Kinect SDK provide any API that takes a depth image as input and returns a skeleton?
No.
I tried to find some libraries (not made by Microsoft) to accomplish this task, but it is not simple at all.
Try following this discussion on ResearchGate; there are some useful links (such as databases and early-stage projects) from which you can start if you want to develop your own library and share it with us.
I was hoping to do something similar: feed a post-processed depth image back to the Kinect SDK for skeleton segmentation. Unfortunately, there doesn't seem to be a way of doing this with the existing API.
Why would you reconstruct a skeleton from your own 3D depth data? The Kinect SDK can record a skeleton directly, without any such reconstruction, and one camera is enough to do so. If you use the Kinect v2, it can track up to six skeletons simultaneously.
The Kinect provides essentially three streams: RGB video, depth, and skeleton.

Is there any iOS API for detecting shapes through the camera?

I've been working with augmented reality APIs lately but haven't been able to achieve irregular-shape detection, namely of the hand. I want to detect hand shapes through the video/camera feed and execute code based on hand signs. Does anything like this already exist?
Have you had a look at OpenCV?
These are some of the links I found just by using Google: Face Detection using OpenCV, Vision For Robots
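OpenCV has no ready-made hand-sign detector, but a classic starting point is skin-color segmentation followed by convexity-defect analysis of the hand contour. A rough Python sketch; the HSV skin range and the defect-depth threshold are guesses that must be tuned for your lighting and skin tones:

```python
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")  # placeholder input image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Rough skin-tone band in HSV; this varies a lot with lighting and skin color.
mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Assume the hand is the largest skin-colored blob in the frame.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)

# Deep convexity defects roughly correspond to the gaps between fingers.
hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)
fingers = 0
if defects is not None:
    for start, end, farthest, depth in defects[:, 0]:
        if depth > 10000:  # depth is in 1/256-pixel units; threshold is a guess
            fingers += 1
print("approximate finger gaps:", fingers)
```

Counting defect gaps gives a crude finger count; recognizing richer hand signs usually means training a classifier on the segmented hand region.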

How can I detect floor movements such as push-ups and sit-ups with Kinect?

I have tried to implement this using the skeleton tracking provided by Kinect, but it doesn't work when I am lying down on the floor.
According to Blitz Games CTO Andrew Oliver, there are specific ways to implement this with the depth stream, or by tracking the user's silhouette, instead of using skeleton frames from the Kinect API. You can find a good example in the video game Your Shape Fitness. Here is a link showing floor movements such as push-ups!
Do you have any idea how to implement this, or how to detect movements and compare them against other movements using the depth stream?
What if a 360-degree sensor were developed, one that recognizes movement not only directly in front of it, or to its left and right, but also above and below it? What I am imagining is something like the spherical 360-degree motion sensors often used in secure buildings.
Without another sensor, I think you'll need to process the depth data yourself. Here's a paper with some details about how Microsoft implements skeletal tracking in the Kinect SDK; it might get you started. They perform object matching while parsing the depth data to locate the body's joints, so you may need to implement some object templates and matching algorithms of your own, unless you can reuse an existing skeletal-tracking library to parse objects out of the depth data for you.
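As a concrete starting point for the silhouette route, here is a rough Python/OpenCV sketch. It assumes you can copy each Kinect depth frame out of the SDK as a 640x480 array of millimetre distances; the distance band and the aspect-ratio threshold are placeholders to tune:

```python
import cv2
import numpy as np

def user_silhouette(depth_mm, near=500, far=2500):
    # depth_mm: 640x480 uint16 array of distances in millimetres, copied out
    # of the Kinect SDK. Keep only pixels in the band where the player stands.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)  # assume player = largest blob

def is_lying_down(contour):
    # A prone body gives a wide, short bounding box; standing is tall and narrow.
    x, y, w, h = cv2.boundingRect(contour)
    return w > 1.5 * h  # aspect-ratio threshold is a guess to tune

# For push-ups, you could then track how the silhouette's top edge or centroid
# oscillates vertically from frame to frame while is_lying_down() holds.
```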