Categorizing a photo as a selfie - camera

How can we recognize and categorize a photo as a selfie?
I was wondering if we could use the metadata, which contains the type of camera used to snap the picture. In a selfie it is often a phone's front (secondary) camera, but not necessarily: the primary (back) camera could also be used, or even a non-phone camera.
Do we use AI techniques to learn what a selfie looks like, or do we measure something like the focal length to recognize a given picture as a selfie?
This is just an open-ended question. Any comments or thoughts are appreciated.
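As a starting point, here is a minimal sketch of what the metadata check described above might look like, using the exifread library in Python; the file name is a placeholder and the exact tags present vary a lot by device and camera app:

    import exifread

    # Read the EXIF tags of a photo; some phones record which lens/camera was used.
    with open("photo.jpg", "rb") as f:
        tags = exifread.process_file(f, details=False)

    lens = str(tags.get("EXIF LensModel", ""))
    model = str(tags.get("Image Model", ""))

    # On some devices the LensModel string contains "front camera" for selfies.
    if "front" in lens.lower():
        print("Front camera reported in EXIF -> possible selfie")
    else:
        print("No front-camera hint in EXIF (model: %s, lens: %s)" % (model, lens or "n/a"))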

Metadata wouldn't be useful, as smartphones are becoming so advanced: I know Apple and Microsoft have both shot commercials with their phones, as well as used phone photos for giant billboards.
You could probably find some of the face-detection software that cameras use to find faces without a lot of difficulty (OpenCV is a place to look). From there you could measure the size of the face in relation to the photo; if it's large enough, it's probably a selfie.
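If it helps, here is a rough sketch of that idea with OpenCV's stock Haar cascade in Python; the 15% threshold is just a guess you would have to tune, and the file name is a placeholder:

    import cv2

    # Face detector that ships with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = gray.shape
    for (x, y, fw, fh) in faces:
        ratio = (fw * fh) / float(w * h)  # face area relative to the whole frame
        verdict = "probably a selfie" if ratio > 0.15 else "probably not a selfie"
        print("face covers %.1f%% of the frame -> %s" % (100 * ratio, verdict))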

With selfie sticks around, categorizing a photo as a selfie by the size of the face is probably not the most accurate solution. Of course, if you think that none of your photos will be taken with a selfie stick, then, as roro said, measuring the size of the face would be a workable solution.
What I would suggest is to have a photo of the owner of the phone and use facial recognition to estimate whether the photo contains the owner.
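A minimal sketch of that owner-matching idea, assuming the face_recognition Python package and a reference photo of the owner (file names are placeholders):

    import face_recognition

    # Encode the owner's face from a reference photo
    owner_img = face_recognition.load_image_file("owner.jpg")
    owner_enc = face_recognition.face_encodings(owner_img)[0]

    # Check whether any face in the candidate photo matches the owner
    photo = face_recognition.load_image_file("photo.jpg")
    for enc in face_recognition.face_encodings(photo):
        if face_recognition.compare_faces([owner_enc], enc, tolerance=0.6)[0]:
            print("Owner appears in the photo -> candidate selfie")
            break
    else:
        print("Owner not found in the photo")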

Sensors that detect and measure human emotions

Are there any WEARABLE SENSORS available that can measure human emotions?
Something like the one in this link https://www.technologyreview.com/s/421316/sensor-detects-emotions-through-the-skin/ (but it doesn't capture many of the human emotions).
I am looking for a WEARABLE sensor that can measure the level of anger, disgust, fear, happiness, sadness, surprise, excitement, etc. of a human at any particular instant. I am NOT looking for emotion detection from facial expressions or voice recognition.
Your help is much appreciated! Thanks.
Take a look at the Fraunhofer SHORE software. Fraunhofer creates really great things, like MP3, but I think this software would be really expensive.
http://www.iis.fraunhofer.de/en/ff/bsy/tech/bildanalyse/shore-gesichtsdetektion.html
Or here is an open-source solution:
https://github.com/auduno/clmtrackr

Why are maps in apps like Uber and Google Maps so inaccurate in India?

I've been using popular taxi-hailing apps like Uber and OLA in India, and Uber in the USA. The locations of cars, the direction they're moving, and my own position on the app's map are always off, so much so that I need to call the driver to tell them where I am. From this Quora thread I was able to narrow the problem down to the use of the Maps API or GPS signals.
The Quora post: https://www.quora.com/Why-is-GPS-in-India-so-inaccurate
The parody video: https://www.youtube.com/watch?v=hjBM-zSq3NU
It is possible that the problem is caused by your device or by GPS coverage in your area. Newer phones can use dozens of GPS satellites to locate themselves, with network assistance (this is called AGPS). Older phones, however, use three cell towers (not GPS) to triangulate your position; while this method was fairly accurate, it was known to be off by more than a couple of feet on occasion. Even older phones may use only two cell towers, and the problem there is that the speed at which the signal travels (light speed) allows for a very large margin of error, which could be your problem. Also, some phones without AGPS use only a few GPS satellites to get a fix, and this may also complicate things for you.
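To illustrate the triangulation idea above, here is a toy 2D trilateration sketch in Python; the tower positions and ranges are made up, and real cell-tower ranging is far noisier than this:

    import numpy as np

    # Three hypothetical cell towers (x, y in metres) and measured ranges to the phone
    towers = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0]])
    ranges = np.array([1200.0, 1500.0, 1300.0])

    # Linearise the circle equations by subtracting the first one from the others,
    # then solve the resulting system in a least-squares sense.
    A = 2 * (towers[1:] - towers[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated position (m):", position)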

Google Voice Recognition on Movies

I've had excellent results with the Google API for speech recognition on natural dialogue; however, for sound from YouTube videos or movies, recognition is poor or nonexistent.
Recordings of my own voice made on an iPhone 4, in both Spanish and English, are recognized, but with the same phone at a movie it is almost impossible, even for a scene with a character talking over little background noise. I only had success once.
I tried to clean up the sound with SoX (Sound eXchange) using the noisered and compand effects, without any success.
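For reference, a typical SoX denoise pipeline of that kind looks roughly like this (driven from Python here; the file names and the 0.5 s noise-only segment at the start are placeholders):

    import subprocess

    # 1. Build a noise profile from a stretch that contains only background noise.
    subprocess.run(["sox", "movie.wav", "-n", "trim", "0", "0.5",
                    "noiseprof", "noise.prof"], check=True)

    # 2. Apply noise reduction, then a gentle compand (the classic SoX example settings).
    subprocess.run(["sox", "movie.wav", "cleaned.wav",
                    "noisered", "noise.prof", "0.21",
                    "compand", "0.3,1", "6:-70,-60,-20", "-5", "-90", "0.2"],
                   check=True)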
Any ideas? Or are these simply sounds that cannot be identified by the Google API no matter how much you process them? Would I have better success with other speech recognition software?
Google's voice recognizer (and most other recognizers) does not cope well with reverberation. In most video scenes the distance between the speaker and the microphone is more than 1-3 meters. Try putting your phone on a table and recognizing something spoken from 3 meters away: it will not get you anywhere, even though the sound quality seems very good.

Windows PC camera image capture, not just grabbing one frame from a video stream

I have a question about image capture with a PC camera (an integrated notebook camera or a webcam). I am developing a computer vision system in which high-quality image capture is the key issue, and most current methods use VFW or DirectShow to capture a video stream and snap one frame as an image.
However, this method cannot produce a high-quality image (or use the full capacity of the camera). For example, I have a 5-megapixel webcam, but the video stream maxes out at 720p (a USB bandwidth problem?). Video streaming wastes part of the camera's sensor resolution.
Could I stream video and take pictures independently? For example, input and render a 640*480 video stream, then take a 1280*720 picture from the same camera, like the new HTC One X camera does? I guess this might be a hardware issue.
In short, is there a way for a PC system to take a picture that makes full use of the sensor capacity, rather than streaming video and capturing one frame? Is this a hardware-related problem? Do common webcams support this? Or is it a software problem, and I should learn DirectShow?
Thanks a lot.
I vaguely remember that (some) video sources offer both a capture pin and a still pin; the latter, I assume, would offer you higher quality. You can easily test this in GraphEdit. If it works, then yes, you'll have to learn DirectShow, or pay someone to code this for you.
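If you just want to experiment before diving into DirectShow proper, here is a quick sketch using OpenCV's DirectShow backend from Python; whether the driver actually honours the requested 5 MP resolution (2592x1944 assumed here) depends entirely on the camera:

    import cv2

    # Open the first camera through the DirectShow backend on Windows
    cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

    # Ask for the sensor's full still resolution; many UVC drivers will silently
    # fall back to the nearest supported streaming mode instead.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2592)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1944)

    ok, frame = cap.read()
    if ok:
        print("captured %dx%d" % (frame.shape[1], frame.shape[0]))
        cv2.imwrite("still.png", frame)
    cap.release()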

How to get a single still image from a photo camera using a microcontroller

Let's imagine that we have any popular photo camera (a Canon or whatever) installed on a mechanical platform. This platform allows us to accurately point the camera's lens at any interesting object, and it is controlled from a PC via a microcontroller board. But we need feedback from the camera: the image which currently appears on the camera's display. Obviously, this feedback is required to be sure that the camera is looking in the right direction. At the moment I don't know how to get a single-shot image from the camera with a microcontroller.
Could you please recommend any directions to dig into? Any recommendations on how to select a photo camera (web cameras are not allowed)? Any tips?
Thank you in advance =)
Dwelch is right: you need to pick a "friendly" camera and work from there - google CHDK for a starter.
You could use the SPI interface of a micro to spoof being an SD card, and accept image data from the camera straight into the micro, but you would probably need quite a fast micro with a fair amount of RAM, especially if you want to do any processing on it.
Other than that, you could sample the camera's AV-output (if it has one), either into the micro or straight into the PC via a USB capture stick (or USB capture stick into micro if you're being a show-off), or maybe interrogate the camera over its USB or (insert name of proprietary port here) IO port.
Getting more hacky (yes, even more!), you could sniff the LCD data bus of the camera and steal the image from that, but that brings all sorts of pain, and tiny, tiny screws.
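If a PC ends up in the loop anyway, one well-trodden way to interrogate many cameras over USB (PTP) is the gphoto2 command-line tool (not mentioned above, so treat it as an assumption); a minimal sketch driven from Python is below, and whether it works at all depends on the specific camera model:

    import subprocess

    # List the cameras gphoto2 can see on the USB bus
    subprocess.run(["gphoto2", "--auto-detect"], check=True)

    # Trigger a single shot and download the resulting image as feedback for the PC
    subprocess.run(["gphoto2", "--capture-image-and-download",
                    "--filename", "feedback.jpg"], check=True)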