Camera API (detect good pictures, rule of thirds, etc.)

So I am working on an app that will be taking pictures. I am using OpenCV to do things like detect that people are present in the picture. However, what I would like to be able to do is detect whether the picture is "good".
I am not sure exactly how to approach it, but some ideas are:
Detect whether a picture follows the rule of thirds; if it does not, the app could adjust the camera.
Support different types of pictures, so that it sometimes takes a picture of a group and other times individual shots.
I think it may be easiest to take the picture and then try to crop it to meet one of these requirements. But I was wondering if there is an API or something else that would be helpful for this?
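To make the rule-of-thirds idea concrete, one approach is to locate the main subject (e.g. the largest detected face, since OpenCV is already in use) and score how close it sits to the nearest intersection of the thirds grid. Below is a minimal Python sketch of that scoring; the Haar cascade choice, the normalisation, and the file name are illustrative assumptions rather than a standard API:

```python
# Minimal sketch: score how closely the main face sits to a
# rule-of-thirds "power point" (an intersection of the thirds grid).
import cv2

def thirds_score(image_path):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    h, w = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # OpenCV ships this frontal-face cascade with the Python package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no subject to place on a third
    # Treat the largest face as the subject and use its centre.
    x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
    cx, cy = x + fw / 2, y + fh / 2
    # The four intersections of the thirds grid.
    points = [(w * a / 3, h * b / 3) for a in (1, 2) for b in (1, 2)]
    # Distance to the nearest intersection, normalised by the image
    # diagonal: 0.0 means the subject sits exactly on a third.
    d = min(((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 for px, py in points)
    return d / (w ** 2 + h ** 2) ** 0.5

print(thirds_score("shot.jpg"))  # hypothetical file name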
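```

A score near 0 means the framing already follows the rule; cropping so the subject centre lands on the nearest intersection is one way to "adjust" after the fact.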

Related

Three.js: visual indication or effect to show when an object is occluded

I'm building a program where you control a small avatar (a basic circle geometry or plane) that traverses a scene filled with 3D models and shapes. I'd like to achieve an effect similar to those found in many video games, where there is some visual indication that the avatar is behind the various models and shapes. For example, here is an image to explain what I mean:
[Example image showing the desired effect]
It doesn't necessarily need to be the outline of the shape as in the example image. I'm open to any effect that gives some indication that the avatar is behind something, but it can't be too performance-heavy, as I'd like to get this program running on mobile. Being able to customise the effect somewhat (e.g. color, thickness) is also highly desirable. Any advice or suggestions would be greatly appreciated; there really doesn't seem to be much information I can find on achieving an effect like this.
Also, it is worth mentioning that so far I have attempted two things on my own. One was simply rendering the avatar above everything, which turned out to look really silly and confusing. The other was an Outline post-processing effect (from the library https://github.com/vanruesc/postprocessing), which actually looked pretty great but proved too performance-heavy to run optimally at all times (not to mention other problems with color blending and transparent/see-through shapes and models).
I understand this is something of a shot in the dark, but I thought it couldn't hurt to ask.

How can I render 19451 circles on a React Native map efficiently?

I have 19451 points, exported as coordinates in a JSON file, and I am trying to render them efficiently on the map as circles. How can I achieve this? This is the first time I am using https://github.com/react-native-maps/react-native-maps with Expo, so I am not that experienced with map services. I don't even know where to start. I was thinking of something like rendering the points dynamically, based on whether a point falls within the region of the map currently shown on the screen, although I have no idea how to actually achieve this. The first thing I tried was, of course, to render them all at once: it takes ages and is very buggy!
You have several options:
Use some kind of clustering when there are multiple circles in the same area, for example when you're zoomed out. Have a look at react-native-maps-clustering. Performance-wise it is decent enough, but it may lag on older devices.
Past a certain zoom level you can limit the number of circles you draw; they presumably overlap anyway. When the limit is reached, you can display a warning to let the user know that the number of circles was capped and that they should zoom in. From my experience, drawing at most 50 custom markers was the upper limit to avoid lag on older devices. With circles, that limit might be different.
Manually filter your data and decide whether each circle belongs to the current viewport (the visible part of the map) or not; see the sketch after this list.
Some code would help me give you more specific hints.
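
For the third option, the bounds check itself is simple. Here is a minimal sketch of the logic in Python purely to show the math (your app would port it to JavaScript), assuming a region shaped like react-native-maps' Region object, with latitude, longitude, latitudeDelta, and longitudeDelta:

```python
# Minimal sketch: keep only the points inside the visible map region.
# Assumes a region shaped like react-native-maps' Region:
# {latitude, longitude, latitudeDelta, longitudeDelta}.

def points_in_region(points, region):
    """Return the points that fall inside the current viewport."""
    lat_min = region["latitude"] - region["latitudeDelta"] / 2
    lat_max = region["latitude"] + region["latitudeDelta"] / 2
    lng_min = region["longitude"] - region["longitudeDelta"] / 2
    lng_max = region["longitude"] + region["longitudeDelta"] / 2
    return [
        p for p in points
        if lat_min <= p["latitude"] <= lat_max
        and lng_min <= p["longitude"] <= lng_max
    ]

# Hypothetical usage: re-filter when panning stops (the
# onRegionChangeComplete event in react-native-maps) and render
# circles only for the survivors.
points = [
    {"latitude": 44.43, "longitude": 26.10},
    {"latitude": 51.51, "longitude": -0.13},
]
region = {"latitude": 44.4, "longitude": 26.1,
          "latitudeDelta": 0.5, "longitudeDelta": 0.5}
print(len(points_in_region(points, region)))  # -> 1
```

This naive check ignores longitude wrap-around near ±180°, and for 19451 points a linear scan per region change should still be fast; combine it with clustering if it is not.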

How to change aspect ratio of photos taken using AVFoundation?

I am using AVFoundation to take pictures instead of UIImagePicker because of how customizable the user interface presented to the user can be. When using it, the pictures are saved with the same aspect ratio as the iPhone's video feed. What I want is for the pictures to be saved in the same aspect ratio as normal photos.
The way I am currently approaching this is to overlay a black bar over the excess preview display and then crop the photo after saving it as an image.
However, this feels very crude. I assume it is common to use AVFoundation as a way of taking photos, so I must be missing something!
I have used this example code, and I have read through the AVFoundation documentation, but I can only assume that I am missing a function. I have also read through similar questions which describe how I might go about cropping images, but that isn't really my concern.
On the other hand, if there is no standard way to do this, please do let me know so that I can stop worrying that I am approaching it in a convoluted way.
Also, I am using Objective-C, so if answers contain code, could you please use the same language?

Directly Record Screen on Mac

OK so I want to record the screen of a Mac directly to a .mov or .m4v. I've taken a look at Son of Grab from Apple, but I would prefer not to deal with screenshots and individual images and just work with video.
I thought there should be something in QTKit but I can't find it. I know this can be done in OpenGL, but 1) I don't know how and 2) I'd like to avoid that if possible.
Just to elaborate: I am recording from the iSight using a QTCaptureDeviceInput (and, obviously, a QTCaptureDevice) because I need the solution to work on Snow Leopard.
It seems like there should be a way to just target the screen as the input device for QTMediaTypeVideo.
Any help would be greatly appreciated.
You can use AVFoundation to do screen recording on the Mac. It's only available on 10.7 though.
You can use the CGDisplayCreateImage/CGDisplayCreateImageForRect APIs (10.6+) to obtain still images of the screen and then make a movie out of them.
I'm not sure how good the performance will be, though.
I have found that when faced with the question of whether something will be fast enough, it's best to just give it a try. Do a quick test by grabbing frame after frame, say 1000 times, and time it. CGDisplayCreateImageForRect is not that hard to call at all. I have called it for single screenshots of the whole screen when the mouse was clicked, and it hardly slowed my Mac down (and that's only a basic dual-core machine).
Apple has two samples showing the two main ways this can be done:
ScreenSnapshot
SonOfGrab
It would be easy to modify these to do it, say, 1000 times in a loop; a quick sketch of that timing test follows.
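
To make the timing test concrete, here is a sketch that grabs the main display 1000 times and reports the rate. It calls the same CGDisplayCreateImage API, but through Python's pyobjc bindings purely for brevity; that dependency is my assumption, and the Apple samples above show the equivalent Objective-C.

```python
# Quick benchmark: grab the whole screen 1000 times and time it.
# Assumes macOS with the pyobjc Quartz bindings installed
# (pip install pyobjc-framework-Quartz).
import time

import Quartz

def benchmark(frames=1000):
    display = Quartz.CGMainDisplayID()
    start = time.perf_counter()
    for _ in range(frames):
        # One full-screen grab per iteration.
        image = Quartz.CGDisplayCreateImage(display)
        if image is None:
            raise RuntimeError("screen grab failed")
    elapsed = time.perf_counter() - start
    print(f"{frames} grabs in {elapsed:.2f}s "
          f"({frames / elapsed:.1f} frames/sec)")

if __name__ == "__main__":
    benchmark()
```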

Finding significant images in a set of surveillance camera images

I've had theft problems outside my house, so I set up a simple webcam to capture a frame every second with Dorgem (http://dorgem.sf.net).
Dorgem does offer a feature to use motion detection to only capture frames where something is moving on the screen. The problem is that the motion detection algorithm it uses is extremely sensitive. It goes off because of variations in color between successive shots on my cheap webcam, and it also goes off because the trees in front of the house are blowing in the wind. Additionally, the front of my house is a high traffic area so there is also a large number of legitimately captured frames.
With Dorgem's motion detection I still capture, on average, 2800 of the 3600 possible frames every hour. This is too much for me to search through to find where the interesting activity is.
I wish I could re-position the camera to a more optimal position where it would only capture the areas I'm interested in, so that motion detection would be simpler, however this is not an option for me.
I think that because my camera has a fixed position, and each picture frames the same area in front of my house, I should be able to scan the images and figure out which ones have motion in some interesting region of the image, throwing out all other frames.
For example: if there's a change at pixel (320,240), then someone has stepped in front of my house and I want to see that frame; but if there's a change at pixel (1,1), then it's just the trees blowing in the wind and the frame can be discarded.
I've looked at pdiff, a tool for finding diffs in sets of pictures, but it too seems focused on diffing the entire picture rather than a specific region of it:
http://pdiff.sourceforge.net/
I've also looked at phash, a tool for calculating a hash based on human perception of an image, but it seems too complex:
http://www.phash.org/
I suppose I could implement this in a shell script, using ImageMagick's mogrify -crop to cherry-pick the regions of the image I'm interested in, then running pdiff on the crops to find the interesting ones, and using that to pick out the interesting frames.
Any thoughts? Ideas? Existing tools?
Cropping and then using pdiff seems like the best choice to me.
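
For what it's worth, the crop-then-diff loop does not strictly need ImageMagick and pdiff. Here is a minimal Python sketch of the same idea using Pillow, where the region, threshold, and directory name are hypothetical and a plain mean absolute pixel difference stands in for pdiff's perceptual metric:

```python
# Minimal sketch: flag frames where the region of interest changed.
# The region, threshold, and "captures" directory are hypothetical;
# pdiff would do a perceptual comparison, this uses a plain mean
# absolute pixel difference instead.
from pathlib import Path

from PIL import Image, ImageChops

REGION = (300, 220, 340, 260)  # (left, upper, right, lower) in pixels
THRESHOLD = 12.0               # tune against known "boring" frames

def region_changed(path_a, path_b, region=REGION, threshold=THRESHOLD):
    """True if the region of interest differs noticeably between frames."""
    a = Image.open(path_a).convert("L").crop(region)
    b = Image.open(path_b).convert("L").crop(region)
    diff = ImageChops.difference(a, b)
    pixels = list(diff.getdata())
    return sum(pixels) / len(pixels) > threshold

# Walk the captured frames in order and keep only the interesting ones.
frames = sorted(Path("captures").glob("*.jpg"))
for prev, cur in zip(frames, frames[1:]):
    if region_changed(prev, cur):
        print(f"keep {cur}")
```

Several regions can be checked the same way, one call per area of interest.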