Detect eyes in real time video with Objective-C?

I was checking the Apple sample code SquareCam. There are also some examples out there written in Swift.
In this example, a red square is drawn when a face is detected. My question is: how can circles be drawn at the eyes?
I still don't know how to detect the eyes the same way the example detects the face, or how the position of the eyes (faceFeature.leftEyePosition) can be used to draw the circle.
Any ideas?

The face feature gives you the position of each feature in the image's coordinate space:
open var leftEyePosition: CGPoint { get }
You can build a rectangle around that point. Once you have the rectangle, you can create an overlay and composite it over the face image.
let overlay = CIImage(color: overlayColor).cropped(to: eyeRect) // a solid colour confined to the eye rectangle
let eyeMarkedImage = overlay.composited(over: faceImage)
cropped(to:) and composited(over:) (named cropping(to:) and compositingOverImage(_:) in older SDKs) are convenience wrappers around the CICrop and CISourceOverCompositing filters.

Related

Changing the hitbox of an image

I've been looking for a solution for quite a while now, but I can't seem to find one. The situation is as follows: I created an SKSpriteNode with an image, and I want to interact with it in touchesBegan, so that when the image is touched something changes. The problem is that the hitbox around this image is square-shaped and not adjusted to the shape of the image. Has anyone got any clue how I can solve this problem? I tried changing the physicsBody like this:
CGFloat offsetX = node.frame.size.width * node.anchorPoint.x;
CGFloat offsetY = node.frame.size.height * node.anchorPoint.y;
CGMutablePathRef path = CGPathCreateMutable();
CGPathCloseSubpath(path);
node.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
It seems the touchesBegan method doesn't use the physicsBody shape but the node's rectangular frame. I tried my best explaining it, sorry if I confused you.
Thanks in advance!
Julian
In the Xcode level editor, under the physics definition of your SKSpriteNode, change the body type to alpha mask. This makes the physics boundary conform to the actual shape of your sprite rather than its rectangular bounding box.
Sadly, this solution only applies if your sprites are managed in an .sks file, since the level editor seems to have logic that calculates your sprite's outline and turns it into a path used to initialize the physics body with a body type of 'alpha mask'. (Too bad there's currently no 'bodyType' property on SKPhysicsBody, because it would be really convenient to set this programmatically instead of only in the .sks file... Apple?)
The way you're attempting to create and initialize an SKPhysicsBody with a CGPath is on the right track, I think. But since it's currently an empty path (it has no points or lines), I'm guessing it won't have any effect.
Is there any way you could approximate some points and lines to represent a rough outline path for your sprite? Then you could use those points to create a CGPath and use that to initialize an SKPhysicsBody, as in the sketch below.
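A minimal sketch of that idea (the outline points are placeholders; replace them with points traced around your own sprite, expressed relative to the node's anchor point):
// Approximate the sprite's outline with a few points and build a polygon
// physics body from them.
CGPoint outline[] = {
    CGPointMake(-50.0, -50.0),
    CGPointMake( 50.0, -50.0),
    CGPointMake(  0.0,  60.0)
};
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, outline[0].x, outline[0].y);
for (size_t i = 1; i < sizeof(outline) / sizeof(outline[0]); i++) {
    CGPathAddLineToPoint(path, NULL, outline[i].x, outline[i].y);
}
CGPathCloseSubpath(path);
node.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
CGPathRelease(path);
Note that, as you observed, touchesBegan still hit-tests against the node's rectangular frame; the polygon body only affects physics contacts. For touch handling you could keep the path around and test the touch location against it with CGPathContainsPoint.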
If not, a short-term compromise could be to initialize your SKPhysicsBody with bodyWithCircleOfRadius: (init(circleOfRadius:) in Swift) and use a circle as the body shape, which could feel less clunky than a box. You could play around with the radius to get a feel for the most natural size for your collisions.

A camera zoom issue with libGDX

Hey guys, I'm working on an Android game with libGDX, and I have a zoom problem. It's basically a Ski Safari style 2D game, and I want to implement a zoom in/out effect as the height changes. Is it possible to do this with OrthographicCamera? Or should I change the size of the objects in real time (because I still want to keep the background at a fixed size)?
If camera.zoom doesn't solve your problem, you can draw your background with the batch unmodified, and then use batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0)
to scale only the items you want, by varying those variables. Not sure if this is what you're looking for; hopefully it helps.
A simple example:
Class fields:
Matrix4 testMatrix;
float yourScaleVariableX;
float yourScaleVariableY;
Render method:
// draw the background with the unmodified projection matrix
batch.begin();
yourBackground.draw(...);
batch.end();
// scale a copy of the projection matrix and use it for the items you want zoomed
testMatrix = batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0);
batch.setProjectionMatrix(testMatrix);
batch.begin();
yourItem.draw(...);
batch.end();
// restore the original projection matrix afterwards (see the edit below)
I think it is more efficient to change the matrix, which rescales all the objects drawn with it.
I hope that explains it well.
Edit
I didn't realize this when I wrote the answer: you can save the original matrix before editing it so you can restore it later, or use batch.setProjectionMatrix(camera.combined); to restore it.

Create an image cropping interface for Objective C (Mac OS X)

I need to create a very simple image cropping interface for an OS X cocoa application, but I am not sure where to start. The user needs to be able to choose a crop size from a menu of presets, be presented with a cropping rectangle that can be resized preserving the ratio, and moved around the image until they finally apply the selected crop to the image.
I've done some searching for sample code and projects but haven't found anything too useful. Core Image Fun House has some pointers, but it is a retired sample. There are lots of iOS examples, but I've not found an easy-to-follow Mac OS example.
Can someone point me in the right direction (or at a sample project or framework!!).
Thanks a lot.
Here is a project you can look at:
https://github.com/foundry/drawingtest
It's a little demo I made as I was trying to understand the relationship between the rects in this method:
- (void)drawInRect:(NSRect)dstRect
fromRect:(NSRect)srcRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)delta
Note that the older compositeToPoint: methods are deprecated and should not be used for this sort of thing.
srcRect is the portion of the original image (in its own coordinates) that you want to keep.
dstRect is the rect that you want that cropped area to draw into.
JMRect in the project is an NSObject representation of an NSRect, so that we can use Cocoa bindings to tie the interface controls together.
For your UI, the cropping rectangle could just be a transparent subview view with a border that you push around and resize over the image you want to crop.
This is by no means a complete solution to your question, but it's something you can poke around with - it might help you to get started.
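To make the relationship between the two rects concrete, here is a rough sketch of the actual crop step once the user has settled on a selection; sourceImage, srcRect and the preset size are placeholders for your own values, and this is not code from the linked project:
// Produce a cropped NSImage by drawing the selected portion (srcRect) of the
// original image into a new image of the chosen preset size.
NSSize presetSize = NSMakeSize(400.0, 300.0);              // from your presets menu
NSImage *croppedImage = [[NSImage alloc] initWithSize:presetSize];
[croppedImage lockFocus];
[sourceImage drawInRect:NSMakeRect(0.0, 0.0, presetSize.width, presetSize.height)
               fromRect:srcRect                            // the area the user selected
              operation:NSCompositingOperationCopy
               fraction:1.0];
[croppedImage unlockFocus];
Because the preset fixes the destination's aspect ratio, keeping srcRect at that same ratio while the user resizes the selection is what preserves the proportions of the crop.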

Collision detection of uneven shapes in iOS

I am working on a drag and drop activity for iPad. I have a rectangular PNG image (obj2). When I drag obj1 onto it, it should react only when obj1 is over the black (non-transparent) portion of the rectangle.
if (CGRectIntersectsRect(obj1.frame, obj2.frame))
{
NSLog(#" hit test done!! ");
}
Right now, this piece of code registers a hit even over the transparent area. How can I prevent that from happening?
For something as simple as your specific example (triangle and circle), the link that David Rönnqvist gives is very useful. You should definitely look at it to see some available tools. But for the general case, the best bet is clipping, drawing, and searching.
For some background, see Clipping a CGRect to a CGPath.
First, create an alpha-only bitmap image. This is explained in the above link.
Next, clip your context to one of your images using CGContextClipToMask().
Now, draw your other image onto the context.
Finally, search the bitmap data for any colored pixels (see the above link for example code).
If any of the pixels are colored, then there is some overlap.
Another, similar approach (which might actually be faster) is to draw each image into its own alpha-only CGBitmapContext, then walk the pixels of both contexts and see whether they are ever both above 128 at the same position; a rough sketch of that follows.
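A minimal sketch of that second approach, assuming obj1 and obj2 are UIImageViews in the same superview whose images fill their frames; the function name PixelsOverlap and the 128 threshold are illustrative, not an existing API:
// Draw each image into its own alpha-only bitmap covering the intersection of
// the two frames, then check whether any pixel is opaque in both bitmaps.
static BOOL PixelsOverlap(UIImage *image1, CGRect frame1,
                          UIImage *image2, CGRect frame2)
{
    CGRect intersection = CGRectIntersection(frame1, frame2);
    if (CGRectIsEmpty(intersection)) return NO;

    size_t width  = (size_t)ceil(CGRectGetWidth(intersection));
    size_t height = (size_t)ceil(CGRectGetHeight(intersection));

    CGContextRef ctx1 = CGBitmapContextCreate(NULL, width, height, 8, width,
                                              NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextRef ctx2 = CGBitmapContextCreate(NULL, width, height, 8, width,
                                              NULL, (CGBitmapInfo)kCGImageAlphaOnly);

    // Destination rects relative to the intersection, converted from UIKit's
    // top-left origin to Core Graphics' bottom-left origin.
    CGRect dest1 = CGRectOffset(frame1, -intersection.origin.x, -intersection.origin.y);
    CGRect dest2 = CGRectOffset(frame2, -intersection.origin.x, -intersection.origin.y);
    dest1.origin.y = (CGFloat)height - dest1.origin.y - dest1.size.height;
    dest2.origin.y = (CGFloat)height - dest2.origin.y - dest2.size.height;

    CGContextDrawImage(ctx1, dest1, image1.CGImage);
    CGContextDrawImage(ctx2, dest2, image2.CGImage);

    const uint8_t *alpha1 = CGBitmapContextGetData(ctx1);
    const uint8_t *alpha2 = CGBitmapContextGetData(ctx2);

    BOOL overlaps = NO;
    for (size_t i = 0; i < width * height; i++) {
        // Both images reasonably opaque at the same spot means the shapes touch.
        if (alpha1[i] > 128 && alpha2[i] > 128) { overlaps = YES; break; }
    }

    CGContextRelease(ctx1);
    CGContextRelease(ctx2);
    return overlaps;
}
You would call it in place of the plain rect test, e.g. if (PixelsOverlap(obj1.image, obj1.frame, obj2.image, obj2.frame)) { ... }. It does per-pixel work over the intersection, so you may want to throttle how often you run it while dragging.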

iOS 3D for the experts

I've developed a few simple iOS apps, and I'm looking to develop a simple maze game. What I'm looking to do is really basic, so I'm looking for some guidance on which way to go.
Here is basically what I'm trying to do: simple escape-the-maze game where I read in a 2 dimensional array and that makes my maze.
I want the perspective to be standing in the middle of each square, and when you flick or hit a button, you advance one square, and you can turn left or right. I'd like to be able to apply textures to the walls based on the numbers in the array, and I'd really like to see the walls move towards me and then stop.
The walls will always be the same height, and the maze will be simple square blocks, although I do want it to be able to handle rooms that might be bigger than 2x2, maybe 4x4, and be able to render that.
I'm not sure if trying to do something in OpenGL would be overkill since my requirements are really basic. Like I mentioned, it won't be a free move; it will be advance one square each time you hit a button and turn left or right.
I have read something about ray casting for this sort of thing but am not sure how I would accomplish this in iOS. Also I'd like to have the maze not take up the whole screen, maybe 2/3, while the rest is standard iOS controls like buttons and labels, and a background behind it.
What books or articles should I read to help me? I'd prefer not to use a 3rd party engine since this seems really basic.
You can do this using Core Animation, which is much easier than OpenGL. If you import QuartzCore into your project, you can position any view in 3D as follows:
//set the view's 2D position so that the vanishing point is the middle
//of the screen - use the same centre for every view
view.center = CGPointMake(window.bounds.size.width / 2.0f, window.bounds.size.height / 2.0f);
//create a 3d transform
CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1.0f/500.0f; // this sets the 3D perspective
//you can transform the view in 3D relative to the camera
//this rotates the view by 45 degrees about the Y axis
//but you can also scale, translate, etc using equivalent functions
transform = CATransform3DRotate(transform, M_PI_4, 0.0f, 1.0f, 0.0f);
//transform the view in 3D
view.layer.transform = transform;
So if you create your walls as UIImageViews, you can arrange them into a room by transforming each one individually using the logic above.
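As a rough sketch of that idea (wallSize, wallTexture and the placement values below are assumptions, not part of the answer above), four image views rotated about the vertical axis and pushed back half a cell each will form one square room:
CGFloat wallSize = 200.0f;   // side length of one maze cell, in points
CGPoint centre = CGPointMake(window.bounds.size.width / 2.0f,
                             window.bounds.size.height / 2.0f);
for (NSInteger i = 0; i < 4; i++) {
    UIImageView *wall = [[UIImageView alloc] initWithImage:wallTexture];
    wall.frame = CGRectMake(0.0f, 0.0f, wallSize, wallSize);
    wall.center = centre;    // every wall shares the same vanishing point

    CATransform3D t = CATransform3DIdentity;
    t.m34 = -1.0f / 500.0f;  // same perspective as above
    // Swing each copy around the vertical (Y) axis so the four walls face
    // in four different directions...
    t = CATransform3DRotate(t, i * M_PI_2, 0.0f, 1.0f, 0.0f);
    // ...the translation below is composed so that it applies to the wall
    // before that rotation: push the wall half a cell straight back first,
    // then swing it around the cell's centre.
    t = CATransform3DTranslate(t, 0.0f, 0.0f, -wallSize / 2.0f);
    wall.layer.transform = t;

    [window addSubview:wall];
}
To advance one square you could then animate each wall's transform (or a shared parent layer's sublayerTransform) by one wallSize step, which gives the wall-moving-towards-you-then-stopping effect described in the question.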