Get image from AVCaptureVideoPreviewLayer - objective-c

I'm essentially re-asking Cropping a captured image exactly to how it looks in AVCaptureVideoPreviewLayer, since asking the original poster whether they found a solution isn't an "answer" and I don't have enough reputation to comment yet...
The app I'm building will always be in portrait mode because rotation isn't important in this case.
I have an AVCaptureSession with the AVCaptureVideoPreviewLayer connected to a UIView of size 320x240 that is positioned against the top layout guide.
I have input capture working, but the image I'm receiving is skewed and shows much more than the portion I'm displaying. How can I capture just the area that is shown in my AVCaptureVideoPreviewLayer?

Have a look at AVCaptureVideoPreviewLayer's
-(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect
This method lets you easily convert the visible CGRect of your layer into a rect in the coordinate space of the actual camera output.
One caveat: The physical camera is not mounted "top side up", but rather rotated 90 degrees to the right. (So if you hold your iPhone - Home Button right, the camera is actually top side up).
Keeping this in mind, you have to convert the CGRect the above method gives you, to crop the image to exactly what is on screen.
Example:
CGRect visibleLayerFrame = THE ACTUAL VISIBLE AREA IN THE LAYER FRAME
CGRect metaRect = [(AVCaptureVideoPreviewLayer *)self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];
CGSize originalSize = [originalImage size];
if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
// For portrait images, swap the size of the image because
// here, the output image is actually rotated relative to what you see on screen.
CGFloat temp = originalSize.width;
originalSize.width = originalSize.height;
originalSize.height = temp;
}
// metaRect is fractional, that's why we multiply here
CGRect cropRect;
cropRect.origin.x = metaRect.origin.x * originalSize.width;
cropRect.origin.y = metaRect.origin.y * originalSize.height;
cropRect.size.width = metaRect.size.width * originalSize.width;
cropRect.size.height = metaRect.size.height * originalSize.height;
cropRect = CGRectIntegral(cropRect);
This may be a bit confusing, but what really made it click for me is this:
Hold your device "Home Button right": you'll see that the x-axis actually lies along the "height" of your iPhone, while the y-axis lies along the "width" of your iPhone. That's why, for portrait images, you have to swap the size ;)
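To finish the job, the crop itself could look something like this (a minimal sketch, assuming originalImage is the full-resolution still from your capture output and cropRect comes from the code above):
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
// Re-apply the original orientation (UIImageOrientationRight for a portrait capture)
// so the cropped result displays upright despite the sensor's 90-degree mounting.
UIImage *croppedImage = [[UIImage alloc] initWithCGImage:croppedCGImage
                                                   scale:originalImage.scale
                                             orientation:originalImage.imageOrientation];
CGImageRelease(croppedCGImage);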

Related

NSView not scaling all content

I'm trying to scale an NSView that was designed for a larger display so it fits on my Mac's screen while I'm testing, so I can see everything in proportion.
NSRect wFrame = self.window.frame;
wFrame.origin.x = 200;
wFrame.origin.y = 100; // from bottom of my screen
wFrame.size.width = 1080 * 0.5; // so it will fit on my screen
wFrame.size.height = 1920 * 0.5;
[self.window setFrame:wFrame display:YES]; // the window is smaller, but not everything is scaled (e.g., font sizes)
That creates the correct-size window, and most of the content is drawn at half size, but some things are not scaled.
An NSButton's title text is still the original, large size. If the button contains an image, the image is scaled properly.
The content of a WebView is not rendered at the small size.
How can I scale all of the content, including button titles and Webview content?
I found the answer here: https://stackoverflow.com/a/33293423/236415
I wrapped my full sized NSView (and its subviews) in an NSScrollView (sized for my screen) and then set the magnification to 0.5.
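Roughly, that setup looks like this (a sketch; fullSizeView stands in for the existing 1080 x 1920 content view, and the exact frame is up to you):
// Wrap the full-sized content view in a scroll view and shrink everything it draws.
NSScrollView *scrollView = [[NSScrollView alloc] initWithFrame:NSMakeRect(0, 0, 1080 * 0.5, 1920 * 0.5)];
scrollView.documentView = fullSizeView;   // the original, full-sized view hierarchy
scrollView.hasHorizontalScroller = NO;
scrollView.hasVerticalScroller = NO;
scrollView.magnification = 0.5;           // button titles, web content, etc. are all rendered at half size
self.window.contentView = scrollView;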

How can I ensure I still get correct touch inputs when my scene is offset?

How can I accept touch input beyond the scene's bounds, so that no matter what I set self.position to, touches can still be detected?
I'm creating a tile-based game from a Ray Wenderlich tutorial on Cocos2d 3.0. I am at the point of setting the view of the screen to a zoomed-in state on my tile map. I have successfully been able to do that, but now my touches are not responding since I'm outside the coordinate space the touches used to work in.
This method is called to set the zoomed view to the player's position:
-(void)setViewPointCenter:(CGPoint)position{
CGSize winSize = [CCDirector sharedDirector].viewSizeInPixels;
int x = MAX(position.x, winSize.width/2);
int y = MAX(position.y, winSize.height/2);
x = MIN(x, (_tileMap.mapSize.width * _tileMap.tileSize.width) - winSize.width / 2);
y = MIN(y, (_tileMap.mapSize.height * _tileMap.tileSize.height) - winSize.height / 2);
CGPoint actualPosition = ccp(x, y);
CGPoint centerOfView = ccp(winSize.width/2, winSize.height/2);
NSLog(@"centerOfView%@", NSStringFromCGPoint(centerOfView));
CGPoint viewPoint = ccpSub(centerOfView, actualPosition);
NSLog(@"viewPoint%@", NSStringFromCGPoint(viewPoint));
//This changes the position of the helloworld layer/scene so that
//we can see the portion of the tilemap we're interested in.
//That however makes my touchbegan method stop firing
self.position = viewPoint;
}
This is what the NSLog prints from the method:
2014-01-30 07:05:08.725 TestingTouch[593:60b] centerOfView{512, 384}
2014-01-30 07:05:08.727 TestingTouch[593:60b] viewPoint{0, -832}
As you can see, the y coordinate is -832. If I comment out the line self.position = viewPoint, then self.position reads {0, 0} and touches are detectable again, but then we don't have a zoomed view on the character; instead it shows the bottom left of the map.
Here's a video demonstration.
How can I fix this?
Update 1
Here is the github page to my repository.
Update 2
Mark has been able to come up with a temporary solution so far by setting the hitAreaExpansion to a large number like so:
self.hitAreaExpansion = 10000000.0f;
This causes touches to respond again everywhere! However, if there is a solution that doesn't require setting the property to an arbitrarily large number, that would be great!
-edit 3- (tl;dr version):
Setting the contentSize of the scene/layer to the size of the tile map solves this issue:
[self setContentSize: self.tileMap.contentSize];
original replies below:
You would take the touch coordinate and subtract the layer position.
Generally something like:
touchLocation = ccpSub(touchLocation, self.position);
If you were to scale the layer, you would also need to compensate for that as well.
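In a touch handler that could look roughly like this (a sketch; worldTouchPoint stands in for wherever your touch point comes from in world/parent space):
// Map a world-space touch point into this layer's space by undoing the layer's offset.
CGPoint localPoint = ccpSub(worldTouchPoint, self.position);
// If the layer is also scaled, undo that too before doing tile lookups with the point.
localPoint = ccpMult(localPoint, 1.0f / self.scale);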
-edit 1-:
So, I had a chance to take another look, and it looks like my 'ridiculous' number was not ridiculous enough, or I had made another change. Anyway, if you simply add
self.hitAreaExpansion = 10000000.0f; // I'll let you find a more reasonable number
the touches will now get registered.
As for the underlying issue, I believe it to be one of content scale that is not set correctly, but again, I'll leave that to you for now. I did, however, find out while looking through some of the tile map class that tileSize is said to be in pixels, not points, which I guess is somehow related to this.
-edit 2-:
The sub-optimal answer bugged me, so I looked a little further. Forgive me, I hadn't looked at v3 until I saw this question. :p
After inspecting the base class and observing the scene/layer's value of:
- (BOOL)hitTestWithWorldPos:(CGPoint)pos;
it became obvious that the content size of the scene/layer was being set to the current view size, which in the case of an iPad is (1024, 768).
The position of the layer after the setViewPointCenter call is fully above the initial view's position, hence the touch was being suppressed. By setting the layer/scene contentSize to the size of the tile map, the touchable area is expanded over the entire map, which allows the node to process the touch.
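Put differently (a rough sketch of the relationship; the exact hit-test internals may vary by cocos2d version):
// CCNode's hit test only accepts touches that land inside its contentSize
// (expanded by hitAreaExpansion), in the node's own coordinate space.
// With contentSize == view size and the layer shifted up by ~832 points,
// most of the visible map falls outside that box, so touchBegan never fires.
// Expanding contentSize to the map's size makes every on-map touch pass the test:
[self setContentSize:self.tileMap.contentSize];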

Problems scaling a UIImage to fit a UIButton

I have a set of buttons and differently sized images. I want to scale each image so that it fits its button at the correct aspect ratio. Once I've scaled an image, I set the button's image property to the scaled version.
UIImage *scaledImage = [image scaledForButton:pickerButton];
[pickerButton setImage:scaledImage forState:UIControlStateNormal];
My scaledForButton: method is defined in a category on UIImage. It looks like this:
- (UIImage *)scaledForButton:(UIButton *)button
{
// Check which dimension (width or height) to pay respect to and
// calculate the scale factor
CGFloat imageRatio = self.size.width / self.size.height;
CGFloat buttonRatio = button.frame.size.width / button.frame.size.height;
CGFloat scaleFactor = (imageRatio > buttonRatio ? self.size.width/button.frame.size.width : self.size.height/button.frame.size.height);
// Create image using scale factor
UIImage *scaledimage = [UIImage imageWithCGImage:[self CGImage]
scale:scaleFactor
orientation:UIImageOrientationUp];
return scaledimage;
}
When I run this on an iPad2 it works fine and the images are scaled correctly. However if I run it on a retina display (both in the simulator and on a device) the image does not scale correctly and is squished into the button.
Any ideas why this would happen on retina only? I've been scratching my head for a couple of days but can't figure it out. They're both running the same iOS and I've checked the scale and ratio outputs, which are always the same, regardless of device. Many thanks.
Found the answer here: UIButton doesn't listen to content mode setting?
If you're setting contentMode, it seems you have to set it on the UIButton's imageView property, not on the UIButton itself; once I did that, it worked properly.
The problem on the iPad 3 was as Herman suggested: the CGImage was still a lot larger than the UIButton, so even though it was scaled down, it still had to be resized to fit the button.
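In code, the fix described in the linked answer is along these lines (a sketch; pickerButton and image are the objects from the question):
// Let the button's internal image view do the aspect-fit scaling,
// instead of pre-scaling the UIImage yourself.
pickerButton.imageView.contentMode = UIViewContentModeScaleAspectFit;
[pickerButton setImage:image forState:UIControlStateNormal];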

UIAccessibilityElement, accessibilityFrame and screen rotation broken

I have a UIControl subclass which follows the UIAccessibilityContainer informal protocol: it returns NO to -isAccessibilityElement, delivers the correct -accessibilityElementCount and elements in the accessors.
Each UIAccessibilityElement which is created to represent an accessibility region is created successfully, and the frame is a 1:1 mapping of another CGRect I'm drawing.
E.g., I'm drawing into {{94, 99}, {209, 350}} and the -accessibilityFrame on the UIAccessibilityElement is set to the same CGRect value.
However, in landscape (or upside-down portrait) orientation, the frames are rotated incorrectly (only the accessibility element frames; drawing still works fine). The frames' top-left point is always measured from the screen corner that sits to the top-left of the home button.
Here's a screenshot from the simulator:
As you can see, it's in landscape mode, and the frame is clearly not what the element specifies.
Here's the code driving the creation of the elements:
CGRect localRect = someCGRectVariable;
CGRect globalRect = CGRectOffset(localRect, CGRectGetMinX(self.accessibilityFrame), CGRectGetMinY(self.accessibilityFrame));
UIAccessibilityElement *accElem = [[UIAccessibilityElement alloc]initWithAccessibilityContainer:self];
accElem.isAccessibilityElement = YES;
accElem.accessibilityFrame = globalRect;
accElem.accessibilityHint = [NSString stringWithFormat:NSLocalizedString(@"xyz %@", nil), someName];
accElem.accessibilityTraits = UIAccessibilityTraitButton;
accElem.accessibilityLabel = nameValue;
It looks to me like the rotation is busted, but I can't put my finger on it. It's worth noting that it works perfectly fine in portrait mode.
accessibilityFrame returns its answer in screen coordinates, without adjusting for device rotation.
Somewhere in Apple's docs it suggests you use [UIWindow convertRect:toWindow:] in this sort of case.
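One way to do that conversion (a sketch, assuming iOS 7 or later and that localRect is the element's rect in the container view's own coordinates; UIAccessibilityConvertFrameToScreenCoordinates is UIKit's helper for this, as an alternative to hand-rolled window math):
// Convert from the container view's coordinate space into the screen
// coordinates that accessibilityFrame expects.
accElem.accessibilityFrame = UIAccessibilityConvertFrameToScreenCoordinates(localRect, self);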

How do I clip or change alpha of an image (pixels) in Quartz?

I'm working on an iPhone app where there are two image views; when you touch the top one, the bottom one shows through wherever you tapped.
Basically what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or setting the alpha of the pixels in that rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle) and then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing all you need to do is call CGPathContainsPoint with the point that was touched (i.e., it tests whether the touch was in the visible area of the image).
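For example, something like this (a sketch using UIBezierPath clipping into an offscreen image context rather than a CGImage mask, but the idea is the same; maskRect, radius, and touchPoint are assumed values for the area you want to keep visible and the touch you're testing):
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:maskRect cornerRadius:radius];
[path addClip];                                   // everything outside the path stays transparent
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Hit testing the visible area later is then just:
BOOL hit = CGPathContainsPoint(path.CGPath, NULL, touchPoint, NO);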
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded rect path function you sent)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...