I have a UIControl subclass which follows the UIAccessibilityContainer informal protocol: it returns NO to -isAccessibilityElement, delivers the correct -accessibilityElementCount and elements in the accessors.
Each UIAccessibilityElement which is created to represent an accessibility region is created successfully, and the frame is a 1:1 mapping of another CGRect I'm drawing.
E.g., I'm drawing into {{94, 99}, {209, 350}} and the -accessibilityFrame on the UIAccessibilityElement is set to the same CGRect value.
However, in landscape (or upside-down portrait) orientation, the frames are rotated incorrectly (this affects only the accessibility elements; the drawing still works fine). The top-left point of each frame always ends up relative to the corner where the home button is.
Here's a screenshot from the simulator:
As you can see, it's in landscape mode, and the highlighted frame is nowhere near what the accessibility element specifies.
Here's the code driving the creation of the elements:
CGRect localRect = someCGRectVariable;
CGRect globalRect = CGRectOffset(localRect, CGRectGetMinX(self.accessibilityFrame), CGRectGetMinY(self.accessibilityFrame));
UIAccessibilityElement *accElem = [[UIAccessibilityElement alloc]initWithAccessibilityContainer:self];
accElem.isAccessibilityElement = YES;
accElem.accessibilityFrame = globalRect;
accElem.accessibilityHint = [NSString stringWithFormat:NSLocalizedString(@"xyz %@", nil), someName];
accElem.accessibilityTraits = UIAccessibilityTraitButton;
accElem.accessibilityLabel = nameValue;
It looks to me like the rotation is busted, but I can't put my finger on it. It's worth noting that it works perfectly fine in portrait mode.
accessibilityFrame returns its answer in screen coordinates, without adjusting for device rotation.
Somewhere in Apple's docs it suggests you use [UIWindow convertRect:toWindow:] in this sort of case.
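For instance, a minimal sketch of that conversion (an illustration only, reusing the localRect and accElem variables from the question; not verified against the original project):
CGRect rectInWindow = [self convertRect:localRect toView:nil]; // view coordinates -> window coordinates
CGRect rectOnScreen = [self.window convertRect:rectInWindow toWindow:nil]; // window coordinates -> screen coordinates
accElem.accessibilityFrame = rectOnScreen;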
Related
I'm essentially cloning Cropping a captured image exactly to how it looks in AVCaptureVideoPreviewLayer, since asking the original poster whether they found a solution isn't an "answer" and I can't comment yet because I don't have enough reputation...
The app I'm building will always be in portrait mode because rotation isn't important in this case.
I have an AVCaptureSession with the AVCaptureVideoPreviewLayer connected to a UIView of size 320x240 that is positioned against the top layout guide.
I have capturing the input working, but the image I receive is skewed and shows a lot more than the portion I'm displaying. How can I capture just the area that is shown in my AVCaptureVideoPreviewLayer?
Have a look at AVCaptureVideoPreviewLayer's
-(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect
This method lets you easily convert the visible CGRect of your layer to the actual camera output.
One caveat: the physical camera is not mounted "top side up", but rather rotated 90 degrees to the right. (So if you hold your iPhone home button right, the camera is actually top side up.)
Keeping this in mind, you have to convert the CGRect the above method gives you, to crop the image to exactly what is on screen.
Example:
CGRect visibleLayerFrame = ...; // the actual visible area of the preview layer, in the layer's coordinates
CGRect metaRect = [self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];
CGSize originalSize = [originalImage size];
if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
// For portrait images, swap the size of the image because
// here, the output image is actually rotated relative to what you see on screen.
CGFloat temp = originalSize.width;
originalSize.width = originalSize.height;
originalSize.height = temp;
}
// metaRect is fractional (values from 0 to 1), that's why we multiply by the image size here
CGRect cropRect;
cropRect.origin.x = metaRect.origin.x * originalSize.width;
cropRect.origin.y = metaRect.origin.y * originalSize.height;
cropRect.size.width = metaRect.size.width * originalSize.width;
cropRect.size.height = metaRect.size.height * originalSize.height;
cropRect = CGRectIntegral(cropRect);
This may be a bit confusing, but here's what made me really understand it:
Hold your device "home button right": you'll see that the x-axis actually lies along the "height" of your iPhone, while the y-axis lies along its "width". That's why, for portrait images, you have to swap the size ;)
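To finish the crop, a possible last step (a sketch, assuming originalImage is the captured UIImage and cropRect is the integral rect computed above; CGImageCreateWithImageInRect crops in the CGImage's own pixel space):
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
// Keep the original scale and orientation so the result displays the same way as the source.
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage scale:originalImage.scale orientation:originalImage.imageOrientation];
CGImageRelease(croppedCGImage);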
How can I accept touch input beyond the scene's bounds, so that no matter what I set self.position to, touches can still be detected?
I'm creating a tile-based game from a Ray Wenderlich tutorial on Cocos2d version 3.0. I am at the point of setting the view of the screen to a zoomed-in state on my tile map. I have successfully been able to do that, although now my touches are not responding since I'm outside the coordinate space the touches used to work in.
This method is called to set the zoomed view to the player's position:
-(void)setViewPointCenter:(CGPoint)position{
CGSize winSize = [CCDirector sharedDirector].viewSizeInPixels;
int x = MAX(position.x, winSize.width/2);
int y = MAX(position.y, winSize.height/2);
x = MIN(x, (_tileMap.mapSize.width * _tileMap.tileSize.width) - winSize.width / 2);
y = MIN(y, (_tileMap.mapSize.height * _tileMap.tileSize.height) - winSize.height / 2);
CGPoint actualPosition = ccp(x, y);
CGPoint centerOfView = ccp(winSize.width/2, winSize.height/2);
NSLog(#"centerOfView%#", NSStringFromCGPoint(centerOfView));
CGPoint viewPoint = ccpSub(centerOfView, actualPosition);
NSLog(#"viewPoint%#", NSStringFromCGPoint(viewPoint));
//This changes the position of the helloworld layer/scene so that
//we can see the portion of the tilemap we're interested in.
//That however makes my touchbegan method stop firing
self.position = viewPoint;
}
This is what the NSLog prints from the method:
2014-01-30 07:05:08.725 TestingTouch[593:60b] centerOfView{512, 384}
2014-01-30 07:05:08.727 TestingTouch[593:60b] viewPoint{0, -832}
As you can see, the y coordinate is -832. If I comment out the line self.position = viewPoint, then self.position reads {0, 0} and touches are detectable again, but then we don't have a zoomed view on the character. Instead it shows the view on the bottom left of the map.
Here's a video demonstration.
How can I fix this?
Update 1
Here is the github page to my repository.
Update 2
Mark has been able to come up with a temporary solution so far by setting the hitAreaExpansion to a large number like so:
self.hitAreaExpansion = 10000000.0f;
This will cause touches to respond again all over! However, if there is a solution that would not require me to set the property with an absolute number then that would be great!
-edit 3- (tl;dr version):
Setting the contentSize of the scene/layer to the size of the tilemap solves this issue:
[self setContentSize: self.tileMap.contentSize];
original replies below:
You would take the touch coordinate and subtract the layer position.
Generally something like:
touchLocation = ccpSub(touchLocation, self.position);
If you were to scale the layer, you would also need an appropriate conversion for that as well (see the sketch below).
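A rough sketch of the combined conversion (assuming touchLocation starts out in the parent/world coordinates and the layer only uses a uniform scale; variable names are illustrative):
touchLocation = ccpSub(touchLocation, self.position); // undo the layer's offset
touchLocation = ccpMult(touchLocation, 1.0f / self.scale); // undo any uniform scaling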
-edit 1-:
So, I had a chance to take another look, and it looks like my 'ridiculous' number was not ridiculous enough, or I had made another change. Anyway, if you simply add
self.hitAreaExpansion = 10000000.0f; // I'll let you find a more reasonable number
the touches will now get registered.
As for the underlying issue, I believe it to be one of content scale that is not set correctly, but again, I'll leave that to you for now. I did, however, find out while looking through some of the tilemap class that the tile size is said to be in pixels, not points, which I guess is somehow related to this.
-edit 2-:
The sub-optimal answer bugged me, so I looked a little further. Forgive me, I hadn't looked at v3 until I saw this question. :p
After inspecting the base class and observing the scene/layer's value of:
- (BOOL)hitTestWithWorldPos:(CGPoint)pos;
it became obvious that the content size of the scene/layer was being set to the current view size, which in the case of an iPad is (1024, 768).
The position of the layer after the setViewPointCenter call is fully above the initial view's position, hence the touch was being suppressed. By setting the layer/scene contentSize to the size of the tilemap, the touchable area is expanded over the entire map, which allows the node to process the touch.
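For reference, a sketch of where that fix might live in the scene's setup (the CCTiledMap class name and the file name here are assumptions based on the v3 API, not taken from the poster's repo):
-(id)init {
    if ((self = [super init])) {
        // Load and add the tilemap (hypothetical file name).
        _tileMap = [CCTiledMap tiledMapWithFile:@"TileMap.tmx"];
        [self addChild:_tileMap];
        // Expand the touchable area from the view size to the whole map,
        // so touches register even after the layer is repositioned.
        [self setContentSize:_tileMap.contentSize];
        self.userInteractionEnabled = YES;
    }
    return self;
}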
I have a UIView with a custom shape drawn in drawRect:. The frame property is set to:
{5.f, 6.f, 50.f, 50.f}
Now, I render the view in an Image Context, but it is ignoring the frame property of the UIView, and always drawing the UIView in the top left.
[_shapeView.layer renderInContext:UIGraphicsGetCurrentContext()];
I tried to change the frame of the CALayer, but nothing changed. Modifying the bounds property made things worse. The only workaround I found useful was:
CGRect frame = _shapeView.frame;
CGContextTranslateCTM(context, frame.origin.x, frame.origin.y);
[_shapeView.layer renderInContext:context];
But this is impractical when I'm dealing with many shapes; I want to just use the frame property.
Using CGContextTranslateCTM is the proper way to go. As the renderInContext: documentation states: "Renders in the coordinate space of the layer." This means the frame origin of your layer/view is ignored.
Regarding CGContextDrawLayer, it is not meant to be used with a CALayer but with a CGLayer. These are two very different things, which explains your crash.
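If many shape views have to be composited, the same translate trick can be wrapped in a loop; a sketch, assuming shapeViews is an array of the views to draw into the current image context:
CGContextRef context = UIGraphicsGetCurrentContext();
for (UIView *shapeView in shapeViews) {
    CGContextSaveGState(context);
    // renderInContext: draws in the layer's own coordinate space,
    // so translate to the view's frame origin first.
    CGContextTranslateCTM(context, shapeView.frame.origin.x, shapeView.frame.origin.y);
    [shapeView.layer renderInContext:context];
    CGContextRestoreGState(context);
}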
The view system in my app is highly customized and uses a number of views that are manually rotated from portrait to landscape based on user interactions (the rotation is done by applying an affine transform to the view/layer).
I want to present a popover inside one of these rotated views, but the orientation of the popover always appears relative to the orientation of the device (i.e., not relative to the view). I'm guessing the answer is no, but just in case someone has a clever idea: is there any way to manually rotate the view that is presented by UIPopoverController?
Sean, I just tested it for kicks, yes it works.
It has to be done (in my case at least) in viewDidAppear (if done in viewWillAppear, it gets knocked back to the original setting.)
This worked just fine (just tested now) to have a popover at a 90-degree angle, i.e., in my case my main view is in portrait mode and the popover is turned 90 degrees.
self.navigationController.view.superview.superview.transform = CGAffineTransformMakeRotation (M_PI/2.0);
Are you trying to rotate the popover or just the content shown in the popover? You can control some of the former by setting which arrow orientations are possible. I'm interested in the latter, and it seems to work just by grabbing the content view controller. E.g.:
aPopoverController.contentViewController.view.transform = CGAffineTransformMakeRotation(M_PI);
DISCLAIMER: If you're at all interested in trying to get your app into the store, this code is almost certainly grounds for rejection. It dives into UIKit's private APIs, which is a big no-no as far as Apple is concerned.
@RunningPink had the right idea. Depending on how the view hierarchy is set up, the popover may be farther up than two superviews. The popover itself is an instance of the (private) class _UIPopoverView (at least in iOS 5). You can find this view by doing:
UIView *possiblePopover = popoverController.contentViewController.view;
while (possiblePopover != nil) {
// Climb up the view hierarchy
possiblePopover = possiblePopover.superview;
if ([NSStringFromClass([possiblePopover class]) isEqualToString:@"_UIPopoverView"]) {
// We found the popover, break out of the loop
break;
}
}
if (nil != possiblePopover) {
// Do whatever you want with the popover
}
In doing this, I found that transforming the view often ended up making the popover look blurry. I found the reason was that the popover's superview was an instance of another private class called UIDimmingView which is responsible for accepting touches outside of the popover and causing the popover to dismiss. Performing the rotation on the dimming view removed the blurriness I was seeing in the popover.
However, transforming the dimming view can result in weirdness where certain parts of the window are not "covered" by it, so the popover will not dismiss if those parts of the window are tapped. To get around this, I applied the rotation to the dimming view, reset the dimming view's frame to cover the screen, and then translated the popover view into place.
if (nil != possiblePopover) {
// Found the popover view
CGAffineTransform rotation = CGAffineTransformMakeRotation(-M_PI_2);
CGAffineTransform translation = ...; // Whatever translation is necessary here
// Rotate the UIDimming View and reset its frame
[possiblePopover.superview setTransform:rotation];
[possiblePopover.superview setFrame:CGRectMake(0, 0, possiblePopover.superview.frame.size.height, possiblePopover.superview.frame.size.width)];
// Translate the popover view
[possiblePopover setTransform:translation];
}
I have a UIViewController with one UIWebView in it. I'd like the UIWebView to be positioned in the centre of the iPad screen in both landscape and portrait modes. So, I've implemented it like this:
// UIViewController
// InfoGraphicView is the UIWebView
-(BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
// Overriden to allow any orientation.
return YES;
}
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
duration:(NSTimeInterval)duration {
if (toInterfaceOrientation == UIInterfaceOrientationPortrait ||
toInterfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) {
[self layoutPortrait];
} else {
[self layoutLandscape];
}
}
- (void)layoutLandscape {
NSLog(#"Layout Landscape");
infoGraphicView.frame = CGRectMake(100, 100, 936, 700);
}
- (void)layoutPortrait {
NSLog(#"Layout Portrait");
infoGraphicView.frame = CGRectMake(100, 100, 700, 936);
}
However, it's not behaving as I expected. In the above code, I would expect the UIWebView to be 100px (or points, or whatever the unit is) away from the top and the left. But it's not. In portrait mode it appears flush with the top left of the screen, and in landscape mode it seems to be partially offscreen in the top left.
If I set the frame as CGRectMake(-100, 100, 700, 936) then I get it positioned in the center of the screen as I'd like it to be, but I've no idea why.
As usual, there's most likely something simple I'm overlooking but I can't figure it out. Any help greatly appreciated as always.
The coordinates you set on infoGraphicView are relative to its superview, not to the screen generally. And views don't necessarily clip their subviews. Furthermore, the size automatically given to self.view will depend on the resizing flags set in Interface Builder. However, I think that by default it is set to fill the whole screen.
That said, I think the mistake is in your use of willRotateToInterfaceOrientation:duration:. That is called before the rotation begins, so self.view has the old size (i.e., it'll still be portrait sized if rotating from portrait to landscape, and vice versa). It's probably better to hook willAnimateRotationToInterfaceOrientation:duration:; by then the correct size has been set, and you'll be within the Core Animation block, so your view will grow/shrink as part of the rotation animation.
It's also worth checking which resizing flags you have set on infoGraphicView. They'll take effect automatically, in addition to any changes you make. So you probably want to disable them all.
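For instance, a sketch of moving the layout into that callback (reusing the layoutPortrait/layoutLandscape methods from the question):
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                          duration:(NSTimeInterval)duration {
    // self.view already has its post-rotation bounds here, and we're inside the
    // rotation animation block, so the frame change animates with the rotation.
    if (UIInterfaceOrientationIsPortrait(toInterfaceOrientation)) {
        [self layoutPortrait];
    } else {
        [self layoutLandscape];
    }
}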
This probably is an issue with the view that the web view is in. The coordinate system used is that of the view’s superview. If that view isn’t being resized on rotation, then you’ll see unexpected layout like this. You can access the superview of a view through the superview property; one way to see its frame would be to use its description. Put this line in one of your layout methods:
NSLog(#"Superview: %#", [infoGraphicView superview]);
That should print out a description of the view.
Once you get that figured out, if you want the web view to have the same layout, you can use its autoresizingMask property. If you set it like this:
infoGraphicView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
Then the view will automatically change its width and height to keep the top, left, right, and bottom margins the same.
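As an aside, if the goal is instead to keep the web view at a fixed size and have it stay centered as the superview changes size, a hedged alternative is to make all four margins flexible rather than the width and height:
// Fixed size, flexible margins: the view keeps its dimensions and stays centered
// (relative to its original margins) when the superview resizes.
infoGraphicView.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleBottomMargin;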