Currently I do something that feels a bit fuzzy because I am dealing with points on my UI object, but what I want is to get the width and height of the UI object (in my case a UIImageView) in pixels.
Is this possible? I have looked around the documentation but I have not seen anything that looks relevant.
Can anyone assist with this?
Thanks!
Have you tried this?
[object frame].size.height
[object frame].size.width
I'm pretty sure that anything with a visual representation will have a frame property to indicate where it's located. Of course, the origin of an object's frame is relative to its container, but the size should always be usable.
Edit/Update:
I misread the initial question, which also asks how to convert points to pixels. Here's how to make sure you get that right:
float scale;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    // The scale message was introduced in iOS 4.0
    scale = [[UIScreen mainScreen] scale];
} else {
    // Anything running < iOS 4.0 doesn't have a Retina display
    scale = 1.0;
}
Then, multiply the height and width values by scale to get the actual pixel count.
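For example, a minimal sketch (imageView here is a stand-in for the asker's UIImageView):

// imageView is a hypothetical UIImageView; scale comes from the block above
CGSize sizeInPoints = imageView.frame.size;
CGFloat widthInPixels = sizeInPoints.width * scale;
CGFloat heightInPixels = sizeInPoints.height * scale;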
I cannot find a way to set the height of an NSProgressIndicator programmatically.
My try so far:
NSProgressIndicator *ind = [[NSProgressIndicator alloc] init];
[ind setStyle: NSProgressIndicatorBarStyle];
// Changing the frame height does not change the height of the actual indicator
[ind setFrame: NSMakeRect(0, 0, 100, 50)];
[ind setBounds: NSMakeRect(0, 0, 100, 50)];
//[ind setControlSize: 0]; only makes it smaller, not bigger
[view addSubview: ind];
I found the NSProgressIndicatorBarStyle enumeration in the documentation, but I couldn't find a method to specify the thickness.
Here is a screenshot describing my problem (the layer has a red background for better understanding):
This also occurs when using the NSButton class. Is there a workaround for this?
In iOS you can't change the progress indicator's height just by changing its frame, due to framework restrictions. However, you should be able to achieve the same result by playing with a transform:
_indicator.transform = CGAffineTransformMakeScale(1.0f, 0.6f);
EDIT: I just tried on Mac OS X
_indicator.layer.transform = CATransform3DMakeScale(1.0f, 0.6f, 0.0f);
and it doesn't work, so it is not like iOS, likely because of how the control is implemented in Cocoa (as Ken suggested).
The only way I managed to change the height is using controlSize, but I don't think it will suit your needs (since it doesn't allow you to specify points).
[_indicator setControlSize:NSMiniControlSize]; // or NSSmallControlSize
You should be able to use an arbitrary frame by subclassing NSProgressIndicator and overriding drawRect:. At this point my recommendation would be to look around for something that can be extended for your use, like this one (a rough subclassing sketch follows the link):
https://www.cocoacontrols.com/controls/lbprogressbar
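For illustration, a minimal, hedged sketch of what such a subclass could look like; the class name and drawing logic are assumptions, not a drop-in control:

// TallProgressIndicator is a hypothetical subclass; the drawing is a
// bare-bones illustration, not a full replacement for the control.
@interface TallProgressIndicator : NSProgressIndicator
@end

@implementation TallProgressIndicator
- (void)drawRect:(NSRect)dirtyRect
{
    // Track: fill the entire bounds, at whatever height the frame allows
    [[NSColor lightGrayColor] set];
    NSRectFill(self.bounds);

    // Bar: fill proportionally to the current progress value
    NSRect bar = self.bounds;
    bar.size.width *= (self.doubleValue - self.minValue) / (self.maxValue - self.minValue);
    [[NSColor blueColor] set];
    NSRectFill(bar);
}
@end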
In IB you can:
1. select your NSProgressIndicator control
2. in the Utilities view, select the View Effects inspector
3. press + in Content Filters
4. select the Lanczos Scale Transform filter
5. set the appropriate scale value in the Scale row
6. set the Aspect Ratio too if you only need to change the height
This can also be added programmatically; look up how to add Content Filters to an NSView.
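A hedged sketch of the programmatic route (the indicator variable and the scale values are placeholders, not recommendations):

// indicator is a hypothetical NSProgressIndicator reference
CIFilter *lanczos = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[lanczos setDefaults];
[lanczos setValue:@2.0 forKey:@"inputScale"];       // vertical scale factor (placeholder value)
[lanczos setValue:@0.5 forKey:@"inputAspectRatio"]; // compensate horizontally so only the height changes
indicator.wantsLayer = YES;
indicator.layerUsesCoreImageFilters = YES;
indicator.contentFilters = @[lanczos];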
How can I accept touch input beyond the scene's bounds, so that no matter what I set self.position to, touches can still be detected?
I'm creating a tile-based game from a Ray Wenderlich tutorial on Cocos2d version 3.0. I am at the point of setting the view of the screen to a zoomed-in state on my tile map. I have successfully been able to do that, although now my touches are not responding since I'm outside the coordinate space the touches used to work in.
This method is called to set the zoomed view to the player's position:
-(void)setViewPointCenter:(CGPoint)position {
    CGSize winSize = [CCDirector sharedDirector].viewSizeInPixels;
    int x = MAX(position.x, winSize.width / 2);
    int y = MAX(position.y, winSize.height / 2);
    x = MIN(x, (_tileMap.mapSize.width * _tileMap.tileSize.width) - winSize.width / 2);
    y = MIN(y, (_tileMap.mapSize.height * _tileMap.tileSize.height) - winSize.height / 2);
    CGPoint actualPosition = ccp(x, y);
    CGPoint centerOfView = ccp(winSize.width / 2, winSize.height / 2);
    NSLog(@"centerOfView%@", NSStringFromCGPoint(centerOfView));
    CGPoint viewPoint = ccpSub(centerOfView, actualPosition);
    NSLog(@"viewPoint%@", NSStringFromCGPoint(viewPoint));
    // This changes the position of the helloworld layer/scene so that
    // we can see the portion of the tilemap we're interested in.
    // That, however, makes my touchBegan method stop firing.
    self.position = viewPoint;
}
This is what the NSLog prints from the method:
2014-01-30 07:05:08.725 TestingTouch[593:60b] centerOfView{512, 384}
2014-01-30 07:05:08.727 TestingTouch[593:60b] viewPoint{0, -832}
As you can see, the y coordinate is -832. If I comment out the line self.position = viewPoint, then self.position reads {0, 0} and touches are detectable again, but then we don't have a zoomed view on the character. Instead it shows the view at the bottom left of the map.
Here's a video demonstration.
How can I fix this?
Update 1
Here is the github page to my repository.
Update 2
Mark has been able to come up with a temporary solution so far by setting the hitAreaExpansion to a large number like so:
self.hitAreaExpansion = 10000000.0f;
This causes touches to respond again all over! However, if there is a solution that does not require setting the property to an arbitrarily large number, that would be great!
-edit 3- (tl;dr version):
Setting the contentSize of the scene/layer to the size of the tilemap solves this issue:
[self setContentSize: self.tileMap.contentSize];
original replies below:
You would take the touch coordinate and subtract the layer position.
Generally something like:
touchLocation = ccpSub(touchLocation, self.position);
If you were to scale the layer, you would also need an appropriate translation for that as well; see the sketch below.
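For instance, a hedged sketch of adjusting a touch for both position and scale (the variable names are assumptions about a cocos2d v3 setup):

// Undo the layer's position, then its scale (names assumed).
CGPoint touchLocation = [touch locationInNode:self.parent];
touchLocation = ccpSub(touchLocation, self.position);
touchLocation = ccpMult(touchLocation, 1.0f / self.scale);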
-edit 1-:
So, I had a chance to take another look, and it looks like my 'ridiculous' number was not ridiculous enough, or I had made another change. Anyway, if you simply add
self.hitAreaExpansion = 10000000.0f; // I'll let you find a more reasonable number
the touches will now get registered.
As for the underlying issue, I believe it to be one of content scale not being set correctly, but again, I'll leave that to you for now. I did, however, notice while looking through some of the tilemap class that tileSize is said to be in pixels, not points, which I guess is somehow related to this.
-edit 2-:
The sub-optimal answer bugged me, so I looked a little further. Forgive me, I hadn't looked at v3 until I saw this question. :p
After inspecting the base class and observing the scene/layer's value of:
- (BOOL)hitTestWithWorldPos:(CGPoint)pos;
it became obvious that the content size of the scene/layer was being set to the current view size, which in the case of an iPad is (1024, 768).
The position of the layer after the setViewPointCenter call is fully above the initial view's position; hence, the touch was being suppressed. By setting the layer/scene contentSize to the size of the tilemap, the touchable area is now expanded over the entire map, which allows the node to process the touch.
I have a UIScrollView which I'm using to represent an axis on a graph. I'd like the user to be able to zoom in on the axis using the usual pinch motion, but for it to scale only vertically, not horizontally.
My question is similar to this one, but I've tried the solution suggested there (overriding the subview's SetTransform method so that it ignores scaling in one direction), and it works perfectly when constraining scaling horizontally, but not vertically. When I try implementing it vertically, the first pinch action works fine, but subsequent pinches seem to reset the zoom scale to one before having any effect.
Does anyone know what might be causing this behaviour, and more importantly how I can get around it?
I'm using MonoTouch but answers using Objective-C are fine.
I know this question was posted quite a while ago, but here's an answer for anyone stuck on this problem.
I looked over the question you linked to, rankAmateur, and I think the simple way to fix the solution found there to suit your needs is to replace the CGAffineTransform's "a" property with its "d" property in the setTransform: method.
- (void)setTransform:(CGAffineTransform)newValue
{
    CGAffineTransform constrainedTransform = CGAffineTransformIdentity;
    // constrainedTransform.a = newValue.a;
    constrainedTransform.d = newValue.d;
    [super setTransform:constrainedTransform];
}
I'm not very well versed in CGAffineTransform, but this worked for me, and after browsing the documentation it seems the "a" property corresponds to a view's x-axis and the "d" property corresponds to a view's y-axis.
EDIT
So after going back and realizing what the question really was, I did some more digging and I'm a bit stumped. Having experienced the same behavior rankAmateur mentions above, it seems incredibly unusual for the CGAffineTransform to work perfectly well with zoomScale when zooming is constrained to horizontal only, but not when constrained to vertical only.
The only hypothesis I can offer is that it might have something to do with the differing default coordinate systems of Core Graphics and UIKit: the x-axis works the same way in both, while the y-axes point in opposite directions. Perhaps somehow this gets muddled up in the previously mentioned overriding of setTransform:.
This answer depends heavily on the answer from starryVere (thumbs up!)
This is starryVere's code in Swift. It goes in the zoomed UIView subclass:
var initialScale: CGFloat = 1.0

override var transform: CGAffineTransform {
    set {
        //print("1 transform... \(newValue), frame=\(self.frame), bounds=\(self.bounds)")
        var constrainedTransform = CGAffineTransformIdentity
        constrainedTransform.d = self.initialScale * newValue.d // vertical zoom
        //constrainedTransform.a = newValue.a // horizontal zoom
        super.transform = constrainedTransform
        //print("2 transform... \(constrainedTransform), frame=\(self.frame), bounds=\(self.bounds)")
    }
    get {
        return super.transform
    }
}
The commented-out prints are very helpful for understanding what happens to bounds and frame during the transformation.
Now to the scale problem:
The method scrollViewDidEndZooming of the containing UIScrollViewDelegate has a scale parameter. According to my tests, this parameter contains the value zoomedView.transform.a, which is the horizontal scale factor that we set to 1.0 using CGAffineTransformIdentity. So scale is always 1.0.
The fix is easy:
func scrollViewDidEndZooming(scrollView: UIScrollView, withView view: UIView?, atScale scale: CGFloat) {
    let myScale = zoomView.transform.d
}
Use myScale as you would use scale in cases with horizontal zoom.
After struggling with the same issue, I was able to come up with a workaround.
Use this code for the setTransform method.
-(void)setTransform:(CGAffineTransform)transform
{
    CGAffineTransform constrainedTransform = CGAffineTransformIdentity;
    constrainedTransform.d = self.initialScale * transform.d;
    constrainedTransform.d = (constrainedTransform.d < MINIMUM_ZOOM_SCALE) ? MINIMUM_ZOOM_SCALE : constrainedTransform.d;
    [super setTransform:constrainedTransform];
}
Set the initialScale property from within the scrollViewWillBeginZooming delegate method, along the lines of the sketch below.
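A hedged sketch of that delegate method (ZoomView and its initialScale property are assumptions mirroring the override above):

- (void)scrollViewWillBeginZooming:(UIScrollView *)scrollView withView:(UIView *)view
{
    // Capture the current vertical scale so the next pinch builds on it.
    ZoomView *zoomView = (ZoomView *)view; // hypothetical subclass exposing initialScale
    zoomView.initialScale = zoomView.transform.d;
}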
It would be more helpful if you provided sample code of what you are trying, but here are some lines to try. You have to make the content size width equal to 320, i.e. equal to the screen width of the iPhone.
scrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 45,320,480)];
scrollView.contentSize = CGSizeMake(320,1000);
scrollView.showsVerticalScrollIndicator = YES;
The MonoTouch version follows:
scrollView = new UIScrollView (new RectangleF (0, 45, 320, 480)) {
    ContentSize = new SizeF (320, 1000),
    ShowsVerticalScrollIndicator = true
};
Hope it helps! :)
And yes, don't forget to accept the answer if it helps. :D
In a project I'm working on, I have 3 images: top, middle, and bottom. Top and bottom are fixed height, and middle should be repeated in between the two. (The window size will be changing.) They are all tinted with a color from the user preferences, then need to have their alpha set using a value from the preferences.
I can do pretty much everything. The part I get stuck at is drawing the middle. I decided using [NSColor +colorWithPatternImage:] would be the easiest approach. There's a lot of code that makes the actual images and colors, so just assume they exist and are not nil.
int problem; // just to help explain things
float alpha;
NSImage *middleTinted;

NSRect drawRect = [self bounds];
drawRect.size.height = self.bounds.size.height - topTinted.size.height - bottomTinted.size.height;
drawRect.origin.y = topTinted.size.height;

NSColor *colorOne = [NSColor colorWithPatternImage:middleTinted];
NSColor *colorTwo = [colorOne colorWithAlphaComponent:alpha];

if (problem == 1)
{
    [colorOne set];
}
else if (problem == 2)
{
    [colorTwo set];
}

[NSBezierPath fillRect:drawRect];
Assuming problem == 1, it draws the correct image, in the correct location and with the correct size, but no alpha. (Obviously, since I didn't specify one.)
When problem == 2, I'd expect it to do the same thing, but have the correct alpha value. Instead of this, I get a black box.
Is there a solution that will repeat the image with the correct alpha? I figure I could just draw the image manually in a loop, but I'd prefer a more reasonable solution if one exists.
I expect the problem is that pattern colors don't support -colorWithAlphaComponent:.
AppKit includes a function called NSDrawThreePartImage that does the work of drawing end caps and a tiled image in between. It also has an alphaFraction parameter that should meet your needs.
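For instance, a minimal sketch of that call inside drawRect:, using the poster's three tinted images; treat it as a starting point rather than the exact drawing code:

// Draw bottom cap, tiled middle, and top cap in one call.
// topTinted, middleTinted, bottomTinted, and alpha are from the question.
NSDrawThreePartImage([self bounds],
                     bottomTinted,          // start cap
                     middleTinted,          // tiled center fill
                     topTinted,             // end cap
                     YES,                   // vertical
                     NSCompositeSourceOver, // compositing operation
                     alpha,                 // alphaFraction from the preferences
                     NO);                   // flipped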
If that doesn't work for you, then you might get the pattern color approach to work by re-rendering your middleTinted image into a new NSImage, using the desired alpha value. (See NSImage's draw... methods.)
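Something like this hedged sketch (lockFocus-based drawing, era-appropriate for this code):

NSImage *fadedMiddle = [[NSImage alloc] initWithSize:middleTinted.size];
[fadedMiddle lockFocus];
// Bake the desired alpha into the image itself.
[middleTinted drawInRect:NSMakeRect(0, 0, middleTinted.size.width, middleTinted.size.height)
                fromRect:NSZeroRect
               operation:NSCompositeSourceOver
                fraction:alpha];
[fadedMiddle unlockFocus];
NSColor *patternWithAlpha = [NSColor colorWithPatternImage:fadedMiddle];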
I use CGWindowListCopyWindowInfo to get a list of all windows. It gives me the co-ordinates of each window based upon the origin being the top-left of the screen.
If I use NSWindow's setFrame method, the co-ordinates are based upon the origin being the bottom-left of the screen.
What's a clean, reliable way to convert from one to the other?
Added: By clean and reliable, I mean something sure to work regardless of whether the user has multiple screens or is using Spaces. I figure there must be a known idiom using library APIs.
Math is quite reliable :-)
yFromBottom = screenHeight - windowHeight - yFromTop
Main screen height is
[[[NSScreen screens] objectAtIndex:0] frame].size.height
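Putting the two together, a hedged sketch of converting a top-left-based rect (as returned by CGWindowListCopyWindowInfo) to the bottom-left-based frame NSWindow expects; topLeftRect and window are placeholder names:

// topLeftRect and window are placeholders for illustration.
CGFloat screenHeight = [[[NSScreen screens] objectAtIndex:0] frame].size.height;
NSRect frame = topLeftRect;
frame.origin.y = screenHeight - topLeftRect.origin.y - topLeftRect.size.height;
[window setFrame:frame display:YES];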
I would suggest using an NSAffineTransform. If you draw with respect to the default origin and then apply a transform to the view, you can essentially flip things around in one fell swoop.
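For example, the standard flip idiom looks something like this minimal sketch, applied before drawing in a view:

// Flip the view's coordinate system vertically before drawing.
NSAffineTransform *flip = [NSAffineTransform transform];
[flip translateXBy:0 yBy:[self bounds].size.height];
[flip scaleXBy:1.0 yBy:-1.0];
[flip concat]; // subsequent drawing now uses a top-left origin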
Try something like this (from here):
NSRect boundsInWindow = [myView convertRect:[myView bounds] toView:nil];
NSRect visibleRectInWindow = [myView convertRect:[myView visibleRect] toView:nil];
// Flip Y to convert NSWindow coordinates to top-left-based window coordinates.
float borderViewHeight = [[myView window] frame].size.height;
boundsInWindow.origin.y = borderViewHeight - NSMaxY(boundsInWindow);
visibleRectInWindow.origin.y = borderViewHeight - NSMaxY(visibleRectInWindow);