In Cocos2D, I would like a sprite placed at a screen coordinate, not a map coordinate. I thought I could do this with convertToNodeSpace, but it doesn't seem to do what I want.
I thought this should place a sprite in the middle of my iPad screen:
selectionScreenOverlaySprite.position = [self convertToNodeSpace:CGPointMake(512, 384)];
But it doesn't. It also places the sprite in a different place depending on the size of my map. Does anyone know what I should be using? I've also tried convertToWorldSpace, convertToNodeSpaceAR, and convertToWorldSpaceAR.
Try this:
CGSize wins = [[CCDirector sharedDirector] winSize];
[yourSprite setPosition:CGPointMake(wins.width / 2, wins.height / 2)];
This is better than using hard-coded values because it will work regardless of resolution.
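If your sprite is a child of a layer that scrolls with the map, any position you set on it will move with the map, which would explain the map-size dependence. One common pattern (just a sketch, with scene, mapLayer and hudLayer as assumed names) is to keep screen-anchored sprites on a separate layer that never moves:
// Sketch: screen-anchored sprites live on a HUD layer that is never scrolled.
CGSize winSize = [[CCDirector sharedDirector] winSize];

CCLayer *hudLayer = [CCLayer node];
selectionScreenOverlaySprite.position = ccp(winSize.width / 2, winSize.height / 2);
[hudLayer addChild:selectionScreenOverlaySprite];

[scene addChild:mapLayer z:0];   // scrolls with the map
[scene addChild:hudLayer z:1];   // stays fixed on screen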
I need to perform efficient hit testing against a (potentially huge) number of components so I've represented all my primitives as NSBezierPath instances. All working great so far.
Now I'm having trouble converting NSString objects, in particular reflecting their position in the view:
I'm using the NSString (BezierConversions) category from Apple's SpeedometerView example to convert strings into bezier paths.
The bezier path created for a string looks great, but positioning it to match the location of the NSString instance in the view doesn't quite work, so I suppose this question is really about:
NSBezierPath and transformUsingAffineTransform: vs.
a combination of NSAffineTransform applied to a view and NSString drawAtPoint:
In my test project even the trivial case fails:
Grey bezier representation of string drawn using:
NSAffineTransform *moveFinal = [NSAffineTransform transform];
[moveFinal translateXBy:x yBy:y];
[textBezier transformUsingAffineTransform:moveFinal];
and the purple string via
[testString drawAtPoint:NSMakePoint(x, y)
withAttributes:attributes];
Same attributes, same input positions, different location in view.
And it just gets worse with rotated text.
UPDATE #1
Looks like it's boiling down to different bounding boxes returned by
NSString sizeWithAttributes:
NSBezierPath bounds
Now experimenting with NSString boundingRectWithSize
FWIW - Working now.
Using boundingRectWithSize:options:attributes: with the NSStringDrawingUsesDeviceMetrics option gives some good text dimensions to work with, including the actual bounding box occupied by the string when drawn and the offset of the first glyph.
Offset the NSBezierPath returned from bezierWithFont: by that amount and you're good to go.
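For reference, a minimal sketch of that fix, assuming testString, attributes, textBezier, x and y from the snippets above; the direction of the offset may need flipping depending on whether your view is flipped:
NSRect deviceRect = [testString boundingRectWithSize:NSMakeSize(CGFLOAT_MAX, CGFLOAT_MAX)
                                             options:NSStringDrawingUsesDeviceMetrics
                                          attributes:attributes];

// deviceRect.origin is the offset of the first glyph relative to the draw point,
// so shift the path by the draw point plus that offset before stroking/filling it.
NSAffineTransform *moveFinal = [NSAffineTransform transform];
[moveFinal translateXBy:(x + deviceRect.origin.x) yBy:(y + deviceRect.origin.y)];
[textBezier transformUsingAffineTransform:moveFinal];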
The UIScrollView has a lot of information available to the programmer, but I don't see an obvious way to control the location that the view stops at after decelerating from a scroll gesture.
Basically, I would like the scroll view to snap to specific regions of the screen. The user can still scroll as normal, but when they stop scrolling, the view should snap to the most relevant location, and in the case of a flick gesture the deceleration should stop at these locations too.
Is there an easy way to do something like this, or should I consider the only way to accomplish this effect to write a custom scrolling control?
Since the UITableView is a UIScrollView subclass, you could implement the UIScrollViewDelegate method:
- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView
withVelocity:(CGPoint)velocity
targetContentOffset:(inout CGPoint *)targetContentOffset
Then compute the closest desired target content offset and set it on the inout CGPoint parameter.
I've just tried this and it works well.
First, retrieve the unguided offset like this:
CGFloat unguidedOffsetY = targetContentOffset->y;
Then figure out, through some math, where you'd want it to be, accounting for the height of the table header. Here's a sample from my code dealing with custom cells representing US states:
CGFloat guidedOffsetY;
if (unguidedOffsetY > kFirstStateTableViewOffsetHeight) {
    int remainder = lroundf(unguidedOffsetY) % lroundf(kStateTableCell_Height_Unrotated);
    log4Debug(@"Remainder: %d", remainder);
    if (remainder < (kStateTableCell_Height_Unrotated / 2)) {
        guidedOffsetY = unguidedOffsetY - remainder;
    }
    else {
        guidedOffsetY = unguidedOffsetY - remainder + kStateTableCell_Height_Unrotated;
    }
}
else {
    guidedOffsetY = 0;
}
targetContentOffset->y = guidedOffsetY;
The last line above actually writes the value back into the inout parameter, which tells the scroll view that this is the y-offset you'd like it to snap to.
Finally, if you're dealing with a fetched results controller, and you want to know what just got snapped to, you can do something like this (in my example, the property "states" is the FRC for US States). I use that information to set a button title:
NSUInteger selectedStateIndexPosition = floorf((guidedOffsetY + kFirstStateTableViewOffsetHeight) / kStateTableCell_Height_Unrotated);
log4Debug(@"selectedStateIndexPosition: %d", selectedStateIndexPosition);
NSIndexPath *indexPath = [NSIndexPath indexPathForRow:selectedStateIndexPosition inSection:0];
CCState *selectedState = [self.states objectAtIndexPath:indexPath];
log4Debug(@"Selected State: %@", selectedState.name);
self.stateSelectionButton.titleLabel.text = selectedState.name;
OFF-TOPIC NOTE: As you probably guessed, the "log4Debug" statements are just logging. Incidentally, I'm using Lumberjack for that, but I prefer the command syntax from the old Log4Cocoa.
In scrollViewDidEndDecelerating: and scrollViewDidEndDragging:willDecelerate: (the latter only when the willDecelerate parameter is NO), you should set the contentOffset property of your UIScrollView to the desired position.
You will also know the current position by checking the contentOffset property of your scroll view, and from that you can calculate the closest of the desired regions you have.
Although you don't have to create your own scrolling control, you will have to scroll manually to the desired positions.
To add to what felipe said, I've recently created a table view that snaps to cells in a similar way to UIPickerView.
A clever scroll view delegate is definitely the way to do this (and you can also do it on a UITableView, since it's just a subclass of UIScrollView).
I did this by, once the scroll view started decelerating (i.e. after scrollViewDidEndDragging:willDecelerate: is called), responding to scrollViewDidScroll: and computing the diff with the previous scroll event.
When the diff is less than, say, 2 to 5 pixels, I check for the nearest cell, then wait until that cell has been passed by a few pixels, then scroll back in the other direction with setContentOffset:animated:.
That creates a little bounce effect that is very nice for the user experience, as it gives good feedback on the snapping.
You'll have to be clever and not do anything when the table is bouncing at the top or bottom (comparing the offset to 0 or the content size will tell you that).
It works pretty well in my case because the cells are small (about 80-100 px high); you might run into problems if you have a regular scroll view with bigger content areas.
Of course, you will not always decelerate past a cell, so in that case I just scroll to the nearest cell and the animation looks jerky. It turns out that with the right tuning it barely ever happens, so I'm fine with this.
Expect to spend a few hours tuning the actual values for your specific screen, and you can get something decent.
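A rough sketch of that approach, simplified to snap straight to the nearest cell rather than reproducing the deliberate overshoot-and-bounce; watchingForSnap, lastOffsetY, kCellHeight and kSlowScrollThreshold are assumed names, not from the original answer:
static const CGFloat kSlowScrollThreshold = 4.0;  // "a few pixels"
static const CGFloat kCellHeight = 90.0;          // cells are roughly 80-100 px high

- (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate
{
    self.watchingForSnap = decelerate;             // only watch while decelerating
    self.lastOffsetY = scrollView.contentOffset.y;
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    if (!self.watchingForSnap) return;

    CGFloat y = scrollView.contentOffset.y;
    CGFloat diff = fabs(y - self.lastOffsetY);
    self.lastOffsetY = y;

    // Do nothing while the table is bouncing at the top or bottom.
    CGFloat maxOffset = scrollView.contentSize.height - scrollView.bounds.size.height;
    if (y < 0 || y > maxOffset) return;

    // Once the deceleration has slowed to a crawl, snap to the nearest cell.
    if (diff > 0 && diff < kSlowScrollThreshold) {
        self.watchingForSnap = NO;
        CGFloat snappedY = roundf(y / kCellHeight) * kCellHeight;
        [scrollView setContentOffset:CGPointMake(0, snappedY) animated:YES];
    }
}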
I've also tried the naive approach of calling setContentOffset:animated: in scrollViewDidEndDecelerating:, but it creates a really weird animation (or a plainly confusing jump if you don't animate), and it gets worse the lower the deceleration rate is (you'll be jumping from a slow movement to a much faster one).
So to answer the question:
- No, there is no easy way to do this; it'll take some time polishing the actual values of the algorithm above, which might not work at all on your screen.
- Don't try to create your own scroll view; you'll just waste time and badly reinvent, with truckloads of bugs, a beautiful piece of engineering that Apple created. The scroll view delegate is the key to your problem.
Try something like this:
- (void)snapScroll
{
    // Integer division truncates the fractional part, snapping to a subview boundary.
    int temp = (theScrollView.contentOffset.x + halfOfASubviewsWidth) / widthOfSubview;
    theScrollView.contentOffset = CGPointMake(temp * widthOfSubview, 0);
}

- (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate
{
    if (!decelerate) {
        [self snapScroll];
    }
}

- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
{
    [self snapScroll];
}
This takes advantage of integer division dropping the fractional part (for example, with a subview width of 100 and a half-width of 50, a content offset of 230 gives (230 + 50) / 100 = 2, which snaps to x = 200). It also assumes all your subviews are laid out from (0, 0) and that only the contentOffset makes them show up in different areas.
Note: hook up the delegate and this works perfectly fine. You're getting a slightly modified version; mine just has the actual constants. I renamed the variables here so you can read it easily.
How do I generate an end screen when two images collide? I am making an app with a stickman you move with a very sensitive accelerometer. If it hits these spikes (UIImages), it should generate the end screen. How do I make the app detect this collision and then generate an end screen?
I'm sure you know the rects of the two images, because you need them to draw the images, so you can use:
bool CGRectIntersectsRect (
CGRect rect1,
CGRect rect2
);
It returns YES if the two rects have at least one point in common.
The fact that you haven't declared any rects doesn't matter; you need rects for collision detection. I assume you at least have x and y coordinates for the stickman, and you should have some idea of his height and width. Judging from the question title, it seems like you're using images to draw the objects you want to check for collision, so you should know the height and width of the images you're using. If you don't have this info, you can't draw the objects in the right place, and you certainly can't check for collisions.
You basically want to use the same rects that you use for drawing the objects.
Some code examples:
If your coordinates point to the middle of the stickman you would use something like the following:
if (CGRectIntersectsRect(CGRectMake(stickman.x-stickman.width/2,
stickman.y-stickman.height/2,
stickman.width,
stickman.height),
CGRectMake(spikes.x-spikes.width/2,
spikes.y-spikes.height/2,
spikes.width,
spikes.height))) {
// Do whatever it is you need to do. For instance:
[self showEndScreen];
}
If your coordinates point to the top left corner of your stickman you would use:
if (CGRectIntersectsRect(CGRectMake(stickman.x,
stickman.y,
stickman.width,
stickman.height),
CGRectMake(spikes.x,
spikes.y,
spikes.width,
spikes.height))) {
// Do whatever it is you need to do. For instance:
[self showEndScreen];
}
If I might give you a suggestion: store the coordinates and sizes in a CGRect, so that you don't have to create a new CGRect every time you check for collision.
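A sketch of that suggestion (the property and image-view names are assumptions for illustration):
// In the interface: keep the frames around instead of rebuilding rects on every check.
@property (nonatomic, assign) CGRect stickmanRect;
@property (nonatomic, assign) CGRect spikesRect;

// Wherever you move or redraw the objects, update the stored rects once...
self.stickmanRect = stickmanImageView.frame;
self.spikesRect = spikesImageView.frame;

// ...so the collision check becomes a one-liner.
if (CGRectIntersectsRect(self.stickmanRect, self.spikesRect)) {
    [self showEndScreen];
}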
I'm working with a View-based application compiling for iPhoneOS 4.0 Simulator (Debug), Xcode 3.2.3.
I've got a UIImageView, imgView, whose center I want the coordinates of. I obtain them like this:
CGPoint imgviewcoords=[imgView center];
This doesn't produce any compile-time errors, but when I NSLog the coordinates like this:
NSLog(#"x: %i, y:%i", imgviewcoords.x, imgviewcoords.y);
I get this output:
x: 0, y:108762
It's showing 0 for imgView's x coordinate (which I know isn't right, because imgView is near the top middle of the screen in Interface Builder) and, for the y coordinate, some giant, impossible number way past the boundaries of the iPhone's screen (the y value in the output above may not be exact, but it's some giant number like that). I get this exact same output each time. The imgView is properly linked to its File's Owner outlet, and I can even change its image using
[imgView setImage:[UIImage imageNamed:@"./blahblah.png"]];
I just can't seem to properly get its center coordinates.
I've also tried
CGPoint viewcoords=[[imgView frame] origin];
and that gives me the same erroneous coordinates as described above.
This happens with every control that I have in my app's main UIView, except the y coordinate differs a little bit for each control.
What am I doing wrong?
@Vladimir: Thanks for the suggestion to change the NSLog format specifiers. However, I don't think the output is the problem. I think it's the [imgView center] call that isn't working. I'm using the CGPoint returned from [imgView center] to set the center of another UIImageView, and that UIImageView simply moves to the very top-left of the screen instead of moving to the center of imgView. So I'm guessing the [imgView center] call is returning a bad set of coordinates.
The %i format specifier expects an integer value, but CGPoint components have the CGFloat type. Try using the correct specifier (%f) and you may get the correct output:
NSLog(@"x: %f, y: %f", imgviewcoords.x, imgviewcoords.y);
In a simple drawing application, I have a model with an NSMutableArray, curvedPaths, holding all the lines the user has drawn. A line itself is also an NSMutableArray, containing the point objects. As I draw curved NSBezierPaths, my point array has the following structure: linePoint, controlPoint, controlPoint, linePoint, controlPoint, controlPoint, etc. I thought having one array holding all the points plus control points would be more efficient than dealing with 2 or 3 different arrays.
Obviously my view draws the paths it gets from the model, which leads to the actual question: is there a way to optimize the following code (inside the view's drawRect: method) in terms of speed?
int lineCount = [[model curvedPaths] count];
// Go through paths
for (int i=0; i < lineCount; i++)
{
// Get the Color
NSColor *theColor = [model getColorOfPath:[[model curvedPaths] objectAtIndex:i]];
// Get the points
NSArray *thePoints = [model getPointsOfPath:[[model curvedPaths] objectAtIndex:i]];
// Create a new path for performance reasons
NSBezierPath *path = [[NSBezierPath alloc] init];
// Set the color
[theColor set];
// Move to first point without drawing
[path moveToPoint:[[thePoints objectAtIndex:0] myNSPoint]];
int pointCount = [thePoints count] - 3;
// Go through points
for (int j=0; j < pointCount; j+=3)
{
[path curveToPoint:[[thePoints objectAtIndex:j+3] myNSPoint]
controlPoint1:[[thePoints objectAtIndex:j+1] myNSPoint]
controlPoint2:[[thePoints objectAtIndex:j+2] myNSPoint]];
}
// Draw the path
[path stroke];
// Bye stuff
[path release];
[theColor release];
}
Thanks,
xonic
Hey xon1c, the code looks good. In general it is impossible to optimize without measuring performance in specific cases.
For example, let's say the code above is only ever called once. It draws a picture in a view and it never needs redrawing. Say the code above takes 50 milliseconds to run. You could rewrite it in OpenGL, do every optimisation under the sun, and get that time down to 20 milliseconds, and realistically the 30 milliseconds you saved make no difference to anyone; you've pretty much just wasted your time and added a load of code bloat that is going to be more difficult to maintain.
However, if the code above is called 50 times a second, and most of those times it is drawing the same thing, then you could meaningfully optimise it.
When it comes to drawing, the best way to optimise is to eliminate unnecessary drawing.
Each time you draw, you recreate the NSBezierPaths. Are they always different? You may want to maintain the list of NSBezierPaths to draw, keep it in sync with your model, and keep drawRect: solely for drawing the paths.
Are you drawing to areas of your view which don't need redrawing? The argument to drawRect: is the area of the view that needs redrawing; you could test against that (or against getRectsBeingDrawn:count:), though it may be that in your case you know the entire view needs to be redrawn.
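A sketch of that check, assuming the view keeps a cached array of NSBezierPath objects in sync with the model (cachedPaths is an assumed name):
- (void)drawRect:(NSRect)dirtyRect
{
    for (NSBezierPath *path in self.cachedPaths) {
        // Skip paths that fall entirely outside the area being redrawn.
        if (!NSIntersectsRect([path bounds], dirtyRect)) continue;
        // (Set the path's colour here, as in the original loop.)
        [path stroke];
    }
}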
If the paths themselves don't change often but the view needs redrawing often (e.g. when the shapes of the paths aren't changing but their positions are animating and they overlap in different ways), you could draw the paths into images (textures), and then inside drawRect: you would draw the texture to the view instead of drawing the paths to the view. This can be faster because the texture is only created once and is uploaded to video memory, which is faster to draw to the screen. You should look at Core Animation if this is what you need to do.
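A sketch of the draw-once-into-an-image idea at the AppKit level (Core Animation layers are the other option); again, cachedPaths is an assumed name and colour handling is omitted:
// Rebuild this image only when the paths actually change.
NSImage *pathCache = [[NSImage alloc] initWithSize:self.bounds.size];
[pathCache lockFocus];
for (NSBezierPath *path in self.cachedPaths) {
    [path stroke];
}
[pathCache unlockFocus];

// Then drawRect: just composites the cached image instead of re-stroking every path.
[pathCache drawInRect:self.bounds fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];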
If you find that drawing the paths is too slow, you could look at CGPath.
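For completeness, a very rough sketch of what dropping down to Core Graphics looks like; the coordinates here are placeholders for the same linePoint/controlPoint data used above:
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGMutablePathRef cgPath = CGPathCreateMutable();
CGPathMoveToPoint(cgPath, NULL, 10.0, 10.0);
CGPathAddCurveToPoint(cgPath, NULL, 30.0, 40.0, 60.0, 40.0, 80.0, 10.0);
CGContextAddPath(ctx, cgPath);
CGContextStrokePath(ctx);
CGPathRelease(cgPath);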
So, on the whole, it really does depend on what you are doing. The best advice is, as ever, not to get sucked into premature optimisation. If your app isn't actually too slow for your users, your code is just fine.