I have just added a custom pin image to a pin on my map. The problem is that the pin is quite large and covers a lot of the map. This is fine zoomed in, but it's a problem when zoomed out, because the user can't see any of the map.
How can I make the pin scale down when the user zooms out?
I have googled it but can't seem to find any answers.
Here is my code
Does anyone know how to do this, or where I can find out how?
Thanks!
From "Location Awareness Programming Guide":
All annotations are drawn at the same scale every time, regardless of the map’s current zoom level.
You need to track the map's zoom level and change the annotation's image size accordingly.
Hope this helps: http://troybrant.net/blog/2010/01/set-the-zoom-level-of-an-mkmapview/
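Here's a minimal sketch of that idea, assuming your view controller is the map view's delegate and your pins use a custom image; the 0.2-degree threshold and the 30% floor are arbitrary values you would tune for your map:

// Minimal sketch: scale custom pin views as the map zooms out.
// Assumes this controller is the map view's delegate and each pin
// uses a custom image. The scale curve here is arbitrary; tune it.
- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated
{
    // Use the visible longitude span as a rough zoom measure:
    // small span = zoomed in, large span = zoomed out.
    CLLocationDegrees span = mapView.region.span.longitudeDelta;
    CGFloat scale = MIN(1.0, 0.2 / span);   // full size once span <= 0.2 degrees
    scale = MAX(scale, 0.3);                // never shrink below 30%

    for (id<MKAnnotation> annotation in mapView.annotations) {
        if ([annotation isKindOfClass:[MKUserLocation class]]) continue;
        MKAnnotationView *view = [mapView viewForAnnotation:annotation];
        view.transform = CGAffineTransformMakeScale(scale, scale);
    }
}

Scaling via the view's transform avoids re-rendering the image on every zoom change.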
You should review one or two tutorials on using MapKit to see how it's done. A map pin implemented as an MKAnnotationView will always scale properly (it will stay the same size on screen while the map scale changes).
Try looking over this tutorial by Ray Wenderlich. There is a lot to digest, but the main points to refer to are how to use the MKAnnotation protocol (see the MyLocation class in this tutorial), how pins are actually added as "annotations" (see the - (void)plotCrimePositions:(NSString *)responseString method), and, finally, how the MKMapViewDelegate methods are used, particularly - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation.
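For reference, the viewForAnnotation delegate method that tutorial walks through usually ends up looking roughly like the sketch below; the reuse identifier and the "pin.png" image name are placeholders:

// Sketch of the MKMapViewDelegate method that supplies pin views.
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation
{
    // Leave the user-location blue dot alone.
    if ([annotation isKindOfClass:[MKUserLocation class]]) {
        return nil;
    }

    static NSString *reuseId = @"CustomPin";
    MKAnnotationView *view = [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
    if (view == nil) {
        view = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:reuseId];
        view.image = [UIImage imageNamed:@"pin.png"];  // placeholder image name
        view.canShowCallout = YES;
    } else {
        view.annotation = annotation;
    }
    return view;
}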
I have tried a few solutions but haven't found one that works for me. Basically, when a button is pressed on my map view, I want to zoom in as far as MapKit allows, with the user location centered. I keep trying to manipulate the distance parameters, but it seems that below a certain point it no longer works... the map view will only zoom to a certain distance, but I can manually pinch and zoom further beyond that. Obviously I don't expect it to give me a 10x10m map... but as close as possible.
Actually, ideally I'd like to do this in combination with the "follow with heading" tracking mode... but I've found that setting this property automatically sets the zoom level to some predefined point, which is annoying. I think I'll have to manually track heading and rotate the map... it's a little frustrating that the API doesn't give us more flexibility. And it seems like "[mapView maxZoom]" would be a really useful call, but I've found no such thing... what am I missing?
- (void)zoomAndCenterMap
{
    // Center on the user's current location.
    CLLocationCoordinate2D coord = self.mapView.userLocation.coordinate;

    // Ask for a 10m x 10m region; MapKit clamps this to the smallest
    // region it is willing to display, hence the zoom limit I'm hitting.
    MKCoordinateRegion viewRegion = MKCoordinateRegionMakeWithDistance(coord, 10, 10);
    MKCoordinateRegion adjustedRegion = [self.mapView regionThatFits:viewRegion];
    [self.mapView setRegion:adjustedRegion animated:YES];
}
A maxZoom API indeed does not exist in MapKit. I've added a method to a subclass of MKMapView in another project that might be of use. With it you could at least figure out what the max zoom level is and maybe go to it programmatically.
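For what it's worth, one common way to approximate the current zoom level is a small category on MKMapView using the standard 256-pixel web-map tile math; treat the result as an approximation, not an official MapKit value:

#import <MapKit/MapKit.h>
#include <math.h>

@interface MKMapView (ApproximateZoom)
- (double)approximateZoomLevel;
@end

@implementation MKMapView (ApproximateZoom)
- (double)approximateZoomLevel
{
    // At zoom level z, the whole world (360 degrees of longitude)
    // spans 256 * 2^z pixels, so solve for z from the visible span.
    double longitudePerPixel = self.region.span.longitudeDelta / self.bounds.size.width;
    return log2(360.0 / (longitudePerPixel * 256.0));
}
@end

Logging this value while pinching in as far as the map allows would tell you the effective maximum zoom on your device.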
I need to create a very simple image cropping interface for an OS X Cocoa application, but I am not sure where to start. The user needs to be able to choose a crop size from a menu of presets, be presented with a cropping rectangle that can be resized preserving the ratio, and moved around the image until they finally apply the selected crop to the image.
I've done some searching for sample code and projects but haven't found anything too useful. Core Image Fun House has some pointers, but it's a retired sample. There are lots of iOS examples, but I haven't found an easy-to-follow Mac OS example.
Can someone point me in the right direction (or at a sample project or framework!!).
Thanks a lot.
Here is a project you can look at:
https://github.com/foundry/drawingtest
It's a little demo I made as I was trying to understand the relationship between the rects in this method:
- (void)drawInRect:(NSRect)dstRect
fromRect:(NSRect)srcRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)delta
Note that the older compositeToPoint: methods are deprecated and should not be used for this sort of thing.
srcRect is the portion of the original image (in its own coordinates) that you want to keep.
dstRect is the rect that you want that cropped area to draw into.
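Putting the two rects together, a minimal crop helper might look like the sketch below (it assumes ARC, and the method name is just for illustration):

// Crop `source` to srcRect using drawInRect:fromRect:operation:fraction:.
// srcRect is in the source image's own (bottom-left origin) coordinates.
- (NSImage *)croppedImage:(NSImage *)source toRect:(NSRect)srcRect
{
    NSImage *result = [[NSImage alloc] initWithSize:srcRect.size];
    [result lockFocus];
    [source drawInRect:NSMakeRect(0, 0, srcRect.size.width, srcRect.size.height)
              fromRect:srcRect
             operation:NSCompositeCopy
              fraction:1.0];
    [result unlockFocus];
    return result;
}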
JMRect in the project is an NSObject representation of an NSRect, so that we can use Cocoa bindings to tie the interface controls together.
For your UI, the cropping rectangle could just be a transparent subview with a border that you push around and resize over the image you want to crop.
This is by no means a complete solution to your question, but it's something you can poke around with - it might help you to get started.
I'm writing my first Cocoa app and I would like to make a "trackball / eyeball / arcball / whatever it's called" button to rotate a 3D OpenGL scene.
There's a perfect example of this custom Cocoa control in Pages (Apple iWork suite) when you select a 3D chart. After some hacks, this control seems to be referenced as SFC3DRotateWidget. Here's a screenshot of the control in Pages.
Maybe this widget is reusable, but I didn't find how or where. So I'm trying to recreate it.
I'm inexperienced with Cocoa, so I'm not sure how to do that, nor exactly where (i.e. what to do in Interface Builder, what to do in code...).
I'm not sure if I need to override the drawing function. I thought of using a textured button (Interface Builder) with an NSTrackingArea (code) to handle mouse events (move, drag, ...), but the area is necessarily rectangular. The interactive zones of the custom control used by Apple seem to follow the shape of the arrows. I've read on S.O. that I can use NSBezierPath to create a more specific area (only via code?).
Does that sound like a reasonable approach?
Am I missing something?
Let me know if you have any tips, tricks or resources you can share!
Thanks!
It sounds like you want to build a custom control. You do this by subclassing NSControl, and Apple has a guide on how to do it. You can control the circular clickable area and the responses to mouse events by implementing the various methods. For example, you can track mouse events with mouseDown: and the related methods.
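As a rough sketch, assuming a hypothetical RotateWidget subclass, you could combine mouseDown: with an NSBezierPath hit test to get the non-rectangular clickable area mentioned in the question:

#import <Cocoa/Cocoa.h>

// Custom control that only reacts to clicks inside a circular region.
@interface RotateWidget : NSControl
@end

@implementation RotateWidget

- (void)mouseDown:(NSEvent *)event
{
    NSPoint point = [self convertPoint:[event locationInWindow] fromView:nil];
    NSBezierPath *circle = [NSBezierPath bezierPathWithOvalInRect:self.bounds];
    if ([circle containsPoint:point]) {
        // Begin tracking the drag here: record the start point, then
        // update the rotation in mouseDragged: relative to the center.
        NSLog(@"hit inside the circular area at %@", NSStringFromPoint(point));
    }
}

@end

The same containsPoint: test works with any NSBezierPath, so you could trace the arrow shapes instead of a plain oval.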
You probably do not need any custom drawing code; NSImageView subviews with the various arrows will probably suit your purposes fine, unless you'd rather draw them in code.
I'm working on an iPad application, and here's my problem:
I've worked out an algorithm to determine whether a point is inside a polygon in an image. So, when the image is touched, I need to get the coordinates of the touched point and then do something with those coordinates (an NSLog, to keep the example simple). The problem is that I can't attach an IBAction to a UIImageView, so I can't recover the point's coordinates. Thanks for any help.
I think first you have to build a polygon that fits your image. Then you can use touchesBegan:withEvent: to get the coordinates of the touch point and test whether that point is inside the polygon.
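As a minimal sketch, a UIImageView subclass along these lines can report touch coordinates; note that UIImageView has user interaction disabled by default, so you must enable it. The pointInsidePolygon: call is a stand-in for your own algorithm:

// UIImageView subclass that logs the coordinates of a touch.
@interface TouchableImageView : UIImageView
@end

@implementation TouchableImageView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        self.userInteractionEnabled = YES;  // off by default on UIImageView
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];
    NSLog(@"touched at x=%f y=%f", point.x, point.y);
    // if ([self pointInsidePolygon:point]) { ... }  // your algorithm here
}

@end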
Here is a similar question to yours:
How to get particular touch Area?
I think this is somewhat difficult work, so maybe you would be better off using the cocos2d library, which has collision-detection functionality.
http://box2d.org/forum/viewtopic.php?f=9&t=7487
But also, I think iOS is well constructed for handling touch, so doing this yourself is a worthwhile effort.
So I want to have a view (NSView, NSOpenGLView, something CG related?) which basically displays a map. Such as:
http://dump.tanaris4.com/map.png
Obviously that looks horrible, but I did it using an NSView, and it draws SO slowly. Clearly it's not designed for this.
I just need to allow users to click on the individual (x,y) coordinates to make changes, and zoom into a certain area (to see it better).
Should I go the OpenGL route? And if so - any suggestions as to how to get started? (I was able to follow the guide to draw a triangle, so that's good).
I did find this post on zooming in an NSView: How to implement zoom/scale in a Cocoa AppKit-application
My concern is if I'm drawing over 6000 coordinates and the lines connecting them, this isn't efficient at all.
I don't think using OpenGL would do any good here. The problem does not seem to be the actual painting, but rather the rendering strategy. You would need a scene graph of some kind to dynamically handle level of detail and culling.
Qt has all this packaged in a nice class, QGraphicsScene (see http://doc.qt.nokia.com/latest/qgraphicsscene.html for reference, and http://doc.qt.nokia.com/main-snapshot/demos-chip.html for an example).
Some basic concepts you should consider using:
http://en.wikipedia.org/wiki/Scene_graph
http://en.wikipedia.org/wiki/Quadtree
http://en.wikipedia.org/wiki/Level_of_detail
Try using Core Graphics for this; really, there is so much that can be done with it. Watch the video Practical Drawing for iOS Developers from WWDC 2011, which should give an overview of what can be done with CG.
I believe even Core Graphics will suffice for what you want to achieve, and it will work in a UIView if you draw your view's full rectangle in the drawRect: method of your UIView (you must override this method). Please see the UIView Class Reference. I have a mobile application that logs points on a MapKit map, kind of like Nike+, and it certainly works well for massive amounts of points/line segments. There is no reason why this simple approach cannot work for you as well.
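As a rough sketch in the same iOS terms, the drawRect: override could look like the code below; `points` and `pointCount` are placeholders for however you store your 6000 coordinates, and the same CGContext calls also work from an NSView's drawRect: on the Mac:

// UIView subclass method: draw one connected polyline with Core Graphics.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 1.0);

    // Build a single path from all the points, then stroke it once;
    // one big path is much cheaper than thousands of separate strokes.
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, points[0].x, points[0].y);
    for (NSUInteger i = 1; i < pointCount; i++) {
        CGContextAddLineToPoint(ctx, points[i].x, points[i].y);
    }
    CGContextStrokePath(ctx);
}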