I created a .gpx file to simulate a route in the iOS Simulator, and now I want to simulate the horizontal accuracy as well. How can I do this?
Below is an excerpt of my .gpx file:
<?xml version="1.0"?>
<gpx>
<wpt lat="-23.772830" lon="-46.689820"/> //how add horizontal accuracy 7 meters for example
<wpt lat="-23.774450" lon="-46.692570"/> //and here horizontal accuracy of 2 metters for example
<wpt lat="-23.773450" lon="-46.693530"/> //and here 19 meters
</gpx>
When I run it, every GPS point comes back with a horizontal accuracy of 5 meters. Can I change this some other way?
I did this with a method called from the view controller (mine is triggered by a button, but obviously you could use a gesture recognizer or whatever).
I have my view controller set as the CLLocationManagerDelegate, but you can substitute whatever delegate you're using for "self".
- (IBAction)simulateAccuracy:(id)sender {
    CLLocationCoordinate2D newCoor = CLLocationCoordinate2DMake(someLat, someLng);
    // Build a CLLocation by hand so you control horizontalAccuracy yourself
    CLLocation *newLoc = [[CLLocation alloc] initWithCoordinate:newCoor
                                                       altitude:someAlt
                                             horizontalAccuracy:TheAccuracyYouWantToTest
                                               verticalAccuracy:TheAccuracyYouWantToTest
                                                      timestamp:[NSDate date]]; // use a real timestamp, not nil
    NSArray *newLocation = [[NSArray alloc] initWithObjects:newLoc, nil];
    // Feed the fabricated location straight into your delegate method
    [self locationManager:myLocationManager didUpdateLocations:newLocation];
}
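For completeness, here is a minimal sketch of the delegate callback that the manual call above invokes; it simply reads back the accuracy you injected (the logging is just a placeholder for whatever your app normally does):
- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations {
    // Sketch only: the fabricated CLLocation arrives here like any real update
    CLLocation *latest = [locations lastObject];
    NSLog(@"accuracy under test: %.1f m", latest.horizontalAccuracy);
    // ...whatever your app normally does with a location update...
}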
Related
I'm trying to build a rolling-marble type game. I've decided to convert from Cocos3D to SceneKit, so I probably have primitive questions about code snippets.
Here is my CMMotionManager setup. The problem is that as I change my device orientation, the gravity direction also changes (it does not adjust properly to the device orientation). This code only works in the Landscape Left orientation.
- (void)setupMotionManager
{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    motionManager = [[CMMotionManager alloc] init];
    [motionManager startAccelerometerUpdatesToQueue:queue withHandler:^(CMAccelerometerData *accelerometerData, NSError *error)
    {
        CMAcceleration acceleration = [accelerometerData acceleration];
        float accelX = 9.8 * acceleration.y;
        float accelY = -9.8 * acceleration.x;
        float accelZ = 9.8 * acceleration.z;
        scene.physicsWorld.gravity = SCNVector3Make(accelX, accelY, accelZ);
    }];
}
This code came from a marble demo from Apple; I translated it from Swift to Obj-C.
If I want it to work in Landscape Right, I need to change the last line to:
scene.physicsWorld.gravity = SCNVector3Make(-accelX, -accelY, accelZ);
This brings up another question: if Y is up in SceneKit, why is it the accelZ variable that needs no change? So my question is, how do CMMotionManager coordinates relate to scene coordinates?
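For reference, the two working orientation cases above could be consolidated inside the accelerometer handler along these lines. This is only a sketch based on the Landscape Left/Right mappings given here; the portrait mapping is not covered in this question and would need its own case:
// Sketch: pick the axis mapping from the current interface orientation.
// (In a real app, read/cache the orientation on the main thread rather than in the handler queue.)
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
CMAcceleration a = accelerometerData.acceleration;
SCNVector3 g;
if (orientation == UIInterfaceOrientationLandscapeLeft) {
    g = SCNVector3Make( 9.8 * a.y, -9.8 * a.x, 9.8 * a.z);
} else if (orientation == UIInterfaceOrientationLandscapeRight) {
    g = SCNVector3Make(-9.8 * a.y,  9.8 * a.x, 9.8 * a.z);
} else {
    // Portrait orientations would need their own mapping; falling back to the Landscape Left one here.
    g = SCNVector3Make( 9.8 * a.y, -9.8 * a.x, 9.8 * a.z);
}
scene.physicsWorld.gravity = g;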
When I call startDeviceMotionUpdatesUsingReferenceFrame:, then cache the first attitude as my reference and call multiplyByInverseOfAttitude: on all of my motion updates after that, I don't get the change from the reference frame that I am expecting. Here is a really simple demonstration of what I'm not understanding.
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0/20.0;
[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical toQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error){
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        CMAttitude *att = motion.attitude;
        if (self.motionManagerAttitudeRef == nil) {
            // cache the first attitude as the reference
            self.motionManagerAttitudeRef = att;
            return;
        }
        [att multiplyByInverseOfAttitude:self.motionManagerAttitudeRef];
        NSLog(@"yaw:%+0.1f, pitch:%+0.1f, roll:%+0.1f", att.yaw, att.pitch, att.roll);
    }];
}];
First off, in my application I only really care about pitch and roll, but yaw is in there too to demonstrate my confusion.
Everything works as expected if I put the phone flat on my desk, launch the app, and look at the logs. All of the yaw, pitch, and roll values are 0.0; then, if I spin the phone 90 degrees without lifting it off the surface, only the yaw changes. So all good there.
To demonstrate what I think is the problem: now put the phone inside (for example) an empty coffee mug, so that all of the angles are slightly tilted and the direction of gravity has some fractional value on every axis. Launch the app, and with the code above you would think everything is working, because yaw, pitch, and roll again start at 0.0. But now spin the coffee mug 90 degrees without lifting it from the table surface. Why do I see a significant change in all of yaw, pitch, and roll? Since I cached my initial attitude (which is now my reference attitude) and called multiplyByInverseOfAttitude:, shouldn't I just be getting a change in yaw only?
I don't really understand why multiplying the attitude by the inverse of a cached reference attitude doesn't work, and I don't think it is a gimbal-lock problem. But here is what gets me exactly what I need. If you try the coffee-mug experiment described above, this produces exactly the expected results (spinning the mug on a flat surface doesn't affect the pitch and roll values, and tilting the mug in other directions only affects one axis at a time). Also, instead of saving a reference frame, I just save the reference pitch and roll, so when the app starts everything is zeroed out until there is some movement.
So all good now. But still wish I understood why the other method did not work as expected.
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0/20.0;
[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical toQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error)
{
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        if (self.motionManagerAttitude == nil) {
            CGFloat x = motion.gravity.x;
            CGFloat y = motion.gravity.y;
            CGFloat z = motion.gravity.z;
            refRollF = atan2(y, x) + M_PI_2;
            CGFloat r = sqrtf(x*x + y*y + z*z);
            refPitchF = acosf(z/r);
            self.motionManagerAttitude = motion.attitude;
            return;
        }
        CGFloat x = motion.gravity.x;
        CGFloat y = motion.gravity.y;
        CGFloat z = motion.gravity.z;
        CGFloat rollF = refRollF - (atan2(y, x) + M_PI_2);
        CGFloat r = sqrtf(x*x + y*y + z*z);
        CGFloat pitchF = refPitchF - acosf(z/r);
        // I don't care about yaw, so just printing out whatever the value is in the attitude
        NSLog(@"yaw: %+0.1f, pitch: %+0.1f, roll: %+0.1f", (180.0f/M_PI)*motion.attitude.yaw, (180.0f/M_PI)*pitchF, (180.0f/M_PI)*rollF);
    }];
}];
EDIT: This is now a confirmed bug in this SDK.
I'm using version 1.1.1.2311 of the Google Maps SDK for iOS, and I'm trying to find the bounding latitude and longitude coordinates of the visible map on screen.
I'm using the following code to tell me what the current projection is:
NSLog(#"\n%#,%#\n%#,%#\n%#,%#\n%#,%#\n",
[NSNumber numberWithDouble:mapView.projection.visibleRegion.farLeft.latitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.farLeft.longitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.farRight.latitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.farRight.longitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.nearLeft.latitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.nearLeft.longitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.nearRight.latitude],
[NSNumber numberWithDouble:mapView.projection.visibleRegion.nearRight.longitude]);
From reading the headers, it seems that it may not be updated when the camera moves. Fair enough...
/**
* The GMSProjection currently used by this GMSMapView. This is a snapshot of
* the current projection, and will not automatically update when the camera
* moves. The projection may be nil while the render is not running (if the map
* is not yet part of your UI, or is part of a hidden UIViewController, or you
* have called stopRendering).
*/
But it appears to update each time the delegate method is called, so I attempted to plot the coordinates to test them.
For the map currently shown on my phone, the NSLog output above gives me the following:
37.34209003645947,-122.0382353290915
37.34209003645947,-122.010769508779
37.30332095984257,-122.0382353290915
37.30332095984257,-122.010769508779
When plotting those coordinates using this, I get a projection that seems off.
These coordinates are consistent across app launches, which leads me to believe that I'm either consistently doing something wrong, misunderstanding what visibleRegion is, or have discovered a bug. Can anyone help me figure out which it is?
To get the bounding latitude and longitude you have to do the following steps:
GMSCoordinateBounds *bounds = [[GMSCoordinateBounds alloc] initWithRegion:self.googleMapsView.projection.visibleRegion];
CLLocationCoordinate2D northEast = bounds.northEast;
CLLocationCoordinate2D northWest = CLLocationCoordinate2DMake(bounds.northEast.latitude, bounds.southWest.longitude);
CLLocationCoordinate2D southEast = CLLocationCoordinate2DMake(bounds.southWest.latitude, bounds.northEast.longitude);
CLLocationCoordinate2D southWest = bounds.southWest;
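As a quick sanity check (a sketch only; the test coordinate is just a placeholder), you can then verify that a coordinate of interest falls inside the visible bounds computed above:
// Sketch: check whether a coordinate lies within the visible region's bounds
CLLocationCoordinate2D someCoordinate = CLLocationCoordinate2DMake(37.32, -122.02);
if ([bounds containsCoordinate:someCoordinate]) {
    NSLog(@"coordinate is on screen");
}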
Best regards
Robert
I saw your issue here. I hope they fix it in the next update.
For now, we can get the real visible region like this:
CGPoint topLeftPoint = CGPointMake(0, 0);
CLLocationCoordinate2D topLeftLocation = [_mapView.projection coordinateForPoint:topLeftPoint];

CGPoint bottomRightPoint = CGPointMake(_mapView.frame.size.width, _mapView.frame.size.height);
CLLocationCoordinate2D bottomRightLocation = [_mapView.projection coordinateForPoint:bottomRightPoint];

CGPoint topRightPoint = CGPointMake(_mapView.frame.size.width, 0);
CLLocationCoordinate2D topRightLocation = [_mapView.projection coordinateForPoint:topRightPoint];

CGPoint bottomLeftPoint = CGPointMake(0, _mapView.frame.size.height);
CLLocationCoordinate2D bottomLeftLocation = [_mapView.projection coordinateForPoint:bottomLeftPoint];

GMSVisibleRegion realVisibleRegion;
realVisibleRegion.farLeft = topLeftLocation;
realVisibleRegion.farRight = topRightLocation;
realVisibleRegion.nearLeft = bottomLeftLocation;
realVisibleRegion.nearRight = bottomRightLocation;

[self drawPolylineWithGMSVisibleRegion:realVisibleRegion color:[UIColor redColor] width:10.0f forMap:_mapView];
Drawing polyline method:
- (void)drawPolylineWithGMSVisibleRegion:(GMSVisibleRegion)visibleRegion
                                   color:(UIColor *)color
                                   width:(double)width
                                  forMap:(GMSMapView *)mapView {
    GMSPolylineOptions *rectangle = [GMSPolylineOptions options];
    rectangle.color = color;
    rectangle.width = width;
    GMSMutablePath *path = [GMSMutablePath path];
    [path addCoordinate:visibleRegion.nearRight];
    [path addCoordinate:visibleRegion.nearLeft];
    [path addCoordinate:visibleRegion.farLeft];
    [path addCoordinate:visibleRegion.farRight];
    [path addCoordinate:visibleRegion.nearRight];
    rectangle.path = path;
    [mapView addPolylineWithOptions:rectangle];
}
It works even for maps with a non-default bearing and viewing angle.
The solution is to download the latest version of the SDK (1.2 at the time of this writing) as the issue has been fixed.
From the 1.2 release notes:
Resolved Issues:
- visibleRegion now reports correctly sized region on Retina devices
Download here.
It looks like the latitude and longitude coordinates you are printing out and manually plotting may be slightly off / truncated: %f defaults to printing only 6 decimal places.
Here's a related question that might help:
How to print a double with full precision on iOS?
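For instance, a quick sketch of logging the same corners at close to full double precision instead of the default six decimal places:
// Sketch: log with ~15 significant digits
GMSVisibleRegion region = mapView.projection.visibleRegion;
NSLog(@"farLeft:   %.15g, %.15g", region.farLeft.latitude, region.farLeft.longitude);
NSLog(@"nearRight: %.15g, %.15g", region.nearRight.latitude, region.nearRight.longitude);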
Maybe you're giving the location manager the wrong accuracy. Try increasing it:
locationMgr.desiredAccuracy = kCLLocationAccuracyBest;
The battery will drain faster, though.
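For context, a minimal setup sketch (assuming a CLLocationManager named locationMgr and that self conforms to CLLocationManagerDelegate):
// Sketch: request the best accuracy the hardware can provide
locationMgr = [[CLLocationManager alloc] init];
locationMgr.delegate = self;
locationMgr.desiredAccuracy = kCLLocationAccuracyBest;
locationMgr.distanceFilter = kCLDistanceFilterNone; // report every movement
[locationMgr startUpdatingLocation];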
I'm using two APIs to read EXIF data from images, which I'll call "valueForProperty:NSImageEXIFData" and "CGImageSourceCopyPropertiesAtIndex". Both provide the same EXIF data, although the second also provides other data (e.g., GPS, TIFF).
Both give wrong values for ApertureValue and MaxApertureValue, but the correct value for FNumber. The example program that follows dumps all of the metadata returned by each method and also invokes ExifTool. The output is summarized at the end.
(Knowing what lens I was using, ExifTool is correct when it reports MaxApertureValue as 2.8.)
Details: Xcode 4.02, OS X 10.6.7, 10.6 SDK
Anyone else notice this anomaly?
#import "ExifTestAppDelegate.h"
#implementation ExifTestAppDelegate
#synthesize window;
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
NSString *path = #"/Users/marc/Pictures/iPadPhotos- overflow/Portfolio/MJR_20061221_0258.jpg";
NSData *data = [NSData dataWithContentsOfFile:path];
NSImage *img = [[[NSImage alloc] initWithData:data] autorelease];
NSImageRep *rep = [img bestRepresentationForRect:NSMakeRect(0, 0, 500, 500) context:nil hints:nil];
NSDictionary *exifDict = (NSDictionary *)[(id)rep valueForProperty:NSImageEXIFData];
NSLog(#"NSImageEXIFData: %#", exifDict);
CGImageSourceRef imgSource = CGImageSourceCreateWithData((CFDataRef)data, nil);
CFDictionaryRef dictRef = CGImageSourceCopyPropertiesAtIndex(imgSource, 0, nil);
NSLog(#"CGImageSourceCopyPropertiesAtIndex: %#", dictRef);
CFRelease(imgSource);
system([[NSString stringWithFormat:#"exiftool '%#'", path] UTF8String]);
}
#end
/*
2011-05-21 11:22:58.140 ExifTest[4510:903] NSImageEXIFData: {
ApertureValue = 6;
...
FNumber = 8;
...
MaxApertureValue = 3;
...
}
2011-05-21 11:22:58.154 ExifTest[4510:903] CGImageSourceCopyPropertiesAtIndex: {
...
"{Exif}" = {
ApertureValue = 6;
...
FNumber = 8;
...
MaxApertureValue = 3;
...
ExifTool Version Number : 8.51
...
F Number : 8.0
...
Aperture Value : 8.0
...
Max Aperture Value : 2.8
*/
Update: It's not me. Here's the EXIF data as reported by Apple's Preview app:
Try the ImageIO framework from Quartz 2D, specifically CGImageSourceCopyProperties() with the kCGImagePropertyExifDictionary key.
To get 2.8 vs. 3 for the max aperture, you may need to set kCGImageSourceShouldAllowFloat appropriately.
It's not Cocoa, but it's easy to use and to transfer into Cocoa.
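A minimal sketch of that lookup, reusing the same path as in the test program above (the kCGImageSourceShouldAllowFloat option is passed only because it's suggested here; as the edit below explains, it turns out not to be the explanation):
// Sketch: read the Exif dictionary through ImageIO and pull out MaxApertureValue
#import <ApplicationServices/ApplicationServices.h> // pulls in ImageIO on OS X

NSURL *url = [NSURL fileURLWithPath:path];
NSDictionary *options = [NSDictionary dictionaryWithObject:(id)kCFBooleanTrue
                                                    forKey:(id)kCGImageSourceShouldAllowFloat];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(source, 0, (CFDictionaryRef)options);
NSDictionary *exif = [(NSDictionary *)props objectForKey:(id)kCGImagePropertyExifDictionary];
NSLog(@"MaxApertureValue: %@", [exif objectForKey:(id)kCGImagePropertyExifMaxApertureValue]);
CFRelease(props);
CFRelease(source);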
Edit
The part above about setting kCGImageSourceShouldAllowFloat is incorrect...
I just put an f/2.8 60mm Micro-Nikkor prime lens on a Nikon D7000 to check this. I took one image of a very close object (3 inches away), another at mid focus (6 feet), and a third at distant focus.
The close-focus image EXIF reported a "Max Aperture" value of 3.2 in Preview. Photoshop allows the raw EXIF data embedded in the file to be seen; if I open the close-object image in Photoshop, the embedded EXIF is shown as exif:MaxApertureValue: 32/10.
Using the same methods, the mid-focus image has a reported "Max Aperture" of 3.0 (30/10 in Photoshop). Only the distant-focus image reported a "Max Aperture" value of 2.8.
So it would seem that the camera is reporting the effective max aperture of the lens given the current focus setting. This makes sense because of the prevalence of variable-max-aperture zoom lenses: if you put on a zoom lens with a variable max aperture (such as a Nikkor 18-200 f/3.5-5.6), the effective max aperture at the given zoom and focus settings is calculated by the camera and embedded in the EXIF data. This value is correctly shown by Preview and presumably by the ImageIO framework.
See T-Stops
I am working on an MKMapView-based application. I need clarification on whether it is possible to eliminate pin overlap in the MKMapView. In some places a large number of pins are displayed, and it is difficult for me to identify the individual pins.
If you have an Apple Developer Account, I would strongly recommend getting the Session 111 video from the 2011 WWDC Conference Sessions, entitled "Visualizing Information Geographically with MapKit". One of the segments specifically covers how to cluster content from large data sets to allow you to group or ungroup pins based on density at various zoom levels.
Their example is elegantly simple, but the heart of the problem is that you want to replace a group of overlapping pins with a single pin, and as you zoom in, that single pin splits back into the individual pins.
How and when you decide to group things can vary considerably. Apple's solution simply subdivides the map into a grid, and any box that contains more than one pin results in a group. You could also take an algorithmic approach, such as k-means clustering, which is incredibly simple: feed all of your annotations through the algorithm and get an array of logically organized groups out the other side.
From there, it's a matter of keeping track of all the individual pins and how they are grouped as you zoom in and out. You then display a single annotation for each group, plus any individual pins that are left over. It's also possible to animate the transitions as the map zooms in and out to visually reinforce what is happening.
My own technique is too closely related to Apple's approach for me to post here, so I'm hoping you can access the above video, which covers almost all of these points.
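To make the grid idea concrete, here is a rough sketch (not Apple's code; mapView and the bucket size are assumptions) of assigning each annotation to a grid bucket in MKMapPoint space:
// Sketch: bucket annotations into a coarse grid; any bucket holding more than one
// annotation would be represented on screen by a single "cluster" pin.
double bucketSize = MKMapRectGetWidth(mapView.visibleMapRect) / 8.0; // tune per zoom level
NSMutableDictionary *buckets = [NSMutableDictionary dictionary];
for (id<MKAnnotation> annotation in mapView.annotations) {
    MKMapPoint p = MKMapPointForCoordinate(annotation.coordinate);
    NSString *key = [NSString stringWithFormat:@"%.0f:%.0f",
                     floor(p.x / bucketSize), floor(p.y / bucketSize)];
    NSMutableArray *bucket = [buckets objectForKey:key];
    if (bucket == nil) {
        bucket = [NSMutableArray array];
        [buckets setObject:bucket forKey:key];
    }
    [bucket addObject:annotation];
}
// Each entry in 'buckets' now maps a grid cell to the annotations inside it.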
For this, you have to implement the clustering concept for your map. Using Apple's demo code, it's easy to implement clustering in your own code. Reference link
We can simply use the following code for the clustering.
Steps to implement clustering:
Step 1: The important thing is that for clustering we use two map views; one of them (allAnnotationsMapView) holds every annotation and is used only as a reference store.
@property (nonatomic, strong) MKMapView *allAnnotationsMapView;
@property (nonatomic, strong) IBOutlet MKMapView *mapView;
In viewDidLoad
_allAnnotationsMapView = [[MKMapView alloc] initWithFrame:CGRectZero];
Step 2: Add all annotations to _allAnnotationsMapView. Below, _photos is the array of annotations.
[_allAnnotationsMapView addAnnotations:_photos];
[self updateVisibleAnnotations];
Step 3: Add the methods below for clustering. Here, PhotoAnnotation is the custom annotation class.
MKMapViewDelegate methods:
- (void)mapView:(MKMapView *)aMapView regionDidChangeAnimated:(BOOL)animated {
    [self updateVisibleAnnotations];
}

- (void)mapView:(MKMapView *)aMapView didAddAnnotationViews:(NSArray *)views {
    for (MKAnnotationView *annotationView in views) {
        if (![annotationView.annotation isKindOfClass:[PhotoAnnotation class]]) {
            continue;
        }

        PhotoAnnotation *annotation = (PhotoAnnotation *)annotationView.annotation;

        if (annotation.clusterAnnotation != nil) {
            // animate the annotation from its old container's coordinate to its actual coordinate
            CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
            CLLocationCoordinate2D containerCoordinate = annotation.clusterAnnotation.coordinate;

            // since it's displayed on the map, it is no longer contained by another annotation
            // (we couldn't reset this in -updateVisibleAnnotations because we needed the reference
            // to it here to get the containerCoordinate)
            annotation.clusterAnnotation = nil;

            annotation.coordinate = containerCoordinate;

            [UIView animateWithDuration:0.3 animations:^{
                annotation.coordinate = actualCoordinate;
            }];
        }
    }
}
Cluster-handling methods:
- (id<MKAnnotation>)annotationInGrid:(MKMapRect)gridMapRect usingAnnotations:(NSSet *)annotations {
    // first, see if one of the annotations we were already showing is in this mapRect
    NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];
    NSSet *annotationsForGridSet = [annotations objectsPassingTest:^BOOL(id obj, BOOL *stop) {
        BOOL returnValue = ([visibleAnnotationsInBucket containsObject:obj]);
        if (returnValue) {
            *stop = YES;
        }
        return returnValue;
    }];

    if (annotationsForGridSet.count != 0) {
        return [annotationsForGridSet anyObject];
    }

    // otherwise, sort the annotations based on their distance from the center of the grid square,
    // then choose the one closest to the center to show
    MKMapPoint centerMapPoint = MKMapPointMake(MKMapRectGetMidX(gridMapRect), MKMapRectGetMidY(gridMapRect));
    NSArray *sortedAnnotations = [[annotations allObjects] sortedArrayUsingComparator:^(id obj1, id obj2) {
        MKMapPoint mapPoint1 = MKMapPointForCoordinate(((id<MKAnnotation>)obj1).coordinate);
        MKMapPoint mapPoint2 = MKMapPointForCoordinate(((id<MKAnnotation>)obj2).coordinate);

        CLLocationDistance distance1 = MKMetersBetweenMapPoints(mapPoint1, centerMapPoint);
        CLLocationDistance distance2 = MKMetersBetweenMapPoints(mapPoint2, centerMapPoint);

        if (distance1 < distance2) {
            return NSOrderedAscending;
        } else if (distance1 > distance2) {
            return NSOrderedDescending;
        }
        return NSOrderedSame;
    }];

    PhotoAnnotation *photoAnn = sortedAnnotations[0];
    NSLog(@"lat long %f %f", photoAnn.coordinate.latitude, photoAnn.coordinate.longitude);

    return sortedAnnotations[0];
}
- (void)updateVisibleAnnotations {
    // This value controls the number of off-screen annotations that are displayed.
    // A bigger number means more annotations, less chance of seeing annotation views pop in, but decreased performance.
    // A smaller number means fewer annotations, more chance of seeing annotation views pop in, but better performance.
    static float marginFactor = 2.0;

    // Adjust this roughly based on the dimensions of your annotation views.
    // Bigger numbers more aggressively coalesce annotations (fewer annotations displayed, but better performance).
    // Numbers too small result in overlapping annotation views and too many annotations on screen.
    static float bucketSize = 60.0;

    // find all the annotations in the visible area + a wide margin to avoid popping annotation views in and out while panning the map
    MKMapRect visibleMapRect = [self.mapView visibleMapRect];
    MKMapRect adjustedVisibleMapRect = MKMapRectInset(visibleMapRect, -marginFactor * visibleMapRect.size.width, -marginFactor * visibleMapRect.size.height);

    // determine how wide each bucket will be, as a MKMapRect square
    CLLocationCoordinate2D leftCoordinate = [self.mapView convertPoint:CGPointZero toCoordinateFromView:self.view];
    CLLocationCoordinate2D rightCoordinate = [self.mapView convertPoint:CGPointMake(bucketSize, 0) toCoordinateFromView:self.view];
    double gridSize = MKMapPointForCoordinate(rightCoordinate).x - MKMapPointForCoordinate(leftCoordinate).x;
    MKMapRect gridMapRect = MKMapRectMake(0, 0, gridSize, gridSize);

    // condense annotations, with a padding of two squares, around the visibleMapRect
    double startX = floor(MKMapRectGetMinX(adjustedVisibleMapRect) / gridSize) * gridSize;
    double startY = floor(MKMapRectGetMinY(adjustedVisibleMapRect) / gridSize) * gridSize;
    double endX = floor(MKMapRectGetMaxX(adjustedVisibleMapRect) / gridSize) * gridSize;
    double endY = floor(MKMapRectGetMaxY(adjustedVisibleMapRect) / gridSize) * gridSize;

    // for each square in our grid, pick one annotation to show
    gridMapRect.origin.y = startY;
    while (MKMapRectGetMinY(gridMapRect) <= endY) {
        gridMapRect.origin.x = startX;
        while (MKMapRectGetMinX(gridMapRect) <= endX) {
            NSSet *allAnnotationsInBucket = [self.allAnnotationsMapView annotationsInMapRect:gridMapRect];
            NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];

            // we only care about PhotoAnnotations
            NSMutableSet *filteredAnnotationsInBucket = [[allAnnotationsInBucket objectsPassingTest:^BOOL(id obj, BOOL *stop) {
                return ([obj isKindOfClass:[PhotoAnnotation class]]);
            }] mutableCopy];

            if (filteredAnnotationsInBucket.count > 0) {
                PhotoAnnotation *annotationForGrid = (PhotoAnnotation *)[self annotationInGrid:gridMapRect usingAnnotations:filteredAnnotationsInBucket];
                [filteredAnnotationsInBucket removeObject:annotationForGrid];

                // give the annotationForGrid a reference to all the annotations it will represent
                annotationForGrid.containedAnnotations = [filteredAnnotationsInBucket allObjects];

                [self.mapView addAnnotation:annotationForGrid];

                for (PhotoAnnotation *annotation in filteredAnnotationsInBucket) {
                    // give all the other annotations a reference to the one which is representing them
                    annotation.clusterAnnotation = annotationForGrid;
                    annotation.containedAnnotations = nil;

                    // remove annotations which we've decided to cluster
                    if ([visibleAnnotationsInBucket containsObject:annotation]) {
                        CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
                        [UIView animateWithDuration:0.3 animations:^{
                            annotation.coordinate = annotation.clusterAnnotation.coordinate;
                        } completion:^(BOOL finished) {
                            annotation.coordinate = actualCoordinate;
                            [self.mapView removeAnnotation:annotation];
                        }];
                    }
                }
            }
            gridMapRect.origin.x += gridSize;
        }
        gridMapRect.origin.y += gridSize;
    }
}
By following the above steps we can achieve clustering on the map view; it is not necessary to use any third-party code or framework. Please check the Apple sample code here. Please let me know if you have any doubts about this.
It's quite easy to implement your own annotation clustering framework. Here's an example of a basic one that you can refer to here.
If your pins are overlapping, then your zoom level must be too high for that place.
You could remove some annotations at that zoom level until the annotations no longer overlap, and add them back as you zoom in so that there is enough space between them.
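Here is a rough sketch of that thinning idea (mapView, the allAnnotations source array, and the spacing threshold are assumptions, not part of the answer above):
// Sketch: keep only annotations whose screen positions are at least 'minSpacing'
// points away from one that is already being shown at the current zoom level.
CGFloat minSpacing = 40.0; // roughly the width of a pin; tune to taste
NSMutableArray *kept = [NSMutableArray array];
for (id<MKAnnotation> candidate in allAnnotations) {
    CGPoint candidatePoint = [mapView convertCoordinate:candidate.coordinate toPointToView:mapView];
    BOOL tooClose = NO;
    for (id<MKAnnotation> shown in kept) {
        CGPoint shownPoint = [mapView convertCoordinate:shown.coordinate toPointToView:mapView];
        if (hypot(candidatePoint.x - shownPoint.x, candidatePoint.y - shownPoint.y) < minSpacing) {
            tooClose = YES;
            break;
        }
    }
    if (!tooClose) {
        [kept addObject:candidate];
    }
}
[mapView removeAnnotations:mapView.annotations];
[mapView addAnnotations:kept];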