How can I check whether a map is already loaded into an MKMapView? In other words, I want to ask the map view whether it is ready, rather than waiting for the mapViewDidFinishLoadingMap: delegate callback, which is not sent if the tiles are already buffered.
My use case, for a better understanding of the problem: my app should print a map that is created in the background using MKMapView. Loading the map can take a while, so without checking readiness the printed map is incomplete, which is clearly visible in the print preview. Using mapViewDidFinishLoadingMap: helps the first time, but if I print again with only a slightly different region, the notification never arrives.
This is my code snippet:
-(IBAction)print:(id)sender {
    [self prepareClonedMap];
}

-(void)prepareClonedMap {
    [self cloneMap:scaling];
    // I would like here ... if ( mapIsReady ) [self executePrint];
}

-(void)mapViewDidFinishLoadingMap:(MKMapView *)mapView {
    [self executePrint];
}

-(void)executePrint {
    // make MKMapSnapshot
    // create print operation
    // ...
}

-(void)cloneMap:(CGFloat)factor {
    NSRect myframe;
    // expand to max factor
    myframe = mapView.frame;
    myframe.size.height = myframe.size.height * factor;
    myframe.size.width = myframe.size.width * factor;
    // reduce to max dimensions
    myframe.size = [self maxDimensionWithRatio:myframe.size];
    [cloneView setFrame:myframe];
    [cloneView setRegion:[mapView region]];
}
Related
I'm trying to make a cocos2D game in which every time the player touches the screen, the background scrolls to the left thus simulating moving forwards.
The "background" consists of 2 very long rectangular panels called pane1 and pane2 linked together in a chain. When the screen is touched, I use CCActionMoveTo to move both panes to the left, and when one pane is completely off the screen, I move it back around to the other side to create an infinite loop.
The problem is that the scrolling animation takes 0.2 seconds, and if the player mashes the screen a lot, everything gets messed up. Sometimes the two panes also experience different amounts of lag, so they fall out of sync.
How do I set a delay on a function so that it can only be called once per designated time period? In other words, I want to add a "cooldown" to the function that handles player touch.
This function is called every time the scene is touched:
- (void)playerMove {
    CCAction *actionMove1 = [CCActionEaseOut actionWithAction:
                                 [CCActionMoveTo actionWithDuration:0.2
                                                           position:ccp(pane1.position.x - 150, 0)]
                                                             rate:1.5];
    CCAction *actionMove2 = [CCActionEaseOut actionWithAction:
                                 [CCActionMoveTo actionWithDuration:0.2
                                                           position:ccp(pane2.position.x - 150, 0)]
                                                             rate:1.5];
    CCActionCallFunc *actionCallGenerateTerrain = [CCActionCallFunc actionWithTarget:self selector:@selector(generateTerrain)];
    counter++;
    if (pane1InUse) {
        [pane1 runAction:[CCActionSequence actionWithArray:@[actionMove1, actionCallGenerateTerrain]]];
        [pane2 runAction:actionMove2];
    }
    else {
        [pane1 runAction:actionMove1];
        [pane2 runAction:[CCActionSequence actionWithArray:@[actionMove2, actionCallGenerateTerrain]]];
    }
}
-(void)generateTerrain {
    if (counter % 8 == 0) {
        pane1InUse ^= YES;
        CCLOG(@"%@", pane1InUse ? @"YES" : @"NO");
        if (pane1InUse) {
            CCLOG(@"Generating Terrain 2 ...");
            pane2.position = ccp(pane1.position.x + pane1.boundingBox.size.width, 0);
        }
        else {
            CCLOG(@"Generating Terrain 1 ...");
            pane1.position = ccp(pane2.position.x + pane2.boundingBox.size.width, 0);
        }
    }
}
Also, you may have noticed I'm using this awkward block of code. Ideally I would like actionMove1 and actionMove2 to run simultaneously and, once both are done, execute actionCallGenerateTerrain, but I don't know how to implement that:
if (pane1InUse) {
    [pane1 runAction:[CCActionSequence actionWithArray:@[actionMove1, actionCallGenerateTerrain]]];
    [pane2 runAction:actionMove2];
}
else {
    [pane1 runAction:actionMove1];
    [pane2 runAction:[CCActionSequence actionWithArray:@[actionMove2, actionCallGenerateTerrain]]];
}
You probably want to put pane1 and pane2 inside a single CCNode, call it combinedPanes (this makes pane1 and pane2 always move in sync). Then run a single action on combinedPanes. Also add a boolean state property, call it moveEnabled:
id movePane = [CCActionMoveTo ... ]; // whatever you have for pane1 or pane2
id moveComplete = [CCActionCallBlock actionWithBlock:^{
    [self generateTerrain];
    self.moveEnabled = YES;
}];
self.moveEnabled = NO;
[combinedPanes runAction:[CCActionSequence actions:movePane, moveComplete, nil]];
and use moveEnabled to gate the touch handler that triggers this move code. While a move is in progress, touches are simply dropped, which effectively blocks a hysterical user tapping like nuts.
-(void)playerMove {
    if (self.moveEnabled) {
        //
        // the rest of your logic
        // ...
    }
}
and in init (or wherever you first set up this state), enable it:
self.moveEnabled = YES;
One simple way to prevent a function being called twice within a time period is to wrap it in an if statement like:
if ([[NSDate date] timeIntervalSinceDate:lastUpdated] > minDelay)
{
    // call the function here
    lastUpdated = [NSDate date];
}
where lastUpdated is initialized using [NSDate date] and minDelay is the required minimum delay in seconds.
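Applied to the playerMove handler from the question, a minimal sketch might look like this (lastUpdated and minDelay as instance variables, and a cooldown equal to the 0.2 s move duration, are assumptions):
// Somewhere in the class: NSDate *lastUpdated; NSTimeInterval minDelay;
// e.g. in init: lastUpdated = [NSDate date]; minDelay = 0.2;
- (void)playerMove {
    // Drop touches that arrive before the cooldown has elapsed.
    if ([[NSDate date] timeIntervalSinceDate:lastUpdated] < minDelay) {
        return;
    }
    lastUpdated = [NSDate date];
    // ... the existing move/scroll logic from the question goes here ...
}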
I am using SpriteBuilder to make a game. The objective is to destroy some CCSprites. I have 3 sprites on screen that are destroyed by another sprite, and when there are no more 'enemy' sprites remaining, a next button must show. I have looked on the internet but am inexperienced with Cocos2D coding. Here is the code I use to get rid of an 'enemy':
-(void)ccPhysicsCollisionPostSolve:(CCPhysicsCollisionPair *)pair danald:(CCNode *)nodeA wildcard:(CCNode *)nodeB {
    float energy = [pair totalKineticEnergy];
    if (energy > 5000.f) {
        [self danaldRemoved:nodeA];
    }
}
If the object is hit with a certain speed it will call the method below
- (void)danaldRemoved:(CCNode *)Danald {
    CCParticleSystem *explosion = (CCParticleSystem *)[CCBReader load:@"Explosion"];
    explosion.autoRemoveOnFinish = TRUE;
    explosion.position = Danald.position;
    [Danald.parent addChild:explosion];
    [Danald removeFromParent];
}
Thanks in advance; sorry if this question has been asked before, but I could not find it.
Well I would suggest this method:
Create a variable where you store the number of sprites left. For example:
int spritesLeft;
And then initialize it in didLoadFromCCB:
-(void)didLoadFromCCB {
    // REST OF CODE
    spritesLeft = 3; // 3 because you said there are only 3
}
Now, whenever danaldRemoved: is called, subtract 1 from spritesLeft and check whether spritesLeft is equal to 0. If it is, call your method to make the button appear:
- (void)danaldRemoved:(CCNode *)Danald {
    spritesLeft--; // subtract 1
    CCParticleSystem *explosion = (CCParticleSystem *)[CCBReader load:@"Explosion"];
    explosion.autoRemoveOnFinish = TRUE;
    explosion.position = Danald.position;
    [Danald.parent addChild:explosion];
    [Danald removeFromParent];
    // check if the game is over
    if (spritesLeft == 0) {
        [self printButton];
    }
}
Now create the printButton method, but first go to SpriteBuilder, create the button, and place it where you want. Uncheck the 'Visible' checkbox, then go to the code connections tab, select 'Doc root var' (under custom class), and give the button a name, for example nextButton. For the selector write changeLevel, and set the target to document root.
Now declare it at the top of your .m file as you did with any other objects:
CCButton *nextButton;
Method for button (just set visibility ON)
-(void) printButton{
nextButton.visible = YES;
}
And now your method to change level:
-(void)changeLevel {
    CCScene *nextLevel = [CCBReader loadAsScene:@"YOUR LEVEL"];
    [[CCDirector sharedDirector] replaceScene:nextLevel];
}
Hope this helps!
EDIT: HOW TO DETECT WHEN A SPRITE GOES OFF THE SCREEN
As I said, create any kind of physics object in SpriteBuilder. For example, I use a CCNodeColor. Make it a rectangle and place it at the left edge of the screen. Go to the physics tab, enable physics, and set it to polygon type and static. Then, in code connections, select doc root var and call it _leftNode. Repeat for the top, right, and bottom edges and call them _topNode, _rightNode, and _bottomNode.
Now go to your code and declare the new nodes (CCNode *_leftNode; and so on), as shown below.
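A minimal sketch of those declarations (the names match the doc root var connections described above):
// Boundary nodes connected via SpriteBuilder's "doc root var" connections
CCNode *_leftNode;
CCNode *_rightNode;
CCNode *_topNode;
CCNode *_bottomNode;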
Now let's make a collision type:
_bottomNode.physicsBody.collisionType = @"_bound";
_leftNode.physicsBody.collisionType = @"_bound";
_rightNode.physicsBody.collisionType = @"_bound";
_topNode.physicsBody.collisionType = @"_bound";
Do the same with your sprite, though I think you have already done that. For example:
spritename.physicsBody.collisionType = @"_sprite";
So now implement the method:
-(void)ccPhysicsCollisionPostSolve:(CCPhysicsCollisionPair *)pair _sprite:(CCNode *)nodeA _bound:(CCNode *)nodeB {
    [_physicsNode removeChild:nodeA cleanup:YES];
}
And that's all.
In my project I am playing a TV show that has been sliced into different chapters, using an AVQueuePlayer.
I also want to offer the possibility to skip to the previous/next chapter, or to select a different chapter on the fly while the AVQueuePlayer is already playing.
Skipping to the next item is no problem with the advanceToNextItem method provided by AVQueuePlayer, but there is nothing comparable for skipping back or for playing a certain item from the queue.
So I am not quite sure what would be the best approach here:
Using an AVPlayer instead of an AVQueuePlayer, invoking replaceCurrentItemWithPlayerItem: when the current item ends (actionAtItemEnd) to play the next item, and also using replaceCurrentItemWithPlayerItem: to let the user select a certain chapter
or
reorganising the queue of the current player using insertItem:afterItem: and removeAllItems
Additional information:
I store the paths to the different videos, in the order they should appear, in an NSArray.
The user is supposed to jump to certain chapters by pressing buttons that represent the chapters. The buttons have tags that are also the indexes of the corresponding videos in the array.
I hope I've made myself clear.
Does anyone have experience with this situation?
If anyone knows where to buy a good iOS video player framework that provides this functionality, I would also appreciate a link.
If you want your program to play the previous item, or to play a selected item from your playerItems array (NSArray), you can do this:
- (void)playAtIndex:(NSInteger)index
{
    [player removeAllItems];
    for (NSInteger i = index; i < playerItems.count; i++) {
        AVPlayerItem *obj = [playerItems objectAtIndex:i];
        if ([player canInsertItem:obj afterItem:nil]) {
            [obj seekToTime:kCMTimeZero];
            [player insertItem:obj afterItem:nil];
        }
    }
}
Edit: playerItems is the NSMutableArray (or NSArray) in which you store your AVPlayerItems.
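For completeness, a minimal sketch of how playerItems could be built from the array of video paths mentioned in the question (the videoPaths name and the use of file URLs are assumptions; adjust to however the paths are actually stored):
// Build AVPlayerItems in the same order as the stored paths.
NSMutableArray *playerItems = [NSMutableArray array];
for (NSString *path in videoPaths) { // videoPaths: the NSArray of paths from the question
    NSURL *url = [NSURL fileURLWithPath:path];
    [playerItems addObject:[AVPlayerItem playerItemWithURL:url]];
}
AVQueuePlayer *player = [AVQueuePlayer queuePlayerWithItems:playerItems];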
The first answer removes all items from the AVQueuePlayer and repopulates the queue starting at the index passed in. This starts the newly populated queue with the previous item (assuming you passed the correct index) plus the rest of the items in the existing playerItems array from that point forward, BUT it does not allow for multiple reverses; e.g. if you are on track 10 and want to go back and replay track 9, then replay track 5, you cannot accomplish that with the above. With the following you can:
-(IBAction)prevSongTapped:(id)sender
{
    if (globalSongCount > 1) {   // keep a running tally of items already played
        [self resetAVQueue];     // removes all items from AVQueuePlayer
        for (int i = 1; i < globalSongCount - 1; i++) {
            [audioQueuePlayer advanceToNextItem];
        }
        globalSongCount--;
        [audioQueuePlayer play];
    }
}
The following code allows you to jump to any item in your queue, with no playhead advancing. Plain and simple. playerItemList is your NSArray of AVPlayerItem objects.
- (void)playAtIndex:(NSInteger)index
{
    [audioPlayer removeAllItems];
    AVPlayerItem *obj = [playerItemList objectAtIndex:index];
    [obj seekToTime:kCMTimeZero];
    [audioPlayer insertItem:obj afterItem:nil];
    [audioPlayer play];
}
djiovann created a subclass of AVQueuePlayer that provides exactly this functionality.
You can find it on github.
I haven't tested it yet but from browsing through the code it seems to get the job done. Also the code is well documented, so it should at least serve as a good reference for a custom implementation of the functionality (I suggest using a category instead of subclassing though).
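As a rough illustration of the category route, here is a minimal sketch (not djiovann's code) that simply wraps the queue-rebuilding approach from the answers above; the category name, the method name, and the requirement to pass in the full item list are assumptions:
// AVQueuePlayer+Navigation.h (hypothetical category, for illustration only)
#import <AVFoundation/AVFoundation.h>

@interface AVQueuePlayer (Navigation)
- (void)playItemAtIndex:(NSInteger)index fromItems:(NSArray<AVPlayerItem *> *)allItems;
@end

// AVQueuePlayer+Navigation.m
@implementation AVQueuePlayer (Navigation)
- (void)playItemAtIndex:(NSInteger)index fromItems:(NSArray<AVPlayerItem *> *)allItems {
    // Rebuild the queue starting at the requested index, as in the answers above.
    [self removeAllItems];
    for (NSInteger i = index; i < (NSInteger)allItems.count; i++) {
        AVPlayerItem *item = allItems[i];
        if ([self canInsertItem:item afterItem:nil]) {
            [item seekToTime:kCMTimeZero];
            [self insertItem:item afterItem:nil];
        }
    }
    [self play];
}
@end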
This should be the responsibility of the AVQueuePlayer object and not of your view controller, so you should make it reusable: expose the other answers' implementations through an extension and use it in a similar way to advanceToNextItem():
extension AVQueuePlayer {
    func advanceToPreviousItem(for currentItem: Int, with initialItems: [AVPlayerItem]) {
        self.removeAllItems()
        for i in currentItem..<initialItems.count {
            let obj: AVPlayerItem? = initialItems[i]
            if self.canInsert(obj!, after: nil) {
                obj?.seek(to: kCMTimeZero, completionHandler: nil)
                self.insert(obj!, after: nil)
            }
        }
    }
}
Usage (you only have to store an index and a reference to the initial queue player items):
self.queuePlayer.advanceToPreviousItem(for: self.currentIndex, with: self.playerItems)
One way of maintaining an index is to observe the AVPlayerItemDidPlayToEndTime notification for each of your video items :
func addDidFinishObserver() {
    queuePlayer.items().forEach { item in
        NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying), name: Notification.Name.AVPlayerItemDidPlayToEndTime, object: item)
    }
}

func removeDidFinishObserver() {
    queuePlayer.items().forEach { item in
        NotificationCenter.default.removeObserver(self, name: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: item)
    }
}

@objc func playerDidFinishPlaying(note: NSNotification) {
    if queuePlayer.currentItem == queuePlayer.items().last {
        print("last item finished")
    } else {
        print("item \(currentIndex) finished")
        currentIndex += 1
    }
}
This observation can also be really useful for other use cases (progress bar, current video timer reset ...).
Swift 5.2
var playerItems = [AVPlayerItem]()

func play(at itemIndex: Int) {
    player.removeAllItems()
    // Note: `playerItems[safe:]` assumes a custom Collection subscript extension
    // that returns nil for out-of-range indexes.
    for index in itemIndex..<playerItems.count {
        if let item = playerItems[safe: index] {
            if player.canInsert(item, after: nil) {
                item.seek(to: .zero, completionHandler: nil)
                player.insert(item, after: nil)
            }
        }
    }
}
@saiday's answer works for me; here is a Swift version of it:
func play(at index: Int) {
    queue.removeAllItems()
    for i in index..<items.count {
        let obj: AVPlayerItem? = items[i]
        if queue.canInsert(obj!, after: nil) {
            obj?.seek(to: kCMTimeZero, completionHandler: nil)
            queue.insert(obj!, after: nil)
        }
    }
}
If you want to play a song from any index using AVQueuePlayer, the code below can help.
NSMutableArray *musicListarray;   // the songs that you want to play in the queue
AVQueuePlayer *audioPlayer;
AVPlayerItem *item;
-(void)nextTapped
{
    nowPlayingIndex = nowPlayingIndex + 1;
    if (nowPlayingIndex >= musicListarray.count)
    {
        // already at the last track, nothing to do
    }
    else
    {
        [self playTrack];
    }
}

-(void)playback
{
    if (nowPlayingIndex <= 0)
    {
        // already at the first track, nothing to do
    }
    else
    {
        nowPlayingIndex = nowPlayingIndex - 1;
        [self playTrack];
    }
}

-(void)playTrack
{
    @try
    {
        if (musicListarray.count > 0)
        {
            item = [[AVPlayerItem alloc] initWithURL:[NSURL URLWithString:musicListarray[nowPlayingIndex]]];
            [audioPlayer replaceCurrentItemWithPlayerItem:item];
            [audioPlayer play];
        }
    }
    @catch (NSException *exception)
    {
    }
}

-(void)PlaySongAtIndex
{
    // your code ...
    nowPlayingIndex = i;   // i = the starting index in the queue
    [self playTrack];
}
Call PlaySongAtIndex whenever you want to play a song.
I am working on an iPhone app that uses CLLocationManager. When a user goes for a run, it shows the run path on a map view. I am drawing the running path on the map view using the following code:
double leastDistanceToRecord = 0.0000905;

- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
    if (newLocation.horizontalAccuracy >= 0) {
        if (!runoPath)
        {
            NSLog(@"in !runoPath if");
            // This is the first time we're getting a location update, so create
            // the RunoPath and add it to the map.
            runoPath = [[RunoPath alloc] initWithCenterCoordinate:newLocation.coordinate];
            [map addOverlay:runoPath];
            self.currentRunData = [[RunData alloc] init];
            [currentRunData startPointLocation:newLocation];
            // On the first location update, zoom map to user location
            MKCoordinateRegion region =
                MKCoordinateRegionMakeWithDistance(newLocation.coordinate, 1000, 1000);
            [map setRegion:region animated:NO];
        }
        else
        {
            // This is a subsequent location update.
            // If the runoPath MKOverlay model object determines that the current location has moved
            // far enough from the previous location, use the returned updateRect to redraw just
            // the changed area.
            double latitudeChange = fabs(newLocation.coordinate.latitude - oldLocation.coordinate.latitude);
            double longitudeChange = fabs(newLocation.coordinate.longitude - oldLocation.coordinate.longitude);
            if (latitudeChange > leastDistanceToRecord || longitudeChange > leastDistanceToRecord) {
                MKMapRect updateRect = [runoPath addCoordinate:newLocation.coordinate];
                if (!MKMapRectIsNull(updateRect))
                {
                    // There is a non null update rect.
                    // Compute the currently visible map zoom scale
                    MKZoomScale currentZoomScale = map.bounds.size.width / map.visibleMapRect.size.width;
                    // Find out the line width at this zoom scale and outset the updateRect by that amount
                    CGFloat lineWidth = MKRoadWidthAtZoomScale(currentZoomScale);
                    updateRect = MKMapRectInset(updateRect, -lineWidth, -lineWidth);
                    // Ask the overlay view to update just the changed area.
                    [runoPathView setNeedsDisplayInMapRect:updateRect];
                }
                // [currentRunData updateLocation:oldLocation toNewLocation:newLocation];
            }
            [currentRunData updateLocation:oldLocation toNewLocation:newLocation];
        }
    }
}
The problem is that when I start a run, I get some extra points, and because of those points an extraneous line is drawn on the map view that does not reflect the actual run. It even happens when I install the app on my iPhone and run it for the first time. I don't know why it's adding those extra points. Can anyone help me with that? Thanks in advance.
The first location you get is usually a cached location and is old. You can check the age of the location and if it is old (>60 seconds or whatever) then ignore that location update. See this answer here.
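A minimal sketch of that check at the top of the delegate method (the 60-second threshold is just an example value):
- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
    // Ignore cached fixes that are older than our threshold.
    NSTimeInterval age = -[newLocation.timestamp timeIntervalSinceNow];
    if (age > 60.0) {
        return;
    }
    // ... the existing path-drawing logic from the question ...
}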
--EDIT-- If you are still having problems, put this code in didUpdateToLocation: and show us the actual output from NSLog (you can edit your question and add the output):
NSTimeInterval age = -[newLocation.timestamp timeIntervalSinceNow];
NSLog(@"age: %0.3f sec, lat=%0.2f, lon=%0.2f, hAcc=%1.0f",
      age, newLocation.coordinate.latitude, newLocation.coordinate.longitude,
      newLocation.horizontalAccuracy);
I am working on an MKMapView-based application. Is it possible to eliminate pin overlap in the MKMapView? In some places a large number of pins are displayed, and it is difficult for me to identify individual pins.
If you have an Apple Developer Account, I would strongly recommend getting the Session 111 video from the 2011 WWDC Conference Sessions, entitled "Visualizing Information Geographically with MapKit". One of the segments specifically covers how to cluster content from large data sets to allow you to group or ungroup pins based on density at various zoom levels.
Their example is elegantly simple: at the heart of the problem, you want to replace a group of overlapping pins with a single pin, and as you zoom in, that single pin splits back into the individual pins.
How and when you decide to group things can vary considerably. Apple's solution simply subdivides the map into a grid, and any box that contains more than one pin becomes a group. You could also take an algorithmic approach, such as a simple k-means clustering algorithm: feed all of your annotations through it and get a logically organized array of groups out the other side.
From there it's a matter of keeping track of all the individual pins and how they are grouped as you zoom in and out. You will only display a single annotation for each group or any individual pins that are left over. It's also possible to animate the transitions as the map zooms in and out so you can visually reinforce what is happening.
My own technique is too closely related to Apple's approach for me to post here so I'm hoping you can access the above video which covers almost all of these points.
For this you have to implement the clustering concept on your map. Using Apple's demo code, it is easy to add clustering to your own code. Reference link
We can simply use the following code for clustering.
Steps to implement clustering
Step 1: The important thing is that for clustering we use two map views (allAnnotationsMapView, mapView); one of them (allAnnotationsMapView) is kept off screen purely as a reference that holds all the annotations.
@property (nonatomic, strong) MKMapView *allAnnotationsMapView;
@property (nonatomic, strong) IBOutlet MKMapView *mapView;
In viewDidLoad
_allAnnotationsMapView = [[MKMapView alloc] initWithFrame:CGRectZero];
Step 2: Add all annotations to _allAnnotationsMapView. Below, _photos is the array of annotations.
[_allAnnotationsMapView addAnnotations:_photos];
[self updateVisibleAnnotations];
Step 3: Add the methods below for clustering; here PhotoAnnotation is the custom annotation class.
MKMapViewDelegate methods
- (void)mapView:(MKMapView *)aMapView regionDidChangeAnimated:(BOOL)animated {
[self updateVisibleAnnotations];
}
- (void)mapView:(MKMapView *)aMapView didAddAnnotationViews:(NSArray *)views {
for (MKAnnotationView *annotationView in views) {
if (![annotationView.annotation isKindOfClass:[PhotoAnnotation class]]) {
continue;
}
PhotoAnnotation *annotation = (PhotoAnnotation *)annotationView.annotation;
if (annotation.clusterAnnotation != nil) {
// animate the annotation from it's old container's coordinate, to its actual coordinate
CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
CLLocationCoordinate2D containerCoordinate = annotation.clusterAnnotation.coordinate;
// since it's displayed on the map, it is no longer contained by another annotation,
// (We couldn't reset this in -updateVisibleAnnotations because we needed the reference to it here
// to get the containerCoordinate)
annotation.clusterAnnotation = nil;
annotation.coordinate = containerCoordinate;
[UIView animateWithDuration:0.3 animations:^{
annotation.coordinate = actualCoordinate;
}];
}
}
}
clustering Handling methods
- (id<MKAnnotation>)annotationInGrid:(MKMapRect)gridMapRect usingAnnotations:(NSSet *)annotations {
// first, see if one of the annotations we were already showing is in this mapRect
NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];
NSSet *annotationsForGridSet = [annotations objectsPassingTest:^BOOL(id obj, BOOL *stop) {
BOOL returnValue = ([visibleAnnotationsInBucket containsObject:obj]);
if (returnValue)
{
*stop = YES;
}
return returnValue;
}];
if (annotationsForGridSet.count != 0) {
return [annotationsForGridSet anyObject];
}
// otherwise, sort the annotations based on their distance from the center of the grid square,
// then choose the one closest to the center to show
MKMapPoint centerMapPoint = MKMapPointMake(MKMapRectGetMidX(gridMapRect), MKMapRectGetMidY(gridMapRect));
NSArray *sortedAnnotations = [[annotations allObjects] sortedArrayUsingComparator:^(id obj1, id obj2) {
MKMapPoint mapPoint1 = MKMapPointForCoordinate(((id<MKAnnotation>)obj1).coordinate);
MKMapPoint mapPoint2 = MKMapPointForCoordinate(((id<MKAnnotation>)obj2).coordinate);
CLLocationDistance distance1 = MKMetersBetweenMapPoints(mapPoint1, centerMapPoint);
CLLocationDistance distance2 = MKMetersBetweenMapPoints(mapPoint2, centerMapPoint);
if (distance1 < distance2) {
return NSOrderedAscending;
} else if (distance1 > distance2) {
return NSOrderedDescending;
}
return NSOrderedSame;
}];
PhotoAnnotation *photoAnn = sortedAnnotations[0];
NSLog(#"lat long %f %f", photoAnn.coordinate.latitude, photoAnn.coordinate.longitude);
return sortedAnnotations[0];
}
- (void)updateVisibleAnnotations {
// This value to controls the number of off screen annotations are displayed.
// A bigger number means more annotations, less chance of seeing annotation views pop in but decreased performance.
// A smaller number means fewer annotations, more chance of seeing annotation views pop in but better performance.
static float marginFactor = 2.0;
// Adjust this roughly based on the dimensions of your annotations views.
// Bigger numbers more aggressively coalesce annotations (fewer annotations displayed but better performance).
// Numbers too small result in overlapping annotations views and too many annotations on screen.
static float bucketSize = 60.0;
// find all the annotations in the visible area + a wide margin to avoid popping annotation views in and out while panning the map.
MKMapRect visibleMapRect = [self.mapView visibleMapRect];
MKMapRect adjustedVisibleMapRect = MKMapRectInset(visibleMapRect, -marginFactor * visibleMapRect.size.width, -marginFactor * visibleMapRect.size.height);
// determine how wide each bucket will be, as a MKMapRect square
CLLocationCoordinate2D leftCoordinate = [self.mapView convertPoint:CGPointZero toCoordinateFromView:self.view];
CLLocationCoordinate2D rightCoordinate = [self.mapView convertPoint:CGPointMake(bucketSize, 0) toCoordinateFromView:self.view];
double gridSize = MKMapPointForCoordinate(rightCoordinate).x - MKMapPointForCoordinate(leftCoordinate).x;
MKMapRect gridMapRect = MKMapRectMake(0, 0, gridSize, gridSize);
// condense annotations, with a padding of two squares, around the visibleMapRect
double startX = floor(MKMapRectGetMinX(adjustedVisibleMapRect) / gridSize) * gridSize;
double startY = floor(MKMapRectGetMinY(adjustedVisibleMapRect) / gridSize) * gridSize;
double endX = floor(MKMapRectGetMaxX(adjustedVisibleMapRect) / gridSize) * gridSize;
double endY = floor(MKMapRectGetMaxY(adjustedVisibleMapRect) / gridSize) * gridSize;
// for each square in our grid, pick one annotation to show
gridMapRect.origin.y = startY;
while (MKMapRectGetMinY(gridMapRect) <= endY) {
gridMapRect.origin.x = startX;
while (MKMapRectGetMinX(gridMapRect) <= endX) {
NSSet *allAnnotationsInBucket = [self.allAnnotationsMapView annotationsInMapRect:gridMapRect];
NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];
// we only care about PhotoAnnotations
NSMutableSet *filteredAnnotationsInBucket = [[allAnnotationsInBucket objectsPassingTest:^BOOL(id obj, BOOL *stop) {
return ([obj isKindOfClass:[PhotoAnnotation class]]);
}] mutableCopy];
if (filteredAnnotationsInBucket.count > 0) {
PhotoAnnotation *annotationForGrid = (PhotoAnnotation *)[self annotationInGrid:gridMapRect usingAnnotations:filteredAnnotationsInBucket];
[filteredAnnotationsInBucket removeObject:annotationForGrid];
// give the annotationForGrid a reference to all the annotations it will represent
annotationForGrid.containedAnnotations = [filteredAnnotationsInBucket allObjects];
[self.mapView addAnnotation:annotationForGrid];
for (PhotoAnnotation *annotation in filteredAnnotationsInBucket) {
// give all the other annotations a reference to the one which is representing them
annotation.clusterAnnotation = annotationForGrid;
annotation.containedAnnotations = nil;
// remove annotations which we've decided to cluster
if ([visibleAnnotationsInBucket containsObject:annotation]) {
CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
[UIView animateWithDuration:0.3 animations:^{
annotation.coordinate = annotation.clusterAnnotation.coordinate;
} completion:^(BOOL finished) {
annotation.coordinate = actualCoordinate;
[self.mapView removeAnnotation:annotation];
}];
}
}
}
gridMapRect.origin.x += gridSize;
}
gridMapRect.origin.y += gridSize;
}
}
By following the above steps we can achieve clustering on the map view; it is not necessary to use any third-party code or framework. Please check the Apple sample code here. Please let me know if you have any doubts about this.
It's quite easy to implement your own annotation clustering framework. Here's an example of a basic one that you can refer to.
If your pins are overlapping, then the pin density is too high for that zoom level.
You can consider removing some annotations at that zoom level until the annotations no longer overlap, and adding them back as the user zooms in so that there is enough space between them, as sketched below.
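A minimal sketch of that idea, run from the regionDidChangeAnimated: delegate method (the 35-point minimum spacing and the allAnnotations property holding the full pin set are assumptions, not part of any specific framework):
- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
    CGFloat minSpacing = 35.0; // roughly the width of a pin view, in points
    NSMutableArray *toShow = [NSMutableArray array];
    // Greedily keep an annotation only if it is far enough, in screen points, from the ones already kept.
    for (id<MKAnnotation> candidate in self.allAnnotations) {
        CGPoint candidatePoint = [mapView convertCoordinate:candidate.coordinate toPointToView:mapView];
        BOOL overlaps = NO;
        for (id<MKAnnotation> kept in toShow) {
            CGPoint keptPoint = [mapView convertCoordinate:kept.coordinate toPointToView:mapView];
            if (hypot(candidatePoint.x - keptPoint.x, candidatePoint.y - keptPoint.y) < minSpacing) {
                overlaps = YES;
                break;
            }
        }
        if (!overlaps) {
            [toShow addObject:candidate];
        }
    }
    // Swap the displayed set; zooming in spreads the points out, so more of them survive the check.
    [mapView removeAnnotations:mapView.annotations];
    [mapView addAnnotations:toShow];
}
This is O(n²) per region change, so it is only a starting point for small data sets; the clustering approaches above scale better.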