I'm new to iOS development and I'm struggling with porting some code from iOS6 involving the use of MKOverlay.
When the overlay's radius or coordinate changes, the renderer should update the display accordingly in real time.
This part works, but if I drag the overlay too far, it reaches some boundary and the rendering gets cut off. I can't find any documentation or help on this behavior.
In the CircleOverlayRenderer class:
- (id)initWithOverlay:(id<MKOverlay>)overlay
{
    self = [super initWithOverlay:overlay];
    if (self) {
        CircleZone *bOverlay = (CircleZone *)overlay;
        [RACObserve(bOverlay, coordinate) subscribeNext:^(id x) {
            [self setNeedsDisplay];
        }];
        [RACObserve(bOverlay, radius) subscribeNext:^(id x) {
            [self setNeedsDisplay];
        }];
    }
    return self;
}
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    CGRect rect = [self rectForMapRect:[self.overlay boundingMapRect]];

    CGContextSaveGState(context);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextSetFillColorSpace(context, colorSpace);
    CGColorSpaceRelease(colorSpace);

    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetFillColor(context, color);
    CGContextSetAllowsAntialiasing(context, YES);

    // outline
    {
        CGContextSetAlpha(context, 0.8);
        CGContextFillEllipseInRect(context, rect);
    }

    // red
    {
        CGContextSetAlpha(context, 0.5);
        CGRect ellipseRect = CGRectInset(rect, 0.01 * rect.size.width / 2, 0.01 * rect.size.height / 2);
        CGContextFillEllipseInRect(context, ellipseRect);
    }

    CGContextRestoreGState(context);
}
In the CircleOverlay class:
- (MKMapRect)boundingMapRect
{
    MKMapPoint center = MKMapPointForCoordinate(self.coordinate);
    double mapPointsPerMeter = MKMapPointsPerMeterAtLatitude(self.coordinate.latitude);
    double mapPointsRadius = _radius * mapPointsPerMeter;

    return MKMapRectMake(center.x - mapPointsRadius, center.y - mapPointsRadius,
                         mapPointsRadius * 2.0, mapPointsRadius * 2.0);
}
Here are some screenshots of the problem I'm seeing:
Problem when dragging overlay too much:
Problem when changing the radius:
The problem does go away if I keep zooming the map out. After the map tiles refresh, the overlay no longer gets cut off...
If anyone had a similar problem, please help me, it's driving me crazy!
Looking at the radius example, it makes me suspect the boundingMapRect, given how it's cropping. Looking at the boundingMapRect implementation, the reliance upon MKMapPointsPerMeterAtLatitude (especially when you're looking at a large region) is worrying. That function is useful if you are, for example, trying to figure out where a coordinate 10 meters from some other coordinate falls, but when looking at really large spans, it doesn't always work out well.
I might, instead, suggest something that gets the MKCoordinateRegion of where the circle is, and then converts that to an MKMapRect. A simplistic implementation might look like:
- (MKMapRect)boundingMapRect {
    MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(self.coordinate, _radius * 2, _radius * 2);

    CLLocationCoordinate2D upperLeftCoordinate  = CLLocationCoordinate2DMake(region.center.latitude - region.span.latitudeDelta / 2, region.center.longitude - region.span.longitudeDelta / 2);
    CLLocationCoordinate2D lowerRightCoordinate = CLLocationCoordinate2DMake(region.center.latitude + region.span.latitudeDelta / 2, region.center.longitude + region.span.longitudeDelta / 2);

    MKMapPoint upperLeft  = MKMapPointForCoordinate(upperLeftCoordinate);
    MKMapPoint lowerRight = MKMapPointForCoordinate(lowerRightCoordinate);

    return MKMapRectMake(MIN(upperLeft.x, lowerRight.x),
                         MIN(upperLeft.y, lowerRight.y),
                         ABS(upperLeft.x - lowerRight.x),
                         ABS(upperLeft.y - lowerRight.y));
}
You'll have to tweak this to make sure it gracefully handles crossing the 180th meridian and the case where the circle encompasses the North Pole, but it illustrates the basic idea: get the MKCoordinateRegion for the circle and then convert that to an MKMapRect.
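As one cheap safeguard (a sketch, not part of the original suggestion), you could at least clamp whatever rect you compute to valid map space; a circle that genuinely wraps the antimeridian would still need to be split into two rects and drawn separately:

// Hypothetical helper: keep a computed bounding rect inside valid map space.
// It does NOT handle a circle that truly wraps the 180th meridian; that case
// needs two rects (or a renderer that draws both halves).
static MKMapRect MXClampRectToWorld(MKMapRect rect) {
    return MKMapRectIntersection(rect, MKMapRectWorld);
}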
I am using a UILongPressGestureRecognizer, which works perfectly, but the data I get is not precise enough for my use case. The CGPoints I get appear to be rounded off.
Example points that I get: 100.5, 103.0, etc. The decimal part is either .5 or .0. Is there a way to get more precise points? I was hoping for something like .xxxx, as in 100.8745, but .xx would do too.
The reason I need this is that I have a circular UIBezierPath and I want to restrict a drag gesture to that circular path: the item should only be draggable along the circumference of the circle. To do this I calculated 720 points on the circle's boundary using its radius. These points are .xxxx numbers; if I round them off, the drag is not as smooth around the middle section of the circle. This is because in the middle section (the "equator") the points on the x-coordinate are very close together, so when I rounded off the y-coordinate I lost a lot of points, hence the not-so-smooth drag action.
Here is how I calculate the points
for (CGFloat i = -154; i < 154; i++) {
    CGPoint point = [self pointAroundCircumferenceFromCenter:center forX:i];
    [bezierPoints addObject:[NSValue valueWithCGPoint:point]];
    i = i - .5;
}
- (CGPoint)pointAroundCircumferenceFromCenter:(CGPoint)center forX:(CGFloat)x
{
    CGFloat radius = 154;
    CGPoint upperPoint = CGPointZero;
    CGPoint lowerPoint = CGPointZero;

    //theta used to be the x variable; I was first calculating points using the angle:
    /* point.x = center.x + radius * cosf(theta);
       point.y = center.y + radius * sinf(theta); */

    CGFloat y = (radius * radius) - (x * x);

    upperPoint.x = x + 156;
    upperPoint.y = 230 - sqrtf(y);

    lowerPoint.x = x + 156;
    lowerPoint.y = sqrtf(y) + 230;

    NSLog(@"x = %f, y = %f", upperPoint.x, upperPoint.y);

    [lowerPoints addObject:[NSValue valueWithCGPoint:lowerPoint]];
    [upperPoints addObject:[NSValue valueWithCGPoint:upperPoint]];

    return upperPoint;
}
I know the code is weird; I mean, why would I add the points into arrays and return only one point back?
Here is how I restrict the movement
-(void)handleLongPress:(UILongPressGestureRecognizer *)recognizer{
    CGPoint finalpoint;
    CGPoint initialpoint;
    CGFloat y;
    CGFloat x;
    CGPoint tempPoint;

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        initialpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(initialpoint.x, initialpoint.y, 40, 40);
        self.hourHand.frame = rect;
        self.hourHand.center = initialpoint;
        NSLog(@"Long Press Activated at %f,%f", initialpoint.x, initialpoint.y);
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint currentPoint = [recognizer locationInView:self.view];
        x = currentPoint.x - initialpoint.x;
        y = currentPoint.y - initialpoint.y;
        tempPoint = CGPointMake(currentPoint.x, currentPoint.y);
        NSLog(@"temp point ::%f, %f", tempPoint.x, tempPoint.y);
        tempPoint = [self givePointOnCircleForPoint:tempPoint];
        self.hourHand.center = tempPoint;
    }
    else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // finalpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(tempPoint.x, tempPoint.y, 20, 20);
        self.hourHand.frame = rect;
        self.hourHand.center = tempPoint;
        NSLog(@"Long Press DeActivated at %f,%f", tempPoint.x, tempPoint.y);
    }
}
-(CGPoint)givePointOnCircleForPoint:(CGPoint)point{
    CGPoint resultingPoint;
    for (NSValue *pointValue in allPoints) {
        CGPoint pointFromArray = [pointValue CGPointValue];
        if (point.x == pointFromArray.x) {
            // if (point.y > 230.0) {
            resultingPoint = pointFromArray;
            break;
            // }
        }
    }
    return resultingPoint;
}
Basically, I take the x-coordinate of the touched point and return the y by comparing it to the array of points I calculated earlier.
Currently this code works for half a circle only, because each x has two y values (it's a circle). Ignore this; I think it can be easily dealt with.
In the picture, the white circle is the original circle and the black circle is made of the points I get from the code after formatting them to remove the precision, to match the input I get. If you look around the equator (the red highlighted part) you will see a gap between neighbouring points. This gap is my problem.
To answer your original question: on a device with a Retina display, one pixel is 0.5 points, so 0.5 points is the best resolution you can get on that hardware.
(On non-Retina devices, 1 pixel == 1 point.)
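If you want to check this at runtime, the screen scale is available directly (a small illustration, not from the original answer):

CGFloat scale = [UIScreen mainScreen].scale;  // 2.0 on Retina displays, 1.0 otherwise
CGFloat smallestStep = 1.0 / scale;           // finest point resolution the hardware can report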
But it seems to me that you don't need that points array at all. If I understand the problem correctly, you can use the following code to "restrict" (or "project") an arbitrary point onto the circumference of the circle:
CGPoint center = ...;   // Center of the circle
CGFloat radius = ...;   // Radius of the circle
CGPoint point = ...;    // The touched point
CGPoint resultingPoint; // Resulting point on the circumference

// Distance from center to point:
CGFloat dist = hypot(point.x - center.x, point.y - center.y);
if (dist == 0) {
    // The touched point is the circle center.
    // Choose any point on the circumference:
    resultingPoint = CGPointMake(center.x + radius, center.y);
} else {
    // Project point to circle circumference:
    resultingPoint = CGPointMake(center.x + (point.x - center.x)*radius/dist,
                                 center.y + (point.y - center.y)*radius/dist);
}
I wrote this class that draws an animated progress with a circle (it draws a circular sector based on a float progress).
@implementation MXMProgressView

@synthesize progress;

- (id)initWithDefaultSize {
    int circleOffset = 45.0f;
    self = [super initWithFrame:CGRectMake(0.0f,
                                           0.0f,
                                           135.0f + circleOffset,
                                           135.0f + circleOffset)];
    self.backgroundColor = [UIColor clearColor];
    return self;
}

- (void)drawRect:(CGRect)rect {
    CGRect allRect = self.bounds;
    CGRect circleRect = CGRectMake(allRect.origin.x + 2, allRect.origin.y + 2,
                                   allRect.size.width - 4, allRect.size.height - 4);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // background image
    //UIImage *image = [UIImage imageNamed:@"loader_disc_hover.png"];
    //[image drawInRect:circleRect];

    // Orange: E27006
    CGContextSetRGBFillColor(context,
                             ((CGFloat)0xE2/(CGFloat)0xFF),
                             ((CGFloat)0x70/(CGFloat)0xFF),
                             ((CGFloat)0x06/(CGFloat)0xFF),
                             0.01f); // fill
    //CGContextSetLineWidth(context, 2.0);
    CGContextFillEllipseInRect(context, circleRect);
    //CGContextStrokeEllipseInRect(context, circleRect);

    // Draw progress
    float x = (allRect.size.width / 2);
    float y = (allRect.size.height / 2);

    // Orange: E27006
    CGContextSetRGBFillColor(context,
                             ((CGFloat)0xE2/(CGFloat)0xFF),
                             ((CGFloat)0x70/(CGFloat)0xFF),
                             ((CGFloat)0x06/(CGFloat)0xFF),
                             1.0f); // progress
    CGContextMoveToPoint(context, x, y);
    CGContextAddArc(context, x, y, (allRect.size.width - 4) / 2, -M_PI_2, (self.progress * 2 * M_PI) - M_PI_2, 0);
    CGContextClosePath(context);
    CGContextFillPath(context);
}

@end
Now what I want to do is draw a ring shape with the same progress animation: instead of filling the full circle, a circular sector again, but not starting from the center of the circle.
I tried with CGContextAddEllipseInRect and CGContextEOFillPath(context), with no success.
I think you'll need to construct a more complex path, something like:
// Move to start point of outer arc (which might not be required)
CGContextMoveToPoint(context, x + outerRadius*cos(startAngle), y + outerRadius*sin(startAngle));
// Add outer arc to path (counterclockwise)
CGContextAddArc(context, x, y, outerRadius, startAngle, endAngle, 0);
// Add a line *inward* to the start point of the inner arc
CGContextAddLineToPoint(context, x + innerRadius*cos(endAngle), y + innerRadius*sin(endAngle));
// Add inner arc to path (clockwise)
CGContextAddArc(context, x, y, innerRadius, endAngle, startAngle, 1);
// Close the path from end of inner arc to start of outer arc
CGContextClosePath(context);
Note: I haven't tried the above code myself
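For context, here is one way the angles and radii might be derived from the progress value before building that path (a sketch with hypothetical numbers, also untested):

CGFloat startAngle  = -M_PI_2;                              // start at 12 o'clock
CGFloat endAngle    = (self.progress * 2 * M_PI) - M_PI_2;  // sweep proportional to progress
CGFloat outerRadius = (allRect.size.width - 4) / 2;
CGFloat innerRadius = outerRadius - 10.0f;                  // hypothetical 10 pt ring thickness

// ...build the ring-sector path exactly as above, then:
CGContextFillPath(context);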
Cheap and nasty solution:
Draw a solid circle that is smaller than the original circle by the thickness of the ring you want to draw.
Draw this circle on top of the original circle; all that you will see animating is the ring.
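A minimal sketch of that idea, assuming it is appended to the end of the drawRect: above and that the view sits on a solid white background (with a clear background you would need a clipping or even-odd-fill approach instead):

// Overdraw the centre with the background colour so only the ring remains.
CGFloat ringThickness = 10.0f;  // hypothetical ring width in points
CGRect innerRect = CGRectInset(circleRect, ringThickness, ringThickness);
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillEllipseInRect(context, innerRect);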
So I have a UIView called fallingBall that currently collides nicely with my UIView called theBlockView. I am using CGRectIntersectsRect(theBlockView.frame, fallingBall.frame) to detect this collision.
That's all very well, so now I would like my fallingBall to actually be round, and I would also like the top corners of theBlockView to be rounded. To do this, I used the following code:
//round top right-hand corner of theBlockView
UIBezierPath *maskPath = [UIBezierPath bezierPathWithRoundedRect:theBlockView.bounds
byRoundingCorners:UIRectCornerTopRight
cornerRadii:CGSizeMake(10.0, 10.0)];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = theBlockView.bounds;
maskLayer.path = maskPath.CGPath;
theBlockView.layer.mask = maskLayer;
//round the fallingBall view
[[fallingBall layer] setCornerRadius:30];
But, funnily enough, though they look nice and rounded, the views are still rectangles.
So my question is: how can I make CGRectIntersectsRect treat them as the shapes that they look like? Is there a function that works the same but uses the view's alpha to detect collisions?
Thanks for your time!
Actually, let me answer my own question!
OK, so I spent the greater part of the last 10 hours looking around, and I came across this post: Circle-Rectangle collision detection (intersection) - check out what e.James has to say!
I wrote a function to help with this: first, declare the following structs:
typedef struct
{
    CGFloat x; // center.x
    CGFloat y; // center.y
    CGFloat r; // radius
} Circle;

typedef struct
{
    CGFloat x; // center.x
    CGFloat y; // center.y
    CGFloat width;
    CGFloat height;
} MCRect;
Then add the following function:
-(BOOL)circle:(Circle)circle intersectsRect:(MCRect)rect
{
    // Use fabs() for floating-point values (abs() truncates to an integer).
    CGPoint circleDistance = CGPointMake(fabs(circle.x - rect.x), fabs(circle.y - rect.y));

    if (circleDistance.x > (rect.width/2 + circle.r))  { return false; }
    if (circleDistance.y > (rect.height/2 + circle.r)) { return false; }

    if (circleDistance.x <= (rect.width/2))  { return true; }
    if (circleDistance.y <= (rect.height/2)) { return true; }

    CGFloat cornerDistance_sq = pow((circleDistance.x - rect.width/2), 2) + pow((circleDistance.y - rect.height/2), 2);

    return (cornerDistance_sq <= (pow(circle.r, 2)));
}
I hope this helps someone!
CGRectIntersectsRect will always work with rectangles, and the frames of the views will always be rectangles. You will have to write your own function. You could use the centers of your views to build circles from the corner radius, and test whether the rectangles AND the circles intersect.
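To illustrate the circle half of that test (a sketch, not part of the original answer), two circles overlap when the distance between their centres is no more than the sum of their radii:

// Hypothetical helper: YES if two circles, given by centre and radius, overlap.
static BOOL MXCirclesIntersect(CGPoint c1, CGFloat r1, CGPoint c2, CGFloat r2) {
    return hypot(c2.x - c1.x, c2.y - c1.y) <= (r1 + r2);
}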
I'm working with the Mixare AR SDK for iOS and I need to fix some bugs; one of them is showing the information of a POI when the POI's view is tapped.
Prelude:
Mixare has an overlay UIView within which MarkerView views are placed. The MarkerView views move around the screen to geolocate the POIs, and each one has two subviews, a UIImageView and a UILabel.
Issue:
Now, for example, if there are 3 visible POIs on the screen, there are 3 MarkerView instances as overlay subviews. If you touch anywhere on the overlay, an info view associated with a random one of the visible POIs is shown.
Desired:
I want the associated POI's info to be shown only when the user taps a MarkerView.
Let's get to work. I've seen that MarkerView inherits from UIView and implements hitTest:withEvent:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    viewTouched = (MarkerView *)[super hitTest:point withEvent:event];
    return self;
}
I've put a breakpoint there, and hitTest is called once for each visible MarkerView, but loadedView is always null so I can't work with it. So I've tried to check whether the hit point is inside the MarkerView's frame by implementing pointInside:withEvent: this way:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    NSLog(@"ClassName: %@", [[self class] description]);
    NSLog(@"Point Inside: %f, %f", point.x, point.y);
    NSLog(@"Frame x: %f y: %f widht:%f height:%f", self.frame.origin.x, self.frame.origin.y, self.frame.size.width, self.frame.size.height);

    if (CGRectContainsPoint(self.frame, point))
        return YES;
    else
        return NO;

    return YES;
}
But this function always returns NO, even when I touch the MarkerView. Checking the log, I saw that the X and Y point values are sometimes negative, and that the width and height of the view are very small (0.00022 or similar) instead of the 100 x 150 I set for the MarkerView frame at initialization.
Here is an extract of my log, in which you can see the class name, the point, and the MarkerView frame values.
ClassName: MarkerView
2011-12-29 13:20:32.679 paisromanico[2996:707] Point Inside: 105.224899, 49.049023
2011-12-29 13:20:32.683 paisromanico[2996:707] Frame x: 187.568573 y: 245.735138 widht:0.021862 height:0.016427
I'm very lost with this issue so any help will be welcome. Thanks in advance for any help provided and I'm sorry about this brick :(
Edit:
At last I've found that the problem is not in hitTest:withEvent: or pointInside:withEvent:; the problem is the transform that is applied to the MarkerView to scale it based on distance and rotate the view. If I comment out any code related to this, the Mixare AR SDK works fine; I mean, the info view is shown correctly if you touch a marker, and nothing happens if you touch any other place on the screen.
So, for the moment, I haven't solved the problem, but I applied a patch removing the transform-related code in the AugmentedViewController.m class's - (void)updateLocations:(NSTimer *)timer function:
- (void)updateLocations:(NSTimer *)timer {
    //update locations!
    if (!ar_coordinateViews || ar_coordinateViews.count == 0) {
        return;
    }

    int index = 0;
    NSMutableArray *radarPointValues = [[NSMutableArray alloc] initWithCapacity:[ar_coordinates count]];

    for (PoiItem *item in ar_coordinates) {
        MarkerView *viewToDraw = [ar_coordinateViews objectAtIndex:index];
        viewToDraw.tag = index;

        if ([self viewportContainsCoordinate:item]) {
            CGPoint loc = [self pointInView:ar_overlayView forCoordinate:item];
            CGFloat scaleFactor = 1.5;
            if (self.scaleViewsBasedOnDistance) {
                scaleFactor = 1.0 - self.minimumScaleFactor * (item.radialDistance / self.maximumScaleDistance);
            }

            float width = viewToDraw.bounds.size.width;   //* scaleFactor;
            float height = viewToDraw.bounds.size.height; // * scaleFactor;
            viewToDraw.frame = CGRectMake(loc.x - width / 2.0, loc.y - height / 2.0, width, height);

            /*
            CATransform3D transform = CATransform3DIdentity;

            //set the scale if it needs it.
            if (self.scaleViewsBasedOnDistance) {
                //scale the perspective transform if we have one.
                transform = CATransform3DScale(transform, scaleFactor, scaleFactor, scaleFactor);
            }

            if (self.rotateViewsBasedOnPerspective) {
                transform.m34 = 1.0 / 300.0;

                double itemAzimuth = item.azimuth;
                double centerAzimuth = self.centerCoordinate.azimuth;
                if (itemAzimuth - centerAzimuth > M_PI) centerAzimuth += 2*M_PI;
                if (itemAzimuth - centerAzimuth < -M_PI) itemAzimuth += 2*M_PI;

                double angleDifference = itemAzimuth - centerAzimuth;
                transform = CATransform3DRotate(transform, self.maximumRotationAngle * angleDifference / (VIEWPORT_HEIGHT_RADIANS / 2.0), 0, 1, 0);
            }
            viewToDraw.layer.transform = transform;
            */

            //if we don't have a superview, set it up.
            if (!(viewToDraw.superview)) {
                [ar_overlayView addSubview:viewToDraw];
                [ar_overlayView sendSubviewToBack:viewToDraw];
            }
        } else {
            [viewToDraw removeFromSuperview];
            viewToDraw.transform = CGAffineTransformIdentity;
        }
        [radarPointValues addObject:item];
        index++;
    }

    float radius = [[[NSUserDefaults standardUserDefaults] objectForKey:@"radius"] floatValue];
    if (radius <= 0 || radius > 100) {
        radius = 5.0;
    }
    radarView.pois = radarPointValues;
    radarView.radius = radius;
    [radarView setNeedsDisplay];
    [radarPointValues release];
}
Could any Core Graphics or UI expert give their point of view on this issue?
You could either hit-test as shown here:
if ([self pointInside:point withEvent:event]) {
    // do something
}
I would suggest you add the hit test on the superview, and do the following in the hit test of the parent of the markerViews
if ([markerView pointInside:point withEvent:event]) {
    // extract the tag and show the relevant info
}
Hope this helps
The Problem
I'm trying to create a visual radius circle around an annotation that remains a fixed size in real terms. E.g., if I set the radius to 100 m, the radius circle gets progressively smaller as you zoom out of the map view.
I've been able to achieve the scaling; however, the radius rect/circle seems to "jitter" away from the pin placemark as the user manipulates the view.
I'm led to believe this is much easier to achieve on the forthcoming iPhone OS 4, but my application needs to support 3.0.
The Manifestation
Here is a video of the behaviour.
The Implementation
The annotations are added to the MapView in the usual fashion, and I've used the delegate method on my UIViewController subclass (MapViewController) to detect when the region changes.
-(void)mapView:(MKMapView *)pMapView regionDidChangeAnimated:(BOOL)animated{
    //Get the map view
    MKCoordinateRegion region;
    CGRect rect;

    //Scale the annotations
    for( id<MKAnnotation> annotation in [[self mapView] annotations] ){
        if( [annotation isKindOfClass: [Location class]] && [annotation conformsToProtocol:@protocol(MKAnnotation)] ){
            //Approximately 200 m radius
            region.span.latitudeDelta = 0.002f;
            region.span.longitudeDelta = 0.002f;
            region.center = [annotation coordinate];

            rect = [[self mapView] convertRegion:region toRectToView: self.mapView];

            if( [[[self mapView] viewForAnnotation: annotation] respondsToSelector:@selector(setRadiusFrame:)] ){
                [[[self mapView] viewForAnnotation: annotation] setRadiusFrame:rect];
            }
        }
    }
}
The annotation object (LocationAnnotationView) is a subclass of MKAnnotationView, and its setRadiusFrame looks like this:
-(void) setRadiusFrame:(CGRect) rect{
    CGPoint centerPoint;

    //Invert
    centerPoint.x = (rect.size.width/2) * -1;
    centerPoint.y = 0 + 55 + ((rect.size.height/2) * -1);

    rect.origin = centerPoint;

    [self.radiusView setFrame:rect];
}
And finally, the radiusView object is a subclass of UIView that overrides drawRect: to draw the translucent circles. setFrame: is also overridden in this UIView subclass, but it only calls [UIView setNeedsDisplay] in addition to [UIView setFrame:] to ensure that the view is redrawn after the frame has been updated.
The radiusView object's (CircleView) drawRect: method looks like this:
-(void) drawRect:(CGRect)rect{
    //NSLog(@"[CircleView drawRect]");
    [self setBackgroundColor:[UIColor clearColor]];

    //Declarations
    CGContextRef context;
    CGMutablePathRef path;

    //Assignments
    context = UIGraphicsGetCurrentContext();
    path = CGPathCreateMutable();

    //Alter the rect so the circle isn't clipped
    //Calculate the biggest size circle
    if( rect.size.height > rect.size.width ){
        rect.size.height = rect.size.width;
    }
    else if( rect.size.height < rect.size.width ){
        rect.size.width = rect.size.height;
    }
    rect.size.height -= 4;
    rect.size.width -= 4;
    rect.origin.x += 2;
    rect.origin.y += 2;

    //Create paths
    CGPathAddEllipseInRect(path, NULL, rect );

    //Create colors
    [[self areaColor] setFill];
    CGContextAddPath( context, path);
    CGContextFillPath( context );

    [[self borderColor] setStroke];
    CGContextSetLineWidth( context, 2.0f );
    CGContextSetLineCap(context, kCGLineCapSquare);
    CGContextAddPath(context, path );
    CGContextStrokePath( context );

    CGPathRelease( path );

    //CGContextRestoreGState( context );
}
Thanks for bearing with me, any help is appreciated.
Jonathan
First, what's foo used in the first function? And I'm assuming radiusView's parent is the annotation view, right?
The "Jittering"
Also, the center point of radiusView should coincide with that of the annotationView. This should solve your problem:
-(void) setRadiusFrame:(CGRect)rect{
    rect.origin.x -= 0.5 * (self.frame.size.width - rect.size.width);
    rect.origin.y -= 0.5 * (self.frame.size.height - rect.size.height);
    [self.radiusView setFrame:rect];
}
Unnecessary method
You could set the frame directly on the radiusView and avoid the above calculation:
UIView *radiusView = [[[self mapView] viewForAnnotation: annotation] radiusView];
rect = [[self mapView] convertRegion:foo toRectToView: radiusView.superview];
[radiusView setFrame:rect];
When drawing the ellipse, don't use the rect passed to drawRect:; it doesn't have to match the view's own geometry. It's safer to use self.bounds directly.
Unnecessary view
I gave the above points in case you need to keep the above hierarchy, but I don't see why you don't just draw your ellipses directly in the LocationAnnotationView; it's there for this purpose after all. This is how you would do it:
When scaling, change the annotationView's frame directly:
rect = [[self mapView] convertRegion:foo toRectToView: self.mapView];
[[[self mapView] viewForAnnotation: annotation] setFrame:rect];
Move the implementation of drawRect: to LocationAnnotationView.
This is easier to implement, and it should address your problem, since the center point of the annotation view moves with your pin.
Fixes
There are two other issues in the code:
Set longitudeDelta like this:
region.span.longitudeDelta = 0.002 * cos(region.center.latitude * M_PI / 180.0);
as the longitude delta converted to meters changes with the latitude. Alternatively, you could set only the latitude delta, then modify the rect so that it becomes square (width == height).
in drawRect:, don't use the passed rect; instead use self.bounds. There's no guarantee that these are the same, and rect could have any value.
Let me know if these work ;-)