I have made an application which simulates movement of buses on a map.
The buses are MKAnnotations moving atop an MKMapView. Touching an annotation/bus shows a custom-made callout with info about the bus. It is all based upon public realtime data.
A problem I have is that the MKAnnotations (buses) are very small, and since they are constantly moving, it is often hard to touch an annotation to show its callout.
The icon is 25x25 pixels and cannot be made any bigger.
Is there any way to make an invisible rect which will increase the touch-sensitive area around an annotation?
Create a larger annotation view and set the background color to [UIColor clearColor].
Create a 25x25 UIImageView and add it to the annotation view. You can set the frame of the image view to place it in the center of the annotation view.
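A minimal sketch of that idea, assuming a custom MKAnnotationView subclass (the 75x75 size and the "bus.png" asset name are illustrative, not from the original):

    #import <MapKit/MapKit.h>

    @implementation BusAnnotationView

    - (instancetype)initWithAnnotation:(id<MKAnnotation>)annotation
                       reuseIdentifier:(NSString *)reuseIdentifier {
        self = [super initWithAnnotation:annotation reuseIdentifier:reuseIdentifier];
        if (self) {
            // The whole 75x75 frame is tappable, but only the 25x25 icon is visible.
            self.frame = CGRectMake(0, 0, 75, 75);
            self.backgroundColor = [UIColor clearColor];

            UIImageView *icon = [[UIImageView alloc]
                initWithImage:[UIImage imageNamed:@"bus.png"]];  // hypothetical asset
            icon.frame = CGRectMake(25, 25, 25, 25);             // centered in the view
            [self addSubview:icon];
        }
        return self;
    }

    @end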
I ended up doing it non-programmatically.
Using Photoshop, I made a 75x75 px fully transparent image and added my 25x25 px icon on top.
Maybe not the best solution, but it worked.
I know how to use a background image with an alpha channel to create a UIButton with an irregular tap area. But with this solution, only the ignore-tap area is transparent; the tap area consists of the opaque pixels.
What I want is a totally transparent UIButton, with an irregular tap area. (It triggers an animation behind the button.)
It seems that some sort of extra layer with some hit-testing could work, but I don't quite see how. Suggestions welcome.
Here's my solution. The problem: A UIImageView (call it the base view) to which I want to attach an irregularly-shaped tap area. The area corresponds to something in the image, let's call it a river. It could be a non-connected region.
In the Finder, duplicate the view's image, and make all the non-river pixels transparent, using your favorite graphics app. Call this the mask image.
When the base view is tapped, check whether the corresponding pixel in the mask image is transparent or not. (Examples abound; a sketch follows.)
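A minimal sketch of that check, assuming the tap point has already been converted into the mask image's pixel coordinate space (the helper name is mine):

    // Returns YES if the mask has a non-transparent pixel at `point`
    // (top-left origin, pixel coordinates).
    static BOOL MaskIsOpaqueAtPoint(UIImage *mask, CGPoint point) {
        CGImageRef cgImage = mask.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        unsigned char alpha = 0;

        // 1x1 alpha-only bitmap; translate so the pixel of interest lands
        // at (0, 0). Core Graphics uses a bottom-left origin, hence the y flip.
        CGContextRef context = CGBitmapContextCreate(&alpha, 1, 1, 8, 1,
                                                     NULL, (CGBitmapInfo)kCGImageAlphaOnly);
        CGContextTranslateCTM(context, -point.x, point.y - (CGFloat)height + 1);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
        CGContextRelease(context);

        return alpha > 0;
    }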
So the mask image is never actually rendered; it is just used as a reference.
Note that if the base view gets resized or moved around (as it does in my app, via animating its constraints), then you have to "move" the mask image too, or in some other way manage the coordinate translation between view and mask. The easiest way to do this is to make the mask image a hidden subview of the base view, with its top, bottom, leading, and trailing anchors constrained to be equal to those of the base view. Then they move around together, and the coordinate translation between the two is the identity.
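A sketch of that setup, assuming the mask lives in a maskImageView property (named so as not to collide with UIView's own maskView):

    // Inside the base view's setup code. The mask participates in layout
    // but is hidden, so it is never drawn.
    self.maskImageView.hidden = YES;
    self.maskImageView.translatesAutoresizingMaskIntoConstraints = NO;
    [self addSubview:self.maskImageView];
    [NSLayoutConstraint activateConstraints:@[
        [self.maskImageView.topAnchor constraintEqualToAnchor:self.topAnchor],
        [self.maskImageView.bottomAnchor constraintEqualToAnchor:self.bottomAnchor],
        [self.maskImageView.leadingAnchor constraintEqualToAnchor:self.leadingAnchor],
        [self.maskImageView.trailingAnchor constraintEqualToAnchor:self.trailingAnchor],
    ]];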
I wrapped all this up in a subclass of UIImageView so that it manages its own mask, and calls a delegate if the "river" (or whatever) is tapped.
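The interface might look something like this (all names are mine, not from the original post):

    @class MaskedImageView;

    @protocol MaskedImageViewDelegate <NSObject>
    - (void)maskedImageViewWasTappedInsideMask:(MaskedImageView *)view;
    @end

    @interface MaskedImageView : UIImageView
    @property (nonatomic, strong) UIImage *maskImage;   // reference only, never rendered
    @property (nonatomic, weak) id<MaskedImageViewDelegate> delegate;
    @end

    // In the implementation, a UITapGestureRecognizer handler ties it together
    // (conversion from view points to image pixels omitted for brevity):
    - (void)handleTap:(UITapGestureRecognizer *)tap {
        CGPoint p = [tap locationInView:self];
        if (MaskIsOpaqueAtPoint(self.maskImage, p)) {   // helper from above
            [self.delegate maskedImageViewWasTappedInsideMask:self];
        }
    }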
For practice, I am creating a simple drawing app. The user drags his/her finger along the screen and it colors in a 100px x 100px square.
I currently achieve this by creating a new colored UIView where the user taps, and that is working. But after a little time coloring in, there is substantial lag, which I believe is down to there being too many UIViews as subviews of the main view.
How can I (and others who similarly create UIViews while dragging a finger) reduce the lag to nothing, no matter how many UIViews there are? I suspect this may be an impossible task, so alternatively: how else can someone like me color a square of the size stated above in the main view as a finger is dragged along the screen?
I know this may seem like a specific question, but I believe it could help others understand how to reduce lag when a very large number of UIViews would otherwise be created and a less performance-hungry option is available.
One approach is to draw each square into an image and display that image, rather than keeping around a UIView for each square.
If your drawing is simple enough, though, you can use OpenGL to do this, which is much faster. You should look at Apple's GL Paint Sample Code which shows how to do this in OpenGL.
If your drawing is too complex for OpenGL, you could create, for example, a CGBitmapContext, and draw each square into that context when the user drags their finger. Whenever you draw a new square into that bitmap, you can turn the bitmap into an image (via CGBitmapContextCreateImage) and display that image in a UIImageView.
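A sketch of that approach (the canvas size, red color, and property names are my own; note that a raw CGBitmapContext has a bottom-left origin, so you may need to flip y when mapping touch points):

    // One persistent context accumulates the drawing. `canvas` is a
    // CGContextRef property; `canvasView` is a UIImageView covering the screen.
    - (void)setUpCanvas {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        self.canvas = CGBitmapContextCreate(NULL, 320, 480, 8, 0, colorSpace,
                                            kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
    }

    // Called from touchesMoved: with the current finger position.
    - (void)drawSquareAtPoint:(CGPoint)p {
        CGContextSetRGBFillColor(self.canvas, 1.0, 0.0, 0.0, 1.0);  // red
        CGContextFillRect(self.canvas, CGRectMake(p.x - 50, p.y - 50, 100, 100));

        CGImageRef snapshot = CGBitmapContextCreateImage(self.canvas);
        self.canvasView.image = [UIImage imageWithCGImage:snapshot];
        CGImageRelease(snapshot);
    }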
There are two things that come to my mind:
1- Use the Instruments tool to check whether you are leaking any memory
2- If you are just coloring the views, then instead of creating images for each of them, either set the backgroundColor property of the UIView or override the drawRect: method to do custom drawing
I think what you are looking for is the drawRect: method of UIView. You could create your custom UIView (you probably have that already), override the drawRect: method, and do your drawing there! You will have to save your drawings in an array or another container and call the setNeedsDisplay method whenever the array's contents change.
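A sketch of that, with the squares kept as boxed CGRects (class and method names are mine):

    @interface CanvasView : UIView
    @property (nonatomic, strong) NSMutableArray *squares;  // NSValue-boxed CGRects
    @end

    @implementation CanvasView

    - (void)addSquareAtPoint:(CGPoint)p {
        if (!self.squares) self.squares = [NSMutableArray array];
        [self.squares addObject:[NSValue valueWithCGRect:
            CGRectMake(p.x - 50, p.y - 50, 100, 100)]];
        [self setNeedsDisplay];                              // schedules drawRect:
    }

    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);  // red, for example
        for (NSValue *value in self.squares) {
            CGContextFillRect(ctx, [value CGRectValue]);
        }
    }

    @end

Note that redrawing every square in drawRect: can itself get slow once there are thousands of squares; the bitmap approach above avoids that by only ever painting the newest square.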
I want to create a simple application which performs some calculations and then draws some images in a view, using NSBezierPath. I then need to resize the view and let people scroll around the finished picture, but I don't know how. Also, if I try to draw on a part of the canvas that isn't currently visible, it isn't drawn (I can't know the final canvas size in advance).
Check out the Apple sample code called BezierPathLab. I think that will get you started. There's lots of other sample code for Quartz 2D drawing too.
Being able to scroll and resize the view should be as simple as putting the view that you will be using to draw inside an NSScrollView.
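A minimal sketch, assuming a custom NSView subclass named DrawingView that does the NSBezierPath drawing in its drawRect: method:

    NSScrollView *scrollView = [[NSScrollView alloc]
        initWithFrame:NSMakeRect(0, 0, 400, 300)];
    DrawingView *drawingView = [[DrawingView alloc]
        initWithFrame:NSMakeRect(0, 0, 1200, 900)];  // larger than the visible area

    scrollView.hasVerticalScroller = YES;
    scrollView.hasHorizontalScroller = YES;
    scrollView.documentView = drawingView;           // the scrollable content
    [self.window.contentView addSubview:scrollView];

    // Once the picture's final extent is known, grow the canvas; the
    // scrollers adjust automatically.
    [drawingView setFrameSize:NSMakeSize(2000, 1500)];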
I am new to iPhone programming. I have a rectangle in which I display some data, and I want to assign an image to its background. Is it possible for the data to remain visible after the image is assigned? Any idea how I can do it?
Thanks
You don't say how you are displaying your data, so it isn't possible to respond properly.
But in general, you create simple displays using Interface Builder, and can position a UILabel over a UIImageView, and can set the label's background to be transparent.
If you are drawing using drawRect:, then you probably shouldn't be. And yes, you can draw text over an image doing it this way, too.
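For example, a minimal sketch of the label-over-image approach (the image name and frames are illustrative):

    UIImageView *imageView = [[UIImageView alloc]
        initWithImage:[UIImage imageNamed:@"background.png"]];  // hypothetical asset
    imageView.frame = CGRectMake(20, 20, 280, 100);

    UILabel *label = [[UILabel alloc] initWithFrame:imageView.bounds];
    label.text = @"Your data here";
    label.textAlignment = NSTextAlignmentCenter;
    label.backgroundColor = [UIColor clearColor];  // lets the image show through

    [self.view addSubview:imageView];
    [imageView addSubview:label];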
CGRect is just a structure representing an abstract rectangle. You do not associate styling information (e.g. color, background image, etc.) with it, because that does not make sense.
If you simply want to display an image in a window, drag the PNG image into your project; you can then find it in Interface Builder and place it onto the window.
The scenario:
I have a custom view (a subclass of UIView) that draws a game board. To enable the ability to zoom into, and pan around, the board I added my view as a subview of UIScrollView. This kind of works, but the game board is being rendered incorrectly. Everything is kind of fuzzy, and nothing looks right.
The question:
How can I force my view to be redrawn correctly at varying scales? I'm providing my view with the current scale and sending it a setNeedsDisplay message after the scroll view is done zooming in/out, but the game board is still being rendered incorrectly. My view should be redrawing the game board depending on the zoom level, but this isn't happening. Does the scroll view perform a generic transformation on subviews? Is there a way to disable this behavior?
The easiest way to do this is to render your game board at a really high resolution, then let the ScrollView handle scaling it down to your display size automatically.
Basically, set the contentSize of your ScrollView to something big (say 1024x1024), and make your game board one giant view inside it, of the same size (say 1024x1024). Then simply let the ScrollView handle all the scaling questions. You don't even need to intercept setNeedsDisplay or anything; as far as your game is concerned, it's just always rendering at the highest resolution.
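A sketch of that setup, assuming a UIViewController hosts the scroll view (GameBoardView stands in for your UIView subclass, and board is a property):

    - (void)viewDidLoad {
        [super viewDidLoad];

        UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
        self.board = [[GameBoardView alloc] initWithFrame:CGRectMake(0, 0, 1024, 1024)];

        scrollView.contentSize = self.board.frame.size;  // the "big" size
        scrollView.minimumZoomScale = 0.25f;             // allows zooming out to fit
        scrollView.maximumZoomScale = 1.0f;              // never beyond native resolution
        scrollView.delegate = self;

        [scrollView addSubview:self.board];
        [self.view addSubview:scrollView];
    }

    // UIScrollViewDelegate: tell the scroll view which subview to scale.
    - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
        return self.board;
    }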