Making only a small section of a vector drawable visible - android-canvas

I have a large image, based on a vector drawable XML file, and I would like to show only part of it in an ImageView (the darker regions in my mock-up would be off screen).
Under user control, the image would move around (or rather, the visible part of it would), much like in a Google Maps app.
I can dig into this in Android Studio myself, but I would like a hint on what the proper approach would be. Is using a large XML drawable and clipping it appropriate, or is it better to draw it on a canvas at runtime?
I played with both vector drawables and drawing on a canvas, but not enough to be sure which way to go for this or to post any code.

Related

Better way of defining several clickable areas in a single large image in Corona SDK

I have a large background image with several areas or objects that can be clicked to trigger an event. The method I'm currently using is to slice out images of those objects and areas, position them over the background image, and assign a click handler to each of them.
This works for the moment, but I feel there should be a better way of doing this. One approach I thought of and tried is to fill the sliced images with black or white, position them over the background image, set their opacity to 0, make them hit testable, and assign a click handler.
Does this method have an advantage over the previous one? Does making an image object transparent use less texture memory, or is it the same?
Are there other, better ways to do this? My main objective is to make the game use less texture memory and to cut the overall project file size by using fewer of those sliced images.
Use display.newRect/newCircle to mark out the clickable areas, make them transparent, and make them hit testable.
This should be more efficient than using images.

Collision detection of uneven shapes in iOS

I am working on a drag-and-drop activity for iPad. I have a rectangular PNG image (see the image named obj2). When I drag obj1, it should react only when it is over the black portion of the rectangle.
if (CGRectIntersectsRect(obj1.frame, obj2.frame))
{
    NSLog(@"hit test done!!");
}
Right now, this piece of code registers a hit even on the transparent area. How can I prevent that from happening?
For something as simple as your specific example (triangle and circle), the link that David Rönnqvist gives is very useful. You should definitely look at it to see some available tools. But for the general case, the best bet is clipping, drawing, and searching.
For some background, see Clipping a CGRect to a CGPath.
First, create an alpha-only bitmap image. This is explained in the above link.
Next, clip your context to one of your images using CGContextClipToMask().
Now, draw your other image onto the context.
Finally, search the bitmap data for any colored pixels (see the above link for example code).
If any of the pixels are colored, then there is some overlap.
Another, similar approach (which might actually be faster) is to draw each image into its own alpha-only CGBitmapContext, then walk the pixels in both contexts and see whether any pixel is >128 in both at the same time.
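A minimal Swift sketch of that second approach, assuming both images are positioned by frames in a shared coordinate space. For simplicity it renders each image into a full RGBA bitmap and reads the alpha byte rather than using a true alpha-only context; the function name, the frame parameters, and the >128 threshold are all illustrative.

import UIKit

func imagesOverlap(_ imageA: UIImage, frameA: CGRect,
                   _ imageB: UIImage, frameB: CGRect,
                   canvasSize: CGSize) -> Bool {
    let width = Int(canvasSize.width)
    let height = Int(canvasSize.height)
    let bytesPerPixel = 4
    let bytesPerRow = width * bytesPerPixel

    // Render one image at its on-screen position and return the raw RGBA buffer.
    func pixelBuffer(_ image: UIImage, frame: CGRect) -> [UInt8]? {
        guard let cgImage = image.cgImage else { return nil }
        var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)
        let drawn = pixels.withUnsafeMutableBytes { raw -> Bool in
            guard let ctx = CGContext(data: raw.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            // Core Graphics' y-axis is flipped relative to UIKit, but the flip is the
            // same for both images, so the overlap result is unaffected.
            ctx.draw(cgImage, in: frame)
            return true
        }
        return drawn ? pixels : nil
    }

    guard let bufferA = pixelBuffer(imageA, frame: frameA),
          let bufferB = pixelBuffer(imageB, frame: frameB) else { return false }

    // Alpha is the 4th byte of each RGBA pixel; overlap means both are opaque enough.
    for i in stride(from: 3, to: bufferA.count, by: bytesPerPixel) {
        if bufferA[i] > 128 && bufferB[i] > 128 { return true }
    }
    return false
}

If this check has to run on every move of a drag, rendering both bitmaps at a reduced scale keeps the pixel walk cheap.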

Create an irregularly shaped frame

I've created a canvas within which I display an image that is clipped when it goes over the edges. I can do this fine with a square-shaped frame; however, the frame I want to use is the one below. Is there any way I can clip the image inside the frame without having to add a non-transparent square border around the image, i.e. using just the black line that I've already drawn? (on iPad)
You'll need to use Core Graphics and Quartz to handle this sort of clipping/graphics manipulation.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001066
If you're using UIBezierPath, you may be able to achieve the clipping you're after with the following process:
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-TPXREF101
Convert your UIBezierPath to a CGPath
Get your image into a CGContext
Add your CGPath to the context via CGContextAddPath
Clip your context using CGContextClip
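A rough Swift sketch of those four steps, using UIGraphicsImageRenderer rather than the older C calls but following the same sequence; framePath and sourceImage are assumed to exist already, and the names are illustrative.

import UIKit

func image(_ sourceImage: UIImage, clippedTo framePath: UIBezierPath, size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        let cg = ctx.cgContext
        cg.addPath(framePath.cgPath)   // UIBezierPath -> CGPath, added to the context
        cg.clip()                      // CGContextClip: later drawing is limited to the path
        sourceImage.draw(in: CGRect(origin: .zero, size: size))
    }
}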
Alternatively, if you don't want to be messing with paths (and depending on whether this technique is suitable for your situation; your description of the issue makes it hard to tell), it might be worth using image masking to achieve the effect you're after. See the first link and look under "Bitmap Images and Image Masks".

How to add a shadow to a UIImageView that fits the shape of the image content but with some rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials only cover adding a simple shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow can perfectly fit the shape of the content image if the image has an alpha channel. For example, if the image is an animal with a transparent background, the shadow is also shaped like that animal (not a rectangular shadow matching the UIImageView frame).
But that is not enough. What I need is to change the shadow so that it has some rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I have uploaded two images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also pin-shaped, but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow also animates away from the pin, so I believe such a shadow can be created programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer or knows better search terms, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the app, so pre-rendering the shadow in a tool like Photoshop is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically, you need to set a skew transform on the context, set up a shadow, and draw the image. You will probably have to use transparency layers as well.
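A rough sketch of that idea in Swift, with one liberty taken: instead of a context shadow inside a transparency layer, it builds a black silhouette from the image's alpha channel and draws it through a skew transform anchored at the image's base, then draws the untouched image on top. The skew factors, shadow alpha, and padding are illustrative assumptions, and blurring the silhouette is left out for brevity.

import UIKit

func imageWithSkewedShadow(_ image: UIImage) -> UIImage {
    let w = image.size.width
    let h = image.size.height
    let padding = h * 0.6                       // generous, so the leaning shadow stays in frame
    let canvasSize = CGSize(width: w + padding * 2, height: h + padding)

    // 1. A black silhouette that shares the image's alpha channel.
    let silhouette = UIGraphicsImageRenderer(size: image.size).image { ctx in
        let rect = CGRect(origin: .zero, size: image.size)
        image.draw(in: rect)
        UIColor.black.setFill()
        ctx.fill(rect, blendMode: .sourceIn)    // keep black only where the image has alpha
    }

    // 2. Compose: skewed, squeezed silhouette first, upright image on top.
    return UIGraphicsImageRenderer(size: canvasSize).image { ctx in
        let cg = ctx.cgContext
        let imageOrigin = CGPoint(x: padding, y: padding)

        cg.saveGState()
        // Pivot around the bottom-left of the image so the shadow stays anchored at its base.
        cg.translateBy(x: imageOrigin.x, y: imageOrigin.y + h)
        // c shears the top of the shadow to the right, d squeezes its height.
        cg.concatenate(CGAffineTransform(a: 1, b: 0, c: -0.5, d: 0.6, tx: 0, ty: 0))
        cg.translateBy(x: 0, y: -h)
        silhouette.draw(in: CGRect(origin: .zero, size: image.size), blendMode: .normal, alpha: 0.35)
        cg.restoreGState()

        image.draw(at: imageOrigin)
    }
}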
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible using Core Graphics or Core Image. To get a blurred shadow appearance, you can use a blur CIFilter (one from the kCICategoryBlur category). You can then convert the image to grayscale, and to get that compressed look you just need to resize and skew the image.
Once you have the two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.

iOS cropping and resizing ensuring rect stays visible

My app downloads images from a website. These images are all manner of sizes, from 800x600 up to 1800x1600. I analyze the image using facial recognition, and then want to resize and crop the image. However, it's important that the detected CGRect be visible on the cropped image.
I was using the excellent UIImage+Resize code with UIViewContentModeScaleAspectFill, but it doesn't seem to have a programmatic way of specifying an arbitrary location that needs to stay visible in the final image. So if a face is located near the 1600px edge of an 1800x1600 image, it'll get cut off.
Is there an easy solution to this, or do I need to dig around in the depths of UIImage+Resize? Any guidance would be appreciated!
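As a hedged sketch of one possible approach (independent of UIImage+Resize): pick the largest crop with the target aspect ratio, centre it on the detected face rect, clamp the crop so it stays inside the image, then scale it down. All names are illustrative, faceRect is assumed to be in the source image's pixel coordinates, and if the face rect is larger than the crop its edges can of course still be cut off.

import UIKit

func cropKeepingFaceVisible(_ image: UIImage, faceRect: CGRect, targetSize: CGSize) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let imageSize = CGSize(width: cgImage.width, height: cgImage.height)

    // Largest crop with the target aspect ratio that fits inside the image.
    let targetAspect = targetSize.width / targetSize.height
    var cropSize = CGSize(width: imageSize.width, height: imageSize.width / targetAspect)
    if cropSize.height > imageSize.height {
        cropSize = CGSize(width: imageSize.height * targetAspect, height: imageSize.height)
    }

    // Centre the crop on the face, then clamp it back inside the image bounds.
    var origin = CGPoint(x: faceRect.midX - cropSize.width / 2,
                         y: faceRect.midY - cropSize.height / 2)
    origin.x = min(max(origin.x, 0), imageSize.width - cropSize.width)
    origin.y = min(max(origin.y, 0), imageSize.height - cropSize.height)

    guard let cropped = cgImage.cropping(to: CGRect(origin: origin, size: cropSize)) else { return nil }

    // Scale the crop down to the requested output size.
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        UIImage(cgImage: cropped).draw(in: CGRect(origin: .zero, size: targetSize))
    }
}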