Collision detection of uneven shapes in iOS - objective-c

I am working on a drag and drop activity for iPad. I have a rectangular PNG image (see the image named obj2). When I drag obj1, it should react only when it is over the black portion of the rectangle.
if (CGRectIntersectsRect(obj1.frame, obj2.frame))
{
    NSLog(@" hit test done!! ");
}
Right now, this piece of code registers a hit even on the transparent area. How can I prevent that from happening?

For something as simple as your specific example (triangle and circle), the link that David Rönnqvist gives is very useful. You should definitely look at it to see some available tools. But for the general case, the best bet is clipping, drawing, and searching.
For some background, see Clipping a CGRect to a CGPath.
First, create an alpha-only bitmap image. This is explained in the above link.
Next, clip your context to one of your images using CGContextClipToMask().
Now, draw your other image onto the context.
Finally, search the bitmap data for any colored pixels (see the above link for example code).
If any of the pixels are colored, then there is some overlap.
Another, similar approach (which might actually be faster) is to draw each image into its own alpha-only CGBitmapContext, then walk the pixels in both contexts and see whether any position is >128 in both at the same time.
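Here is a rough sketch of that second approach. The function name, the >128 threshold handling, and the coordinate assumptions are mine, not from the linked post; in particular, both frames are assumed to be expressed in the same bottom-left-origin (Core Graphics) coordinate space, so UIKit frames would need to be flipped first.

BOOL SpritesCollide(UIImage *image1, CGRect frame1, UIImage *image2, CGRect frame2)
{
    // If the bounding rectangles don't even intersect, there is nothing to test.
    CGRect overlap = CGRectIntersection(frame1, frame2);
    if (CGRectIsNull(overlap)) return NO;

    size_t width  = (size_t)ceil(overlap.size.width);
    size_t height = (size_t)ceil(overlap.size.height);

    // One alpha-only (8 bits per pixel) bitmap per image, covering only the overlap rect.
    CGContextRef ctx1 = CGBitmapContextCreate(NULL, width, height, 8, width, NULL,
                                              (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextRef ctx2 = CGBitmapContextCreate(NULL, width, height, 8, width, NULL,
                                              (CGBitmapInfo)kCGImageAlphaOnly);

    // Draw each image so that the overlap rect maps onto the bitmap's origin.
    CGContextDrawImage(ctx1, CGRectOffset(frame1, -overlap.origin.x, -overlap.origin.y), image1.CGImage);
    CGContextDrawImage(ctx2, CGRectOffset(frame2, -overlap.origin.x, -overlap.origin.y), image2.CGImage);

    // A collision means some pixel is sufficiently opaque in BOTH bitmaps.
    const uint8_t *alpha1 = CGBitmapContextGetData(ctx1);
    const uint8_t *alpha2 = CGBitmapContextGetData(ctx2);
    BOOL hit = NO;
    for (size_t i = 0; i < width * height && !hit; i++) {
        hit = (alpha1[i] > 128 && alpha2[i] > 128);
    }

    CGContextRelease(ctx1);
    CGContextRelease(ctx2);
    return hit;
}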

Blender border render internals

I would like to know how Blender's border render works internally. How can Blender compute lighting if it has no information about the lights in the tiles it won't render? I have not found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or give me some reference)?
The render border setting only alters what part of the image is rendered, it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera: the object behind the camera will show in the reflection. The border setting doesn't change the reflection in the object; it only changes what part of the image is rendered.
Rendering an image starts at the pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour the specific pixel will be. Each ray will bounce around in the scene from object to object to light source based on render settings to calculate the final result. While the render border will reduce the pixels used as the starting point for each ray, it does not reduce the objects or lights in the scene that each ray may come into contact with. Each ray going through the scene will see every visible object and light in the scene that can influence the final result for each pixel.
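To make that distinction concrete, here is a toy sketch (plain C, invented for illustration, not Blender source): the border only limits which pixels are shaded, while shading each pixel still consults every light in the scene.

typedef struct { float x, y, brightness; } Light;

/* Shades one pixel using EVERY light in the scene, wherever that light sits. */
static float shadePixel(int px, int py, const Light *lights, int lightCount)
{
    float value = 0.0f;
    for (int i = 0; i < lightCount; i++) {
        float dx = px - lights[i].x, dy = py - lights[i].y;
        value += lights[i].brightness / (1.0f + dx * dx + dy * dy);
    }
    return value;
}

/* The render border only restricts which pixels we start from. */
void renderWithBorder(float *image, int imageWidth,
                      int minX, int minY, int maxX, int maxY,
                      const Light *lights, int lightCount)
{
    for (int y = minY; y < maxY; y++) {
        for (int x = minX; x < maxX; x++) {
            /* Lights positioned outside the border still contribute here. */
            image[y * imageWidth + x] = shadePixel(x, y, lights, lightCount);
        }
    }
}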
This conference video explains ray types and might give you a better grasp of how a ray goes through a scene to get the final image.

How to make a PhysicsBody based on Alpha Values

Suppose there is a scene as follows:
There is a scene with the same size as the frame of the device. The scene has a red ball, which is able to move throughout the 'world'. This world is defined by black and white areas, where the ball is ONLY able to move in the area that is white. Here is a picture to help explain:
Parts of the black area can be erased, as if the user were drawing with white color over the scene. This would mean that the area in which the ball can be moved is constantly changing. Now, how would one go about implementing a physicsBody for the edge between the white and black areas?
I tried redefining the physicsBody every time it is changed, but once the shape becomes complex enough, this isn't a viable solution at all. I tried creating a two-dimensional array of 'boxes' that are invisible and specify whether most of the area within each box is white or black, and if the ball touched a box that was black, it would be pushed back. However, this required heavy rendering and iterating over the array too much. Since my original array contained boxes a little bigger than a pixel, I tried making these boxes bigger to smooth the motion a little, but this eventually caused part of the ball to be stopped by white areas and appear to be inside the black area. This was undesired, since the user could feel invisible barriers that they seemed to be hitting.
I tried searching for other methods to implement this 'destructible terrain' type scene, but the solutions that I found and tried were using other game engines. To further clarify, I am using Objective-C and Apple's SpriteKit framework; and I am not looking for a detailed class full of code, but rather some pseudo-code or implementation ideas that would lead me to a solution.
Thank you.
If your deployment target is iOS 8, this may be what you're looking for...
+ bodyWithTexture:alphaThreshold:size:
Here's a description from Apple's documentation:
Creates a physics body from the contents of a texture. Only texels that exceed a certain transparency value are included in the physics body.
where a texel is a texture element. You will need to convert your image to an SKTexture before creating the SKPhysicsBody.
I'm not sure if it will allow for a hole in the middle like your drawing. If not, I suspect you can connect two physics bodies, a left half and a right half, to form the hole.
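As a rough sketch (the image name, threshold, and node setup are made up for illustration), building the terrain body could look like this. Note that the walkable area would need to be transparent (rather than white) in the texture, since only sufficiently opaque texels become part of the body:

// Inside your SKScene subclass. The texture's opaque pixels are the walls;
// the area the ball may roam through is transparent.
SKTexture *terrainTexture = [SKTexture textureWithImageNamed:@"terrain"];
SKSpriteNode *terrain = [SKSpriteNode spriteNodeWithTexture:terrainTexture];
terrain.physicsBody = [SKPhysicsBody bodyWithTexture:terrainTexture
                                      alphaThreshold:0.5f   // ignore texels that are mostly transparent
                                                size:terrain.size];
terrain.physicsBody.dynamic = NO;   // static terrain; only the ball moves
[self addChild:terrain];

When the user erases part of the terrain, you would regenerate the texture from the updated image and rebuild the body the same way.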

How to add a shadow to a UIImageView which fits the shape of the image content but with some rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials are fairly simple and only cover adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. Say, for example, the image is an animal with a transparent background; the shadow shape is then also that of the animal (not a rectangular shadow the size of the UIImageView frame).
But that is not enough. What I need is to add some changes to the shadow so it has a rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I have uploaded two images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also "pin shaped", but it is not simply "offset" with a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow is also animated away from the pin, so I believe that such a shadow can be made programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately, that cannot produce this effect.
If anyone know the answer or know any better words to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
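As a rough sketch of that idea (all numbers are invented; instead of a context shadow it tints the image's opaque pixels into a flat silhouette and draws that silhouette through a skewed CTM, so the shadow leans and is squashed while the pin itself stays upright):

- (UIImage *)pinImageWithSkewedShadow:(UIImage *)image
{
    // 1. Tint the image's visible pixels into a translucent black silhouette.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    [[UIColor colorWithWhite:0.0 alpha:0.35] setFill];
    UIRectFillUsingBlendMode((CGRect){CGPointZero, image.size}, kCGBlendModeSourceIn);
    UIImage *silhouette = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Draw the skewed silhouette first, then the upright image on top.
    CGSize canvas = CGSizeMake(image.size.width * 2.0, image.size.height);
    CGFloat baseY = image.size.height;              // the pin's tip / ground line

    UIGraphicsBeginImageContextWithOptions(canvas, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSaveGState(ctx);
    // Anchor the skew at the base so the shadow "grows" out of the pin's tip,
    // leaning to the right and compressed to half height.
    CGContextTranslateCTM(ctx, 0, baseY);
    CGContextConcatCTM(ctx, CGAffineTransformMake(1.0, 0.0, -0.7, 0.5, 0.0, 0.0));
    CGContextTranslateCTM(ctx, 0, -baseY);
    [silhouette drawAtPoint:CGPointZero];
    CGContextRestoreGState(ctx);

    [image drawAtPoint:CGPointZero];                // the pin itself

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}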
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use a blur CIFilter (for example CIGaussianBlur). You can then convert the image to grayscale, and to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or you can add it as a separate view.
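A minimal sketch of that last step, assuming shadowImage is the pre-rendered grey, skewed silhouette and pinView is the view showing the pin (both names are made up here):

CIImage *input = [CIImage imageWithCGImage:shadowImage.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@4.0 forKey:kCIInputRadiusKey];

CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef blurredShadow = [ciContext createCGImage:blur.outputImage fromRect:input.extent];

// Hang the blurred shadow behind the pin as its own layer.
CALayer *shadowLayer = [CALayer layer];
shadowLayer.contents = (__bridge id)blurredShadow;
shadowLayer.frame = pinView.frame;   // position/offset it as needed
[pinView.superview.layer insertSublayer:shadowLayer below:pinView.layer];
CGImageRelease(blurredShadow);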
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.

iOS cropping and resizing ensuring rect stays visible

My app downloads images from a website. These images are all manner of sizes, from 800x600 up to 1800x1600. I analyze the image using facial recognition, and then want to resize and crop the image. However, it's important that the detected CGRect be visible on the cropped image.
I was using the excellent UIImage+Resize code and using UIViewContentModeScaleAspectFill, but it doesn't seem to have a programmatic way of specifying an arbitrary location that needs to be visible in the final image. So if a face is located at the 1600px range of an 1800x1600 image, it'll get cut off.
Is there an easy solution to this, or do I need to dig around in the depths of UIImage+Resize? Any guidance would be appreciated!

App graphic making (transparent and no extra spaces)

I am a coder but not a graphics maker. I can produce graphics that look decent enough visually, but I cannot produce graphics that will technically "work." This is what I mean:
I am using CGRectIntersectsRect for colliding images. My image has SOME extra space which I have made completely transparent using Adobe Photoshop, but even though this extra transparent space is not visible, it is still PART of the image. So when CGRectIntersectsRect is called, it detects a touch between the two images even when the other image only touches the transparent space, and my code is executed while it looks like nothing was actually hit. I only want my code to be executed if it hits the actual COLORED part of the image. Here are two things that could help me with that; questions follow.
Learn how to leave NO EXTRA SPACE on an image in Photoshop. How can I do this? Are there tutorials?
Have CGRectIntersectsRect report a hit only when touching a colored part of an image. Is there a way to do this?
Thank you guys!
Regarding your question #1, it depends. All images are rectangular, all of them. So, if your sprite is rectangular, you can crop it in Photoshop to just the rectangular area. But if you want to handle, say, a circular ball, then you can't do such a thing as "remove extra space". Your circular ball will always be stored in a rectangular image, with transparent space in the corners.
Learn how to leave NO EXTRA SPACE on an image in Photoshop. How can I do this? Are there tutorials?
You can manually select an area using the Rectangular Marquee Tool and Image > Crop or automatically trim the image based on an edge pixel color using Image > Trim.
Have CGRectIntersectsRect report a hit only when touching a colored part of an image. Is there a way to do this?
You can use pixel-perfect collisions or create better bounding shapes for your game objects. For example, instead of using pixel-perfect collision for a spaceship like this one, you could use a triangle for the wings, a rectangle for the body, and a triangle for the head.
Pixel-perfect collision
One way you could implement it would be to
Have a blank image in memory.
Draw visible pixels from one image in blue (#0000ff).
Draw visible pixels from the other image in red (#ff0000).
If there are any purple pixels (#ff00ff) in the image, then there's an intersection (a rough sketch of this follows below).
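Here is one way that idea could be sketched out with Core Graphics. The function name and thresholds are made up, and both frames are assumed to be in the same bottom-left-origin (Core Graphics) coordinate space; UIKit frames would need to be flipped first.

BOOL ImagesOverlap(UIImage *imageA, CGRect rectA, UIImage *imageB, CGRect rectB)
{
    // If the bounding rectangles don't intersect, the pixels can't either.
    CGRect intersection = CGRectIntersection(rectA, rectB);
    if (CGRectIsNull(intersection)) return NO;

    size_t width  = (size_t)ceil(intersection.size.width);
    size_t height = (size_t)ceil(intersection.size.height);

    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4, rgb,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);

    // Shift the context so the intersection's origin maps to (0, 0).
    CGContextTranslateCTM(ctx, -intersection.origin.x, -intersection.origin.y);

    // Blue silhouette of image A (visible pixels only, via its alpha mask).
    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, rectA, imageA.CGImage);
    CGContextSetRGBFillColor(ctx, 0, 0, 1, 1);
    CGContextFillRect(ctx, rectA);
    CGContextRestoreGState(ctx);

    // Red silhouette of image B, added on top so overlaps turn "purple".
    CGContextSetBlendMode(ctx, kCGBlendModePlusLighter);
    CGContextClipToMask(ctx, rectB, imageB.CGImage);
    CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);
    CGContextFillRect(ctx, rectB);

    // Any pixel with both the red and the blue channel set means an overlap.
    const uint8_t *pixels = CGBitmapContextGetData(ctx);
    BOOL overlap = NO;
    for (size_t i = 0; i < width * height * 4 && !overlap; i += 4) {
        overlap = (pixels[i] > 128 && pixels[i + 2] > 128);   // R and B channels
    }

    CGContextRelease(ctx);
    return overlap;
}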
Alternative collision detection solution
If your game is physics-based, then you can use a physics engine like Box2D. You can use circles, rectangles, and polygons to represent all of your game objects and it'll give you accurate results without unnecessary overhead.
For collision detection for non-rectangular shapes, you should look into one of the many game and/or physics libraries available for iOS. Cocos2d coupled with Box2d or chipmunk are popular choices.
If you want to do it yourself, you'll need to start with something like a custom CGPath tracing the actual shape of each object, then use a function like CGPathContainsPoint (that's from memory, it may be wrong). But it is not a simple job. Angry Birds uses Box2D, AFAIK.
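For example, a minimal sketch of that manual approach (the triangle and the test point are made up; in practice the path would trace your sprite's actual outline):

// Trace the visible shape of the sprite, in its own coordinate space.
CGMutablePathRef outline = CGPathCreateMutable();
CGPathMoveToPoint(outline, NULL, 50, 0);        // nose
CGPathAddLineToPoint(outline, NULL, 100, 100);  // right wing tip
CGPathAddLineToPoint(outline, NULL, 0, 100);    // left wing tip
CGPathCloseSubpath(outline);

// Only count a hit when the other object's point is inside the traced shape,
// not merely inside the PNG's transparent bounding box.
CGPoint otherCenter = CGPointMake(60, 40);      // e.g. the other sprite's centre, converted to the same space
BOOL hit = CGPathContainsPoint(outline, NULL, otherCenter, false);
CGPathRelease(outline);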