CAEmitterCell particle position - objective-c

I have a snow-falling animation using a particle emitter in a UIView. Thanks to a lot of research, I am able to animate the snow in various ways, including changing the image content of the particles. There is a circular region where I would like to change the snow image of the particles, and then change the image back when a particle leaves the region (snow falling through lamp light).
I was thinking of cooking up a formula to switch content images based on the position of the particle in an animation loop, but I am unable to get the x, y of a particle. Is it possible to get the position of individual particles?
Or perhaps particle collision detection?
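For context, a minimal Core Animation snow emitter of the kind described might be set up as below; the snowflake.png asset name and all numeric values are placeholder examples, not details from the question:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Minimal sketch of a falling-snow emitter; "snowflake.png" and the
// numbers are placeholders.
static CAEmitterLayer *MakeSnowLayer(UIView *view) {
    CAEmitterLayer *snow = [CAEmitterLayer layer];
    snow.emitterPosition = CGPointMake(CGRectGetMidX(view.bounds), -10.0);
    snow.emitterSize     = CGSizeMake(view.bounds.size.width, 1.0);
    snow.emitterShape    = kCAEmitterLayerLine;  // emit along the top edge

    CAEmitterCell *flake = [CAEmitterCell emitterCell];
    flake.contents          = (id)[UIImage imageNamed:@"snowflake.png"].CGImage;
    flake.birthRate         = 30.0;
    flake.lifetime          = 20.0;
    flake.velocity          = 40.0;       // points per second
    flake.velocityRange     = 10.0;
    flake.emissionLongitude = M_PI_2;     // +y is down in layer coordinates
    flake.scale             = 0.5;
    flake.scaleRange        = 0.3;

    snow.emitterCells = @[flake];
    return snow;
}
```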

Related

Rotate camera but render sprites so that they appear in their original world positions

TL;DR: I want to rotate the camera but render sprites according to their world positions, not the camera's position.
Howdy,
I'm currently using LibGDX and have come across an issue with camera/object rotation.
Say my camera has a rotation of 0 and there is an object (sprite) to the left of the camera's center:
[Image: Camera Normal (0 degrees rotation)]
The sprite renders fine when given a standard world coordinate; however, once I rotate my camera, that world coordinate differs from the camera's new (x, y) values.
If I then rotate my camera smoothly 90 degrees to the right (clockwise, so that the up direction faces right, as in the picture below), the object (sprite) that used to be on the left should appear to have rotated left relative to the camera (the rotation happens via the camera; the sprite just needs its position updated) and should now be below the camera's center point.
[Image: Camera Rotated (90 degrees clockwise)]
I'm confused as to how I would calculate the sprite's new locations/positions during the smooth rotation.
Cheers,
Solist.
After looking everywhere for a solution to this problem for 3 weeks, it turned out to be merely a matter of calling the method
batch.setProjectionMatrix(camera.combined);
before drawing, in order to update the sprites to their new positions as the camera rotation changes.
This link here explains how the Projection Matrix works.

How to clip part of a sprite based on its position?

I'm designing a game in Cocos2d, and at one point I have coins shooting out onto a platform from a Zelda-ish perspective. I'd like to display the coin's shadow sprite (a different sprite from the coin) on the platform, but mask or clip the shadow sprite at the edge of the platform. The coin can continue off the edge of the platform, but the shadow should stop at the edge. The platform also moves, so I need the shadow sprite to track with the platform's movement.
I thought a CCClippingNode could work for this, but I can't add one as a child of anything in a CCSpriteBatchNode, which is how I'm making my platform. Without having the shadow as a child of the platform, I'll mess up the z-order and the shadow movement won't track correctly. I also checked out Ray Wenderlich's tutorial on masking a sprite, but I don't think that will work, since it looks like it masks an individual sprite texture rather than an area of the view where the sprite shouldn't be displayed. Any ideas on how to solve this?
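For reference, the CCClippingNode approach mentioned above looks roughly like this in Cocos2d 2.x; the file name, stencil rectangle, and z value are placeholder examples, and the stencil would have to be re-positioned each frame to follow the platform:

```objc
// Sketch of the CCClippingNode idea (Cocos2d 2.x API). The shadow lives
// outside the batch node and is clipped by a stencil shaped like the
// platform; "coin_shadow.png" and the rectangle are placeholders.
CCSprite *shadow = [CCSprite spriteWithFile:@"coin_shadow.png"];

CCDrawNode *stencil = [CCDrawNode node];
CGPoint verts[] = { ccp(0, 0), ccp(200, 0), ccp(200, 60), ccp(0, 60) };
[stencil drawPolyWithVerts:verts count:4
                 fillColor:ccc4f(1, 1, 1, 1)
               borderWidth:0
               borderColor:ccc4f(0, 0, 0, 0)];

CCClippingNode *clip = [CCClippingNode clippingNodeWithStencil:stencil];
[clip addChild:shadow];
[self addChild:clip z:10];   // sibling of the batch node, not a child of it

// Each frame, keep the stencil glued to the moving platform, e.g.:
// stencil.position = platform.position;
```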

How to calibrate a camera and a robot

I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it can move every axis independently. The bed is transparent, and below the bed there is a camera that never moves; it is just a normal webcam (a PlayStation Eye).
I want to calibrate the robot and the camera so that when I click on a pixel in the image provided by the camera, the robot goes there. I know I could measure the translation and rotation between the two frames, but that would probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
To make everything easier, the Z-axis can be ignored, so the calibration will be over X and Y only.
It depends on how much error is acceptable to you.
We have a similar setup, where a camera looks at a plane with an object on it that can be moved.
We assume that the image plane and the work plane are parallel.
First, let's calculate the rotation. Put the tool in a position where you can see it at the center of the image, then move it along one robot axis and select the point on the image corresponding to the new tool position.
Those two points give you a vector in the image coordinate system.
The angle between this vector and the original image axis gives you the rotation.
The scale can be calculated in a similar way: the vector's length (in pixels) and the distance between the tool positions (in mm or cm) give you the scale factor between the image axes and the real-world axes.
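As a concrete sketch of this two-point procedure (plain C, which compiles as Objective-C; the pixel and millimeter values are made-up examples):

```objc
#include <math.h>
#include <stdio.h>

int main(void) {
    double p0x = 320.0, p0y = 240.0;   // tool seen at the image center (pixels)
    double p1x = 420.0, p1y = 250.0;   // tool after moving 50 mm on the robot X axis
    double movedMM = 50.0;

    double dx = p1x - p0x, dy = p1y - p0y;
    double lengthPx = sqrt(dx * dx + dy * dy);

    double rotation   = atan2(dy, dx);        // angle between image x-axis and robot x-axis
    double mmPerPixel = movedMM / lengthPx;   // scale factor

    // Map a clicked pixel back to robot coordinates (relative to p0) by
    // rotating the pixel offset by -rotation and scaling. If the image
    // y-axis points opposite to the robot's, one coordinate needs a sign flip.
    double cx = 500.0, cy = 300.0;            // example click
    double rx = cos(-rotation) * (cx - p0x) - sin(-rotation) * (cy - p0y);
    double ry = sin(-rotation) * (cx - p0x) + cos(-rotation) * (cy - p0y);

    printf("rotation = %.3f rad, scale = %.4f mm/px\n", rotation, mmPerPixel);
    printf("robot target: (%.1f, %.1f) mm\n", rx * mmPerPixel, ry * mmPerPixel);
    return 0;
}
```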
If this method doesn't provide enough accuracy, you can calibrate the camera for lens distortion and for its position relative to the plane using computer-vision techniques, which is more complicated.
See the following links
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html

How to warp a UIImage using OpenGL or any other method?

I am trying to develop an iOS app that warps a given image (UIImage) at selected locations.
What would be the right way to go about accomplishing this task? For now I'm doing some research on doing it with OpenGL (frankly, any heads-up on the framework would be nice too).
So, finally, the requirement is to warp the UIImage at some given locations (given their x, y coordinates).
If you're sufficiently familiar with (or willing to learn) OpenGL, then you could do this:
Create a flat, rectangular grid of points to be a mesh that will be displayed with OpenGL.
Apply the image to the mesh as a texture.
When distorting the image at a particular location, you can just decide which points on the mesh will be affected by the distortion, and move them.
You can push points out from the center, or in toward a center, or shift them all in the same direction. If the distortion affects a large area, then you change a lot of points (possibly changing those in the center by more than those near the edges of the affected area).
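A sketch of that mesh-displacement step, assuming a square grid and a linear falloff (both arbitrary choices; the texture coordinates stay at the undistorted grid positions, so the image stretches):

```objc
#import <UIKit/UIKit.h>
#include <math.h>

// Sketch of displacing grid points near a warp center; GRID, radius,
// and strength are arbitrary example values.
#define GRID 32

static CGPoint mesh[GRID + 1][GRID + 1];

static void InitMesh(CGSize size) {
    for (int y = 0; y <= GRID; y++)
        for (int x = 0; x <= GRID; x++)
            mesh[y][x] = CGPointMake(size.width  * x / (CGFloat)GRID,
                                     size.height * y / (CGFloat)GRID);
}

// Push mesh points away from `center`, with a linear falloff to `radius`.
static void WarpMesh(CGPoint center, CGFloat radius, CGFloat strength) {
    for (int y = 0; y <= GRID; y++) {
        for (int x = 0; x <= GRID; x++) {
            CGFloat dx = mesh[y][x].x - center.x;
            CGFloat dy = mesh[y][x].y - center.y;
            CGFloat d  = sqrt(dx * dx + dy * dy);
            if (d > 0 && d < radius) {
                CGFloat push = strength * (1 - d / radius); // strongest at center
                mesh[y][x].x += dx / d * push;
                mesh[y][x].y += dy / d * push;
            }
        }
    }
}
```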
Not sure what you mean by 'warp'. Do you mean skew it in three dimensions? If so, you can adjust the transform of the UIImageView you are displaying it in to get that effect (a CGAffineTransform covers 2D skews; for a perspective effect you would set the layer's CATransform3D).
If you mean some kind of image-processing warp, and you are using iOS 5, you can use Core Image for that.
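For example, a sketch using Core Image, assuming a distortion filter such as CIBumpDistortion is available on the target iOS version (the built-in filter set has grown across releases):

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Sketch of a Core Image warp around a point; assumes the
// CIBumpDistortion filter exists on the target OS.
static UIImage *BumpWarp(UIImage *image, CGPoint center, CGFloat radius) {
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];

    CIFilter *bump = [CIFilter filterWithName:@"CIBumpDistortion"];
    [bump setValue:input forKey:kCIInputImageKey];
    [bump setValue:[CIVector vectorWithX:center.x Y:center.y]
            forKey:@"inputCenter"];
    [bump setValue:@(radius) forKey:@"inputRadius"];
    [bump setValue:@(0.5)    forKey:@"inputScale"];

    CIContext *ctx = [CIContext contextWithOptions:nil];
    CGImageRef cg  = [ctx createCGImage:bump.outputImage fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cg];
    CGImageRelease(cg);
    return result;
}
```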

How to recognize a touch on a non-rectangular sprite image?

I have a sprite, and when it is touched the touch should be recognized. I used coordinates to do so: I took the bounding coordinates (min x, min y, max x, max y) of the sprite image. But the sprite image is not rectangular in shape, so even if I touch outside the sprite but still inside its rectangular bounds, the sprite is recognized.
For my application, though, only the sprite itself should be recognized, so I need just the sprite's own coordinates, but it is not a regular shape. I am using CCSprite in my program.
So, what can I do so that only the sprite itself is selected? Which classes should I use for this?
Thank you.
You could try one of the following...
Create a bounding box smaller than the absolute extents of the sprite image. Yes, it will be smaller than the sprite. This eliminates the dead-space click detection of the sprite, the trade-off being that parts of your sprite that look selectable won't be.
Use a circular bounding area to detect whether the user has clicked on your sprite (see the sketch after these suggestions). Again you will have the dead-space problem of my first suggestion, but the circle may give you better coverage of the sprite, giving you better results on touch detection.
This is a standard problem in physics collision-detection systems, which often end up using circles or rectangles as their collision bodies. I would go with either a circle or a rectangle smaller than your sprite as the bounding area. For finer detail than that, you could generate bounding-area polygons; that would, however, introduce a whole set of new issues and concerns.
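A sketch of the circle test, assuming the touch has already been converted into the sprite's parent coordinate space and that the sprite's anchor point is its center; the radius is something you would tune per sprite:

```objc
#import "cocos2d.h"

// Circle-vs-point test for touch detection; `radius` should sit inside
// the visible artwork, and `touchPoint` is in the sprite's parent space.
static BOOL CircleHit(CCSprite *sprite, CGPoint touchPoint, CGFloat radius) {
    CGFloat dx = touchPoint.x - sprite.position.x;
    CGFloat dy = touchPoint.y - sprite.position.y;
    return (dx * dx + dy * dy) <= (radius * radius);
}
```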
I am building a Cocos2D game right now, and what I do is first step through my sprites to see which ones the touch hit (they overlap in my app).
Then, for each sprite hit, I use [sprite convertTouchToNodeSpace:touch] to get an (x, y) coordinate inside the sprite, which I can use (although the y-axis is flipped) to index into the CGImage I created the sprite with.
If the pixel at the touch point is 'clear', i.e. alpha 0, then the sprite was not really touched, and I check the next sprite in the z-order to see whether it has color where it was touched. A sketch of the alpha check follows below.
Sometimes I think I should be using a two-color mask image to go along with each sprite rather than the sprite image itself. But I am Mr. make-it-work, then make-it-fast.
I realize this is not super efficient, but I do not have very many sprites, and I do this only for touches.
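A sketch of that alpha check, reading a single pixel by drawing the image into a 1x1 bitmap context (here y is measured from the top of the image, which matches the flipped-axis caveat above):

```objc
#import <UIKit/UIKit.h>

// Returns YES if the image pixel at (x, y) has non-zero alpha.
// (x, y) uses a top-left origin; x < width and y < height are assumed.
static BOOL PixelIsOpaque(CGImageRef image, size_t x, size_t y) {
    uint8_t pixel[4] = {0};
    size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(space);

    // Offset the draw so the requested pixel lands in the 1x1 context
    // (Core Graphics uses a bottom-left origin, hence the flip).
    CGContextDrawImage(ctx,
        CGRectMake(-(CGFloat)x, -(CGFloat)(h - 1 - y), w, h), image);
    CGContextRelease(ctx);

    return pixel[3] > 0;   // RGBA layout: byte 3 is alpha
}
```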