Animation in a game application in Android - android-animation

I am working on image animation. I have 200 transparent png images which I am trying to show one by one over a background image.
Can you tell me the best way to do it? The images should change in such a way that it appears a cartoon is running.
Thanks,
Pavan

Did you end up finding a solution for this? I would be interested in knowing. If you do not need to animate faster than one frame every 300 ms, then this may work for you; see my post:
See: 750 frame transparent PNG animation in ImageView at 23fps
Using AnimationDrawable could also be an option if you can split your cartoon up into small pieces of 40-50 frames at a time and then play them one after the other.
I'm still looking for a better solution for this so would be interested to see another way.

Related

Better way of defining several clickable areas in a single large image in Corona SDK

I have a large background image with several areas or objects in it which can be clicked to trigger an event. The method I'm currently using to accomplish this is slicing out images of the objects and areas, positioning them over the background image, and assigning a click handler to each.
This works for the moment, but I feel there should be a better way of doing it. One approach I thought of and tried is to fill the sliced images with black or white, position them over the background image, set their opacity to 0, make them hit-testable, and assign a click handler.
Does this method have an advantage over the previous one? Does making an image object transparent use less texture memory, or is it the same?
And are there other, better ways to do this? My main objective is to make the game use less texture memory and cut the overall project file size by using fewer of those sliced images.
Use display.newRect/display.newCircle to mark out the areas, make them transparent, and keep them hit-testable.
This should be more efficient than using images.

Is there a way to take an image file and make its background transparent via VB.NET?

We have a system where people have a face shot taken with a DSLR camera. We need the people's images with a transparent background. What we're currently doing is taking the image and editing and cropping it in Photoshop, removing the background with the Magic Eraser tool.
What I am looking for is a way to parse the image and automatically erase the semi-white background we have, along with the resizing and cropping. Is there some kind of library or code sample that does this without requiring manual intervention?
This is a really complex problem. As the answer below suggests, you'll need to do a fuzzy match on each pixel and set it to be transparent, but you also need to check other nearby pixels to make sure they are not close in color. A white tag on the shirt, white eyelids, hair, pale skin reflecting the flash: all are candidates to be removed by any greedy fuzzy logic.
Think about the Magic Wand tool in Photoshop. How good is it at detecting the edges of the person in the picture? Yeah, and that's the top standard of image editing software with thousands of engineering hours behind it.
This is not a feasible request for a Q&A format, and this is one of those things that humans just do better than machine. BUT, that doesn't mean it's not possible, and who knows, you might be the one to do it. Just don't do it in VB.NET please :)
Some code to get an idea of what you need to do (GetPixel/SetPixel is slow but simple; use LockBits if you need speed):
Dim source As New Bitmap(filepath)
' Work on a copy with an alpha channel so transparent pixels can actually be stored.
Dim faceShot As New Bitmap(source.Width, source.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb)
For y As Integer = 0 To source.Height - 1
    For x As Integer = 0 To source.Width - 1
        Dim pixel As Color = source.GetPixel(x, y)
        ' The following check is where the magic happens; use any fuzzy match on the color that suits you.
        ' Figure out your color range and do a percentage-wise fuzzy match.
        If pixel.R >= 250 AndAlso pixel.G >= 235 AndAlso pixel.B >= 215 Then ' between white and antique white
            faceShot.SetPixel(x, y, Color.FromArgb(0, pixel.R, pixel.G, pixel.B)) ' alpha = 0, i.e. transparent
        Else
            faceShot.SetPixel(x, y, pixel)
        End If
    Next
Next
You could use this as a starting point for processing a single image:
http://www.java2s.com/Code/VB/2D/ProcessanImageinvertPixel.htm
Basically, if you have a constant background color (like a TV green screen), it's just a matter of selecting pixels close to the color you are erasing and setting their alpha level to 0 (transparent). Treating the RGB values like XYZ coordinates, you can compute a 3D distance from your background color and make everything within a certain threshold transparent.
As an improvement, you could also make everything within another threshold semi-transparent so the edges right around hair and stuff like that look softer and less harsh.
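To make the distance idea concrete, here is a minimal sketch of the thresholding math (written in Swift purely as an illustration, not in VB.NET; the 40/80 thresholds are made-up numbers you would tune for your lighting):
import Foundation

// Treat RGB as 3-D coordinates and compare the distance to the background colour against two thresholds.
func alpha(forPixel p: (r: Double, g: Double, b: Double),
           background bg: (r: Double, g: Double, b: Double),
           hardThreshold: Double = 40, softThreshold: Double = 80) -> Double {
    let d = sqrt(pow(p.r - bg.r, 2) + pow(p.g - bg.g, 2) + pow(p.b - bg.b, 2))
    if d < hardThreshold { return 0 }                                                      // fully transparent
    if d < softThreshold { return (d - hardThreshold) / (softThreshold - hardThreshold) }  // soft edge
    return 1                                                                               // keep the pixel opaque
}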
Alternatively, you could probably do the same exact thing with good results in Photoshop, as it should support batch processing.
Edit: thinking about it some more, you may want to use a green-screen-type background instead of an off-white one like you described, since otherwise you may end up making the whites of people's eyes transparent. I would definitely try to batch it in Photoshop/GIMP/etc.

Best way to play video frame by frame - zoomed

I need some advice on playing video frame by frame...
Right now I shoot a video and extract all the frames using MPMoviePlayerController's thumbnailImageAtTime: for each frame.
The video could be zoomed as well. I am zooming by extracting the frames as mentioned above, then resizing and cropping them.
This would be great except that thumbnailImageAtTime: seems to be very slow. My videos will be less than 30 seconds long...most of the time only a few seconds, and it takes about 20 seconds on an iPhone 4S to grab 60 frames. If you think this should be faster I can post the code I am using, but it is pretty straightforward. I am performing it on a background thread so the UI is not affected.
I have been looking at AVFoundation to grab the frames, but have read that it is not exact and I need all 30 fps.
I am really looking for advice on the best way to do this. I need to be able to use a slider and buttons to move frame to frame backwards and forwards as well as jump to a specific frame. As I said the video might be digitally zoomed as well.
Should I not extract frames and instead just use the video file and move from frame to frame? If so, what is the best way to do this, because MPMoviePlayerController doesn't seem to allow me to move to an exact frame easily? Also, if I just use the video file, what is the best way to zoom? Can I go through each frame of an asset, resize and crop it, then save it back to the video file? Is this the best way? Can I achieve everything I want to do using AVFoundation?
I have been trying things for about a week now and I do have everything working, extracting the frames using MPMoviePlayerController...the speed is just unacceptable. If I could extract the frames very quickly, that solution would be the best in my opinion. I might mention I only have to extract the frames once, not each time the user clicks on the video...if that makes a difference.
I hope this all makes sense and sorry for rambling. Any help would be much appreciated!
After a bit of research I am going to go with AVFoundation to play the video frame by frame and not extract the frames. It works great.
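For anyone who lands here, a minimal sketch of that AVFoundation approach, assuming Swift, a local file URL (videoURL is a placeholder) and a 30 fps asset; this illustrates the technique rather than being the poster's actual code:
import AVFoundation

// Frame-accurate stepping with AVPlayer (the player should be paused while stepping).
let asset = AVURLAsset(url: videoURL)   // videoURL is assumed to exist
let item = AVPlayerItem(asset: asset)
let player = AVPlayer(playerItem: item)

item.step(byCount: 1)    // forward one frame
item.step(byCount: -1)   // back one frame (only if the media supports it; check canStepBackward)

// Jump to a specific frame with zero tolerance so the seek is frame-accurate.
let frameIndex: Int64 = 42
let fps: Int32 = 30
player.seek(to: CMTime(value: frameIndex, timescale: fps),
            toleranceBefore: .zero, toleranceAfter: .zero)
Digital zoom can then be done by scaling or repositioning the AVPlayerLayer instead of re-encoding the frames.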

Directly Record Screen on Mac

OK so I want to record the screen of a Mac directly to a .mov or .m4v. I've taken a look at Son of Grab from Apple, but I would prefer not to deal with screenshots and individual images and just work with video.
I thought there should be something in QTKit but I can't find it. I know this can be done in OpenGL, but 1) I don't know how and 2) I'd like to avoid that if possible.
Just to elaborate, I am recording from iSight using QTCaptureDeviceInput (and obviously a QTCaptureDevice), because I need the solution to work on Snow Leopard.
It seems like there should be a way to just target the screen as the input device for QTMediaTypeVideo.
Any help would be greatly appreciated.
You can use AVFoundation to do screen recording on the Mac. It's only available on 10.7 though.
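A rough sketch of that 10.7+ AVFoundation route in Swift (outputURL is a placeholder for a .mov file URL; this is an illustration, not production code):
import AVFoundation
import CoreGraphics

// Minimal delegate so the sample is self-contained.
final class RecorderDelegate: NSObject, AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection], error: Error?) {
        print("Finished recording to \(outputFileURL)")
    }
}

let session = AVCaptureSession()
session.sessionPreset = .high

// Capture the main display (on older SDKs this initializer is non-optional).
if let screenInput = AVCaptureScreenInput(displayID: CGMainDisplayID()),
   session.canAddInput(screenInput) {
    session.addInput(screenInput)
}

let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
}

let recorderDelegate = RecorderDelegate()
session.startRunning()
movieOutput.startRecording(to: outputURL, recordingDelegate: recorderDelegate)
// ...later: movieOutput.stopRecording() and session.stopRunning()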
You can use the CGDisplayCreateImage/CGDisplayCreateImageFromRect APIs (10.6+) to obtain still images of the screen and then make a movie out of them.
I'm not sure how good the performance will be, though.
I have found that when faced with the question "will it be fast enough or not", just give it a try. Do a quick test by grabbing frame after frame, say 1000 times, and time it. CGDisplayCreateImageFromRect is not that hard to call at all. I have called it for single screenshots of the whole screen when the mouse was clicked, and it hardly slowed my Mac down (only a basic dual-core machine).
Apple has two samples showing the two main ways this can be done:
ScreenSnapshot
SonOfGrab
It would be easy to modify these to do it say 1000 times in a loop!
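A quick-and-dirty timing sketch of that suggestion in Swift (assuming the main display; the 1000-frame count is arbitrary):
import CoreGraphics
import Foundation

let start = CFAbsoluteTimeGetCurrent()
var captured = 0
for _ in 0..<1000 {
    // Grab a full-screen still of the main display; CGDisplayCreateImageFromRect works the same way for a sub-rect.
    if CGDisplayCreateImage(CGMainDisplayID()) != nil {
        captured += 1
    }
}
let elapsed = CFAbsoluteTimeGetCurrent() - start
print("Captured \(captured) frames in \(elapsed) seconds (\(Double(captured) / elapsed) fps)")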

Image Sequencer using CoreAnimation

I have a list of images (say 180 images in a sequence) to animate. If I use UIImageView with its built-in image-sequence animation, I get memory issues.
I wanted to use the Core Animation API, but I really don't know how to do it.
What is the best way to do this?
Regards
iWasRobot :P
There's a very hard-to-Google example of how to perform view transitions with Core Animation; here it is:
http://developer.apple.com/library/ios/#samplecode/ViewTransitions/Introduction/Intro.html
Here you would need as few as 2 views. You would load images into the hidden view, then "transition" it in. Assuming your slideshow interval is long enough, you should have no issues with loading times.
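The same idea in a compact form, sketched in Swift with UIKit's built-in transition (imageNames is a hypothetical array of image names; only the current image needs to be decoded in memory at a time):
import UIKit

// Cross-dissolve to the next image; the previous UIImage can then be released.
func show(frame index: Int, of imageNames: [String], in imageView: UIImageView) {
    let nextImage = UIImage(named: imageNames[index])
    UIView.transition(with: imageView,
                      duration: 0.25,
                      options: .transitionCrossDissolve,
                      animations: { imageView.image = nextImage },
                      completion: nil)
}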
Another thing that comes to mind is paging with UIScrollView. Here you would need as few as 3 views:
http://ykyuen.wordpress.com/2010/05/22/iphone-uiscrollview-with-paging-example/
I hope this helps!