Animating frame images with an ease effect on iPad - Objective-C

Let me show you an example (360° 3D Object Rotator). Demo: http://activeden.net/item/interactive-renders-360-deg-3d-object-rotator/39718?ref=mixDesign
As you can see, there is a 3D camera view rotating on mouse events. It is actually a collection of images (frames) animated frame by frame in response to the mouse.
I want to implement this animation in Objective-C using a swipe gesture (or maybe I should use another gesture?), so that I can rotate it with my finger to the left and to the right, with a smooth ease effect that depends on the swipe velocity.
Note: I have the images for each frame ready.
Sample code or online tutorials showing how to do this would really help me.
Should I use an external graphics library to keep performance acceptable? I have hundreds of PNG images, each around 300 KB.
Thank you in advance, I really need your help!

Maybe it will be easier to go with touchesBegan:, touchesMoved:, and touchesEnded: here? This would allow you to react to velocity and direction changes very quickly.
Update: example can be found here.
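A rough sketch of that touch-based approach (the frames array, imageView, frameIndex and lastX ivars, and the 5-point step per frame are all assumptions for illustration):
// Assumed ivars: NSArray *frames; UIImageView *imageView; NSInteger frameIndex; CGFloat lastX;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    lastX = [[touches anyObject] locationInView:self.view].x;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGFloat nowX = [[touches anyObject] locationInView:self.view].x;
    // Every 5 points of horizontal movement advances (or rewinds) one frame
    NSInteger steps = (NSInteger)((nowX - lastX) / 5.0);
    if (steps != 0)
    {
        NSInteger count = (NSInteger)[frames count];
        frameIndex = (frameIndex + steps) % count;
        if (frameIndex < 0) frameIndex += count;
        imageView.image = [frames objectAtIndex:frameIndex];
        lastX = nowX;
    }
}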

I don't think you should use a swipe gesture here. I recommend a UILongPressGestureRecognizer with a short minimumPressDuration.
Here is some example code:
UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressGesture:)];
longPress.delegate = self;
longPress.minimumPressDuration = 0.05;
[viewWithImage addGestureRecognizer:longPress];
float startX;
float displacement = 0;

- (IBAction)handleLongPressGesture:(UILongPressGestureRecognizer *)sender
{
    float nowX;

    if (sender.state == UIGestureRecognizerStateBegan)
    {
        startX = [sender locationInView:viewWithImage].x;
    }
    if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled)
    {
        // ... do something at end ...
        return;
    }

    nowX = [sender locationInView:viewWithImage].x;
    displacement = nowX - startX;

    // set the right rotation with the displacement value
    [self rotateImageWith:displacement];
}
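For completeness, a possible sketch of the rotateImageWith: method used above (assumptions: frames is the ordered array of pre-rendered images, startFrameIndex is the frame shown when the gesture began, viewWithImage is a UIImageView, and 5 points of displacement per frame is an arbitrary sensitivity):
- (void)rotateImageWith:(float)displacement
{
    NSInteger count = (NSInteger)[frames count];
    // Map the horizontal displacement to a frame offset (5 points per frame, chosen arbitrarily)
    NSInteger offset = (NSInteger)(displacement / 5.0f);
    NSInteger index = (startFrameIndex + offset) % count;
    if (index < 0)
    {
        index += count;
    }
    viewWithImage.image = [frames objectAtIndex:index];
}
The ease-out on release could then be handled in the UIGestureRecognizerStateEnded branch, for example by letting a CADisplayLink keep stepping the frame index while decaying the last measured velocity.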

Related

Cocos 2d background map gesture

Cocos2d is pretty new to me, so I don't know what I should do in this situation:
I want to make a game that's something like Risk. I made a background image of a world map (just to test), and on this map I want a swipe gesture so I can move across the map on my iPad (the map is pretty big, so I want to swipe it around).
My problem is that I don't know what the objects I should use are called, or how best to implement the gestures (do I need to calculate the movement myself?).
Thanks!
Stefan.
You could connect UIKit's UIPanGestureRecognizer to CCDirector's view and handle the pan gesture in your CCLayer subclass. That way, you have a handler method that moves the background with every pan movement. (Code is for cocos2d 1.0.1; something similar can be done with the 2.0 version.)
UIPanGestureRecognizer *pan = [[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)] autorelease];
CCDirector *director = [CCDirector sharedDirector];
[[director openGLView] addGestureRecognizer:pan];
Handler method would look like this:
- (void)handlePanGesture:(UIGestureRecognizer *)gestureRecognizer {
    // If more than one pan gesture recognizer is connected to this method,
    // you should keep a reference to pan and check that gestureRecognizer is equal to it
    switch (gestureRecognizer.state) {
        case UIGestureRecognizerStateBegan: {
            // Do something that needs to be done when the pan gesture starts
            break;
        }
        case UIGestureRecognizerStateChanged: {
            // Get the pan gesture recognizer's translation
            CGPoint translation = [(UIPanGestureRecognizer *)gestureRecognizer translationInView:gestureRecognizer.view];
            // Invert Y since position and offset are calculated in GL coordinates
            translation = ccp(translation.x, -translation.y);
            // Here you should move your background, probably in the opposite direction of the translation vector, something like
            background.position = ccp(background.position.x - translation.x, background.position.y - translation.y);
            // Reset the pan gesture recognizer's translation
            [(UIPanGestureRecognizer *)gestureRecognizer setTranslation:CGPointZero inView:gestureRecognizer.view];
            break;
        }
        case UIGestureRecognizerStateEnded: {
            // Do some work that should be done after panning is finished
            break;
        }
        default:
            break;
    }
}
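If the map is larger than the screen, you will probably also want to clamp the background position so the user cannot pan past the map's edges. A rough sketch of that clamping (an assumption on top of the handler above; it presumes the background sprite's anchorPoint is ccp(0, 0)):
CGSize winSize = [[CCDirector sharedDirector] winSize];
CGSize mapSize = background.contentSize;

// Proposed replacement for the position update in UIGestureRecognizerStateChanged:
CGPoint newPosition = ccp(background.position.x - translation.x,
                          background.position.y - translation.y);
// Keep the map covering the whole screen: left/bottom edge at most at 0,
// right/top edge at least at the screen edge
newPosition.x = MIN(0, MAX(newPosition.x, winSize.width - mapSize.width));
newPosition.y = MIN(0, MAX(newPosition.y, winSize.height - mapSize.height));
background.position = newPosition;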
I think you're looking for this to add an object:
CCSprite *objectName = [CCSprite spriteWithFile:@"fileName.png"];
[self addChild:objectName];
By default, I believe the object will be in the bottom left corner.

iOS - Math help - base image zooms with pinch gesture, need overlaid images to adjust X/Y coords relative to it

I have an iPad application that has a base image UIImageView (in this case a large building or site plan or diagram) and then multiple 'pins' can be added on top of the plan (visually similar to Google Maps). These pins are also UIImageViews and are added to the main view on tap gestures. The base image is also added to the main view on viewDidLoad.
I have the base image working with the pinch gesture for zooming, but obviously when you zoom the base image, all the pins stay at the same x and y coordinates of the main view and lose their relative positioning on the base image (whose x, y and width, height coordinates have changed).
So far I have this...
- (IBAction)planZoom:(UIPinchGestureRecognizer *)recognizer
{
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;

    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGRect pinFrame = pin.frame;

            // ****************************************
            // code to reposition the pins goes here...
            // ****************************************

            pin.frame = pinFrame;
        }
    }
}
I need help with the math to reposition the pins' x/y coordinates so they retain their relative position on the zoomed-in or zoomed-out plan/diagram. The pins should not be scaled/zoomed at all in terms of their width or height - they just need new x and y coordinates relative to their initial positions on the plan.
I have tried to work the math out myself but have struggled, and unfortunately I am not yet acquainted with the SDK well enough to know whether there is anything built in to help.
Help with this math related problem would be really appreciated! :)
Many thanks,
Michael.
InNeedOfMathTuition.com
First, you might try embedding your UIImageView in a UIScrollView so zooming is largely accomplished for you. You can then set the max and min scale easily, and you can scroll around the zoomed image as desired (especially if your pins are subviews of the UIImageView or something else inside the UIScrollView).
As for scaling the locations of the pins, I think it would work to store the original x and y coordinates of each pin (i.e. when the view first loads, when they are first positioned, at scale 1.0). Then when the view is zoomed, set x = (originalX * zoomScale) and y = (originalY * zoomScale).
I had the same problem in an iOS app a couple of years ago, and if I recall correctly, that's how I accomplished it.
EDIT: Below is more detail about how I accomplished this (I'm looking my old code now).
I had a UIScrollView as a subview of my main view, and my UIImageView as a subview of that. My buttons were added to the scroll view, and I kept their original locations (at zoom 1.0) stored for reference.
In the -(void)scrollViewDidScroll:(UIScrollView *)scrollView method:
for (id element in myButtons)
{
    UIButton *theButton = (UIButton *)element;
    CGPoint originalPoint = ...; // get the original location however you want
    [theButton setFrame:CGRectMake(
        (originalPoint.x - theButton.frame.size.width / 2) * scrollView.zoomScale,
        (originalPoint.y - theButton.frame.size.height / 2) * scrollView.zoomScale,
        theButton.frame.size.width,
        theButton.frame.size.height)];
}
For the -(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView method, I returned my UIImageView. My buttons scaled in size, but I didn't include that in the code above. If you're finding that the pins are scaling in size automatically, you might have to store their original sizes as well as original coordinates and use that in the setFrame call.
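For reference, a minimal sketch of the scroll view zoom setup described above (the scrollView and planImage names and the maximum zoom scale of 4.0 are assumptions), to be paired with a viewForZoomingInScrollView: implementation that returns the image view:
- (void)viewDidLoad
{
    [super viewDidLoad];
    scrollView.delegate = self;              // the controller adopts UIScrollViewDelegate
    scrollView.minimumZoomScale = 1.0;
    scrollView.maximumZoomScale = 4.0;
    scrollView.contentSize = planImage.frame.size;
}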
UPDATE...
Thanks to 'Mr. Jefferson's' help in his answer above, albeit with a differing implementation, I was able to work this one through as follows...
I have a scrollView which has a plan/diagram image as a subview. The scrollView is setup for zooming/panning etc, this includes adding UIScrollViewDelegate to the ViewController.
On user double tapping on the plan/diagram a pin image is added as a subview to the scrollView at the touch point. The pin image is a custom 'ZonePin' class which inherits from UIImageView and has a couple of additional properties including 'baseX' and 'baseY'.
The code for adding the pins...
- (IBAction)planDoubleTap:(UITapGestureRecognizer *)recognizer
{
    UIImage *image = [UIImage imageNamed:@"Pin.png"];
    ZonePin *newPin = [[ZonePin alloc] initWithImage:image];

    CGPoint touchPoint = [recognizer locationInView:planContainer];
    CGFloat placementX = touchPoint.x - (image.size.width / 2);
    CGFloat placementY = touchPoint.y - image.size.height;

    newPin.frame = CGRectMake(placementX, placementY, image.size.width, image.size.height);
    newPin.zoneRef = [NSString stringWithFormat:@"%@%d", @"BF", pinSeq++];
    newPin.baseX = placementX;
    newPin.baseY = placementY;

    [planContainer addSubview:newPin];
}
I then have two functions for handling the scrollView interaction and this handles the scaling/repositioning of the pins relative to the plan image. These methods are as follows...
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return planImage;
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGFloat newX, newY;
            newX = (pin.baseX * scrollView.zoomScale) + (((pin.frame.size.width * scrollView.zoomScale) - pin.frame.size.width) / 2);
            newY = (pin.baseY * scrollView.zoomScale) + ((pin.frame.size.height * scrollView.zoomScale) - pin.frame.size.height);

            CGRect pinFrame = pin.frame;
            pinFrame.origin.x = newX;
            pinFrame.origin.y = newY;
            pin.frame = pinFrame;
        }
    }
}
For reference, the calculations for positioning the pins centre the pin image on the x axis but bottom-align it on the y axis, by the nature of them being pins.
The only thing left for me to do is to reverse the calculations used in the scrollViewDidScroll method when adding pins while zoomed in. The code above for adding pins only works properly when scrollView.zoomScale is 1.0.
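For what it's worth, a rough sketch of that reverse calculation (an assumption, inverting the scrollViewDidScroll: formulas above; it presumes placementX and placementY in planDoubleTap: are measured in the zoomed coordinate space):
// Recover the zoom-1.0 base coordinates from a pin placed at the current zoom level
CGFloat zoom = scrollView.zoomScale;
newPin.baseX = (placementX - (((image.size.width * zoom) - image.size.width) / 2)) / zoom;
newPin.baseY = (placementY - ((image.size.height * zoom) - image.size.height)) / zoom;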
Other than that, it now works great! :)

CALayer transforming

I need to rotate a square when the user does a UIRotationGesture.
I have the gesture all set up. The problem is that every time the user moves their fingers, the square returns to the starting position and then animates to the new position, rather than moving from the previous position to the new position.
i.e. if the rotation of the square is 90 degrees and the user continues to rotate to 100, the square goes back to 0 degrees and then animates to 100.
Essentially I want the square to mirror the user when they perform a rotate gesture.
- (void)respondToGesture:(UIRotationGestureRecognizer *)rec {
    NSLog(@"Rotation: %f", rec.rotation);
    [self rotateWithRadian:rec.rotation];

    if (rec.state == UIGestureRecognizerStateEnded) {
        NSLog(@"gesture ended");
    }
}

- (void)rotateWithRadian:(float)radian {
    CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation"];
    spin.removedOnCompletion = NO;
    spin.fillMode = kCAFillModeForwards;
    [spin setByValue:[NSNumber numberWithFloat:radian]];
    [spin setDuration:1.0];
    [squarelayer addAnimation:spin forKey:@"spinAnimation"];
}
Is there a reason you're not directly setting the layer's transform instead of using an animation?
This should do what you want:
- (void)respondToGesture:(UIRotationGestureRecognizer *)rec {
    if (rec.state == UIGestureRecognizerStateChanged) {
        CGAffineTransform currentTransform = squareLayer.affineTransform;
        squareLayer.affineTransform = CGAffineTransformRotate(currentTransform, rec.rotation);
        rec.rotation = 0;
    }
}
You also need to adjust squareLayer.anchorPoint if you want the rotation to happen around the centre of the user's rotation instead of the centre of the layer.
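A sketch of one way to do that anchor-point adjustment without the layer visibly jumping (a generic helper, not taken from the original answer):
- (void)setAnchorPoint:(CGPoint)anchorPoint forLayer:(CALayer *)layer
{
    // Convert the old and new anchor points into the layer's coordinate space
    CGPoint newPoint = CGPointMake(layer.bounds.size.width * anchorPoint.x,
                                   layer.bounds.size.height * anchorPoint.y);
    CGPoint oldPoint = CGPointMake(layer.bounds.size.width * layer.anchorPoint.x,
                                   layer.bounds.size.height * layer.anchorPoint.y);
    newPoint = CGPointApplyAffineTransform(newPoint, [layer affineTransform]);
    oldPoint = CGPointApplyAffineTransform(oldPoint, [layer affineTransform]);

    // Shift the position by the same amount the anchor point moved,
    // so the layer stays where it is on screen
    CGPoint position = layer.position;
    position.x += newPoint.x - oldPoint.x;
    position.y += newPoint.y - oldPoint.y;

    layer.anchorPoint = anchorPoint;
    layer.position = position;
}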
There is an open-source project ( https://github.com/kirbyt/KTOneFingerRotationGestureRecognizer ) for single-finger rotation. You can check it out; it will be much better. Even if you only want the normal rotation gesture, you can look into its code for better rotation handling. It uses the centre as the origin.

taking image snapshot of CATiledLayer-backed view in UIScrollView

I've got a custom map view which is made of a UIScrollView. The scroll view's subview is backed by a CATiledLayer. Everything works great here. Panning & zooming loads up new map tiles and everything performs well.
What I want to do is capture frames of video of animations to this scroll view. Essentially, I want to create a video of animated changes to the scroll view's contentOffset and zoomScale.
I know that the concept is sound as I can get the private API function UIGetScreenImage() to capture the app's screen at, say, 10fps, combine these images, and I get playback animations that are smooth and have the timing curves used by the scroll view animations.
My problem, of course, is that I can't use the private API. Going through the alternatives outlined by Apple here leaves me with pretty much one supposedly valid option: asking a CALayer to renderInContext and taking a UIGraphicsGetImageFromCurrentImageContext() from that.
This just doesn't seem to work with CATiledLayer-backed views, though. A blocky, un-zoomed image is what is captured, as if the higher-resolution tiles never load. This somewhat makes sense given that CATiledLayer draws in background threads for performance and calling renderInContext from the main thread might not catch these updates. The result is similar even if I render the tiled layer's presentationLayer as well.
Is there an Apple-sanctioned way of capturing an image of a CATiledLayer-backed view during the course of the containing scroll view's animations? Or at any point, for that matter?
BTW, this is doable if you properly implement renderLayer:inContext: in your CATiledLayer-backed view.
I did a quick test, and using renderInContext: on a view wrapping the scroll view seemed to work. Have you tried that?
This code works for me.
- (UIImage *)snapshotImageWithView:(CCTiledImageScrollView *)view
{
    // Try our best to approximate the best tile set zoom scale to use
    CGFloat tileScale;
    if (view.zoomScale >= 0.5) {
        tileScale = 2.0;
    }
    else if (view.zoomScale >= 0.25) {
        tileScale = 1.0;
    }
    else {
        tileScale = 0.5;
    }

    // Calculate the context translation based on how far we are zoomed in or out.
    CGFloat translationX = -view.contentOffset.x;
    CGFloat translationY = -view.contentOffset.y;
    if (view.contentSize.width < CGRectGetWidth(view.bounds)) {
        CGFloat deltaX = (CGRectGetWidth(view.bounds) - view.contentSize.width) / 2.0;
        translationX += deltaX;
    }
    if (view.contentSize.height < CGRectGetHeight(view.bounds)) {
        CGFloat deltaY = (CGRectGetHeight(view.bounds) - view.contentSize.height) / 2.0;
        translationY += deltaY;
    }

    // Pass tileScale to the context because that will be the scale used in drawRect: by your CATiledLayer-backed UIView
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(CGRectGetWidth(view.bounds) / view.zoomScale, CGRectGetHeight(view.bounds) / view.zoomScale), NO, tileScale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, translationX / view.zoomScale, translationY / view.zoomScale);

    // The zoomView is a subview of the UIScrollView; the CATiledLayer-backed UIView is a subview of the zoomView.
    [view.zoomView.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Full sample code found here: https://github.com/gortega56/CCCanvasView

Draw waveform in NSView

I need to draw a waveform in an NSView. (I have all the samples in an array.) The drawing must be efficient and really fast, without clipping, flickering, etc.; it must be smooth. The waveform will "move" according to the song position, and some changes to the samples (DSP processing) will be shown as a visual representation in the NSView in real time.
I'm familiar with drawing lines, arcs, etc. onto canvas objects and have developed apps doing such things, but not on Mac OS X ...
I want to ask if anyone can guide me on where to start: Core Animation, OpenGL, simply overriding drawing methods, etc.? Which would be the best practice / API to use?
I would keep it simple and create an NSView subclass with an audioData property that uses Cocoa drawing. You could call [view setAudioData:waveArray] which would in turn call [self setNeedsDisplay:YES].
In your drawRect: method you could then iterate through the samples and use NSRectFill() accordingly. Here each sample's value is between 0 and 1.
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor blueColor] set];
    NSUInteger count = [self.waveArray count];
    CGFloat barWidth = [self bounds].size.width / count;
    NSUInteger index = 0;
    for (id sample in self.waveArray) {
        NSRect drawingRect = NSZeroRect;
        drawingRect.size.width = barWidth;
        drawingRect.size.height = [self bounds].size.height * [sample value];
        drawingRect.origin.x = [self bounds].origin.x + index * barWidth;
        drawingRect.origin.y = [self bounds].origin.y;
        NSRectFill(drawingRect);
        index++;
    }
}
This code isn't exact, and you should be sure to make it more efficient by only drawing the samples inside dirtyRect.
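One way to do that (a rough sketch, an assumption building on the code above) is to derive the range of sample indices that intersects dirtyRect and only fill those bars:
CGFloat barWidth = [self bounds].size.width / [self.waveArray count];
NSUInteger firstIndex = (NSUInteger)floor(NSMinX(dirtyRect) / barWidth);
NSUInteger lastIndex = MIN((NSUInteger)ceil(NSMaxX(dirtyRect) / barWidth),
                           [self.waveArray count] - 1);
for (NSUInteger i = firstIndex; i <= lastIndex; i++) {
    // fill only the bar for [self.waveArray objectAtIndex:i], as in drawRect: above
}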
I would start with a really long and thin image to represent a single bar/column of the waveform.
My plan would be to have an NSTimer that moves all the bars of the wave one point to the left every 0.01 seconds.
So, something like this in the loop:
for (NSUInteger x = 0; x < [waveArray count]; x++)
{
    NSImageView *bar = [waveArray objectAtIndex:x];
    NSPoint origin = bar.frame.origin;
    [bar setFrameOrigin:NSMakePoint(origin.x - 1, origin.y)];
}
Now all you have to do is create the objects at the correct height, add them to the waveArray, and they will all be moved to the left.
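A minimal sketch of the timer that drives this (assuming a hypothetical scrollWave: method that runs the loop above):
[NSTimer scheduledTimerWithTimeInterval:0.01
                                 target:self
                               selector:@selector(scrollWave:)
                               userInfo:nil
                                repeats:YES];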