How can I create a Quicktime movie from a series of generated images? - objective-c

I need to create a movie from a series of generated images. (I'm creating the images based on the output of a physics modeling program.)
I found Apple's QTKitCreateMovie sample and used that as a starting point. Instead of loading JPEGs from the application bundle, I'm drawing into an NSImage and then adding that NSImage to the movie object. Here's the basic code I used for testing; mMovie is an instance of QTMovie:
NSImage *anImage = [[NSImage alloc] initWithSize:NSMakeSize(frameSize, frameSize)];
[anImage lockFocus];
float blendValue;
for (blendValue = 0.0; blendValue <= 1.0; blendValue += 0.05) {
    [[[NSColor blueColor] blendedColorWithFraction:blendValue ofColor:[NSColor redColor]] setFill];
    [NSBezierPath fillRect:NSMakeRect(0, 0, frameSize, frameSize)];
    [mMovie addImage:anImage forDuration:duration withAttributes:myDict];
}
[anImage unlockFocus];
[anImage release];
This works under OS X 10.5, but under OS X 10.6 I get an array index beyond bounds exception on the call to addImage:forDuration:withAttributes: (http://openradar.appspot.com/radar?id=1146401)
What's the proper way to create a movie under 10.6?
Also, although this works under 10.5, I run out of memory if I try to create a movie with thousands of frames. That also makes me think I'm not using the correct approach.

You're doing it right, but you're doing it wrong.
The correct way hasn't changed in QTKit. Your mistake is that you're trying to add the image before you have finished it, which happens when you unlock focus. Since you don't unlock focus until after you try to add the image (20 times), you are trying to add an unfinished image (20 times), which doesn't work.
The “out of bounds” exception is because the image has no representations. QTMovie, it seems, is trying to loop through the array returned by the image in response to a representations message, but that array is empty because the image is not finished.
Somehow, you got away with this in Leopard (probably due to an implementation detail that changed in Snow Leopard), but I'd say it was no less your bug then.
The solution is simply to lock focus and unlock focus on the image each time through the loop:
float blendValue;
for (blendValue = 0.0; blendValue <= 1.0; blendValue += 0.05) {
    [anImage lockFocus];
    [[NSGraphicsContext currentContext] setShouldAntialias:NO];
    [[[NSColor blueColor] blendedColorWithFraction:blendValue ofColor:[NSColor redColor]] setFill];
    [NSBezierPath fillRect:NSMakeRect(0, 0, frameSize, frameSize)];
    [anImage unlockFocus];
    [mMovie addImage:anImage forDuration:duration withAttributes:myDict];
}
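As for running out of memory with thousands of frames: a lot of temporary objects can pile up inside a long loop under manual memory management. A minimal, hedged sketch of one mitigation (not the canonical fix, and matching the question's pre-ARC manual release style) is to drain an autorelease pool every iteration:

// Sketch only: drain an autorelease pool each frame so temporaries created
// while drawing and encoding are released promptly instead of accumulating.
float blendValue;
for (blendValue = 0.0; blendValue <= 1.0; blendValue += 0.05) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    [anImage lockFocus];
    [[[NSColor blueColor] blendedColorWithFraction:blendValue ofColor:[NSColor redColor]] setFill];
    [NSBezierPath fillRect:NSMakeRect(0, 0, frameSize, frameSize)];
    [anImage unlockFocus];

    [mMovie addImage:anImage forDuration:duration withAttributes:myDict];

    [pool drain];
}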

Related

-(void) drawRect:(CGRect)rect; is using up nearly all of iPhone CPU

- (void)drawRect:(CGRect)rect {
    float sliceSize = rect.size.width / imagesShownAtOnce;
    // Apply our clipping region and fill it with black.
    [clippingRegion addClip];
    [clippingRegion fill];
    // Draw the 3 images (+1 for in-between), offset by our scroll amount.
    CGPoint loc;
    for (int i = 0; i < imagesShownAtOnce + 1; i++) {
        loc = CGPointMake(rect.origin.x + (i * sliceSize) - imageScroll, rect.origin.y);
        [[buttonImages objectAtIndex:i] drawAtPoint:loc];
    }
    // Draw the text region background.
    [[UIColor blackColor] setFill];
    [textRegion fillWithBlendMode:kCGBlendModeNormal alpha:0.4f];
    // Draw the actual text.
    CGRect textRectangle = CGRectMake(rect.origin.x + 16,
                                      rect.origin.y + rect.size.height * 4 / 5.6,
                                      rect.size.width / 1.5,
                                      rect.size.height / 3);
    [[UIColor whiteColor] setFill];
    [buttonText drawInRect:textRectangle withFont:[UIFont fontWithName:@"Avenir-HeavyOblique" size:22]];
}
clippingRegion and textRegion are UIBezierPaths that give me the rounded rectangles I want (the first as a clipping region, the second as an overlay for my text).
The middle section draws 3 images and lets them scroll along. I update this every 2 refreshes from a CADisplayLink, which invalidates the draw region by calling [self setNeedsDisplay] and also increments my imageScroll variable.
With that background out of the way, here is my issue:
It runs, and even runs smoothly, but it is using up an extremely high amount of CPU time (80%+). How do I push this work off to the GPU on the phone instead? Someone told me about CALayers, but I've never dealt with them before.
Draw each component of your drawing once into something (a view or layer) and let it hold the cached drawing. Then you just move or transform each component, and exactly as you say, it's all done by the GPU.
You could do this with individual views or with individual layers, but that doesn't really matter (a view is backed by a layer under the hood). The point is that there is no need to be constantly redrawing from scratch when all you really want is to move the same persistent pieces around.
Learning about CALayer would be a good idea, as it is in fact the basis of all drawing on iOS. What could be more important to know about than that?
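For concreteness, here is a minimal sketch of that idea using the names from the question (buttonImages, sliceSize, and a scroll delta); the exact property names are assumptions, and it requires the QuartzCore framework. Each image is rendered into its own layer once, and the display-link callback only moves the layers, which Core Animation composites on the GPU:

// One-time setup: create a layer per image slice instead of redrawing in drawRect:.
NSMutableArray *sliceLayers = [NSMutableArray array];
for (NSUInteger i = 0; i < [buttonImages count]; i++) {
    CALayer *slice = [CALayer layer];
    slice.contents = (id)[[buttonImages objectAtIndex:i] CGImage];
    slice.frame = CGRectMake(i * sliceSize, 0, sliceSize, self.bounds.size.height);
    [self.layer addSublayer:slice];
    [sliceLayers addObject:slice];
}

// In the CADisplayLink callback: just move the cached layers; nothing is redrawn.
[CATransaction begin];
[CATransaction setDisableActions:YES]; // avoid implicit animations fighting the display link
for (CALayer *slice in sliceLayers) {
    slice.position = CGPointMake(slice.position.x - scrollDelta, slice.position.y);
}
[CATransaction commit];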

Down-scaling a UIImage

I am having problems resizing my JPEG picture.
I would like it to be the size of the original picture, but instead it covers up the whole screen. I was trying: myImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Cube Tile.jpeg"]];
myImageView.contentMode = UIViewContentModeScaleAspectFit;
This is what happened:
My simulator screen gets covered with my image, but the height is slightly smaller.
I also tried adding this: myImageView = [[UIImageView alloc] initWithImage:[[UIImage imageNamed:@"Cube Tile.jpeg"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 0, 0, 0)]];
This is what happened:
My simulator screen gets covered with tiny versions of my image.
Is it possible doing this without too many lines of code?
Excuse me if I am a bit vague.
Note 1: The original picture is 24 x 24 pixels
Note 2: I am a new developer, so I was just experimenting.
Thanks in advance, Marnix.
Have you tried setting the frame on the image view? e.g.,
float x = 0.0;
float y = 0.0;
[myImageView setFrame:CGRectMake(x,y,24,24)];
(and to make sure it's not trying to autosize based on a setting in the .xib)
[myImageView setAutoresizingMask:UIViewAutoresizingNone];
You could change the value of
myImageView.contentMode = UIViewContentModeScaleAspectFit;
to one of these values:
typedef enum {
    UIViewContentModeScaleToFill,
    UIViewContentModeScaleAspectFit,
    UIViewContentModeScaleAspectFill,
    UIViewContentModeRedraw,
    UIViewContentModeCenter,
    UIViewContentModeTop,
    UIViewContentModeBottom,
    UIViewContentModeLeft,
    UIViewContentModeRight,
    UIViewContentModeTopLeft,
    UIViewContentModeTopRight,
    UIViewContentModeBottomLeft,
    UIViewContentModeBottomRight,
} UIViewContentMode;
See more at the official documentation http://developer.apple.com/library/ios/#documentation/uikit/reference/uiview_class/uiview/uiview.html
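Putting both answers together, here is a minimal sketch for showing the image at its original 24 x 24 size. The file name and view name come from the question; everything else (such as adding to self.view) is an assumption:

UIImage *cube = [UIImage imageNamed:@"Cube Tile.jpeg"];
myImageView = [[UIImageView alloc] initWithImage:cube];
// Match the view's frame to the image so nothing is scaled up to fill the screen.
myImageView.frame = CGRectMake(0.0, 0.0, cube.size.width, cube.size.height);
myImageView.contentMode = UIViewContentModeCenter;      // draw the bitmap at its natural size
myImageView.autoresizingMask = UIViewAutoresizingNone;  // ignore any autosizing from the .xib
[self.view addSubview:myImageView];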

Divide UIImage into two parts along a UIBezierPath

How can I divide this UIImage into two parts along the black line? The upper contour is defined by a UIBezierPath.
I need to get two resulting UIImages. Is this possible?
The following set of routines creates versions of a UIImage with either only the content inside a path, or only the content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage *)compositeImage:(UIImage *)sourceImage onPath:(UIBezierPath *)path usingBlendMode:(CGBlendMode)blend
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // First draw an opaque path...
    [path fill];
    // ...then composite with the image.
    [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];
    // With drawing complete, store the composited image for later use.
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return maskedImage;
}

- (UIImage *)maskImage:(UIImage *)sourceImage toAreaInsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}

- (UIImage *)maskImage:(UIImage *)sourceImage toAreaOutsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
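Hypothetical usage, assuming original is the UIImage from the question and dividerPath is the UIBezierPath outlining the upper region (both names are placeholders):

// Split the source image into the part inside the path and the part outside it.
UIImage *upperPart = [self maskImage:original toAreaInsidePath:dividerPath];
UIImage *lowerPart = [self maskImage:original toAreaOutsidePath:dividerPath];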
I tested clipping, and in a few different tests it was about 25% slower than masking to achieve the same result as the maskImage:toAreaInsidePath: method above. For completeness I include it here, but please don't use it without a good reason.
- (UIImage *)clipImage:(UIImage *)sourceImage toPath:(UIBezierPath *)path
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // Clipping means drawing only happens within the path.
    [path addClip];
    // Draw the image into the context.
    [sourceImage drawAtPoint:CGPointZero];
    // With drawing complete, store the clipped image for later use.
    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return clippedImage;
}
This can be done, but it requires some trigonometry. Let's consider the case for the upper image. First, determine the bottommost end point of the UIBezierPath and use UIGraphicsBeginImageContext to get the part of the image from the top down to that point.
Now, assuming that your line is straight, move pixel by pixel along the line, drawing vertical strokes of clearColor (this loop handles the top portion; proceed along similar lines for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
    UIGraphicsBeginImageContext(your_ui_image_top.size);
    [your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [UIColor clearColor].CGColor);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    // Erase a 1-pixel-wide vertical stroke from the line y = m*x + c down to the bottom of the image.
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m * currentPixel_x + c);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation is currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line is calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
Each vertical stroke will be 1 pixel wide, and its height will depend on how you traverse through them. After enough iterations, everything below the line will have been erased, leaving only the upper part of the image.
There are multiple ways to draw the clear strokes, and this is just one way of going about it. You could also draw clear strokes parallel to the given path if that gives better results.
Another way is to set the alpha of the pixels below the line to 0.

Mirroring CIImage/NSImage

Currently I have the following
CIImage *img = [CIImage imageWithCVImageBuffer: imageBuffer];
NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:img];
NSImage *image = [[[NSImage alloc] initWithSize: [imageRep size]] autorelease];
[image addRepresentation:imageRep];
This works perfectly, I can use the NSImage and when written to a file the image is exactly how I need it to be.
However, I'm pulling this image from the user's iSight using QTKit, so I need to be able to flip the image across the y axis.
My first thought was to transform the CIImage using something like this; however, my final image always comes out completely blank. When written to a file, the dimensions are correct, but it's seemingly empty.
- (CIImage *)flipImage:(CIImage *)image
{
return [image imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
}
Am I approaching this the wrong way? Or have I made a mistake in my code?
That transform flips it, but the axis around which it flips is not at the center of the image, but at the left edge. You must also translate the image by its width to account for the movement caused during the scale.
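A minimal sketch of that correction, reusing the method from the question and taking the width from the CIImage's extent (the concatenation order here is an assumption worth verifying against your output):

- (CIImage *)flipImage:(CIImage *)image
{
    // Translate right by the image width, then mirror across the Y axis,
    // so the flip is effectively centered on the image rather than its left edge.
    CGAffineTransform flip = CGAffineTransformMakeTranslation([image extent].size.width, 0.0);
    flip = CGAffineTransformScale(flip, -1.0, 1.0);
    return [image imageByApplyingTransform:flip];
}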
Here is some code that may help someone out ==>
CGAffineTransform rotTrans = CGAffineTransformMakeRotation(M_PI_2);
CGAffineTransform transTrans1 = CGAffineTransformTranslate(rotTrans, 0.0f, 320.0f);
CGAffineTransform scaleTrans = CGAffineTransformScale(transTrans1, 1.0, -1.0);
CGAffineTransform transTrans2 = CGAffineTransformTranslate(scaleTrans, -0.0f, -320.0f);
self.view.transform = transTrans2;
I use it to flip frames from the front camera horizontally so they always appear right side up no matter what the rotation of the device is. This stuff does get kind of tricky. One thing that helps in figuring out what is going on is to scale down along one of the axes and see what the result is.

Odd problem with NSImage -lockFocusFlipped:

I'm using NSImage's -lockFocusFlipped: method to do some drawing into an image. My code looks like this:
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image lockFocusFlipped:YES];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
[shadow setShadowOffset:NSMakeSize(0, 3)];
[shadow set];
NSRect shapeRect = NSMakeRect(0, 0, 256, 100);
[[NSColor redColor] set];
NSRectFill(shapeRect);
[image unlockFocus];
This code works to a certain point. I can confirm that the context is indeed flipped because [[NSGraphicsContext currentContext] isFlipped] returns YES, and also because shapeRect is drawn at the right position (using the top left corner as the origin). That said, the NSShadow does not seem to respect the flipped status of the context. Setting the shadow offset to (0, 3) should move the shadow down when the context is flipped, but it actually moves it up (which is what would happen in a standard non-flipped context).
This problem seems specific to -lockFocusFlipped, because when I'm drawing using this same code into a CALayer with a flipped coordinate system, the shadow is drawn just fine (respecting the flip). Documentation on -lockFocusFlipped also seems to be quite vague. This is all it says in the NSImage class documentation:
Prepares the image to receive drawing commands using the specified flipped state.
And I also found this note in the Snow Leopard AppKit Release Notes:
There are cases, for example drawing directly via NSLayoutManager, that require a flipped context. To cover this case, we add
- (void)lockFocusFlipped:(BOOL)flipped;
This doesn't alter the state of the image itself, only the context on which focus is locked. It means that (0,0) is at the top left and positive along the Y-axis is down in the locked context.
None of the docs seem to explain NSShadow's behaviour in this case. And through further testing, it seems that NSGradient does not respect the flipped state of the drawing context used by NSImage either.
Any insight is greatly appreciated :-)
From the NSShadow class reference:
Shadows are always drawn in the default user coordinate space, regardless of any transformations applied to that space. This means that rotations, translations and other transformations of the current transformation matrix (the CTM) do not affect the resulting shadow.
And that's what flipping ultimately is: Translate up, scale back the other way.
There's no such statement for NSGradient, so I'd suggest filing a bug about that one.
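Given that, one workaround (a sketch, not an officially documented fix) is to compensate for the flip yourself when building the shadow, so the offset is expressed in the default, un-flipped coordinate space:

BOOL flipped = [[NSGraphicsContext currentContext] isFlipped];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
// NSShadow ignores the CTM, so negate the vertical offset in a flipped context
// to make the shadow land 3 points "down" on screen either way.
[shadow setShadowOffset:NSMakeSize(0, flipped ? -3 : 3)];
[shadow set];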