I'm having an issue syncing external content in a CALayer with an AVPlayer at high precision.
My first thought was to lay out an array of frames (equal to the number of frames in the video) within a CAKeyframeAnimation and sync with an AVSynchronizedLayer. However, upon stepping through the video frame-by-frame, it appears that AVPlayer and Core Animation redraw on different cycles, as there is a slight (but noticeable) delay between them before they sync up.
Short of processing and displaying through Core Video, is there a way to accurately sync with an AVPlayer on the frame level?
Update: February 5, 2012
So far the best way I've found to do this is to pre-render through AVAssetExportSession coupled with AVVideoCompositionCoreAnimationTool and a CAKeyframeAnimation.
I'm still very interested in learning of any real-time ways to do this, however.
What do you mean by 'high precision?'
Although the docs claim that an AVAssetReader is not designed for real-time usage, in practice I have had no problems reading video in real-time using it (cf https://stackoverflow.com/a/4216161/42961). The returned frames come with a 'Presentation timestamp' which you can fetch using CMSampleBufferGetPresentationTimeStamp.
You'll want one part of the project to be the 'master' timekeeper here. Assuming your CALayer animation is quick to compute and doesn't involve potentially blocking things like disk access, I'd use that as the master time source. When you need to draw content (e.g. in the drawing method of your UIView subclass), read the current time from the CALayer animation, advance through the AVAssetReader's video frames using copyNextSampleBuffer until CMSampleBufferGetPresentationTimeStamp returns a value >= the current time, draw that video frame, and then draw the CALayer animation content over the top.
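A rough sketch of that stepping loop might look like the following; `output` (an AVAssetReaderTrackOutput whose reader has already started reading) and `hostTime` (the animation's current time expressed as a CMTime) are hypothetical names, not anything from the question:
CMSampleBufferRef sample = NULL;
while ((sample = [output copyNextSampleBuffer]) != NULL) {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sample);
    if (CMTIME_COMPARE_INLINE(pts, >=, hostTime)) {
        break;                 // this sample is the frame to show for the current time
    }
    CFRelease(sample);         // too early -- discard it and keep stepping forward
    sample = NULL;
}
if (sample != NULL) {
    // ... draw the video frame, then draw the CALayer animation content over it ...
    CFRelease(sample);
}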
If your player is using an AVURLAsset, did you load it with the precise duration flag set? I.e. something like:
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:aUrl options:options];
I add my sprite frames to CCSpriteFrameCache. Then I create a CCSpriteBatchNode with my desired image file.
This is what I don't quite understand:
When I make a CCSprite, if I want to take advantage of the CCSpriteBatchNode, do I need to initialize the CCSprite with [CCSprite spriteWithBatchNode: rect:]? If that's the case, I don't see how I'm taking advantage of CCSpriteFrameCache to get the frames, since I would now be making the rect manually.
So I guess I use [CCSprite spriteWithSpriteFrameName:] and then I add this sprite to the batch node. But I am still unsure.
You should use:
CCSprite *sp = [CCSprite spriteWithSpriteFrameName:@"monster.png"];
The .plist that you specified in the SpriteFrameCache will take care of the frames for you.
Then you create the sprite and add to the batch.
If you create the batchnode with a file called "myArt.png", you CAN ONLY add a sprite to it that is contained inside "myArt.png".
Hope it helps!
From what I've learned of cocos2d, CCSpriteFrameCache and CCSpriteBatchNode give you the same end result but are used differently, and you'll notice a slight performance difference if your game is very big.
CCSpriteFrameCache loads your frames and serves them by name, according to the .plist file it was given. The atlas associated with the .plist has to be added to the project as well, or else the frames will be requested but nothing will be found to draw. The .plist is essentially the address of where each image is located inside the atlas.
The good part of CCSpriteFrameCache is that the code is neater and shorter than the CCSpriteBatchNode method, at the cost that every time a frame is used it goes back to that atlas and draws it as a separate draw call.
CCSpriteBatchNode, on the other hand, loads the atlas and renders all of its children in a single draw call. This is efficient because it reduces the number of draws the game has to perform. The only difficulty is that you need to do the math for the rectangle of each sprite in the atlas. For example, say your atlas holds two actions of a character, the atlas image file is 1024x1024, and each sprite is 128x128; you would compute the rectangle for each frame of the jump action yourself. (This is where a .plist comes in handy, to avoid doing that math.)
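For illustration, a rough sketch of that rect math (the file name and frame index here are made up, and the atlas layout is assumed to be row-major):
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"jumpAtlas.png"];
[self addChild:batch];

int frameIndex = 3;                       // e.g. the 4th frame of the jump action
int columns = 1024 / 128;                 // 8 frames per row in a 1024x1024 atlas
CGRect frameRect = CGRectMake((frameIndex % columns) * 128,
                              (frameIndex / columns) * 128,
                              128, 128);
CCSprite *jumpFrame = [CCSprite spriteWithBatchNode:batch rect:frameRect];
[batch addChild:jumpFrame];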
The code gets more complicated, as you can see, but it only makes one draw call, which makes it your best option performance-wise.
Another way to use CCSpriteBatchNode is to have different static sprites and you would just do one draw call for those multiple static images or sprites.
If you need example code just ask, I would be more than happy to provide it.
Update: Adding Link for SpriteBatchNode and an Example of my own.
SpriteBatchNode:
Example using SpriteBatchNode with Ray Wenderlich
I believe in this guy, and I have learned a lot of cocos2d from his tutorials. I would suggest you read his other tutorials as well.
In a nutshell, CCSpriteBatchNode follows the exact same process we use below with CCSpriteFrameCache; the ONLY difference is that you add the sprite as a child of the CCSpriteBatchNode and not of the layer, BUT you do add the CCSpriteBatchNode to the layer.
This is the concept that newcomers to cocos2d tend to get tangled up in.
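As a small illustration of that parent/child relationship (the atlas file names here are just placeholders), the pattern is roughly:
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"monsterAtlas.plist"];
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"monsterAtlas.png"];
[self addChild:batch];                    // the batch node is added to the layer

CCSprite *monster = [CCSprite spriteWithSpriteFrameName:@"monster.png"];
[batch addChild:monster];                 // the sprite is added to the batch node, not the layer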
SpriteFrameCache:
I couldn't find a good example for SpriteFrameCache, so here is a simple one.
//By doing this your sprites are now in the cache ready to be used
//by their names declared in the .plist file.
-(void) loadingSprites:(NSString*) plistName {
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:plistName];
}
-(id)initGameLayer {
    //Fetch the cached frame by the name declared in the .plist file and
    //create a CCSprite from it. The image is ready to be displayed,
    //but it is not drawn yet.
    CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:framename];
    CCSprite *mySprite = [CCSprite spriteWithSpriteFrame:frame];
    //Set a position if desired:
    //20 pixels to the right and 0 pixels to the top.
    mySprite.position = CGPointMake(20, 0);
    //Now the image is drawn when the layer renders, making 1 draw call.
    [self addChild:mySprite];
    return self;
}
It is worth pointing out that CCSpriteBatchNode makes just 1 draw call; HOWEVER, all the sprites added to the batch node have to be part of the same sprite atlas.
Using SpriteFrameCache on its own is easier and simpler, but every child added to the layer means one more draw call (this is the downside, performance-wise).
So if you add 10 sprites to the layer with SpriteFrameCache, you will have 10 draw calls.
However, if you add those same 10 sprites to a CCSpriteBatchNode and add just that CCSpriteBatchNode to the layer, you still get the 10 sprites on screen but only ONE draw call is made. Hence the performance difference (for the better) will be significant in larger games.
Hope it helps, Cheers!
I'm making an application that lets users take a photo and shows it both as a thumbnail and in a photo viewer.
I have an NSManagedObject class called Photo, and Photo has a method that takes a UIImage, converts it to PNG using UIImagePNGRepresentation(), and saves it to the filesystem.
After this operation, I resize the image to thumbnail size and save that as well.
The problem here is that UIImagePNGRepresentation() and the image resizing both seem to be really slow, and I don't know if this is the right way to do it.
Please tell me if anyone knows a better way to accomplish what I want to do.
Thank you in advance.
Depending on the image resolution, UIImagePNGRepresentation can indeed be quite slow, as can any writing to the file system.
You should always execute these types of operations on an asynchronous queue. Even if the performance seems good enough for your application when testing, you should still do it on an async queue -- you never know what other processes the device might have going on that could slow the save down once your app is in the hands of users.
Newer versions of iOS make saving asynchronously really, really easy using Grand Central Dispatch (GCD). The steps are:
Create an NSBlockOperation which saves the image
In the block operation's completion block, read the image from disk & display it. The only caveat here is that you must use the main queue to display the image: all UI operations must occur on the main thread.
Add the block operation to an operation queue and watch it go!
That's it. And here's the code:
// Create a block operation with our saves
NSBlockOperation* saveOp = [NSBlockOperation blockOperationWithBlock: ^{
[UIImagePNGRepresentation(image) writeToFile:file atomically:YES];
[UIImagePNGRepresentation(thumbImage) writeToFile:thumbfile atomically:YES];
}];
// Use the completion block to update our UI from the main queue
[saveOp setCompletionBlock:^{
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
UIImage *thumb = [UIImage imageWithContentsOfFile:thumbfile];
// TODO: Assign thumb to the image view
}];
}];
// Kick off the operation, sit back, and relax. Go answer some stackoverflow
// questions or something.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperation:saveOp];
Once you are comfortable with this code pattern, you will find yourself using it a lot. It's incredibly useful when generating large datasets, long operations on load, etc. Essentially, any operation that makes your UI laggy in the least is a good candidate for this code. Just remember, you can't do anything to the UI while you aren't in the main queue and everything else is cake.
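If you prefer raw GCD over NSOperationQueue, an equivalent sketch (reusing the same hypothetical image and file variables as above) would be:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do the slow encoding and file I/O off the main thread
    [UIImagePNGRepresentation(image) writeToFile:file atomically:YES];
    [UIImagePNGRepresentation(thumbImage) writeToFile:thumbfile atomically:YES];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main queue for anything that touches the UI
        UIImage *thumb = [UIImage imageWithContentsOfFile:thumbfile];
        // TODO: Assign thumb to the image view
    });
});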
Yes, it does take time on iPhone 4, where the image size is around 6 MB. The solution is to execute UIImagePNGRepresentation() in a background thread, using performSelectorInBackground:withObject:, so that your UI thread does not freeze.
It will probably be much faster to do the resizing before converting to PNG.
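For instance, a sketch of resizing first and only then encoding (thumbSize is a hypothetical CGSize for the thumbnail):
UIGraphicsBeginImageContextWithOptions(thumbSize, YES, 0.0);
[image drawInRect:CGRectMake(0, 0, thumbSize.width, thumbSize.height)];
UIImage *thumbImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Encode the small image, not the full-resolution one
NSData *thumbData = UIImagePNGRepresentation(thumbImage);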
Try UIImageJPEGRepresentation with a medium compression quality. If the bottleneck is I/O, this may prove faster, as the file size will generally be smaller than a PNG.
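For example (0.6 is just an arbitrary medium quality setting; tune it to taste):
NSData *jpegData = UIImageJPEGRepresentation(image, 0.6);
[jpegData writeToFile:file atomically:YES];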
Use Instruments to check whether UIImagePNGRepresentation is the slow part or whether it is writing the data out to the filesystem which is slow.
I'm familiar with how to stream audio data from the ipod library using AVAssetReader, but I'm at a loss as to how to seek within the track. e.g. start playback at the halfway point, etc. Starting from the beginning and then sequentially getting successive samples is easy, but surely there must be a way to have random access?
AVAssetReader has a property, timeRange, which determines the time range of the asset from which media data will be read.
@property(nonatomic) CMTimeRange timeRange
The intersection of the value of this property and CMTimeRangeMake(kCMTimeZero, asset.duration) determines the time range of the asset from which media data will be read.
The default value is CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity). You cannot change the value of this property after reading has started.
So, if you want to seek to the middle of the track, you'd create a CMTimeRange running from asset.duration/2 to asset.duration and set that as the timeRange on the AVAssetReader.
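A minimal sketch of that, assuming `asset` and `reader` already exist and reading has not started yet:
CMTime halfway = CMTimeMultiplyByFloat64(asset.duration, 0.5);
CMTime remainder = CMTimeSubtract(asset.duration, halfway);
reader.timeRange = CMTimeRangeMake(halfway, remainder);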
AVAssetReader is amazingly slow when seeking. If you try to recreate an AVAssetReader to seek while the user is dragging a slider, your app will bring iOS to its knees.
Instead, you should use an AVAssetReader for fast forward only access to video frames, and then also use an AVPlayerItem and AVPlayerItemVideoOutput when the user wants to seek with a slider.
It would be nice if Apple combined AVAssetReader and AVPlayerItem / AVPlayerItemVideoOutput into a new class that was performant and was able to seek quickly.
Be aware that AVPlayerItemVideoOutput will not give back pixel buffers unless there is an AVPlayer attached to the AVPlayerItem. This is obviously a strange implementation detail, but it is what it is.
If you are using AVPlayer and AVPlayerLayer, then you can simply use the seek methods on AVPlayer itself. The above details are only important if you are doing custom rendering with the pixel buffers and/or need to send the pixel buffers to an AVAssetWriter.
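For completeness, a minimal sketch of a frame-accurate seek on AVPlayer itself (zero tolerances trade speed for precision; `player` is assumed to already exist):
CMTime target = CMTimeMakeWithSeconds(30.0, 600);   // 600 is a common video timescale
[player seekToTime:target
   toleranceBefore:kCMTimeZero
    toleranceAfter:kCMTimeZero
 completionHandler:^(BOOL finished) {
     // Resume playback or refresh the UI here
 }];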
How can I slow down an audio file (for playback only) on Mac OS X, but preserve good quality? I tried using QTKit to slow down audio but the quality is bad.
Edit: I'm using this code:
QTMovie *audio = [[QTMovie alloc] initWithFile:mediaClipURL error:&error];
// ... (error handling)
[audio setRate:0.5];
As "markratledge" guessed, I also suspect you want "speed adjustment without pitch bending." It's pretty straightforward to do without third-party code. You can set the QTMovieRateChangesPreservePitchAttribute attribute and just adjust the movie's rate:
QTMovie *movie = [[QTMovie alloc] initWithURL:mediaClipURL error:nil];
if (movie)
{
// Set preserve-pitch attribute
[movie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieRateChangesPreservePitchAttribute];
[movie setRate:0.5];
}
// ...
Note: The further away from 1.0 you are, the more distortion you're going to have. There's really no way around this. Samples will be repeated when going slow at the same pitch and samples will be cut very short when going fast at the same pitch. It's a fact of audio processing - the harder the effect, the more distortion you'll eventually have.
The audio editor Audacity (http://audacity.sourceforge.net/) has an effect that increases or decreases tempo without changing pitch, and it is open source, so it might be a good source of applicable code.
In my application I needed something like a particle system so I did the following:
While the application initializes I load a UIImage
laserImage = [UIImage imageNamed:@"laser.png"];
UIImage *laserImage is declared in the interface of my controller. Now, every time I need a new particle, this code makes one:
// add new Laserimage
UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage];
[newLaser setTag:[model.lasers count]-9];
[newLaser setBounds:CGRectMake(0, 0, 17, 1)];
[newLaser setOpaque:YES];
[self.view addSubview:newLaser];
[newLaser release];
Please note that the images are only 17px * 1px and model.lasers is an internal array that does all the calculations, separated from the graphical output. So in my main drawing loop I set each UIImageView's position to the calculated position in my model.lasers array:
for (int i = 0; i < [model.lasers count]; i++) {
[[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]];
}
I incremented the tags by 10 because the default is 0 and I don't want to move all the views with the default tag.
So the animation looks fine with about 10 - 20 images but really gets slow when working with about 60 images. So my question is: is there any way to optimize this without starting over in OpenGL ES?
As jeff7 and FenderMostro said, you're using the high-level API (UIKit), and you'd have better performance using the lower APIs, either CoreAnimation or OpenGL. (cocos2d is built on top of OpenGL)
Your best option would be to use CALayers instead of UIImageViews, get a CGImageRef from your UIImage and set it as the contents for these layers.
Also, you might want to keep a pool of CALayers and reuse them by hiding/showing as necessary. 60 CALayers of 17*1 pixels is not much, I've been doing it with hundreds of them without needing extra optimization.
This way, the images will already be decompressed and available in video memory. When using UIKit, everything goes through the CPU, not to mention the creation of UIViews which are pretty heavy objects.
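A rough sketch of that approach, assuming `laserImage` and `model.lasers` are the same objects from the question and 60 is the worst-case laser count:
#import <QuartzCore/QuartzCore.h>   // CALayer / CATransaction

// One-time setup: a pool of CALayers sharing the same decoded image.
// (In real code you'd keep laserLayers around as a retained ivar.)
NSMutableArray *laserLayers = [NSMutableArray array];
for (int i = 0; i < 60; i++) {
    CALayer *layer = [CALayer layer];
    layer.bounds = CGRectMake(0, 0, 17, 1);
    layer.contents = (id)laserImage.CGImage;   // decoded once, shared by every layer
    layer.hidden = YES;                        // hidden until a laser needs it
    [self.view.layer addSublayer:layer];
    [laserLayers addObject:layer];
}

// Per-frame update: reposition (and show) only the layers in use.
[CATransaction begin];
[CATransaction setDisableActions:YES];         // suppress implicit animations while repositioning
for (int i = 0; i < [model.lasers count]; i++) {
    CALayer *layer = [laserLayers objectAtIndex:i];
    layer.hidden = NO;
    layer.position = [[model.lasers objectAtIndex:i] pos];
}
[CATransaction end];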
Seems like you're trying to code a game by using the UIKit API, which is not really very suitable for this kind of purpose. You are expending the device's resources whenever you allocate a UIView, which incurs slowdowns because object creation is costly. You might be able to obtain the performance you want by dropping to CoreAnimation though, which is really good at drawing hundreds of images in a limited time frame, although it would still be much better if you used OpenGL or an engine like Cocos2d.
A UIImageView is made to display a single image OR multiple images. So, instead of creating a new UIImageView every time, you should consider creating a new image and adding it to an existing UIImageView instead.
See here.
I'd recommend starting over using OpenGL ES; there is an excellent framework called cocos2d for iPhone that can make this type of programming very easy and fast. From a quick look at your code, your lasers could be remodeled as CCSprites, which are an easy way to move images around a scene, among many other things.