QTMovie at 29.97 with QTMakeTime - objective-c

I'm trying to use QTKit to convert a list of images to a QuickTime movie. I've figured out how to do everything except get the frame rate to 29.97. Through other forums and resources, the trick seems to be using something like this:
QTTime frameDuration = QTMakeTime(1001, 30000);
However, all my attempts with this method, or even with (1000, 29970), still produce a movie at 30 fps. That is the frame rate QuickTime Player reports during playback.
Any ideas? Is there some other way to set the frame rate for the entire movie once it's created?
Here's some sample code:
NSDictionary *outputMovieAttribs = [NSDictionary dictionaryWithObjectsAndKeys:@"jpeg", QTAddImageCodecType, [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality, nil];
QTTime frameDuration = QTMakeTime(1001, 30000);

QTMovie *outputMovie = [[QTMovie alloc] initToWritableFile:@"/tmp/testing.mov" error:nil];
[outputMovie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieEditableAttribute];
[outputMovie setAttribute:[NSNumber numberWithLong:30000] forKey:QTMovieTimeScaleAttribute];

if (!outputMovie) {
    printf("ERROR: Chunk: Could not create movie object:\n");
} else {
    int frameID = 0;
    while (frameID < [framePaths count]) {
        NSAutoreleasePool *readPool = [[NSAutoreleasePool alloc] init];
        NSData *currFrameData = [NSData dataWithContentsOfFile:[framePaths objectAtIndex:frameID]];
        NSImage *currFrame = [[NSImage alloc] initWithData:currFrameData];
        if (currFrame) {
            [outputMovie addImage:currFrame forDuration:frameDuration withAttributes:outputMovieAttribs];
            [outputMovie updateMovieFile];
            NSString *newDuration = QTStringFromTime([outputMovie duration]);
            printf("new Duration: %s\n", [newDuration UTF8String]);
            [currFrame release]; // was alloc'ed above, so release rather than just setting it to nil
        } else {
            printf("ERROR: Could not add image to movie");
        }
        frameID++;
        [readPool drain];
    }
}

NSString *outputDuration = QTStringFromTime([outputMovie duration]);
printf("output Duration: %s\n", [outputDuration UTF8String]);

Ok, thanks to your code, I could solve the issue. Using the development tool Atom Inspector, I could see that the data structure looked totally different from the movies I am currently working with. As I said, I have never created a movie from images the way you do, but it seems that this is not the way to go if you want a normal movie afterwards. QuickTime recognizes the clip as "Photo-JPEG", so it is not treated as a normal movie file. The reason for this seems to be that the added pictures are NOT added to a movie track but just placed somewhere in the movie. This can also be seen with Atom Inspector.
With the QTMovieTimeScaleAttribute alone, you set a timeScale that is not actually used!
To solve the issue I changed the code just a tiny bit:
NSDictionary *outputMovieAttribs = [NSDictionary dictionaryWithObjectsAndKeys:
    @"jpeg", QTAddImageCodecType,
    [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality,
    [NSNumber numberWithLong:2997], QTTrackTimeScaleAttribute, nil];
QTTime frameDuration = QTMakeTime(100, 2997);
QTMovie *outputMovie = [[QTMovie alloc] initToWritableFile:@"/Users/flo/Desktop/testing.mov" error:nil];
[outputMovie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieEditableAttribute];
[outputMovie setAttribute:[NSNumber numberWithLong:2997] forKey:QTMovieTimeScaleAttribute];
Everything else is unaltered.
Oh, by the way: to print the timeValue and timeScale, you could also do:
NSLog(@"new Duration timeScale : %ld timeValue : %lld \n",
      [outputMovie duration].timeScale, [outputMovie duration].timeValue);
That makes it easier to see whether your code does what you want.
Hope that helps!
Best regards

I have never done exactly what you're trying to do, but I can tell you how the desired frame rate is represented.
If you "ask" a movie for its current timing information, you always get a QTTime structure, which contains the timeScale and the timeValue.
For a 29.97 fps video, you would get a timeScale of 2997 (for example; see below).
This is the number of "units" per second.
So, if the playback position of the movie is currently at exactly 2 seconds, you would get a timeValue of 5994.
The frame duration is therefore 100, because 2997 / 100 = 29.97 fps.
QuickTime cannot handle float values, so you scale everything up to integer values by multiplication.
By the way, you don't have to use 100: you could also use a frame duration of 1000 with a timeScale of 29970, or 200 with a timeScale of 5994. That's all I can tell you from reading the timing information of already existing clips.
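For illustration, reading that timing information back out of an existing clip looks roughly like this (a minimal sketch; the clip path is just a placeholder):
// Sketch only: inspect the timing of an existing clip.
QTMovie *clip = [QTMovie movieWithFile:@"/tmp/example.mov" error:nil];
QTTime clipDuration = [clip duration];
NSLog(@"timeScale: %ld timeValue: %lld", clipDuration.timeScale, clipDuration.timeValue);
// Equivalent ways to describe one frame of 29.97 fps video:
QTTime frameA = QTMakeTime(100, 2997);   // 2997 / 100   = 29.97 fps
QTTime frameB = QTMakeTime(1000, 29970); // 29970 / 1000 = 29.97 fps
QTTime frameC = QTMakeTime(200, 5994);   // 5994 / 200   = 29.97 fps
NSLog(@"%@ %@ %@", QTStringFromTime(frameA), QTStringFromTime(frameB), QTStringFromTime(frameC));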
You wrote that this didn't work out for you, but this is how QuickTime works internally.
You should look into it again!
Best regards

Related

Saving NSBitmapImageRep as image

I've done an exhaustive search on this and I know similar questions have been posted before about NSBitmapImageRep, but none of them seem specific to what I'm trying to do, which is simply:
Read in an image from the desktop (but NOT display it)
Create an NSBitmap representation of that image
Iterate through the pixels to change some colours
Save the modified bitmap representation as a separate file
Since I've never worked with bitmaps before I thought I'd just try to create and save one first, and worry about modifying pixels later. That seemed really straightforward, but I just can't get it to work. Apart from the file saving aspect, most of the code is borrowed from another answer found on StackOverflow and shown below:
-(void)processBitmapImage:(NSString*)aFilepath
{
    NSImage *theImage = [[NSImage alloc] initWithContentsOfFile:aFilepath];
    if (theImage)
    {
        CGImageRef CGImage = [theImage CGImageForProposedRect:nil context:nil hints:nil];
        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
        NSInteger width = [imageRep pixelsWide];
        NSInteger height = [imageRep pixelsHigh];
        long rowBytes = [imageRep bytesPerRow];
        // above matches the original size indicating NSBitmapImageRep was created successfully
        printf("WIDE pix = %ld\n", width);
        printf("HIGH pix = %ld\n", height);
        printf("Row bytes = %ld\n", rowBytes);
        // We'll worry about this part later...
        /*
        unsigned char *pixels = [imageRep bitmapData];
        int row, col;
        for (row = 0; row < height; row++)
        {
            // etc ...
            for (col = 0; col < width; col++)
            {
                // etc...
            }
        }
        */
        // So, let's see if we can just SAVE the (unmodified) bitmap first ...
        NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
        NSString *destinationStr = [self pathForDataFile];
        BOOL returnVal = [pngData writeToFile:destinationStr atomically:NO];
        NSLog(@"did we succeed?:%@", (returnVal ? @"YES" : @"NO")); // the writeToFile call FAILS!
        [imageRep release];
    }
    [theImage release];
}
While I like this code for its simplicity, another potential issue down the road might be that the Apple docs advise us to treat bitmaps returned by initWithCGImage: as read-only objects…
Can anyone please tell me where I'm going wrong with this code, and how I could modify it to work. While the overall concept looks okay to my non-expert eye, I suspect I'm making a dumb mistake and overlooking something quite basic. Thanks in advance :-)
That's a fairly roundabout way to create the NSBitmapImageRep. Try creating it like this:
NSBitmapImageRep* imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
Of course, the above does not give you ownership of the image rep object, so don't release it at the end.
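As a rough sketch of how the whole read-then-save round trip could look with that change (the output path below is just a placeholder):
// Sketch: load a bitmap rep directly from disk and write it back out as PNG.
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
if (imageRep) {
    // ... the pixel loop over [imageRep bitmapData] would go here ...
    NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
    BOOL ok = [pngData writeToFile:@"/tmp/modified.png" atomically:NO];
    NSLog(@"did we succeed?: %@", ok ? @"YES" : @"NO");
}
// imageRep is autoreleased, so no release at the end.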

QTKit QTMovie::insertSegmentOfTrack not working on OS X 10.7 or 10.8

I have a class written using QTKit that reads and creates QuickTime movie files. My function to add an audio track to a movie, which works just fine on Snow Leopard, does not work on Lion or Mountain Lion. The audio information does appear in "Show Movie Inspector" in the QuickTime Player, and the movie is the correct duration. However, the audio is inaudible upon movie playback. Here is my code that works great in 10.6, but not 10.7 or 10.8:
// here's the code that creates the exported movie
NSString *filePath = [NSString stringWithUTF8String:"Foo.mov"];
NSError *errorPtr;
qtMovie = [[QTMovie alloc] initToWritableFile:filePath error:&errorPtr];

// and now here is my function to add an audio track
bool PLQTMovie::addSoundTrack(const std::string &fileName, const long long srcStartTime, const long long srcEndTime, const long long dstStartTime)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSString *file = [NSString stringWithUTF8String:fileName.c_str()];
    NSError *errorPtr = nil;
    QTMovie *audioMovie = [QTMovie movieWithFile:file error:&errorPtr];
    NSArray *tracks = [audioMovie tracksOfMediaType:QTMediaTypeSound];
    if ([tracks count] == 0)
        return false;
    QTTrack *soundTrack = [tracks objectAtIndex:0];
    QTMedia *media = [soundTrack media];
    NSValue *durationValue = [media attributeForKey:QTMediaDurationAttribute];
    QTTime soundTrackTime;
    [durationValue getValue:&soundTrackTime];
    long audioScale = soundTrackTime.timeScale;
    long long srcDuration = 0;
    if (srcEndTime < 0) // use entire source track duration
    {
        srcDuration = soundTrackTime.timeValue;
    }
    else
    {
        srcDuration = srcEndTime - srcStartTime;
    }
    QTTime startTime = QTMakeTime(srcStartTime, audioScale);
    QTTime srcDurationTime = QTMakeTime(srcDuration, audioScale);
    QTTimeRange srcTimeRange = QTMakeTimeRange(startTime, srcDurationTime);
    QTTime dstTime = QTMakeTime(dstStartTime, audioScale);
    QTTrack *audioTrack = [(QTMovie *)qtMovie insertSegmentOfTrack:soundTrack timeRange:srcTimeRange atTime:dstTime];
    [pool drain];
    return true;
}
I know that Apple encourages transitioning from QTKit to AV Foundation, but the old QTKit code should still work. I'm at a loss as to what I should be doing differently here. Any ideas?
Don't know if this will help, but we had a similar problem and the answer turned out to be that you need to flatten the movie after adding the audio.
Basically, we had to write the movie to a tmp file, then export that with [QTMovie writeToFile:withAttributes:error:] with the QTMovieFlatten attribute set to YES.
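A minimal sketch of that flattening step (assuming qtMovie already holds the assembled movie; the output path is just a placeholder):
// Sketch: export a self-contained (flattened) copy after editing.
NSDictionary *flattenAttrs = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                          forKey:QTMovieFlatten];
NSError *flattenError = nil;
BOOL ok = [(QTMovie *)qtMovie writeToFile:@"/tmp/FooFlattened.mov"
                           withAttributes:flattenAttrs
                                    error:&flattenError];
if (!ok)
    NSLog(@"flatten failed: %@", flattenError);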
Hope this helps, so I can start to pay back some of the great tips/answers I've gotten here.

AVAssetReader to AudioQueueBuffer

Currently, I'm doing a little test project to see if I can get samples from an AVAssetReader to play back using an AudioQueue on iOS.
I've read this: (Play raw uncompressed sound with AudioQueue, no sound)
and this: (How to correctly read decoded PCM samples on iOS using AVAssetReader -- currently incorrect decoding),
which both actually did help. Before reading them, I was getting no sound at all. Now I'm getting sound, but the audio is playing SUPER fast. This is my first foray into audio programming, so any help is greatly appreciated.
I initialize the reader thusly:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    nil];

output = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:uasset.tracks audioSettings:outputSettings];
[reader addOutput:output];
...
And I grab the data thusly:
CMSampleBufferRef ref = [output copyNextSampleBuffer];
// NSLog(@"%@", ref);
if (ref == NULL)
    return;
//copy data to file
//read next one
AudioBufferList audioBufferList;
NSMutableData *data = [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
// NSLog(@"%@", blockBuffer);
if (blockBuffer == NULL)
{
    [data release];
    return;
}
if (&audioBufferList == NULL)
{
    [data release];
    return;
}
//stash data in same object
for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    // NSData *throwData;
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    [self.delegate streamer:self didGetAudioBuffer:audioBuffer];
    /*
    Float32 *frame = (Float32 *)audioBuffer.mData;
    throwData = [NSData dataWithBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    [self.delegate streamer:self didGetAudioBuffer:throwData];
    [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    */
}
which eventually brings us to the audio queue, set up in this way:
//Apple's own code for canonical PCM
audioDesc.mSampleRate = 44100.0;
audioDesc.mFormatID = kAudioFormatLinearPCM;
audioDesc.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
audioDesc.mBytesPerPacket = 2 * sizeof (AudioUnitSampleType); // 8
audioDesc.mFramesPerPacket = 1;
audioDesc.mBytesPerFrame = 1 * sizeof (AudioUnitSampleType); // 8
audioDesc.mChannelsPerFrame = 2;
audioDesc.mBitsPerChannel = 8 * sizeof (AudioUnitSampleType); // 32
err = AudioQueueNewOutput(&audioDesc, handler_OSStreamingAudio_queueOutput, self, NULL, NULL, 0, &audioQueue);
if (err) {
    #pragma warning handle error
    //never errs, am using breakpoint to check
    return;
}
and we enqueue thusly
while (inNumberBytes)
{
    size_t bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    if (bufSpaceRemaining < inNumberBytes)
    {
        AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
        fillBuf->mAudioDataByteSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(audioQueue, fillBuf, 0, NULL);
    }
    bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    size_t copySize;
    if (bufSpaceRemaining < inNumberBytes)
    {
        copySize = bufSpaceRemaining;
    }
    else
    {
        copySize = inNumberBytes;
    }
    if (bytesFilled > packetBufferSize)
    {
        return;
    }
    AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
    memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)(inInputData + offset), copySize);
    bytesFilled += copySize;
    packetsFilled = 0;
    inNumberBytes -= copySize;
    offset += copySize;
}
}
I tried to include as much code as possible to make it easy for everyone to point out where I'm being a moron. That said, I have a feeling my problem occurs either in the output settings declaration of the track reader or in the actual declaration of the AudioQueue (where I describe to the queue what kind of audio I'm going to be sending it). The fact of the matter is, I don't really know mathematically how to actually generate those numbers (bytes per packet, frames per packet, and so on). An explanation of that would be greatly appreciated, and thanks for the help in advance.
Not sure how much of an answer this is, but it will be too much text and too many links for a comment, and hopefully it will help (maybe guide you to your answer).
First off, I know from my current project that adjusting the sample rate will affect the speed of the sound, so you can try to play with those settings. But 44.1 kHz is what I see in most default implementations, including the Apple example SpeakHere. However, I would spend some time comparing your code to that example, because there are quite a few differences, like checking before enqueueing.
First check out this posting https://stackoverflow.com/a/4299665/530933
It talks about how you need to know the audio format, specifically how many bytes are in a frame, and how to cast appropriately (see the sketch at the end of this answer).
Also, good luck. I have posted quite a few questions here, on the Apple forums, and on the iOS forum (not the official one), with very few responses or help. To get where I am today (audio recording & streaming in ulaw) I ended up having to open an Apple Dev Support ticket, which, prior to tackling audio, I never knew existed (dev support). One good thing is that if you have a valid dev account you get 2 incidents for free! CoreAudio is not fun. Documentation is sparse, and besides SpeakHere there are not many examples. One thing I did find is that the framework headers do have some good info, and this book. Unfortunately I have only started the book, otherwise I might be able to help you further.
You can also check some of my own postings, which I have tried to answer to the best of my abilities.
This is my main audio question, on which I have spent a lot of time compiling all pertinent links and code.
using AQRecorder (audioqueue recorder example) in an objective c class
trying to use AVAssetWriter for ulaw audio (2)
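For what it's worth, here is a rough sketch of how those per-packet/per-frame numbers usually relate for the 16-bit signed, interleaved stereo PCM that the reader output settings in the question describe (my own illustration; treat the exact flag choices as an assumption):
// Sketch: ASBD for 16-bit signed integer, interleaved stereo LPCM at 44.1 kHz.
AudioStreamBasicDescription asbd = {0};
UInt32 bytesPerSample  = sizeof(SInt16);                              // 2 bytes per sample
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel   = 8 * bytesPerSample;                          // 16
asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * bytesPerSample;     // 4 (one sample per channel)
asbd.mFramesPerPacket  = 1;                                           // always 1 for uncompressed PCM
asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket; // 4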
For some reason, even though every example I've seen of the audio queue using LPCM had
ASBD.mBitsPerChannel = 8* sizeof (AudioUnitSampleType);
For me it turns out I needed
ASBD.mBitsPerChannel = 2*bytesPerSample;
for a description of:
ASBD.mFormatID = kAudioFormatLinearPCM;
ASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
ASBD.mBytesPerPacket = bytesPerSample;
ASBD.mBytesPerFrame = bytesPerSample;
ASBD.mFramesPerPacket = 1;
ASBD.mBitsPerChannel = 2*bytesPerSample;
ASBD.mChannelsPerFrame = 2;
ASBD.mSampleRate = 48000;
I have no idea why this works, which bothers me a great deal... but hopefully I can figure it all out eventually.
If anyone can explain to me why this works, I'd be very thankful.

Access geocode (lat/lon) metadata from photo taken with camera and/or chosen from camera roll [duplicate]

My problem actually seems rather silly... I am writing an iPhone application that uses MKMapKit. The app grabs the EXIF metadata from a provided geotagged photo. The problem is that the latitude and longitude coordinates that I retrieve, for example:
Lat: 34.25733333333334
Lon: 118.5373333333333
point to a location in China. Is there a regional setting that I am missing, or do I need to convert the lat/long coordinates before using them?
Thank you all in advance for any help you can provide.
Here is the code I am using to grab the GPS data. You'll notice I am logging everything to the console so I can see what the values are:
void (^ALAssetsLibraryAssetForURLResultBlock)(ALAsset *) = ^(ALAsset *asset)
{
    NSDictionary *metadata = asset.defaultRepresentation.metadata;
    NSLog(@"Image Meta Data: %@", metadata);
    NSDictionary *gpsdata = [metadata objectForKey:@"{GPS}"];
    self.lat = [gpsdata valueForKey:@"Latitude"];
    self.lng = [gpsdata valueForKey:@"Longitude"];
    NSLog(@"\nLatitude: %@\nLongitude: %@", self.lat, self.lng);
};

NSURL *assetURL = [mediaInfo objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL
         resultBlock:ALAssetsLibraryAssetForURLResultBlock
        failureBlock:^(NSError *error) {
        }];

UIImage *img = [mediaInfo objectForKey:@"UIImagePickerControllerEditedImage"];
previewImage.image = nil;
self.previewImage.image = img;
NSData *imageData = UIImagePNGRepresentation(img);
if ([imageData length] > 0) {
    self._havePictureData = YES;
}
I think you should grab the value using the following:
CLLocation *location = [asset valueForProperty:ALAssetPropertyLocation];
Are you sure you're not missing a minus sign on that 118? 34.257, -118.5373 is nicely inside Los Angeles, California.
While you can get the location from the asset per @Allen, it is also valid to get it from the GPS metadata as you were trying to do initially. I'm not 100% sure the asset library coordinate will be the same as the coordinate in the GPS metadata; it depends on how Apple stores it. For example, if you are using a timestamp, the asset library timestamp is different from the EXIF creation date (a different topic, admittedly).
In any case, the reason you have the coordinate wrong is because you also need to get the direction info, as follows:
NSDictionary *metadata = asset.defaultRepresentation.metadata;
NSLog(@"Image Meta Data: %@", metadata);
NSDictionary *gpsdata = [metadata objectForKey:@"{GPS}"];
self.lat = [gpsdata valueForKey:@"Latitude"];
self.lng = [gpsdata valueForKey:@"Longitude"];
// lat is negative if direction is south
if ([[gpsdata valueForKey:@"LatitudeRef"] isEqualToString:@"S"]) {
    self.lat = -self.lat;
}
// lng is negative if direction is west
if ([[gpsdata valueForKey:@"LongitudeRef"] isEqualToString:@"W"]) {
    self.lng = -self.lng;
}
NSLog(@"\nLatitude: %@\nLongitude: %@", self.lat, self.lng);
This will also work:
void (^ALAssetsLibraryAssetForURLResultBlock)(ALAsset *) = ^(ALAsset *asset)
{
    ALAssetRepresentation *rep = [asset defaultRepresentation];
    NSDictionary *metadata = rep.metadata;
    NSMutableDictionary *GPSDictionary = [[[metadata objectForKey:(NSString *)kCGImagePropertyGPSDictionary] mutableCopy] autorelease];
};
I believe that the reason there isn't a negative sign is because of the metadata exif:GPSLongitudeRef: W, which (I believe) means there should be a negative sign in front of the longitude since it references the western hemisphere. I believe this also applies to the latitude, but with exif:GPSLatitudeRef: N for the northern and southern hemispheres. Hope that this helped. Just realized this is exactly what @XJones said. Metadata read using ImageMagick.

AVFoundation: Read RAW single frames from mov file (v210, Uncompressed 10-bit 4:2:2)

For extracting the RAW image buffer from an uncompressed v210 movie (with pixel format kCVPixelFormatType_422YpCbCr10), I tried to follow this great post:
Reading samples via AVAssetReader
The problem is that when it comes to startReading on my assetReader, I get an AVAssetReaderStatusFailed (with the output settings dictionary setting kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_422YpCbCr10). If I leave outputSettings nil, it parses each frame, but the CMSampleBufferRef buffers are empty.
I tried lots of pixel formats, such as:
kCVPixelFormatType_422YpCbCr10
kCVPixelFormatType_422YpCbCr8
kCVPixelFormatType_422YpCbCr16
kCVPixelFormatType_422YpCbCr8_yuvs
kCVPixelFormatType_422YpCbCr8FullRange
kCVPixelFormatType_422YpCbCr_4A_8BiPlanar
but none of them work.
What am I doing wrong?
Any comments welcome...
Here is my code:
/* construct an AVAssetReader based on a file based URL */
NSError *error = [[NSError alloc] init];
NSString *filePathString = [[NSString alloc] initWithString:@"/Users/johann/Raw12bit.mov"];
NSURL *movieUrl = [[NSURL alloc] initFileURLWithPath:filePathString];
AVURLAsset *movieAsset = [[AVURLAsset alloc] initWithURL:movieUrl options:nil];

/* determine image dimensions of images stored in movie asset */
CGSize size = [movieAsset naturalSize];
NSLog(@"movie asset natural size: size.width=%f size.height=%f", size.width, size.height);

/* allocate assetReader */
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:movieAsset error:&error];

/* get video track(s) from movie asset */
NSArray *videoTracks = [movieAsset tracksWithMediaType:AVMediaTypeVideo];

/* get first video track, if there is any */
AVAssetTrack *videoTrack0 = [videoTracks objectAtIndex:0];

/* set the desired video frame format into attribute dictionary */
NSDictionary *dictionary = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr10],
    (NSString *)kCVPixelBufferPixelFormatTypeKey,
    nil];

/* construct the actual track output and add it to the asset reader */
AVAssetReaderTrackOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc]
    initWithTrack:videoTrack0
    outputSettings:dictionary]; //nil or dictionary

/* main parser loop */
NSInteger i = 0;
if ([assetReader canAddOutput:assetReaderOutput]) {
    [assetReader addOutput:assetReaderOutput];
    NSLog(@"asset added to output.");

    /* start asset reader */
    if ([assetReader startReading] == YES) {
        /* read off the samples */
        CMSampleBufferRef buffer;
        while ([assetReader status] == AVAssetReaderStatusReading) {
            buffer = [assetReaderOutput copyNextSampleBuffer];
            i++;
            NSLog(@"decoding frame #%ld done.", (long)i);
        }
    }
    else {
        NSLog(@"could not start reading asset.");
        NSLog(@"reader status: %ld", (long)[assetReader status]);
    }
}
else {
    NSLog(@"could not add asset to output.");
}
Best Regards,
Johann
Try kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, it worked fine for me.
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
The system only supports these.
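For example, a rough sketch of output settings using one of those formats (kCVPixelFormatType_32BGRA), reusing videoTrack0 from the question's code; note this hands you decoded/converted BGRA pixels, not the untouched v210 payload:
/* Sketch: request BGRA output, one of the formats AVAssetReaderTrackOutput accepts. */
NSDictionary *bgraSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
    (NSString *)kCVPixelBufferPixelFormatTypeKey,
    nil];
AVAssetReaderTrackOutput *bgraOutput = [[AVAssetReaderTrackOutput alloc]
    initWithTrack:videoTrack0 outputSettings:bgraSettings];
/* later, for each sample buffer returned by copyNextSampleBuffer: */
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(buffer);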