Merge two videos without ffmpeg (Cocoa) - objective-c

I've looked and looked for an answer, but can't seem to find one. Lots have asked, but none have gotten answers. I have an app that has two video paths. Now I just want to merge them into one file that can be saved in a ".mov" format. Does anyone have any clue as to how this can be done?
Note: I want to do this without installing (and obviously without using) ffmpeg.
Please if you have time, some code would be very helpful.

First, obviously you need to make sure that the movie type is readable/playable by the quicktime libraries.
But, assuming that's the case, the procedure is basically like this:
Get a pointer to some memory to store the data:
QTMovie *myCombinedMovie = [[QTMovie alloc] initToWritableData:[NSMutableData data] error:nil];
Next, grab the first movie that you want to use and insert it into myCombinedMovie. You can put the parts you want combined in an array and enumerate over them to combine as many parts as you like. Also, if you wanted, you could alter the destination range to add an offset (there's a sketch of that just after the next snippet):
QTMovie *firstMovie = [QTMovie movieWithURL:firstURL error:nil];
// NOTE THAT THE 3 LINES BELOW WERE CHANGED FROM MY ORIGINAL POST!
QTTimeRange timeRange = QTMakeTimeRange(QTZeroTime, [firstMovie duration]);
QTTime insertionTime = [myCombinedMovie duration];
[myCombinedMovie insertSegmentOfMovie:firstMovie timeRange:timeRange atTime:insertionTime];
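As a hedged aside (this wasn't in the original answer): if you wanted the offset just mentioned, you could push the insertion point past the current end of the combined movie, for example with a two-second gap:
// Sketch only: insert two seconds after whatever is already in myCombinedMovie.
QTTime gap = QTMakeTimeWithTimeInterval(2.0);
QTTime offsetInsertionTime = QTTimeIncrement([myCombinedMovie duration], gap);
[myCombinedMovie insertSegmentOfMovie:firstMovie timeRange:timeRange atTime:offsetInsertionTime];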
Rinse and repeat for the second movie part.
Then, output the flattened movie (flattening makes it self-contained):
NSDictionary *writeAttributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], QTMovieFlatten, nil]; //note that you can add a QTMovieExport key with an appropriate value here to export as a specific type
[myCombinedMovie writeToFile:destinationPath withAttributes:writeAttributes];
EDITED: I edited the above as the insertion times were being calculated incorrectly. This way seems easier. Below is the code all together as one, including enumerating through an array of movies and lots of error logging.
NSError *err = nil;
QTMovie *myCombinedMovie = [[QTMovie alloc] initToWritableData:[NSMutableData data] error:&err];
if (err)
{
    NSLog(@"Error creating myCombinedMovie: %@", [err localizedDescription]);
    return;
}
NSArray *myMovieURLs = [NSArray arrayWithObjects:[NSURL fileURLWithPath:@"/path/to/the/firstmovie.mov"], [NSURL fileURLWithPath:@"/path/to/the/secondmovie.mov"], nil];
for (NSURL *url in myMovieURLs)
{
    QTMovie *theMovie = [QTMovie movieWithURL:url error:&err];
    if (err)
    {
        NSLog(@"Error loading one of the movies: %@", [err localizedDescription]);
        return;
    }
    QTTimeRange timeRange = QTMakeTimeRange(QTZeroTime, [theMovie duration]);
    QTTime insertionTime = [myCombinedMovie duration];
    [myCombinedMovie insertSegmentOfMovie:theMovie timeRange:timeRange atTime:insertionTime];
}
NSDictionary *writeAttributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], QTMovieFlatten, nil];
BOOL success = [myCombinedMovie writeToFile:@"/path/to/outputmovie.mov" withAttributes:writeAttributes error:&err];
if (!success)
{
    NSLog(@"Error writing movie: %@", [err localizedDescription]);
    return;
}
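As for the QTMovieExport key mentioned earlier, here is a hedged sketch of exporting to a different container instead of flattening. QTMovieExportType takes an OSType from QuickTime's headers, and which types actually work depends on the export components installed, so treat this as an assumption rather than a recipe:
// Assumption: a 3GPP export component is available (kQTFileType3GPP comes from QuickTime's headers).
NSDictionary *exportAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], QTMovieExport,
    [NSNumber numberWithLong:kQTFileType3GPP], QTMovieExportType, nil];
[myCombinedMovie writeToFile:@"/path/to/outputmovie.3gp" withAttributes:exportAttributes error:&err];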

Recording Microphone and Music at the same time with AVAudioEngine

I'm currently trying to record a player node and an input node at the same time, but I'm having some difficulty. I can get them to record individually, and at some point I had my code set up to record both. The only problem is that when I manage to do this, the recorded content from the input node plays through the speakers as it's recording, creating an echo that eventually builds into feedback that is too much to bear. So I've tried to shape my code so the input node doesn't play through the speakers, but I'm finding it difficult.
I'm trying to set up my code as follows:
Player Node                          Input Node (mic)
     |                                     |
     |                                     |
     |                                     |
     v         AVAudioConnectionPoint      v
Main Mixer ------------------------->> Second Mixer
     |                                     |
     |                                 Node Tap
     |
     v
  Output
I've tried a number of combinations but I can't put them all here, so I'll lay out some fundamental code and hope to Jesus someone has a little expertise in this area.
First I create my engine and attach the nodes:
- (void)createEngineAndAttachNodes
{
    _engine = [[AVAudioEngine alloc] init];
    [_engine attachNode:_player];
    _inputOne = _engine.inputNode;
    _outputOne = _engine.outputNode;
}
Then I make the engine connections (this is where I need to know where I'm going wrong):
- (void)makeEngineConnections
{
    _mainMixerOne = [_engine mainMixerNode];
    _secondaryMixerOne = [_engine mainMixerNode];
    _commonFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                     sampleRate:44100
                                                       channels:2
                                                    interleaved:false];
    AVAudioFormat *stereoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100 channels:2];
    [_engine connect:_player to:_mainMixerOne format:stereoFormat];
    [_engine connect:_mainMixerOne to:_outputOne format:_commonFormat];
    [_engine connect:_inputOne to:_secondaryMixerOne format:_commonFormat];
    // Fan out mainMixer
    NSArray<AVAudioConnectionPoint *> *destinationNodes = [NSArray arrayWithObjects:
        [[AVAudioConnectionPoint alloc] initWithNode:_mainMixerOne bus:1],
        [[AVAudioConnectionPoint alloc] initWithNode:_secondaryMixerOne bus:0], nil];
    [_engine connect:_player toConnectionPoints:destinationNodes fromBus:0 format:_commonFormat];
}
Whenever I try to directly connect one mixer to the other I get the error:
Thread 1: EXC_BAD_ACCESS (code=2)
Finally, here's where I try to put this all together in my record function. The function you see will record the player but not the input.
- (void)setupRecordOne
{
    NSError *error;
    if (!_mixerFileOne) _mixerFileOne = [NSURL URLWithString:[NSTemporaryDirectory() stringByAppendingString:@"mixerOutput.caf"]];
    // AVAudioMixerNode *mainMixer = [_engine mainMixerNode];
    AVAudioFile *mixerOutputFile = [[AVAudioFile alloc] initForWriting:_mixerFileOne
                                                              settings:_commonFormat.settings
                                                                 error:&error];
    NSAssert(mixerOutputFile != nil, @"mixerOutputFile is nil, %@", [error localizedDescription]);
    [self startEngine];
    [_secondaryMixerOne installTapOnBus:0
                             bufferSize:4096
                                 format:_commonFormat
                                  block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        NSError *error;
        BOOL success = NO;
        success = [mixerOutputFile writeFromBuffer:buffer error:&error];
        NSAssert(success, @"error writing buffer data to file, %@", [error localizedDescription]);
    }];
    _isRecording = YES;
}
The only way I can record the input is by setting the node tap as follows:
[self startEngine];
[_inputOne installTapOnBus:0
                bufferSize:4096
                    format:_commonFormat
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    NSError *error;
    BOOL success = NO;
    success = [mixerOutputFile writeFromBuffer:buffer error:&error];
    NSAssert(success, @"error writing buffer data to file, %@", [error localizedDescription]);
}];
But then it won't record the player.
Please tell me there is someone out there who knows what to do.
Thanks,
Joshua
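One thing that stands out in makeEngineConnections above (an observation and a sketch, not a tested fix): _secondaryMixerOne is assigned [_engine mainMixerNode], so _mainMixerOne and _secondaryMixerOne end up pointing at the same node, which could also explain a crash when trying to connect "one mixer to the other". A second mixer has to be created and attached explicitly, something like:
// Hedged sketch: give the input chain its own mixer instead of reusing mainMixerNode.
_secondaryMixerOne = [[AVAudioMixerNode alloc] init];
[_engine attachNode:_secondaryMixerOne];
// ...then connect _inputOne to _secondaryMixerOne and tap it as in setupRecordOne.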

Creating a CVPixelBufferRef from an IDeckLinkVideoInputFrame

I'm using the Blackmagic DeckLink SDK to try to capture frames from a Blackmagic device.
I'm trying to grab the pixel data from an IDeckLinkVideoInputFrame in the DeckLinkController::VideoInputFrameArrived callback and convert it to a CVPixelBufferRef to be able to write it to disk with AVFoundation's AVAssetWriterInputPixelBufferAdaptor and AVAssetWriter. The code I'm using seems to be working, apart from the fact that all frames written to disk are green. (Blackmagic's example code that generates a preview on screen does show an image, so the device and device settings should be OK.)
The AVAssetWriter is set up as follows:
writer = [[AVAssetWriter assetWriterWithURL:destinationUrl
fileType:AVFileTypeAppleM4V
error:&error] retain];
if(error)
NSLog(#"ERROR: %#", [error localizedDescription]);
NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264
forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt:1920]
forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt:1080]
forKey: AVVideoHeightKey];
NSMutableDictionary * compressionProperties = [[NSMutableDictionary alloc] init];
[compressionProperties setObject: [NSNumber numberWithInt: 1000000]
forKey: AVVideoAverageBitRateKey];
[compressionProperties setObject: [NSNumber numberWithInt: 16]
forKey: AVVideoMaxKeyFrameIntervalKey];
[compressionProperties setObject: AVVideoProfileLevelH264Main31
forKey: AVVideoProfileLevelKey];
[outputSettings setObject: compressionProperties
forKey: AVVideoCompressionPropertiesKey];
writerVideoInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings] retain];
NSMutableDictionary * pixBufSettings = [[NSMutableDictionary alloc] init];
[pixBufSettings setObject: [NSNumber numberWithInt: kCVPixelFormatType_422YpCbCr8_yuvs]
forKey: (NSString *) kCVPixelBufferPixelFormatTypeKey];
[pixBufSettings setObject: [NSNumber numberWithInt: 1920]
forKey: (NSString *) kCVPixelBufferWidthKey];
[pixBufSettings setObject: [NSNumber numberWithInt: 1080]
forKey: (NSString *) kCVPixelBufferHeightKey];
writerVideoInput.expectsMediaDataInRealTime = YES;
writer.shouldOptimizeForNetworkUse = NO;
adaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerVideoInput
sourcePixelBufferAttributes:pixBufSettings] retain];
[writer addInput:writerVideoInput];
For reference, these output settings and compression options should be correct, but I have tried several different alternatives.
When a frame comes in from the device, I convert it to a CVPixelBufferRef as follows:
void *videoData;
int64_t frameTime;
int64_t frameDuration;
videoFrame->GetBytes(&videoData);
videoFrame->GetStreamTime(&frameTime, &frameDuration, 3000);
CMTime presentationTime = CMTimeMake(frameDuration, 3000);
CVPixelBufferRef buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
CVPixelBufferLockBaseAddress(buffer, 0);
void *rasterData = CVPixelBufferGetBaseAddress(buffer);
memcpy(rasterData, videoData, (videoFrame->GetRowBytes()*videoFrame->GetHeight()));
CVPixelBufferUnlockBaseAddress(buffer, 0);
if (buffer)
{
    if (![adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime]) {
        NSLog(@"ERROR appending pixelbuffer: %@", writer.error);
        [writerVideoInput markAsFinished];
        if (![writer finishWriting])
            NSLog(@"ERROR finishing writing: %@", [writer.error localizedDescription]);
    }
    else {
        NSLog(@"SUCCESS");
        if (buffer)
            CVPixelBufferRelease(buffer);
    }
}
This code is appending frames to the AVAssetWriterInputPixelBufferAdaptor, but all the frames are green.
Can anybody see what I'm doing wrong here, or does anybody have any experience using AVFoundation capturing and compressing frames using the BlackMagic Decklink SDK?
When you see 'green' and are working in the YUV color space, you are seeing values of 0 in the buffer. Since the AVAssetWriter is writing frames, the odds are that 'buffer' contains values of 0. I see a couple of ways that could happen.
1) The buffer you are appending is most likely initialized with 0, so it is possible your copy is failing. In your code that could happen if (videoFrame->GetRowBytes()*videoFrame->GetHeight()) somehow evaluates to 0. It seems impossible, but I'd check that.
2) CVPixelBufferGetBaseAddress is either returning the wrong pointer, or the pixel buffer itself is the wrong format or possibly invalid (yet didn't crash because of safeguards in the API).
3) 'videoData' is, for whatever reason, itself full of 0. DeckLinkCaptureDelegate returns frames with nothing in them when it doesn't like the input format (usually this is because the BMDDisplayMode passed to EnableVideoInput doesn't match your video source).
int flags = videoFrame->GetFlags();
if (flags & bmdFrameHasNoInputSource)
{
    // our input format doesn't match the source
}
Other than changing your source mode and trying again, a quick check would be to change the memcpy line to the following:
memset(rasterData, 0x3f, 1920*1080*2);
If you still see green frames then take a hard look at #2. If you see different colored frames, then your problem is #1 or #3 and most likely the resolution of your video input doesn't match the BMDDisplayMode that you chose.
One other thing to note: I think the line where you create the presentation time is wrong. It should probably be the following (note the change from frameDuration to frameTime):
CMTime presentationTime = CMTimeMake(frameTime, 3000);
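One more thing worth checking, since the conversion code copies the whole frame with a single memcpy (this is an assumption on my part, not something covered above): CVPixelBufferGetBytesPerRow(buffer) is not guaranteed to equal videoFrame->GetRowBytes(), and if the two differ, a flat copy will skew or truncate the image. A row-by-row copy, using the variable names from the question, avoids that:
// Sketch: copy row by row in case the pixel buffer's stride differs from the DeckLink frame's.
CVPixelBufferLockBaseAddress(buffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
uint8_t *src = (uint8_t *)videoData;
size_t dstStride = CVPixelBufferGetBytesPerRow(buffer);
size_t srcStride = (size_t)videoFrame->GetRowBytes();
size_t bytesPerRow = MIN(dstStride, srcStride);
for (long row = 0; row < videoFrame->GetHeight(); row++) {
    memcpy(dst + row * dstStride, src + row * srcStride, bytesPerRow);
}
CVPixelBufferUnlockBaseAddress(buffer, 0);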

NSPredicate (== comparison to numeric string) not working

So here's the deal:
// A. Inserting
Item *item = (Item *)[NSEntityDescription insertNewObjectForEntityForName:@"Item" inManagedObjectContext:managedObjectContext];
NSError *error = nil;
[managedObjectContext save:&error];
..
[item setItemID:@"15"];
[managedObjectContext save:&error];
NSLog(@"Error: %@", error); // outputs (null)
// B. Fetching all records
NSFetchRequest *request = [[NSFetchRequest alloc] initWithEntityName:@"Item"];
request.returnsObjectsAsFaults = NO;
NSArray *allItems = [managedObjectContext executeFetchRequest:request error:nil];
NSLog(@"All Items: %@", allItems);
Now, this outputs a huge list, containing the previously inserted item:
"<Item: 0x7eb7bc0> (entity: Item; id: 0x7eb71c0 <x-coredata://BC6EB71C-47C0-4445-905D-7D42E6FC611B/Item/p2> ; data: {\n itemID = 15;\n})"
So far so good, but I want to check whether this particular item does exist (I know it may sound strange in this context, but it really makes sense here). However, the predicate I'm using fails (and I don't see why):
// C. Fetching a single record
NSFetchRequest *singleRequest = [[NSFetchRequest alloc] initWithEntityName:@"Item"];
singleRequest.predicate = [NSPredicate predicateWithFormat:@"itemID == %@", @"15"];
NSError *error = nil;
NSArray *results = [managedObjectContext executeFetchRequest:singleRequest error:&error];
NSLog(@"Error: %@", error); // outputs (null) again
NSLog(@"Results: %@", results); // outputs () ...
I don't really understand how to "fix" this.
Here are some other facts:
Using persistent SQLite store with CoreData (pretty much default configuration, not even relationships, just plain key-value in 3 tables).
The itemIDs always are strings
When reopening the app, the second code block does return an item (= the item inserted in the previous run). Could it be that save: writes to disk asynchronously, and that the NSPredicate only filters items written to disk?
Part A happens in a different method, but on the same thread as B and C. C is directly below B and both are placed in the same method.
If you're comparing strings, try this:
@"itemID LIKE %@"
Have a read of this, the section titled "String Comparisons".
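Applied to the fetch in part C, that would look something like this:
singleRequest.predicate = [NSPredicate predicateWithFormat:@"itemID LIKE %@", @"15"];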
Okay, got it. I used @synthesize instead of @dynamic in the particular model's .m file. Didn't know it would be such a big problem .. :)
For some reason, updating the SQLite database goes wrong when using @synthesize ..
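In other words, the Core Data model subclass should let Core Data provide the accessors, something along these lines (the property name is taken from the question's Item entity):
// Item.m
@implementation Item
@dynamic itemID;   // Core Data generates the accessor implementations at runtime
@end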

Parsing a .csv file from a server with Objective-C

I have looked for an answer for a long time and still not found one, so I thought I'd ask the question myself.
In my iPad app, I need to have the capability of parsing a .csv file in order to populate a table. I am using http://michael.stapelberg.de/cCSVParse to parse the csv files. However, I have only been successful in parsing local files. I have been trying to access a file from a server but am getting nowhere.
Here is my code to parse a local .csv file:
- (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex
{
    if (buttonIndex == 1)
    {
        //UITextField *reply = [alertView textFieldAtIndex:buttonIndex];
        NSString *fileName = input.text;
        NSLog(@"fileName %@", fileName);
        CSVParser *parser = [CSVParser new];
        if ([fileName length] != 0)
        {
            NSString *pathAsString = [[NSBundle mainBundle] pathForResource:fileName ofType:@"csv"];
            NSLog(@"%@", pathAsString);
            if (pathAsString != nil)
            {
                [parser openFile:pathAsString];
                NSMutableArray *csvContent = [parser parseFile];
                NSLog(@"%@", csvContent);
                [parser closeFile];
                NSMutableArray *heading = [csvContent objectAtIndex:0];
                [csvContent removeObjectAtIndex:0];
                NSLog(@"%@", heading);
                AppDelegate *ap = [AppDelegate sharedAppDelegate];
                NSManagedObjectContext *context = [ap managedObjectContext];
                NSString *currentHeader = [heading objectAtIndex:0];
                NSString *currentValueInfo = [heading objectAtIndex:1];
                NSManagedObject *newObject = [NSEntityDescription insertNewObjectForEntityForName:@"Field" inManagedObjectContext:context];
                [newObject setValue:@"MIS" forKey:@"header"];
                [newObject setValue:currentHeader forKey:@"fieldName"];
                for (NSArray *current in csvContent)
                {
                    NSManagedObject *newField = [NSEntityDescription insertNewObjectForEntityForName:@"Field" inManagedObjectContext:context];
                    [newField setValue:currentHeader forKey:@"header"];
                    [newField setValue:currentValueInfo forKey:@"valueInfo"];
                    NSLog(@"%@", [current objectAtIndex:0]);
                    [newField setValue:[current objectAtIndex:0] forKey:@"fieldName"];
                    [newField setValue:[NSNumber numberWithDouble:[[current objectAtIndex:1] doubleValue]] forKey:@"value"];
                }
                NSError *error;
                if (![context save:&error])
                {
                    NSLog(@"Couldn't save: %@", [error localizedDescription]);
                }
                [self storeArray];
                [self.tableView reloadData];
            }
        }
    }
    input.text = nil;
}
Forgive the weird beginning and ending brace indentation. :/
Anyway, that is my code to take input from a user and access a file locally, which I'm sure you guys have realized already. Now I want to know how to get the path of a file on my server.
Also, if you guys see anything else wrong, such as writing style and other bad habits, please tell me, as I'm new to iOS.
Thank you so much in advance! If you didn't understand my question please clarify as I'm bad at explaining myself at times! :)
I'm guessing you are trying to get data from a server's .csv file and want to show that data in a table view.
So I suggest you get that .csv file's data into an NSData object first and then work on that.
NSData *responseData = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"serverUrl"]];
NSString *csvResponseString = [[[NSString alloc] initWithData:responseData encoding:NSUTF8StringEncoding] autorelease];
NSLog(@"responseString--->%@", csvResponseString);
Now use NSString's componentsSeparatedByString: method with a comma (",") as the separator:
NSArray *arrSepratedData = [csvResponseString componentsSeparatedByString:@","];
Now use this array to populate the UITableView.
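If cCSVParse only takes file paths (I'm not sure whether it has an in-memory entry point), a workaround sketch is to write the downloaded data to a temporary file and reuse the parsing calls from the question:
// Assumption: CSVParser only exposes the file-based API shown in the question.
NSString *tempPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"remote.csv"];
[responseData writeToFile:tempPath atomically:YES];
CSVParser *parser = [CSVParser new];
[parser openFile:tempPath];
NSMutableArray *csvContent = [parser parseFile];
[parser closeFile];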

Displaying images in UIWebView from Core Data record

So I have an app I've written for the iPad, and I'd like to be able to allow users to insert images into their documents by selecting an image from an album or the camera. All that works great. Because the user might keep the document longer than they keep the image in an album, I make a copy of it, scale it down a bit, and store it in a core data table that is just used for this purpose.
I store the image like this:
NSManagedObjectContext* moc=[(ActionNote3AppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];
NSString* imageName = [NSString stringWithFormat:@"img%lf.png", [NSDate timeIntervalSinceReferenceDate]];
Image* anImage = [NSEntityDescription insertNewObjectForEntityForName:@"Image" inManagedObjectContext:moc];
anImage.imageName=imageName;
anImage.imageData=UIImagePNGRepresentation(theImage);
NSError* error=nil;
if(![moc save:&error]) {...
I sub-class NSURLCache, as suggested on Cocoa With Love, and override cachedResponseForRequest thusly:
- (NSCachedURLResponse *)cachedResponseForRequest:(NSURLRequest *)request {
    NSString *pathString = [[[request URL] absoluteString] lastPathComponent];
    NSData *data = [Image dataForImage:pathString];
    if (!data) {
        return [super cachedResponseForRequest:request];
    }
    NSURLResponse *response = [[[NSURLResponse alloc] initWithURL:[request URL]
                                                         MIMEType:[NSString stringWithString:@"image/png"]
                                            expectedContentLength:[data length]
                                                 textEncodingName:nil] autorelease];
    NSCachedURLResponse *cachedResponse = [[[NSCachedURLResponse alloc] initWithResponse:response data:data] autorelease];
    return cachedResponse;
}
I also make sure the app uses the sub-classed NSURLCache by doing this in my app delegate in didFinishLaunchingWithOptions:
ANNSUrlCache* uCache=[[ANNSUrlCache alloc]init];
[NSURLCache setSharedURLCache:uCache];
The method that returns the image data from the core data record looks like this:
+ (NSData *)dataForImage:(NSString *)name {
    NSData *retval = nil;
    NSManagedObjectContext *moc = [(ActionNote3AppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];
    NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Image" inManagedObjectContext:moc];
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:entityDescription];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"imageName == %@", name];
    [request setPredicate:predicate];
    NSError *error = nil;
    NSArray *array = [moc executeFetchRequest:request error:&error];
    if ([array count] > 0) {
        retval = ((Image *)[array objectAtIndex:0]).imageData;
    }
    return retval;
}
To insert the image into the web view, I have an HTML img tag where the name in src="" relates back to the name in the image table. The point of the NSURLCache code above is to watch for a name we have stored in the image table, intercept it, and send the actual image data for the image requested.
When I run this, I see the image getting requested in my sub-classed NSURLCache object. It is finding the right record and returning the data as it should. However, I'm still getting the image-not-found icon in my UIWebView.
So Marcus (below) suggested that I not store the image data in a core data table. So I made changes to accommodate that:
Storing the image:
NSString* iName = [NSString stringWithFormat:@"img%lf.png", [NSDate timeIntervalSinceReferenceDate]];
NSData* iData=UIImagePNGRepresentation(theImage);
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:iName];
[iData writeToFile:fullPathToFile atomically:NO];
Retrieving the image:
- (NSCachedURLResponse *)cachedResponseForRequest:(NSURLRequest *)request {
    NSString *pathString = [[[request URL] absoluteString] lastPathComponent];
    NSString *iPath = [Image pathForImage:pathString];
    if (!iPath) {
        return [super cachedResponseForRequest:request];
    }
    NSData *idata = [NSData dataWithContentsOfFile:iPath];
    NSURLResponse *response = [[[NSURLResponse alloc] initWithURL:[request URL]
                                                         MIMEType:@"image/png"
                                            expectedContentLength:[idata length]
                                                 textEncodingName:nil] autorelease];
    NSCachedURLResponse *cachedResponse = [[[NSCachedURLResponse alloc] initWithResponse:response data:idata] autorelease];
    return cachedResponse;
}
In debug mode, I see that idata does get loaded with the proper image data.
And I still get the image-not-found image! Obviously, I'm doing something wrong here. I just don't know what it is.
So... What am I doing wrong here? How can I get this to work properly?
Thank you.
I would strongly suggest that you do not store the binary data in Core Data. Storing binary data in Core Data, especially on an iOS device, causes severe performance issues with the cache.
The preferred way would be to store the actual binary data on disk in a file and have a reference to the file stored within Core Data. From there it is a simple matter to change the image url to point at the local file instead.
So it turns out I was way overthinking this. When I write the HTML, I just write the path to the image in with the image tag. Works like a charm.
I would love to know why the solution I posed in my question did not work, though.
And, I did wind up not storing the images in a table.
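For reference, a minimal sketch of that final approach (webView here is a hypothetical UIWebView outlet; iName and documentsDirectory are the names used in the storage code above): write the image's file name into the img tag and give the web view a baseURL so the relative path resolves:
// Assumption: webView is an existing UIWebView; iName/documentsDirectory are as in the storage code.
NSString *html = [NSString stringWithFormat:@"<p>Document text</p><img src=\"%@\" />", iName];
NSURL *baseURL = [NSURL fileURLWithPath:documentsDirectory isDirectory:YES];
[webView loadHTMLString:html baseURL:baseURL];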