The first time I call this method, file1 is nil and file2 is returned. When this happens, the file plays normally (so the call site should be fine). But when I call it a second time, it returns an NSURL which AVAudioPlayer does not play. My guess is that I have missed something in the header. In the debugger I have seen that totalLength is exactly as long as the data's length.
+ (NSURL *)mergeFile1:(NSURL *)file1 withFile2:(NSURL *)file2 {
    if (file1 == nil) {
        return [file2 copy];
    }
    NSData *wav1Data = [NSData dataWithContentsOfURL:file1];
    NSData *wav2Data = [NSData dataWithContentsOfURL:file2];
    // Assumes a 46-byte header on both files; everything after it is sample data.
    int wav1DataSize = (int)[wav1Data length] - 46;
    int wav2DataSize = (int)[wav2Data length] - 46;
    if (wav1DataSize <= 0 || wav2DataSize <= 0) {
        return nil;
    }
    // Keep file1's header, then append both files' sample data.
    NSMutableData *soundFileData = [NSMutableData dataWithData:[wav1Data subdataWithRange:NSMakeRange(0, 46)]];
    [soundFileData appendData:[wav1Data subdataWithRange:NSMakeRange(46, wav1DataSize)]];
    [soundFileData appendData:[wav2Data subdataWithRange:NSMakeRange(46, wav2DataSize)]];
    unsigned int totalLength = (unsigned int)[soundFileData length];
    NSLog(@"Calculated: %u - Real: %lu", totalLength, (unsigned long)[soundFileData length]);
    // Patch the size fields at offsets 4 (RIFF chunk) and 42 (data chunk).
    [soundFileData replaceBytesInRange:NSMakeRange(4, 4)
                             withBytes:&(UInt32){NSSwapHostIntToLittle(totalLength - 8)}];
    [soundFileData replaceBytesInRange:NSMakeRange(42, 4)
                             withBytes:&(UInt32){NSSwapHostIntToLittle(totalLength)}];
    [soundFileData writeToURL:file1 atomically:YES];
    return [file1 copy];
}
If anyone sees anything that could help, it would be much appreciated!
Any questions will be answered ASAP.
EDIT
I know there are two kinds of WAV headers: 44 bytes or 46 bytes. I have tried both.
EDIT
I have looked at the Audio File Services Reference, which contains a lot of nice stuff I might want to use, but I can't figure out how to use it. I'm not really familiar with C. I hope someone can help me out with this.
EDIT
An example of a merged WAV file can be found here: 7--443522512
It looks like your WAV file includes a broken FLLR chunk before the data chunk, or at least VLC thinks the FLLR chunk is over 2 GB large, so it tries to skip to the next chunk, which is beyond the end of the file.
Maybe you should try to create WAV files without an FLLR chunk before merging; the kAudioFileFlags_DontPageAlignAudioData flag seems to make Audio File Services skip it.
Another option is to extract the data chunks and write a new WAV file. I did a proof-of-concept implementation here: https://gist.github.com/1555889
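For reference, here is a minimal sketch of that chunk-walking idea (my own illustration, not the gist's code): it scans the RIFF chunk list to find the "data" chunk instead of assuming a fixed 44- or 46-byte header, which is exactly what breaks when an FLLR chunk is present. It assumes well-formed little-endian PCM WAV input.
static NSRange dataChunkRange(NSData *wav) {
    const uint8_t *bytes = [wav bytes];
    NSUInteger pos = 12;                          // skip "RIFF" <size> "WAVE"
    while (pos + 8 <= [wav length]) {
        uint32_t chunkSize;
        memcpy(&chunkSize, bytes + pos + 4, 4);
        chunkSize = CFSwapInt32LittleToHost(chunkSize);
        if (memcmp(bytes + pos, "data", 4) == 0) {
            return NSMakeRange(pos + 8, MIN(chunkSize, [wav length] - pos - 8));
        }
        pos += 8 + chunkSize + (chunkSize & 1);   // chunks are word-aligned
    }
    return NSMakeRange(NSNotFound, 0);            // no data chunk found
}
With the two data ranges in hand, you can concatenate the sample data and write a fresh header of a known layout, rather than patching file1's header in place.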
The file contains metadata in between binary data. I'm able to parse the first line with the title Agent_of_Chang2e, but I need to get the metadata at the bottom of the header as well. I know there are no standard specifics for it.
This code isn't able to decode the bottom lines. For example, I get the following wrongly formatted text:
FÃHANGE</b1èrX)¯ÌiadenÕniverse<sup><smalÀ|®¿8</¡Îovelÿ·?=SharonÌeeándÓteveÍiller8PblockquoteßßÚ>TIa÷orkyfiction.Áll#eãacÐ0hðortrayedén{n)áreïrzus0¢°usly.Ôhatíean0authhmxétlõp.7N_\
©ß© 1988âyÓOOKãsòeserved.0ðart)publicaZmayâehproduc
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
char buffer[1024];
FILE *file = fopen([path UTF8String], "r");
if (file != NULL)
{
    // Reads line by line as ASCII text, which fails on the binary sections.
    while (fgets(buffer, 1024, file) != NULL)
    {
        NSString *string = [[NSString alloc] initWithCString:buffer encoding:NSASCIIStringEncoding];
        NSLog(@"%@", string);
        [string release];
    }
    fclose(file);
}
[pool drain];
nielsbot already posted a link to the format specification.
As you can read there, the file is not a text file but binary encoded. Parsing it with NSString instances is not a good idea.
You have to read the file as binary data, i.e., using NSData:
NSData *content = [NSData dataWithContentsOfFile:path];
Then you have to extract the relevant information yourself. For example, if you want to read the uncompressed text length, you will find in the linked document that this information starts at position 4 and has a length of 4 bytes.
int32_t uncompressedTextLength; // 4 bytes are 32 bits
[content getBytes:&uncompressedTextLength range:NSMakeRange(4, 4)];
Maybe you also have to deal with endianness.
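If you do, a sketch (assuming the field is stored big-endian, as Palm database headers are):
// Assumption: the length field is big-endian in the file.
uncompressedTextLength = (int32_t)CFSwapInt32BigToHost((uint32_t)uncompressedTextLength);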
Use NSTask or system() to pass the file through the strings utility and parse the output of that:
strings /bin/bash | more
...
...
677778899999999999999999999999999999:::;;<<====>>>>>>>>>>>????????
@(#)PROGRAM:bash PROJECT:bash-92
...
...
First, I am pretty sure the texts will be UTF-8 or UTF-16 encoded.
Second, you cannot just take an arbitrary 1024 bytes and expect them to work as text. What about byte order (big-endian vs. little-endian)?
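For example, once you know a record's byte range and encoding, decoding just that slice is safer than fgets (recordOffset and recordLength are hypothetical placeholders here):
NSData *content = [NSData dataWithContentsOfFile:path];
NSData *record = [content subdataWithRange:NSMakeRange(recordOffset, recordLength)];
NSString *text = [[NSString alloc] initWithData:record encoding:NSUTF8StringEncoding];
if (text == nil) {
    // wrong encoding guess; try NSUTF16BigEndianStringEncoding etc.
}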
My Mac app needs to create a file with a fixed size. For example, I want to create a file named test.txt with a fixed size of 1069 bytes. How can I do that? I use the code below to write the file:
NSError *err;
NSString *arrayText = [writeArray componentsJoinedByString:@"\n"];
[filemgr createFileAtPath:[NSString stringWithFormat:@"%@/test.txt", sd_url] contents:nil attributes:nil];
[arrayText writeToFile:[NSString stringWithFormat:@"%@/test.txt", sd_url] atomically:YES encoding:NSUTF8StringEncoding error:&err];
Thanks
This function will write some junk data to the file:
+ (BOOL)writeToFile:(NSString *)path withSize:(size_t)bytes
{
    FILE *file = fopen([path UTF8String], "wb");
    if (file == NULL)
        return NO;
    void *data = malloc(bytes); // contents are uninitialized "junk"
    if (data == NULL) {
        fclose(file);
        return NO;
    }
    fwrite(data, 1, bytes, file);
    free(data);
    fclose(file);
    return YES;
}
If you don't want to use fwrite:
+ (BOOL)writeToFile:(NSString *)path withSize:(size_t)bytes
{
    void *data = malloc(bytes); // contents are uninitialized "junk"
    if (data == NULL) {
        return NO;
    }
    NSData *ldata = [NSData dataWithBytes:data length:bytes];
    free(data); // dataWithBytes: copies, so the malloc'd buffer can go
    return [ldata writeToFile:path atomically:NO];
}
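Usage would then look like this (FileUtils is a hypothetical name for whatever class holds the method):
// Creates a 1069-byte file alongside the rest of your output.
BOOL ok = [FileUtils writeToFile:[NSString stringWithFormat:@"%@/test.txt", sd_url]
                        withSize:1069];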
In order for the file to contain a certain number of bytes, you must write that many bytes to the file. There's no way to make a file's size be something other than the number of bytes it contains.
You can use FSAllocateFork (or fcntl with F_PREALLOCATE) to reserve space to write into. You'd use this, for example, if you were implementing your own download logic (if, for some reason, NSURLDownload wasn't good enough) and wanted to (a) make sure enough space is available for the file you're downloading and (b) grab it before something else does.
But even that doesn't actually change the size of the file; it just ensures (if successful) that your writes will not fail for insufficient space.
The only way to truly grow a file is to write to it.
The best way to do that is to use either -[NSData writeToURL:options:error:], which is the easy way if you have all the data ready at once, or NSFileHandle, which lets you write the data in chunks rather than having to build it all up in memory first.
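A minimal sketch of the NSFileHandle route, assuming a zero-filled file of the fixed size is acceptable (truncateFileAtOffset: extends a shorter file with zero bytes):
NSString *path = [NSString stringWithFormat:@"%@/test.txt", sd_url];
[[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
NSFileHandle *fh = [NSFileHandle fileHandleForWritingAtPath:path];
[fh truncateFileAtOffset:1069]; // the file is now exactly 1069 bytes
[fh closeFile];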
I'm using some tricks to try to read the raw output of an AVAssetWriter while it is being written to disk. When I reassemble the individual files by concatenating them, the resulting file has exactly the same number of bytes as the AVAssetWriter's output file. However, the reassembled file will not play in QuickTime and cannot be parsed by FFmpeg because of data corruption. A few bytes here and there have been changed, rendering the resulting file unusable. I assume this occurs at the EOF boundary of each read, but the corruption isn't consistent.
I plan to eventually use code similar to this to parse individual H.264 NAL units out of the encoder's output so I can packetize them and send them over RTP; however, if I can't trust the data being read from disk, I might have to use another solution.
Is there an explanation/fix for this data corruption? And are there any other resources/links you have found on how to parse the NAL units to packetize over RTP?
Full code here: AVAppleEncoder.m
// Modified from
// http://www.davidhamrick.com/2011/10/13/Monitoring-Files-With-GCD-Being-Edited-With-A-Text-Editor.html
- (void)watchOutputFileHandle
{
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    int fildes = open([[movieURL path] UTF8String], O_EVTONLY);
    source = dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE, fildes,
                                    DISPATCH_VNODE_DELETE | DISPATCH_VNODE_WRITE | DISPATCH_VNODE_EXTEND | DISPATCH_VNODE_ATTRIB | DISPATCH_VNODE_LINK | DISPATCH_VNODE_RENAME | DISPATCH_VNODE_REVOKE,
                                    queue);
    dispatch_source_set_event_handler(source, ^
    {
        unsigned long flags = dispatch_source_get_data(source);
        if (flags & DISPATCH_VNODE_DELETE)
        {
            dispatch_source_cancel(source);
            //[blockSelf watchStyleSheet:path];
        }
        if (flags & DISPATCH_VNODE_EXTEND)
        {
            //NSLog(@"File size changed");
            NSError *error = nil;
            NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingFromURL:movieURL error:&error];
            if (error) {
                [self showError:error];
            }
            [fileHandle seekToFileOffset:fileOffset];
            NSData *newData = [fileHandle readDataToEndOfFile];
            if ([newData length] > 0) {
                NSLog(@"newData (%lld): %d bytes", fileOffset, [newData length]);
                NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
                NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
                NSString *movieName = [NSString stringWithFormat:@"%d.%lld.%d.mp4", fileNumber, fileOffset, [newData length]];
                NSString *path = [NSString stringWithFormat:@"%@/%@", basePath, movieName];
                [newData writeToFile:path atomically:NO];
                fileNumber++;
                fileOffset = [fileHandle offsetInFile];
            }
        }
    });
    dispatch_source_set_cancel_handler(source, ^(void)
    {
        close(fildes);
    });
    dispatch_resume(source);
}
Here are some similar questions I have found, though they don't exactly answer mine:
Get PTS from raw H264 mdat generated by iOS AVAssetWriter
streaming video FROM an iPhone
Parsing h.264 NAL units from a quicktime MOV file
Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone)
When I eventually figure this out, I will release an open source library to assist people who try to do this in the future.
Thank you!
Update: The corruption doesn't happen at the EOF boundary. It seems that parts of the file are re-written after finishWriting is called. The first file was chunked at 4 KB, so the changed area isn't anywhere near an EOF boundary. The file also seems to be corrupted near new "moov" elements when movieFragmentInterval is enabled.
Correct file on the left, broken file on the right.
I ended up abandoning the "read while it's written" approach in favor of a manual chunking approach, where I call finishWriting every five seconds on a background thread. I lost only a negligible number of frames using a method originally described here:
- (void)segmentRecording:(NSTimer *)timer {
    AVAssetWriter *tempAssetWriter = self.assetWriter;
    AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
    AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
    // Swap in the pre-built writer and inputs so recording continues immediately.
    self.assetWriter = queuedAssetWriter;
    self.audioEncoder = queuedAudioEncoder;
    self.videoEncoder = queuedVideoEncoder;
    //NSLog(@"Switching encoders");
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        [tempAudioEncoder markAsFinished];
        [tempVideoEncoder markAsFinished];
        if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
            if (![tempAssetWriter finishWriting]) {
                [self showError:[tempAssetWriter error]];
            }
        }
        if (self.readyToRecordAudio && self.readyToRecordVideo) {
            NSError *error = nil;
            self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[self newMovieURL] fileType:(NSString *)kUTTypeMPEG4 error:&error];
            if (error) {
                [self showError:error];
            }
            self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
            self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
            //NSLog(@"Encoder switch finished");
        }
    });
}
Full source code: https://github.com/chrisballinger/FFmpeg-iOS-Encoder/blob/master/AVSegmentingAppleEncoder.m
When reading a MOV file that is ACTIVELY being recorded on iOS, you MUST check the four bytes mentioned for changes, re-write those four bytes, then check for additional data in the file and send that additional data. When done, truncate the file to the file size written.
Obviously this depends on where you are sending the file. I send (offset, number of bytes) pairs to the receiver: so it goes "additional data", "more additional data", ..., the new data at (24, 4), "more additional data".
Typically iOS only writes the 4-byte size-of-data-section record when the file is about to be closed (i.e., after the last media write); see the information on QuickTime atoms. Unfortunately, this also means the MOV file is not PLAYABLE until recording is completed (and the movie descriptors are written at the END of the file).
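A hypothetical sketch of that re-check (moviePath and lastSizeField are placeholders; the offset 24 is the one from the example above and depends on which atoms precede mdat in your file):
NSFileHandle *fh = [NSFileHandle fileHandleForReadingAtPath:moviePath];
[fh seekToFileOffset:24];
NSData *sizeField = [fh readDataOfLength:4]; // the 4-byte size-of-data field
if (![sizeField isEqualToData:lastSizeField]) {
    // resend (24, 4) to the receiver before continuing with appended data
    lastSizeField = sizeField;
}
[fh closeFile];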
Currently, I'm doing a little test project to see if I can get samples from an AVAssetReader to play back using an AudioQueue on iOS.
I've read this ( Play raw uncompressed sound with AudioQueue, no sound ) and this ( How to correctly read decoded PCM samples on iOS using AVAssetReader -- currently incorrect decoding ), both of which actually did help. Before reading them, I was getting no sound at all. Now I'm getting sound, but the audio is playing SUPER fast. This is my first foray into audio programming, so any help is greatly appreciated.
I initialize the reader thusly:
NSDictionary * outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
output = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:uasset.tracks audioSettings:outputSettings];
[reader addOutput:output];
...
And I grab the data thusly:
CMSampleBufferRef ref = [output copyNextSampleBuffer];
// NSLog(@"%@", ref);
if (ref == NULL)
    return;
//copy data to file
//read next one
AudioBufferList audioBufferList;
NSMutableData *data = [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
// NSLog(@"%@", blockBuffer);
if (blockBuffer == NULL)
{
    [data release];
    return;
}
if (&audioBufferList == NULL)
{
    [data release];
    return;
}
//stash data in same object
for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    // NSData *throwData;
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    [self.delegate streamer:self didGetAudioBuffer:audioBuffer];
    /*
    Float32 *frame = (Float32 *)audioBuffer.mData;
    throwData = [NSData dataWithBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    [self.delegate streamer:self didGetAudioBuffer:throwData];
    [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    */
}
which eventually brings us to the audio queue, set up in this way:
//Apple's own code for canonical PCM
audioDesc.mSampleRate = 44100.0;
audioDesc.mFormatID = kAudioFormatLinearPCM;
audioDesc.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
audioDesc.mBytesPerPacket = 2 * sizeof (AudioUnitSampleType); // 8
audioDesc.mFramesPerPacket = 1;
audioDesc.mBytesPerFrame = 1 * sizeof (AudioUnitSampleType); // 4
audioDesc.mChannelsPerFrame = 2;
audioDesc.mBitsPerChannel = 8 * sizeof (AudioUnitSampleType); // 32
err = AudioQueueNewOutput(&audioDesc, handler_OSStreamingAudio_queueOutput, self, NULL, NULL, 0, &audioQueue);
if(err){
#pragma warning handle error
//never errs, am using breakpoint to check
return;
}
and we enqueue thusly:
while (inNumberBytes)
{
    size_t bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    if (bufSpaceRemaining < inNumberBytes)
    {
        // The current buffer can't take everything; enqueue what's filled so far.
        AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
        fillBuf->mAudioDataByteSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(audioQueue, fillBuf, 0, NULL);
    }
    bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    size_t copySize;
    if (bufSpaceRemaining < inNumberBytes)
    {
        copySize = bufSpaceRemaining;
    }
    else
    {
        copySize = inNumberBytes;
    }
    if (bytesFilled > packetBufferSize)
    {
        return;
    }
    AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
    memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)(inInputData + offset), copySize);
    bytesFilled += copySize;
    packetsFilled = 0;
    inNumberBytes -= copySize;
    offset += copySize;
}
I tried to be as code-inclusive as possible so as to make it easy for everyone to point out where I'm being a moron. That being said, I have a feeling my problem occurs either in the output settings declaration for the track reader or in the actual declaration of the AudioQueue (where I describe to the queue what kind of audio I'm going to be sending it). The fact of the matter is, I don't really know mathematically how to actually generate those numbers (bytes per packet, frames per packet, what have you). An explanation of that would be greatly appreciated, and thanks for the help in advance.
Not sure how much of an answer this is, but there will be too much text and links for a comment and hopefully it will help (maybe guide you to your answer).
First off, I know from my current project that adjusting the sample rate will affect the speed of the sound, so you can try playing with those settings. But 44k is what I see in most default implementations, including the Apple example SpeakHere. However, I would spend some time comparing your code to that example, because there are quite a few differences, like checking before enqueueing.
First, check out this posting: https://stackoverflow.com/a/4299665/530933
It talks about how you need to know the audio format, specifically how many bytes are in a frame, and casting appropriately.
Also, good luck. I have posted quite a few questions here, on the Apple forums, and on the iOS forum (not the official one), with very few responses or much help. To get where I am today (audio recording and streaming in ulaw), I ended up having to open an Apple Dev Support ticket, which, prior to tackling audio, I never knew existed. One good thing is that if you have a valid dev account you get two incidents for free! CoreAudio is not fun: documentation is sparse, and besides SpeakHere there are not many examples. One thing I did find is that the framework headers have some good info, and there is also this book. Unfortunately, I have only started the book; otherwise I might be able to help you further.
You can also check some of my own postings, which I have tried to answer to the best of my ability.
This is my main audio question, into which I have put a lot of time compiling all the pertinent links and code:
using AQRecorder (audioqueue recorder example) in an objective c class
trying to use AVAssetWriter for ulaw audio (2)
For some reason, even though every example I've seen of an audio queue using LPCM had
ASBD.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
for me it turns out I needed
ASBD.mBitsPerChannel = 2 * bytesPerSample;
as part of this full description:
ASBD.mFormatID = kAudioFormatLinearPCM;
ASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
ASBD.mBytesPerPacket = bytesPerSample;
ASBD.mBytesPerFrame = bytesPerSample;
ASBD.mFramesPerPacket = 1;
ASBD.mBitsPerChannel = 2*bytesPerSample;
ASBD.mChannelsPerFrame = 2;
ASBD.mSampleRate = 48000;
I have no idea why this works, which bothers me a great deal... but hopefully I can figure it all out eventually.
If anyone can explain to me why this works, I'd be very thankful.
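For what it's worth, the usual arithmetic for interleaved linear PCM is mBytesPerFrame = mChannelsPerFrame * (mBitsPerChannel / 8) and mBytesPerPacket = mFramesPerPacket * mBytesPerFrame. Here is a sketch of an ASBD built that way for the 16-bit stereo output requested from the AVAssetReader earlier; note it uses plain packed-integer flags rather than kAudioFormatFlagsAudioUnitCanonical, which describes 32-bit 8.24 fixed-point samples:
AudioStreamBasicDescription pcmDesc = {0};
pcmDesc.mSampleRate       = 44100.0;
pcmDesc.mFormatID         = kAudioFormatLinearPCM;
pcmDesc.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
pcmDesc.mChannelsPerFrame = 2;                    // stereo, interleaved
pcmDesc.mBitsPerChannel   = 16;                   // matches AVLinearPCMBitDepthKey above
pcmDesc.mBytesPerFrame    = pcmDesc.mChannelsPerFrame * (pcmDesc.mBitsPerChannel / 8); // 4
pcmDesc.mFramesPerPacket  = 1;                    // always 1 for uncompressed audio
pcmDesc.mBytesPerPacket   = pcmDesc.mFramesPerPacket * pcmDesc.mBytesPerFrame;         // 4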
I have this file, and I need to read its first bytes to check some information.
I don't need to load the whole file, only the beginning.
The code in C is, more or less, what follows (it is a big program, so I've only written the basic functionality here).
Now I want to make it 100% Objective-C, but I cannot find a way to do it properly.
FILE *f;
char *buf;
f = fopen("/Users/foo/Desktop/theFile.fil", "rb");
if (f) {
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    rewind(f);
    buf = (char *)malloc(sizeof(char) * size);
    fread(buf, 1, size, f); // read the file into the buffer before inspecting it
    switch (ntohl(*(uint32_t *)buf)) {
        case 0x28BA61CE:
        case 0x28BA4E50:
            NSLog(@"Message");
            break;
    }
    fclose(f);
    free(buf);
}
The closest I got to this is the following:
NSData *test = [[NSData alloc] initWithContentsOfFile:filePath];
This gets me all the binary data, but I got stuck anyway. Better to try starting all over.
Any help appreciated!
Well, valid C code is valid Objective-C code. So this is already in Objective-C.
What's your actual goal? What are you trying to do with the file? Is there a reason you can't use NSData?
C code is already Obj-C. It's perfectly reasonable to just use what you're already doing. But if you're dead set on using Obj-C objects for this, you want to take a look at NSInputStream.
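A minimal sketch of the NSInputStream route, reading only the first four bytes (the magic numbers are the ones from the C version above):
#include <arpa/inet.h> // for ntohl()

NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:filePath];
[stream open];
uint8_t header[4];
NSInteger n = [stream read:header maxLength:sizeof(header)];
[stream close];
if (n == sizeof(header)) {
    switch (ntohl(*(uint32_t *)header)) {
        case 0x28BA61CE:
        case 0x28BA4E50:
            NSLog(@"Message");
            break;
    }
}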
The closest I got to this is the following:
NSData *test = [[NSData alloc] initWithContentsOfFile:filePath];
This gets me all the binary data, but I got stuck anyway.
It's not clear where you're stuck, because that's the (simplest) correct way to read a file in one big slurp in Cocoa. You've successfully read the file; there's nothing more to do for that.
If you're looking to proceed to the switch statement, the pointer to the bytes that were read is [test bytes]. That's the pointer you'll want to assign to buf. See the NSData documentation.
Well, I sorted that out and did what follows.
Cheers, and thanks for the help!
NSString *filePath = [[NSString alloc] initWithFormat:@"/Users/foo/Desktop/theFile.fil"];
NSData *data = [NSData dataWithContentsOfFile:filePath];
NSUInteger len = [data length];
Byte *byteData = (Byte *)malloc(len);
memcpy(byteData, [data bytes], len);
// bytes -> MiB; note that 2^20 in C is XOR, not exponentiation
NSNumber *size = [[NSNumber alloc] initWithUnsignedLong:len / (1024 * 1024)];
NSLog(@"%@", size);
switch (ntohl(*(uint32_t *)byteData)) {
    case 0x28BA61CE:
    case 0x28BA4E50:
        NSLog(@"Message");
        break;
}
free(byteData);
[size release];
[filePath release];