Looping AVAudioPlayer w/o Gap - objective-c

I'm recording a sound using AVAudioRecorder and then attempting to play it back using AVAudioPlayer. I'm trying to get the sound to loop indefinitely, but there is a short gap between loops. I've tried having AVAudioRecorder record to every file type it supports, yet I can't find one that allows seamless looping. Thanks.

This is a great post that helped me eliminate the gap in my AVAudioPlayer loop: http://forums.macrumors.com/showthread.php?t=640862
The gist of the post is that compressed audio pads the end of the sample with silence to round its length up to a multiple of 1024 samples (the encoder's frame size). Using uncompressed audio, or audio that is specifically exported for the purpose of looping, eliminates the glitch.
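For reference, a minimal sketch of the uncompressed route: record to linear PCM with AVAudioRecorder and loop the result with AVAudioPlayer. The file name and recorder settings below are placeholders, ARC is assumed, and AVAudioSession setup is omitted:

    #import <AVFoundation/AVFoundation.h>

    // Record to uncompressed linear PCM (.caf) so no encoder padding is added,
    // then loop the result indefinitely with AVAudioPlayer.
    NSURL *fileURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"loop.caf"]];

    NSDictionary *settings = @{
        AVFormatIDKey:             @(kAudioFormatLinearPCM),
        AVSampleRateKey:           @44100.0,
        AVNumberOfChannelsKey:     @1,
        AVLinearPCMBitDepthKey:    @16,
        AVLinearPCMIsFloatKey:     @NO,
        AVLinearPCMIsBigEndianKey: @NO
    };

    NSError *error = nil;
    AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:fileURL
                                                            settings:settings
                                                               error:&error];
    [recorder record];
    // ... later, when recording is done: [recorder stop];

    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL
                                                                   error:&error];
    player.numberOfLoops = -1;   // -1 loops indefinitely
    [player prepareToPlay];
    [player play];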

Related

How to playback multiple audio files synchronously in Expo-av?

In my app, users record themselves singing over a backing track and later play back the recorded audio and the backing track at the same time. I use expo-av for my audio system. The problem is that at the playback stage the audio is often out of sync, because Expo only really supports asynchronous audio. Does anyone have advice on how to approach this problem at a high level?
A few of my ideas:
Mix the two audio files into a single file for playback. This almost works, except that the recording and backing track are also out of sync. If I knew exactly how much they were offset, I could just add that amount of silence to one of the files when mixing. However, I haven't found a way to accurately calculate this offset (one rough approach is sketched after this list).
Reduce the time it takes for recording and playback to start, so that the latency is not noticeable. Some things I've found that help here are recording at lower quality and using smaller audio files. Any other tips here would be appreciated.
Use a different audio library than expo-av. Is there one that comes to mind that better supports synchronous audio? Ideally it would also be supported by Expo or at least React Native.
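For the offset idea in the first point, one rough approach, once both files are decoded to PCM, is a brute-force cross-correlation. The sketch below is plain C over two hypothetical mono float buffers; none of the names come from expo-av, and in practice an FFT-based correlation would be much faster:

    #include <stddef.h>
    #include <math.h>

    // Estimate how far the recording lags the backing track by finding the lag
    // with the highest average cross-correlation. Both buffers are assumed to be
    // mono PCM at the same sample rate. Returns the best lag in samples.
    long estimate_offset(const float *backing, size_t backingLen,
                         const float *recording, size_t recordingLen,
                         long maxLag)
    {
        long bestLag = 0;
        double bestScore = -INFINITY;

        for (long lag = -maxLag; lag <= maxLag; lag++) {
            double sum = 0.0;
            long count = 0;
            for (size_t i = 0; i < backingLen; i++) {
                long j = (long)i + lag;
                if (j < 0 || (size_t)j >= recordingLen) continue;
                sum += (double)backing[i] * (double)recording[j];
                count++;
            }
            if (count > 0 && sum / count > bestScore) {
                bestScore = sum / count;
                bestLag = lag;
            }
        }
        // Positive result: matching content occurs that many samples later in the
        // recording, so trim the recording or delay the backing by that amount.
        return bestLag;
    }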

QTKit: Analog of VisualContext for the sound

I am writing a simple application for streaming video over the network, using an approach slightly different from the ordinary "H.264 over RTP" (I am using my own codecs).
To achieve this, I need the raw frames and raw audio samples that QTMovie implicitly sends to QTMovieView when playing back a movie.
The most common way to retrieve raw video frames is to use a VisualContext: in a display link callback, I "generate" a CVPixelBufferRef from this VisualContext. So I am getting frames at a frequency synchronized with my current refresh rate (not that I need this synchronization; I only need a "stream" of frames that I can transmit over the network, but the Core Video Programming Guide and most Apple samples related to video promote this approach).
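For context, that pull pattern typically looks roughly like the sketch below. It assumes a QTVisualContextRef has already been created and attached to the QTMovie, and omits all error handling; it relies on the deprecated QuickTime C API, so it is 32-bit only:

    #import <QuickTime/QuickTime.h>
    #import <CoreVideo/CoreVideo.h>

    // Display link callback: pull the latest frame from the visual context.
    // The QTVisualContextRef is handed in via displayLinkContext.
    static CVReturn FrameCallback(CVDisplayLinkRef displayLink,
                                  const CVTimeStamp *inNow,
                                  const CVTimeStamp *inOutputTime,
                                  CVOptionFlags flagsIn,
                                  CVOptionFlags *flagsOut,
                                  void *displayLinkContext)
    {
        QTVisualContextRef visualContext = (QTVisualContextRef)displayLinkContext;

        if (QTVisualContextIsNewImageAvailable(visualContext, inOutputTime)) {
            CVImageBufferRef frame = NULL;
            QTVisualContextCopyImageForTime(visualContext, kCFAllocatorDefault,
                                            inOutputTime, &frame);
            if (frame) {
                // ... hand the CVPixelBufferRef off to the encoder / network here ...
                CVBufferRelease(frame);
            }
        }
        QTVisualContextTask(visualContext);   // let the context do its housekeeping
        return kCVReturnSuccess;
    }

    // Setup (somewhere in the controller):
    // CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
    // CVDisplayLinkSetOutputCallback(displayLink, FrameCallback, visualContext);
    // CVDisplayLinkStart(displayLink);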
The first problem I've run into is that when I attach a VisualContext to a QTMovie, the picture can no longer be rendered in the QTMovieView. I don't know why this happens (I guess it's related to the idea of a GWorld, and the rendering being "detached" from it when I attach the VisualContext). At least I have the frames, which I could render onto a simple NSView (though this sounds wrong and performance-unfriendly; am I doing it right?).
As for the sound, I have no idea what to do. I need to get raw audio samples as the movie is being played (ideally something similar to what QTCaptureDecompressedAudioOutput returns in its callback).
I have prepared myself to delve into the deprecated Carbon QuickTime APIs if there is no other way, but I don't even know where to start. Should I use the same Core Video display link and periodically retrieve the sound somehow? Should I get a QTDataReference and locate the sound frames manually?
I am actually a beginner at programming video and audio services. I would REALLY appreciate any experience or ideas you could share with me :)
Thank you,
James

Is it possible to fake an (MP4) moov atom?

I'm trying to play an MP4 stream. The stream is sent from my Android phone. The problem is that the moov atom, which is needed to play the MP4, is only written once the phone has finished recording. So at the moment I can only play the streamed data after the recording has finished.
My idea was to write the ftyp and moov atoms myself, so that the streamed data can be played while the phone is still recording.
I tried using the moov atom from another video file, but that didn't work. I have also read that it's normally impossible to build a moov atom if only the mdat atom is given.
But in my case I know the recording conditions, like the frame rate, etc.
So my question is: is it possible to generate a valid/usable moov atom for the incoming stream if I know the recording parameters?
It's possible. I did it four years ago to implement "live streaming" to the original iPhone. Just fill the stsz and stco atoms assuming constant-size frames, then pad each frame with zeros. Yes, the file size will be huge, but you'll get real live streaming :-)
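To make the "constant-size frames" idea concrete, here is a rough sketch of how the two tables this answer mentions could be emitted from nothing but the padded frame size, the frame count, and the offset of mdat's payload. It assumes one sample per chunk, all names are illustrative, and a real moov of course also needs stts, stsc and the rest:

    #import <Foundation/Foundation.h>

    // Append a 32-bit big-endian value, as MP4 box fields require.
    static void AppendUInt32BE(NSMutableData *data, uint32_t value) {
        uint32_t be = CFSwapInt32HostToBig(value);
        [data appendBytes:&be length:4];
    }

    // Build an stsz box where every sample has the same (padded) size.
    static NSData *MakeStszBox(uint32_t sampleSize, uint32_t sampleCount) {
        NSMutableData *box = [NSMutableData data];
        AppendUInt32BE(box, 20);                    // box size: 4+4+4+4+4
        [box appendBytes:"stsz" length:4];
        AppendUInt32BE(box, 0);                     // version + flags
        AppendUInt32BE(box, sampleSize);            // constant sample size
        AppendUInt32BE(box, sampleCount);
        return box;
    }

    // Build an stco box, assuming one sample per chunk, laid out back to back
    // starting at the first byte of mdat's payload.
    static NSData *MakeStcoBox(uint32_t firstChunkOffset,
                               uint32_t sampleSize, uint32_t sampleCount) {
        NSMutableData *box = [NSMutableData data];
        AppendUInt32BE(box, 16 + 4 * sampleCount);  // box size
        [box appendBytes:"stco" length:4];
        AppendUInt32BE(box, 0);                     // version + flags
        AppendUInt32BE(box, sampleCount);           // entry count
        for (uint32_t i = 0; i < sampleCount; i++) {
            AppendUInt32BE(box, firstChunkOffset + i * sampleSize);
        }
        return box;
    }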
It seems rather impossible to stream a not-yet-finished MP4 file, because the player needs the sample tables (chunks and offsets) to locate every data sample. You can fake the ftyp, moov and other atoms, but you can't generate all the tables without having the complete file. A better strategy would be to generate many short MP4 files and send them file by file...

How to programmatically test for audio sync

I have a multimedia application that, among other things, converts video using FFmpeg. Video conversion being the pain that it is, I have in my test suites some tests that check our ability to convert various video formats, with an emphasis on sample videos known not to work.
A common problem we've noticed from users is that some videos end up with their audio desynced after being processed, and I am looking for a way to check for this in my tests.
Extracting the audio portion of the resulting videos is not a problem.
My best idea so far would be to find the offset of the first non-silence at both the beginning and the end of each file and compare those offsets between the two videos, but I'm hoping someone smart has a better idea.
The application language/environment is Java, but since this is for testing, I'm free to use any toolset.
The basic problem is likely that the video and audio are different lengths. Extract the audio and compare its length to the video length. If they differ significantly (by more than maybe 0.05 s; I'm not really sure what is detectable as "off"), then there's a problem.
To fix it, re-encode the audio to match the video length, and then put the audio and video back into a container format.
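Since the asker is free to use any toolset for testing, one way the length check could look, sketched here with AVFoundation on an Apple test machine rather than in Java, is to compare the durations of the audio and video tracks directly. The function name is illustrative and the 0.05 s tolerance is just the guess from the answer above:

    #import <AVFoundation/AVFoundation.h>
    #include <math.h>

    // Compare the audio and video track durations of a converted file.
    // Returns YES when the difference exceeds `tolerance` seconds (e.g. 0.05).
    static BOOL AudioVideoLengthsDiffer(NSURL *fileURL, double tolerance) {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];

        AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
        AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
        if (!videoTrack || !audioTrack) {
            return YES;   // a missing track also counts as a failure here
        }

        double videoSeconds = CMTimeGetSeconds(videoTrack.timeRange.duration);
        double audioSeconds = CMTimeGetSeconds(audioTrack.timeRange.duration);
        return fabs(videoSeconds - audioSeconds) > tolerance;
    }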

How to sync audio with NSTimer in the iPhone SDK?

I am running a countdown timer that updates the time on a label. I want to play a tick sound every second. I have the sound file, but it does not sync perfectly with the timer. How do I sync the timer with the audio? Also, if I use UIImagePickerController the sound stops; how do I manage this? If someone has a tick sound like a clock's, that would be great.
The best way to sync up your sound and timer would be to play a short sound file (the tick sound, less than a second long) once per second as the NSTimer fires. It won't sound as nice as a real clock or chronometer ticking, but it is easy to do, and if the sounds are that small you don't have to worry too much about latency. To be realistic, I think you need to play two ticks per second: the first and second ticks about 0.3 seconds apart, the next one starting on the next second, with the fourth again only about 0.3 seconds later, and so on.
For even tighter integration of sounds and GUI, you should read up on Audio Toolbox:
Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.
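A minimal sketch of the basic once-per-second version of this, using Audio Toolbox's System Sound Services for low-latency playback; the tick file name is a placeholder and ARC is assumed:

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    @interface TickController : NSObject
    @property (nonatomic) SystemSoundID tickSound;
    @property (nonatomic, strong) NSTimer *timer;
    @end

    @implementation TickController

    - (void)start {
        // Load a short, uncompressed tick sound (placeholder file name).
        NSURL *tickURL = [[NSBundle mainBundle] URLForResource:@"tick" withExtension:@"caf"];
        SystemSoundID soundID = 0;
        AudioServicesCreateSystemSoundID((__bridge CFURLRef)tickURL, &soundID);
        self.tickSound = soundID;

        // Fire once per second, in step with the countdown label updates.
        self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(tick:)
                                                    userInfo:nil
                                                     repeats:YES];
    }

    - (void)tick:(NSTimer *)timer {
        AudioServicesPlaySystemSound(self.tickSound);   // low-latency, fire-and-forget
        // ... update the countdown label here as well, so sound and text stay together ...
    }

    @end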