Is OpenAL the right audio library to use for cross-platform audio processing?

I am making an application that will apply effects like pitch shifting and time stretching to audio files and play them back in real time. Is OpenAL the right library for this? Or is there something that could do this better and would be easy to reuse across different platforms?

OpenAL can't do pitch shifting or time stretching. For that, you'll need a 3rd party library such as SoundTouch.
OpenAL also doesn't support real-time audio processing. You can sort of fake it using buffer queues, but it's a bit hokey: you'd need to keep polling to see when a buffer has finished playing and then queue the next processed buffer, and you'd need to keep your buffers very small or risk laggy audio response. The catch is that very small queued buffers bring their own performance, timing, and clicking problems.
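To make the buffer-queue approach concrete, here is a minimal sketch of the polling loop. It's written in TypeScript to match the other sketches in this thread; the `al` object is a hypothetical binding whose names mirror OpenAL's real C calls (alGetSourcei with AL_BUFFERS_PROCESSED, alSourceUnqueueBuffers, alBufferData, alSourceQueueBuffers, alSourcePlay), and processChunk() stands in for a SoundTouch-style pitch/stretch step:

```typescript
// Hypothetical binding mirroring OpenAL's C API (not a real package).
declare const al: {
  getSourcei(src: number, param: 'BUFFERS_PROCESSED' | 'SOURCE_STATE'): number;
  sourceUnqueueBuffer(src: number): number;          // alSourceUnqueueBuffers
  bufferData(buf: number, fmt: 'MONO16', pcm: Int16Array, hz: number): void;
  sourceQueueBuffer(src: number, buf: number): void; // alSourceQueueBuffers
  sourcePlay(src: number): void;
};
// Stand-in for the SoundTouch-style step: returns the next processed PCM
// block, or null when the input is exhausted.
declare function processChunk(): Int16Array | null;

const PLAYING = 1; // hypothetical constant standing in for AL_PLAYING

function pumpSource(src: number): void {
  // Poll how many queued buffers have finished playing...
  let done = al.getSourcei(src, 'BUFFERS_PROCESSED');
  while (done-- > 0) {
    // ...then unqueue each finished buffer, refill it, and queue it again.
    const buf = al.sourceUnqueueBuffer(src);
    const pcm = processChunk();
    if (pcm === null) return;
    al.bufferData(buf, 'MONO16', pcm, 44100);
    al.sourceQueueBuffer(src, buf);
  }
  // If every buffer drained before we refilled, the source has stopped
  // (the laggy/clicking failure mode), so kick it off again.
  if (al.getSourcei(src, 'SOURCE_STATE') !== PLAYING) {
    al.sourcePlay(src);
  }
}
```

pumpSource() has to be called over and over from a timer or render loop, and that constant polling is exactly what makes this approach hokey.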

Related

How to play back multiple audio files synchronously in expo-av?

In my app, users record themselves singing over a backing track, and then later play back the recorded audio and the backing track at the same time. I use expo-av for my audio system. The problem is that at the playback stage the audio is often out of sync, because Expo only really supports asynchronous audio. Does anyone have any advice on how to approach this problem at a high level?
A few of my ideas:
Mix the two audio files into a single file for playback. This almost works, except that the recording and the backing track are also out of sync. If I knew exactly how much they were offset, I could just add that amount of silence to one of the files when mixing. However, I haven't found a way to accurately calculate this offset. (A playback-side sketch of what I mean follows this list.)
Reduce the time it takes for recording and playback to start, so that the latency is not noticeable. Some things I've found that help here are recording at a lower quality and using smaller audio files. Any other tips here would be appreciated.
Use a different audio library than expo-av. Is there one that comes to mind that better supports synchronous audio? Ideally it would also be supported by Expo or at least React Native.
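To illustrate the playback-side variant of idea 1, here is roughly what I mean, sketched with expo-av's own API (the offset constant and file names are made up; expo-av gives me no way to measure the offset, which is the part I'm stuck on):

```typescript
import { Audio } from 'expo-av';

// Hypothetical, device-specific value: how far (in ms) the recording lags
// the backing track. It has to be measured somehow; expo-av can't report it.
const RECORDING_LATENCY_MS = 120;

async function playBothInSync(): Promise<void> {
  const { sound: backing } = await Audio.Sound.createAsync(
    require('./backing.mp3') // hypothetical asset
  );
  const { sound: vocal } = await Audio.Sound.createAsync(
    require('./vocal.m4a')   // hypothetical recording
  );
  // Start both in the same tick, beginning the vocal past its silent
  // lead-in so the two line up.
  await Promise.all([
    backing.playAsync(),
    vocal.playFromPositionAsync(RECORDING_LATENCY_MS),
  ]);
}
```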

Streaming music over WebRTC cutting in and out

We would like to be able to play music in another tab (say YouTube, Spotify, Soundcloud, etc) and then stream that over a WebRTC connection to other peers.
We are doing this through the screen share and it's mostly working, but the music will sometimes cut in and out for the listeners, giving it a choppy sound. In other words, it sounds smooth to the person sending it (i.e. sharing it from the originating URL), but choppy to the person on the receiving side of the WebRTC connection.
Any thoughts on what might be causing this? Is this a buffering issue? If so, is it more likely buffering on the sending or the receiving side?
Thanks so much for any help!
WebRTC favors low latency over quality, with the goal of ensuring you can have normal speech communication. To do this, a lot of things happen to your audio:
Playback rate is constantly changed. If playback gets behind, the rate speeds up. If it's too far ahead, it slows down.
There is a very small buffer, creating more opportunities for the playback buffer to run dry.
If packets are lost, the audio for their time slot is simply discarded and skipped over. Playback won't pause to buffer a bit and then continue.
When audio is lost, a bit of a trail-off is synthesized. This is fine for speech, but sounds bad for music.
On the media capture end, there are also audio "enhancements" designed for dealing with bad webcam microphones, which can sometimes get applied to other media streams if configured incorrectly. These include the following (a sketch for disabling them explicitly comes after the list):
Echo cancellation
Noise reduction
Automatic gain control
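If that's what is happening to your capture, you can explicitly opt out via the standard MediaTrackConstraints; a minimal sketch (support for each constraint varies by browser):

```typescript
async function captureRawAudio(): Promise<MediaStream> {
  // Ask the browser to skip the speech-oriented processing stages.
  return navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: false,
      noiseSuppression: false,
      autoGainControl: false,
    },
  });
}
```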
Finally, it's usually the case that audio bitrates are quite low by default. You'll usually have to munge the SDP if you want stereo high quality audio.
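As a sketch of that munging, assuming Opus is the negotiated audio codec: find its payload type from the rtpmap line, then append the stereo and bitrate parameters (standard Opus fmtp settings) to the matching fmtp line before handing the SDP to setLocalDescription:

```typescript
// Rewrite the SDP so Opus is negotiated as stereo at a high bitrate.
// 510000 bps is Opus's maximum average bitrate.
function enableStereoOpus(sdp: string): string {
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/);
  if (!rtpmap) return sdp; // no Opus in this offer; leave it alone
  const pt = rtpmap[1];
  const fmtp = new RegExp(`(a=fmtp:${pt} [^\\r\\n]*)`);
  return sdp.replace(
    fmtp,
    '$1;stereo=1;sprop-stereo=1;maxaveragebitrate=510000'
  );
}
```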
All this to say, WebRTC might not be the right choice for you if you are concerned with quality. I often resort to the MediaRecorder API.
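For comparison, here's a MediaRecorder sketch that turns the same MediaStream into Opus-in-WebM chunks instead of pushing it through a peer connection (the bitrate is illustrative):

```typescript
function recordStream(
  stream: MediaStream,
  onChunk: (chunk: Blob) => void
): MediaRecorder {
  const recorder = new MediaRecorder(stream, {
    mimeType: 'audio/webm;codecs=opus', // throws if the browser lacks support
    audioBitsPerSecond: 256_000,        // far above typical WebRTC defaults
  });
  recorder.ondataavailable = (e) => onChunk(e.data);
  recorder.start(1000); // deliver a chunk roughly every second
  return recorder;
}
```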

Is there a way to stream audio from a mic and play that stream in Silverlight

So I want to stream the audio from a mic using NAudio and then pass that stream to WCF, which a Silverlight app can consume to broadcast the live audio. I want the latency to be as low as possible.
Any suggestions? If someone has already done this, please point me to the source. Thanks in advance.
What you are asking is certainly possible, but it will be a fair amount of work.
NAudio can handle capturing the microphone audio.
At the Silverlight end you can play custom audio formats (in this case PCM) using a custom media element streaming source. See this one: http://code.msdn.microsoft.com/wavmss
I suspect latency would not be very good. You can reduce it by keeping the buffer sizes small. Also bear in mind that WAV is not a very efficient format to send over the network.
To get the latency as low as possible, you should use netTcpBinding and stream your audio in binary form. I would use a MemoryStream for this and experiment with the buffer size to find the best performance. Also try out different audio formats; the right choice depends on the audio quality you expect.

How to sync audio in the iPhone SDK with NSTimer?

I am running a countdown timer which updates the time on a label. I want to play a tick sound every second. I have the sound file, but it doesn't sync perfectly with the timer. How do I sync the time with the audio? Also, if I use UIImagePicker the sound stops; how do I manage this? And if someone has a tick sound like a clock's, that would be great.
The best way to sync up your sound and time would be to actually play a short (less than a second long) tick sound once per second as the NSTimer fires. It won't sound as nice as a real clock or chronometer ticking, but it is easy to do, and if the sounds are that small you don't have to worry too much about latency. To be realistic, I think you need to play two ticks per second: the first and second ticks about 0.3 seconds apart, then the next pair starting at the next second, again only about 0.3 seconds apart, and so on.
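The timing pattern itself is simple. Here it is sketched in TypeScript purely for illustration (playTick() is a hypothetical stand-in for playing your short sample; on the iPhone the same shape hangs off the NSTimer callback):

```typescript
// Hypothetical: plays the short (under one second) tick sample once.
declare function playTick(): void;

// Mimic a mechanical clock: a tick on each second and a "tock"
// about 0.3 seconds after it.
setInterval(() => {
  playTick();                // tick, on the second
  setTimeout(playTick, 300); // tock, 0.3 s later
}, 1000);
```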
For even tighter integration of sounds and GUI, you should read up on Audio Toolbox:
Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.

Using Cocoa to detect when a running application plays audio

I'm looking into writing an app that runs as a background process and detects when an app (say, Safari) is playing audio. I can use NSWorkspace to get the process IDs of the currently running applications, but I'm at a loss when it comes to detecting what those processes are doing. I assume there is a way to listen in on a process and detect what public messages the objects are sending. I apologize for my ignorance on the subject.
Has anyone attempted anything like this or are aware of any resources that can help?
I don't think that your "answer" is really an answer at all... and there is an answer (which is not "42"):
Your best bet for doing this would be to write a pass-through audio output device, much like Soundflower, actually. Your audio output device would load the actual (physical) audio output device and pass the audio data along to it directly (after first having a look at the audio stream, of course!). Then you only need to convince your users to configure your device as the default audio output device, so that the majority of applications which play sound will use it automatically. And voilà...
Your audio processing function will probably just do a quick RMS on the buffer before passing it along to the actual output device. When the audio power crosses a certain threshold (probably something like -54 dB with Apple audio hardware), you know that some app is making sound.
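The per-buffer check is just RMS-to-decibels math. Here it is in TypeScript for illustration; in reality it would live in the pass-through device's render path, with the -54 dB figure above as the threshold:

```typescript
// True when the buffer's RMS power rises above the silence threshold.
function isAudible(samples: Float32Array, thresholdDb = -54): boolean {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  const rms = Math.sqrt(sumSquares / samples.length);
  // Convert to dBFS, guarding against log10(0) on an all-silent buffer.
  const db = 20 * Math.log10(Math.max(rms, 1e-12));
  return db > thresholdDb;
}
```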
Soundflower is an open-source project that allows Mac OS X applications to pass audio to each other. It almost certainly does something similar to what you describe.
I've been informed on another thread that while this is possible, it is an extremely advanced technique and not recommended. It would involve using Application Enhancer (APE), and it's considered not a 'nice' thing to do. Looks like that app idea is destined for the big recycling bin in the sky :)