Programmatically capture audio in OS X [duplicate] - objective-c

I'm due to work on a small application that captures audio from the Mac's Audio Queue and needs to save it to disk in some reasonable audio format.
Does anyone have some decent sample code (Cocoa / Objective-C) that they can share?
I specifically need to capture the audio that is being passed to the Built-in Output device in order to record it. Any insights? The answers so far have been helpful, but have not helped me understand how the data going to the output can be captured, agnostic of the input source.

Working with audio in Mac OS X involves interfacing with Core Audio. For a quick overview, take a look at the Core Audio Overview.
You will need to interface with the AUHAL to perform input and output; Apple has a technical note detailing the steps required. This code is usually written in C++, as in the SimplePlayThru demo.
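To give a sense of what the technote's AUHAL input setup looks like, here is a skeletal, untested sketch in plain C++ against the C Core Audio APIs (the function name and the choice of the default input device are just illustrative):

```cpp
// Sketch of the AUHAL setup the technote describes: enable input, disable
// output, and attach the default input device. Untested, no error checks.
#include <AudioToolbox/AudioToolbox.h>
#include <CoreAudio/CoreAudio.h>

AudioUnit MakeInputAUHAL()
{
    AudioComponentDescription desc = {
        kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit unit;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &unit);

    UInt32 on = 1, off = 0;
    // Element 1 is the input side of the AUHAL, element 0 the output side.
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &on, sizeof(on));
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &off, sizeof(off));

    // Bind the default input device to the unit.
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL,
                               &size, &device);
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0, &device, sizeof(device));

    // From here: set an input callback (kAudioOutputUnitProperty_SetInputCallback),
    // initialize the unit, and call AudioOutputUnitStart(unit).
    return unit;
}
```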
This doesn't cover the actual steps required to capture that audio input. However, these links should provide you with enough sample code to begin interfacing with your input device. I'll post more links in this answer if I happen across them.
Take a look at /Developer/Examples/CoreAudio/Services/AudioFileTools. Specifically, look at afrecord.cpp. Admittedly, this is not Cocoa per se; Cocoa itself doesn't seem to have any specific capabilities for recording. If you want to interface with the C++ code there, you'll likely need to write some Objective-C++, as in SimplePlayThru.
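Since the question mentions Audio Queues specifically, here is a minimal, untested sketch (again plain C++ against the C APIs) that records roughly ten seconds from the default input device with Audio Queue Services and writes it to a CAF file; the path, format, and buffer sizes are arbitrary choices. Note this captures an input device, not the stream headed to the Built-in Output; tapping output generally needs a loopback device or driver-level support.

```cpp
// Minimal sketch, not production code: record ~10 seconds from the default
// input device with Audio Queue Services and write it to a CAF file.
// Error checking is omitted.
#include <AudioToolbox/AudioToolbox.h>

struct Recorder {
    AudioFileID file;           // destination file
    SInt64      packetPos;      // next packet index to write
    UInt32      bytesPerPacket; // cached from the stream format (CBR)
};

static void InputCallback(void *userData, AudioQueueRef queue,
                          AudioQueueBufferRef buffer,
                          const AudioTimeStamp *, UInt32 numPackets,
                          const AudioStreamPacketDescription *packetDescs)
{
    Recorder *rec = (Recorder *)userData;
    // For constant-bitrate formats the queue may report 0 packets; derive it.
    if (numPackets == 0 && rec->bytesPerPacket != 0)
        numPackets = buffer->mAudioDataByteSize / rec->bytesPerPacket;
    if (numPackets > 0 &&
        AudioFileWritePackets(rec->file, false, buffer->mAudioDataByteSize,
                              packetDescs, rec->packetPos, &numPackets,
                              buffer->mAudioData) == noErr)
        rec->packetPos += numPackets;
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL); // recycle the buffer
}

int main()
{
    AudioStreamBasicDescription fmt = {};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 4;  // 2 channels x 2 bytes
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 4;

    Recorder rec = {};
    rec.bytesPerPacket = fmt.mBytesPerPacket;
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/capture.caf"),
                                                 kCFURLPOSIXPathStyle, false);
    AudioFileCreateWithURL(url, kAudioFileCAFType, &fmt,
                           kAudioFileFlags_EraseFile, &rec.file);
    CFRelease(url);

    AudioQueueRef queue;
    AudioQueueNewInput(&fmt, InputCallback, &rec, NULL, NULL, 0, &queue);
    for (int i = 0; i < 3; ++i) {          // prime a few capture buffers
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, 16384, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }
    AudioQueueStart(queue, NULL);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, false);
    AudioQueueStop(queue, true);
    AudioQueueDispose(queue, true);
    AudioFileClose(rec.file);
    return 0;
}
```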

There is good example code in Ulli Kusterer's GitHub repository.
CocoaDev also has an article about that topic. The source code at the bottom of the page uses QuickTime's Sequence Grabber API; I would go with Core Audio instead.

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing, and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own MediaSource.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source actually works on the inside? Am I simply defining my own format, or does it let me pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C# but I can write C++ and use it in C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (Blackmagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivate for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivate can tell me that the device should output a UYVY format.
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.
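For anyone hitting the same wall, here is a small diagnostic sketch (C++, untested, cleanup and error handling trimmed) that enumerates video capture sources and lists each one's native media types through a source reader; device activation is roughly where the MF_E_INVALIDMEDIATYPE described above would surface. Nothing here is specific to the Intensity Pro 4K.

```cpp
// Diagnostic sketch only: enumerate video capture devices and dump the
// native media types of the first video stream of each.
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    IMFAttributes *attrs = NULL;
    MFCreateAttributes(&attrs, 1);
    attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                   MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

    IMFActivate **devices = NULL;
    UINT32 count = 0;
    MFEnumDeviceSources(attrs, &devices, &count);

    for (UINT32 i = 0; i < count; ++i) {
        IMFMediaSource *source = NULL;
        // For the capture card described above, this is reportedly where
        // MF_E_INVALIDMEDIATYPE comes back.
        if (FAILED(devices[i]->ActivateObject(IID_PPV_ARGS(&source))))
            continue;

        IMFSourceReader *reader = NULL;
        if (SUCCEEDED(MFCreateSourceReaderFromMediaSource(source, NULL, &reader))) {
            IMFMediaType *type = NULL;
            for (DWORD t = 0; SUCCEEDED(reader->GetNativeMediaType(
                     MF_SOURCE_READER_FIRST_VIDEO_STREAM, t, &type)); ++t) {
                GUID subtype;
                type->GetGUID(MF_MT_SUBTYPE, &subtype); // e.g. MFVideoFormat_UYVY
                type->Release();
            }
            reader->Release();
        }
        source->Release();
    }
    // Releasing devices[] and attrs, MFShutdown(), CoUninitialize() omitted.
    return 0;
}
```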

comparing two audio files [duplicate]

I want to record two voices and compare them. I think there is some Apple sample code for voice recording, but I have no idea how to compare two audio files. What is the right approach for this? Is there a framework Apple provides for this purpose, or is there a third-party framework?
It's not in Objective-C, but it contains a fantastic explanation of how Shazam compares audio, and includes sample code (and source for a working application) in Java:
Check this out
Additionally, this question has a fantastic link on audio fingerprinting, which covers essentially the same material as the article above, but in more depth.
Hope this helps
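To make the fingerprinting idea concrete, here is a toy sketch (C++; nothing like a production system, and every constant in it is an arbitrary choice): it reduces each frame of PCM to a handful of band-energy bits using a Goertzel filter, then scores two recordings by how many bits agree.

```cpp
// Toy illustration of audio fingerprinting: per-frame energy in a few
// frequency bands becomes one bit per band, compared by Hamming distance.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Goertzel filter: energy of one frequency over a frame of samples.
static double goertzel(const float *x, int n, double freq, double sampleRate)
{
    double w = 2.0 * M_PI * freq / sampleRate;
    double coeff = 2.0 * std::cos(w), s0 = 0, s1 = 0, s2 = 0;
    for (int i = 0; i < n; ++i) {
        s0 = x[i] + coeff * s1 - s2;
        s2 = s1; s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// One bit per band per frame: did this band gain energy since the last frame?
std::vector<uint32_t> fingerprint(const std::vector<float> &pcm, double rate)
{
    const int frame = 2048;
    const double bands[5] = {300, 600, 1200, 2400, 4800};
    std::vector<uint32_t> fp;
    std::vector<double> prev(5, 0.0);
    for (size_t off = 0; off + frame <= pcm.size(); off += frame) {
        uint32_t bits = 0;
        for (int b = 0; b < 5; ++b) {
            double e = goertzel(&pcm[off], frame, bands[b], rate);
            if (e > prev[b]) bits |= 1u << b;
            prev[b] = e;
        }
        fp.push_back(bits);
    }
    return fp;
}

// Similarity in [0,1]: fraction of matching band bits across common frames.
double similarity(const std::vector<uint32_t> &a, const std::vector<uint32_t> &b)
{
    size_t n = std::min(a.size(), b.size());
    if (n == 0) return 0.0;
    size_t matching = 0;
    for (size_t i = 0; i < n; ++i)
        matching += 5 - __builtin_popcount((a[i] ^ b[i]) & 0x1F);
    return double(matching) / double(n * 5);
}
```

A real system would use overlapping FFT frames, spectral peaks, and hashing for lookup, as the linked article explains; this just shows the shape of the idea.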
I'm using ViSQOL for this purpose. If your audio files are generally no longer than 10 seconds, it could be worth looking into. Also check the ffmpeg library for converting the files into the desired format (ViSQOL requires a certain sample rate, depending on whether the content is music or speech).
https://github.com/google/visqol

How to use audio units in an application

I've been looking for things online that teach how to use audio units in an application, with no luck. I'm trying to make an application that lets the user apply AUTimePitch to the playback of audio files, on the fly. But I can't find anything online that teaches a total beginner how to use audio units.
Also, I'm making this for Mac, not iOS.
The best document by far on the subject is Apple's Audio Unit Hosting Guide for iOS in the dev library. For a more general introduction, you can check out the Core Audio Overview.
I also found the MixerHost and iPhoneMultichannelMixerTest sample code incredibly helpful in starting to use audio units.
Finally, I find class references and service references like the Audio Unit Processing Graph Services Reference and the Audio Unit Component Services Reference useful for exploring the functionality of particular methods, constants, classes, and so on.
Edit: I realized that your question doesn't say whether you're working in Mac OS or iOS. This answer is obviously heavily iOS-centric. Could you edit your question to tell us what environment you're in?
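In the meantime, since the question mentions AUTimePitch: here is a rough, untested sketch (C++ against the C Audio Unit APIs) of the kind of AUGraph that would apply it on the Mac, with a file player feeding AUTimePitch feeding the default output. Scheduling an actual file on the player unit (kAudioUnitProperty_ScheduledFileIDs and friends) is left out.

```cpp
// Rough sketch: file player -> AUTimePitch -> default output in an AUGraph.
// Untested; error handling and file scheduling omitted.
#include <AudioToolbox/AudioToolbox.h>

int main()
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription playerDesc = {
        kAudioUnitType_Generator, kAudioUnitSubType_AudioFilePlayer,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription pitchDesc = {
        kAudioUnitType_FormatConverter, kAudioUnitSubType_TimePitch,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_DefaultOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode playerNode, pitchNode, outNode;
    AUGraphAddNode(graph, &playerDesc, &playerNode);
    AUGraphAddNode(graph, &pitchDesc, &pitchNode);
    AUGraphAddNode(graph, &outDesc, &outNode);

    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, playerNode, 0, pitchNode, 0);
    AUGraphConnectNodeInput(graph, pitchNode, 0, outNode, 0);

    AudioUnit pitchUnit;
    AUGraphNodeInfo(graph, pitchNode, NULL, &pitchUnit);
    AUGraphInitialize(graph);

    // Shift pitch up 3 semitones (the parameter is in cents); this can be
    // changed on the fly while the graph runs, which is the "live" part.
    AudioUnitSetParameter(pitchUnit, kTimePitchParam_Pitch,
                          kAudioUnitScope_Global, 0, 300, 0);

    // Schedule a file on the player unit here, then:
    AUGraphStart(graph);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, false);
    AUGraphStop(graph);
    DisposeAUGraph(graph);
    return 0;
}
```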

How can I change the audio output device in Objective-C?

I'm building a simple Cocoa app and I want to direct the audio output to a specific device, instead of the system-selected one. I know some apps, like Skype, let you select where to send the output. How do they do this?
I tried the MTCoreAudio framework, but I can't even compile my app (or their AudioMonitor demo) with it included, and the errors aren't helpful (_objc_fatal). Are there any complete examples that I can learn from? So far my searches haven't turned anything up.
Thanks!
The CAPlayThrough example on the Mac Dev Center Sample Code library shows how to list all of the available input and output devices, and select a default device from a menu.
Have you looked through the sample code on http://developer.apple.com?
Look at these projects: http://developer.apple.com/mac/library/navigation/index.html?section=Resource+Types&topic=Sample+Code
Namely, the DefaultAudioUnit project.
I should say that working with Core Audio is more challenging than working with Cocoa. Most of the APIs are C-based (I find them harder to work with). You should also read the Core Audio programming guide to get a sense of how the audio system is put together.
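For the specific "send output to a device of my choosing" part, the usual approach is to point your app's output audio unit at a specific AudioDeviceID rather than change the system default. A bare, untested sketch (C-style Core Audio, compiles as C++; picking devices[0] stands in for a real selection UI):

```cpp
// Sketch: list the system's audio devices, then bind an output unit to a
// chosen device instead of the system default. Untested, no error checks.
#include <AudioToolbox/AudioToolbox.h>
#include <CoreAudio/CoreAudio.h>

int main()
{
    // 1. Fetch every AudioDeviceID known to the HAL.
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster };
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);
    AudioDeviceID devices[64];
    if (size > sizeof(devices)) size = sizeof(devices);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL,
                               &size, devices);

    // In a real app, read kAudioObjectPropertyName on each ID and let the
    // user choose, the way CAPlayThrough's device menu does.
    AudioDeviceID chosen = devices[0];

    // 2. Create a HAL output unit and route it to the chosen device.
    AudioComponentDescription desc = {
        kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit output;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &output);
    AudioUnitSetProperty(output, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0, &chosen, sizeof(chosen));
    AudioUnitInitialize(output);
    // Attach a render callback, then AudioOutputUnitStart(output) to play.
    return 0;
}
```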
