I've been looking for resources online that teach how to use audio units in an application, with no luck. I'm trying to make an application that lets the user apply AUTimePitch to the playback of audio files on the fly, but I can't find anything online that teaches a total beginner how to use audio units.
Also, I'm making this for the Mac, not iOS.
The best document by far on the subject is Apple's Audio Unit Hosting Guide for iOS in the dev library. For a more general introduction, you can check out the Core Audio Overview.
I also found the MixerHost and iPhoneMultichannelMixerTest sample code incredibly helpful in starting to use audio units.
Finally, I find class references and service references like the Audio Unit Processing Graph Services Reference and the Audio Unit Component Services Reference useful for exploring the functionality of particular methods, constants, classes, and so on.
Edit: I realized that your question doesn't say whether you're working in Mac OS or iOS. This answer is obviously heavily iOS-centric. Could you edit your question to tell us what environment you're in?
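Since the question does mention targeting the Mac: to make the above a bit more concrete, here is a rough, untested sketch of what hosting AUTimePitch in an AUGraph can look like on macOS, with an AUAudioFilePlayer feeding a time/pitch unit that feeds the default output. Error checking and the calls that actually schedule an audio file on the player unit are omitted, so treat it as a starting point rather than a drop-in implementation.

```cpp
// Rough sketch (untested): AUAudioFilePlayer -> AUTimePitch -> default output
// on macOS, using an AUGraph. Error checking and file scheduling omitted.
#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>

int main() {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription playerDesc = {0}, pitchDesc = {0}, outputDesc = {0};
    playerDesc.componentType         = kAudioUnitType_Generator;
    playerDesc.componentSubType      = kAudioUnitSubType_AudioFilePlayer;
    playerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    pitchDesc.componentType          = kAudioUnitType_FormatConverter;
    pitchDesc.componentSubType       = kAudioUnitSubType_TimePitch;     // AUTimePitch
    pitchDesc.componentManufacturer  = kAudioUnitManufacturer_Apple;

    outputDesc.componentType         = kAudioUnitType_Output;
    outputDesc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AUNode playerNode, pitchNode, outputNode;
    AUGraphAddNode(graph, &playerDesc, &playerNode);
    AUGraphAddNode(graph, &pitchDesc,  &pitchNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);

    AUGraphOpen(graph);

    // Wire up: file player -> time/pitch -> speakers.
    AUGraphConnectNodeInput(graph, playerNode, 0, pitchNode, 0);
    AUGraphConnectNodeInput(graph, pitchNode,  0, outputNode, 0);

    AUGraphInitialize(graph);

    // Change the pitch "on the fly" at any time; AUTimePitch takes cents here.
    AudioUnit pitchUnit;
    AUGraphNodeInfo(graph, pitchNode, NULL, &pitchUnit);
    AudioUnitSetParameter(pitchUnit, kTimePitchParam_Pitch,
                          kAudioUnitScope_Global, 0, 300.0f, 0);

    AUGraphStart(graph);
    // ...schedule a file region on the player unit and run the run loop...
    return 0;
}
```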
I want to generate speech from text in my BB10 app to give audio feedback to the user.
(The screen reader provided by the accessibility feature is not sufficient.)
Has anybody already implemented text-to-speech successfully?
There are countless open source projects that do this on PC platforms. Your best bet may be to adapt one of them to your needs. – Josh C
Any library you would recommend? It should have a C or C++ interface, must work offline (no server-based solution), and should not occupy too much memory. – thowa
I had to check to make sure it was written in C++, which it is. It is called ESpeak. I heard about it nearly 7 years ago when I was looking for a speech synthesizer powerful and robust enough to sound like a human. I believe it was ESpeak, and back then it was a complicated task to get it to produce realistic-sounding speech.
http://sourceforge.net/projects/espeak/files/
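To give an idea of the interface, here is a minimal, untested sketch of eSpeak's C API (speak_lib.h). It assumes eSpeak and its voice data are installed in the default locations; on BB10 you would more likely initialize with AUDIO_OUTPUT_SYNCHRONOUS and register a callback with espeak_SetSynthCallback to push the generated PCM into the platform's own audio API.

```cpp
// Untested sketch of eSpeak's C API (speak_lib.h). Assumes eSpeak and its
// voice data are installed in the default location.
#include <espeak/speak_lib.h>
#include <cstring>

int main() {
    // Initialize for direct playback; 0 buffer length and NULL data path mean defaults.
    if (espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, NULL, 0) < 0)
        return 1;

    const char *text = "Hello from eSpeak.";
    espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                 espeakCHARS_AUTO, NULL, NULL);
    espeak_Synchronize();   // block until playback finishes
    espeak_Terminate();
    return 0;
}
```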
This one looks promising as well; however, it is written in Java.
http://mary.dfki.de/Download/openmary-open-source-emotional-text-to-speech-synthesis-system-released
Found here https://github.com/marytts/marytts
I'm due to work on a small application that captures audio from the Mac's Audio Queue and needs to save it to disk in some reasonable audio format.
Does anyone have some decent sample code (Cocoa / Objective-C) that they can share?
I specifically need to capture the audio that is being passed to the Built-in Output device in order to record it. Any insights? The answers so far have been helpful, but have not helped me understand how the data going to the output can be captured, agnostic of the input source.
Working with audio in Mac OS X involves interfacing with Core Audio. For a quick overview, take a look at the Core Audio Overview.
You will need to interface with the AUHAL to perform input and output; a technical note exists detailing the steps required to do so. This kind of code is usually written in C++, as it is in the SimplePlayThru demo.
This doesn't cover the actual steps required to capture that audio input. However, these links should provide you with enough sample code to begin interfacing with your input device. I'll post more links in this answer if I happen across them.
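As a rough illustration of the AUHAL setup that the technical note walks through, the sketch below creates an AUHAL instance, enables its input element, disables its output element, and attaches the default input device. It is not a complete recorder: you would still install an input callback (kAudioOutputUnitProperty_SetInputCallback), pull the incoming buffers with AudioUnitRender, and write them to disk (ExtAudioFile is one reasonable option). Error handling is omitted.

```cpp
// Sketch only: create an AUHAL unit on macOS, enable input, disable output,
// and attach the default input device. Error handling omitted.
#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

AudioUnit CreateInputAUHAL() {
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_HALOutput;   // the AUHAL
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioUnit auhal;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &auhal);

    // Element 1 is input, element 0 is output on the AUHAL.
    UInt32 enable = 1, disable = 0;
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input,  1, &enable,  sizeof(enable));
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &disable, sizeof(disable));

    // Attach the system default input device to the unit.
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &device);
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0, &device, sizeof(device));

    AudioUnitInitialize(auhal);
    return auhal;
}
```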
Take a look at /Developer/Example/CoreAudio/Services/AudioFileTools. Specifically, look at afrecord.cpp. Admittedly, this is not Cocoa per se; Cocoa itself doesn't seem to have any specific capabilities for recording. If you want to interface with the C++ file there, you'll likely need to write some Objective-C++, as in SimplePlayThru.
There is good example code in Ulli Kusterer's GitHub repository.
CocoaDev also has an article on that topic. The source code at the bottom of the page uses QuickTime's Sequence Grabber API; I would go with Core Audio instead.
I would like to do some fairly simple audio programming on OS X (using Lion and Xcode 4.3) -- synthesizing tones with given frequencies, mainly. Trouble is, Apple's documentation on the subject is way too high-level for my current knowledge of the subject. I've searched for weeks now for something that will get me started, to no avail.
Does anyone know of some Core Audio basic tutorial, or even some sample code, that will help me do fairly simple Core Audio tasks so that I can progress to understanding the Apple documentation?
I would suggest the book Learning Core Audio. There is also sample code from the book at that site.
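If it helps to have something minimal to compile while working through the book, here is a rough sketch of the classic "sine tone through the default output unit" example. It assumes the unit's default Float32 stream format and a 44.1 kHz device sample rate; in real code you would set the stream format explicitly and query the device's actual rate instead of hard-coding it.

```cpp
// Rough sketch: a 440 Hz sine tone through the default output unit on macOS.
// Assumes the unit's default Float32 stream format and a 44.1 kHz sample rate.
#include <AudioUnit/AudioUnit.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cmath>

static double gPhase = 0.0;
static const double kFrequency  = 440.0;
static const double kSampleRate = 44100.0;

static OSStatus RenderTone(void *, AudioUnitRenderActionFlags *,
                           const AudioTimeStamp *, UInt32,
                           UInt32 inNumberFrames, AudioBufferList *ioData) {
    const double step = 2.0 * M_PI * kFrequency / kSampleRate;
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
        Float32 *out = (Float32 *)ioData->mBuffers[b].mData;
        double phase = gPhase;
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            out[i] = (Float32)(0.25 * sin(phase));   // 0.25 keeps the volume gentle
            phase += step;
        }
    }
    gPhase = fmod(gPhase + step * inNumberFrames, 2.0 * M_PI);
    return noErr;
}

int main() {
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioUnit output;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &output);

    AURenderCallbackStruct cb = { RenderTone, NULL };
    AudioUnitSetProperty(output, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(output);
    AudioOutputUnitStart(output);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 5.0, false);   // play for ~5 seconds
    AudioOutputUnitStop(output);
    AudioComponentInstanceDispose(output);
    return 0;
}
```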
If you are looking to synthesize audio fairly easily, you are going to want to use a third-party library. Two possible solutions are FMOD and SuperCollider.
The main trade-off between the two is that SuperCollider runs as a server that your app connects to as a client, while FMOD is compiled into the app and uses Core Audio to synthesize the sound. FMOD is clearly the choice if you are planning on distributing this app. SuperCollider also has its own language, whose basics you'd have to learn to start tailoring your sound synthesis. Here are some links:
FMOD:
FMOD Downloads (Comes with a bunch of sample code)
Super Collider:
SC Server Download
Sine Wave Generator Sample App
Great source of SC scripts and examples
I'm building a simple Cocoa app and I want to direct the audio output to a specific device, instead of the system selected one. I know some apps, like Skype, let you select where to send the output to. How do they do this?
I tried the MTCoreAudio framework but I can't even compile my app (or their AudioMonitor demo) with it included and the errors aren't helpful (_objc_fatal). Are there any complete examples that I can learn from? So far my searches haven't turned anything up.
Thanks!
The CAPlayThrough example on the Mac Dev Center Sample Code library shows how to list all of the available input and output devices, and select a default device from a menu.
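The device-selection part itself is small: create an AUHAL-based output unit and set kAudioOutputUnitProperty_CurrentDevice on it. A hedged sketch (assuming you already have the AudioDeviceID you want, e.g. from enumerating kAudioHardwarePropertyDevices as CAPlayThrough does for its menus):

```cpp
// Sketch: point an output AudioUnit at a specific device on macOS.
// Assumes the caller already knows the AudioDeviceID; error handling omitted.
#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

AudioUnit CreateOutputUnitForDevice(AudioDeviceID device) {
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_HALOutput;   // device-specific output
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioUnit unit;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &unit);

    // Route this unit's output to the chosen device instead of the system default.
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0, &device, sizeof(device));

    AudioUnitInitialize(unit);
    return unit;
}
```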
Have you looked through the sample code on http://developer.apple.com ?
Look at these projects http://developer.apple.com/mac/library/navigation/index.html?section=Resource+Types&topic=Sample+Code
Namely the DefaultAudioUnit project.
I should say that working with Core Audio is more challenging than Cocoa. Most of the APIs are C-based (which I find harder). You should also read the Core Audio Programming Guide to get a sense of how the audio system is put together.