I know how to play mp3 files and whatnot in Xcode iOS. But how do I play a certain frequency, like if I just wanted to emit a C# note for 25 seconds; how might I do that? (The synth isn't as important to me as just the pitch of the note.)
You need to generate the PCM audio waveform that corresponds to the note you want to play and store that into a sample buffer in memory. Then you send that buffer to the audio hardware.
Here is a tutorial on generating waveforms of several types. The article goes into some detail on the many aspects of a note you need to consider, including frequency, volume, waveform shape, sampling rate, and so on. The article comes with Flash source code, but I think you should have no problem taking the concepts and adapting them to iOS.
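As a concrete illustration, here is a minimal sketch in C that fills a buffer with 16-bit PCM samples of a sine wave at a chosen frequency. The frequency (C#5, roughly 554.37 Hz), sample rate, and amplitude below are just illustrative values, not anything the tutorial prescribes:

    #include <math.h>
    #include <stdint.h>
    #include <stdlib.h>

    // Fill a buffer with 'seconds' worth of a sine wave at 'frequency' Hz,
    // as 16-bit signed PCM at 'sampleRate' samples per second.
    int16_t *makeTone(double frequency, double seconds, int sampleRate, size_t *outCount)
    {
        size_t count = (size_t)(seconds * sampleRate);
        int16_t *samples = malloc(count * sizeof(int16_t));
        if (!samples) return NULL;

        double amplitude = 0.8 * INT16_MAX;   // leave a little headroom
        for (size_t i = 0; i < count; i++) {
            double t = (double)i / sampleRate;
            samples[i] = (int16_t)(amplitude * sin(2.0 * M_PI * frequency * t));
        }
        *outCount = count;
        return samples;
    }

    // Example: 25 seconds of C#5 (about 554.37 Hz) at 44.1 kHz.
    // size_t n; int16_t *tone = makeTone(554.37, 25.0, 44100, &n);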
If you also need a library that you can use to play the generated buffers on iOS, then I recommend the open source Finch.
I hope this helps!
You can synthesize waveforms of your desired frequency and feed them to the callbacks of either the Audio Queue or the RemoteIO Audio Unit API.
Here is a short tutorial on some of the code needed to create sine wave tones for iOS in C.
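To give an idea of the shape of that code: with the RemoteIO unit, the render callback is where you produce samples on demand. A rough sketch, assuming the unit has been configured elsewhere for mono 32-bit float PCM (the struct name and field values are placeholders):

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // Illustrative state passed to the callback via inRefCon.
    typedef struct {
        double phase;        // current phase in radians
        double frequency;    // e.g. 554.37 for C#5
        double sampleRate;   // e.g. 44100.0
    } ToneState;

    // Render callback: fills each requested buffer with a sine wave.
    // Assumes the RemoteIO unit was configured for mono Float32 samples.
    static OSStatus RenderTone(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
    {
        ToneState *state = (ToneState *)inRefCon;
        double phaseStep = 2.0 * M_PI * state->frequency / state->sampleRate;
        Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            out[frame] = (Float32)sin(state->phase);
            state->phase += phaseStep;
            if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
        }
        return noErr;
    }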
My wife asked for a device to make the xmas lights 'rock' along with the music. I am going to use an Arduino microcontroller to control relays hooked up to the lights, sending down six on/off signals from a C# WinForms app. I want to use NAudio to extract the amplitude and rhythm needed to drive those six signals: a specific frequency range for each, like an equalizer with six bars, plus the timing from the rhythm. I have seen the WPF demo, and the waveform display seems like the answer. I want to know how to get those values in real time while the song is playing.
I'm thinking ...
1. Create a simple mp3 player and load all my songs.
2. Start the songs playing.
3. Sample the current dynamics of the song and turn that into an integer I can send to the appropriate channel on the Arduino microcontroller via USB.
I'm not sure how to capture the current sound information in real time and turn it into integer values for that moment. I can read the e.MaxSampleValues[0] values in real time while the song is playing, but I want to be able to distinguish which frequency range is active at that moment.
Any help or direction would be appreciated for this interesting project.
Thank you
Sounds like a fun signal processing project.
Using the NAudio.Wave.WasapiLoopbackCapture object you can get the audio data being produced by the sound card on the local computer. This lets you skip the 'create an MP3 player' step, although at the cost of a slight delay between sound and lights. To get better synchronization you can decode the MP3 yourself and pre-calculate the beat patterns and output states, then during playback adjust the delay between sending the outputs and playing the audio block those outputs were generated from, getting near-perfect synchronization between lights and music.
Once you have the samples, the next step is to use an FFT to find the frequency components. Fortunately NAudio includes a class to help with this: NAudio.Dsp.FastFourierTransform. (Thank you Mark!) Take the output of the FFT() function and sum the bins in each frequency range you want for each controlled light.
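The band summation itself is plain arithmetic over the FFT bins. Here is a language-agnostic sketch in C (the FFT length and the six band edges are made-up illustrative values; with NAudio you would feed it the real and imaginary parts produced by FastFourierTransform.FFT):

    #include <math.h>

    #define FFT_SIZE   1024     /* illustrative FFT length          */
    #define NUM_BANDS  6        /* one band per light channel       */

    /* Illustrative band edges in Hz, bass through treble. */
    static const double bandEdges[NUM_BANDS + 1] = { 20, 60, 250, 500, 2000, 6000, 16000 };

    /* Sum the magnitudes of the FFT bins that fall inside each band.
     * re/im are the real and imaginary parts of the first FFT_SIZE/2 bins. */
    void sumBands(const float *re, const float *im, double sampleRate, double bands[NUM_BANDS])
    {
        double binWidth = sampleRate / FFT_SIZE;   /* Hz covered by one bin */
        for (int b = 0; b < NUM_BANDS; b++) bands[b] = 0.0;

        for (int bin = 1; bin < FFT_SIZE / 2; bin++) {
            double freq = bin * binWidth;
            double magnitude = sqrt(re[bin] * re[bin] + im[bin] * im[bin]);
            for (int b = 0; b < NUM_BANDS; b++) {
                if (freq >= bandEdges[b] && freq < bandEdges[b + 1]) {
                    bands[b] += magnitude;
                    break;
                }
            }
        }
    }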
The next step is Beat Detection. There's an interesting article on this here. The main difference is that instead of doing energy detection on a stream of sample blocks you'll be using the data from your spectral analysis stage to feed the beat detection algorithm. Those ranges you summed become inputs into individual beat detection processors, giving you one output for each frequency range you defined. You might want to add individual scaling/threshold factors for each frequency group, with some sort of on-screen controls to adjust these for best effect.
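Per frequency band, the detector boils down to comparing the current band energy against a short rolling average, which is also where the per-band threshold knob lives. A simplified sketch in C (the history length and threshold factor are illustrative, not values from the linked article):

    #define HISTORY_LEN 43          /* roughly 1 second of history at ~43 FFT frames/sec */

    typedef struct {
        double history[HISTORY_LEN]; /* recent band energies, used as a circular buffer */
        int    pos;
        double threshold;            /* e.g. 1.5: beat when energy > 1.5 * average */
    } BeatDetector;

    /* Returns 1 ("light on") when the current band energy spikes above the
     * recent average by the threshold factor, 0 otherwise.
     * Zero-initialise the struct and set threshold before use. */
    int detectBeat(BeatDetector *d, double bandEnergy)
    {
        double sum = 0.0;
        for (int i = 0; i < HISTORY_LEN; i++) sum += d->history[i];
        double average = sum / HISTORY_LEN;

        d->history[d->pos] = bandEnergy;           /* update the rolling history */
        d->pos = (d->pos + 1) % HISTORY_LEN;

        return average > 0.0 && bandEnergy > d->threshold * average;
    }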
At the end of the process you will have a stream of sample blocks, each with a set of output flags. Push the flags out to your Arduino and queue the samples to play, with a delay on either of those operations to achieve your synchronization.
I'm wondering whether it's possible to slow down a sound in Xcode. I mean, I'll add an .mp3 file to my supporting files in Xcode and create an app that can speed it up or slow it down, for example with a slider. Is that even possible? If yes, could anyone help me with an idea? Thanks
AVAudioPlayer has a rate property which should be able to help you accomplish your goal.
http://developer.apple.com/library/IOS/#documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Reference/Reference.html
The audio player's playback rate.
@property float rate
Discussion
This property's default value of 1.0 provides normal playback rate. The available range is from 0.5 for half-speed playback through 2.0 for double-speed playback.
To set an audio player's playback rate, you must first enable rate adjustment as described in the enableRate property description.
I also found a good SO post on the AVAudioPlayer's rate:
AVAudioPlayer rate
It seems that, as you mentioned, you could set up a slider with values from 0.5 to 2.0 and, on its value-changed event, modify the audio player's rate using something like:
- (IBAction)changeValue:(UISlider *)sender
{
    // _audioPlayer is an assumed AVAudioPlayer ivar; the slider ranges from 0.5 to 2.0.
    // enableRate must be set to YES before prepareToPlay/play for rate changes to take effect.
    if ([_audioPlayer respondsToSelector:@selector(setEnableRate:)])
        _audioPlayer.enableRate = YES;
    if ([_audioPlayer respondsToSelector:@selector(setRate:)])
        _audioPlayer.rate = sender.value;
}
Playing PCM audio at a rate faster or slower than its sample rate changes its pitch and also introduces considerable artefacts. If you're OK with that, the approach would be to decode the MP3 into PCM audio and then use a direct digital synthesis (DDS) oscillator to control the playback rate.
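The DDS idea is essentially a phase accumulator stepped through the decoded PCM at a fractional increment: a step of 0.5 plays at half speed (and an octave lower), 2.0 at double speed. A rough sketch in C with linear interpolation between neighbouring samples (the names and the interpolation choice are just illustrative):

    #include <stddef.h>

    /* Read-out position into the decoded PCM buffer, kept as a double so the
     * step can be fractional. step would come from the slider, e.g. 0.5 .. 2.0. */
    typedef struct {
        double position;
        double step;
    } DDSOscillator;

    /* Fill 'out' with up to 'frames' samples taken from 'pcm' (length 'pcmLen'),
     * advancing by 'step' source samples per output sample and linearly
     * interpolating between neighbouring samples. Returns frames written. */
    size_t ddsRead(DDSOscillator *osc, const float *pcm, size_t pcmLen,
                   float *out, size_t frames)
    {
        size_t written = 0;
        while (written < frames && osc->position < (double)(pcmLen - 1)) {
            size_t i = (size_t)osc->position;
            double frac = osc->position - (double)i;
            out[written++] = (float)((1.0 - frac) * pcm[i] + frac * pcm[i + 1]);
            osc->position += osc->step;
        }
        return written;
    }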
If you want to maintain pitch but change speed, you need an audio time-stretching algorithm.
Dirac3 from DSP Dimension is a commercial product that can do this, and it is available for licensing for use in iOS applications. Other commercial solutions exist.
DSP Dimension's blog provides a helpful tutorial on the basics of how to implement pitch-shifting using an FFT. Time stretching is essentially the same process. However, there's a fair bit of secret sauce in the DIRAC plug-in that they don't tell you about.
Be warned that unless you're an electronics engineering, physics, or maths graduate, you'll probably find it tough going to fill in the blanks.
I've been looking at the NAudio demo application "Audio file playback". What I'm missing from this demo is a way to get hold of the samples while the audio file is being played.
I figured that it would somehow be possible to fill a BufferedWaveProvider with samples using a callback whenever new samples are needed, but I can't figure out how.
My other (non-preferred) idea is to make a special version of e.g. DirectSoundOut where I can get hold of the samples before they are written to the sound card.
Any ideas?
With audio file playback in NAudio you construct an audio pipeline, starting with your audio file and going through various transformations (e.g. changing volume) along the way before ending up at your output device. The NAudioDemo does in fact show how the samples can be accessed along the way by drawing a waveform (pre-volume adjustment) and by showing a volume meter (post-volume adjustment).
You could, for example, create an implementer of IWaveProvider or ISampleProvider and insert it into the pipeline. Then, in the Read method, you read from your source, and then you can process or examine or write to disk the samples before passing them on to the next stage in the pipeline. Look at the AudioPlaybackPanel.CreateInputStream to see how this is done in the demo.
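Conceptually the inserted stage is just a pass-through whose read pulls from its source, lets you inspect or record the samples, and then hands them on unchanged. A language-agnostic sketch in C of that pattern (in NAudio you would express the same thing as an ISampleProvider whose Read method delegates to its source):

    #include <stddef.h>

    /* A generic "pull" source: fills 'buffer' with up to 'count' float samples
     * and returns how many were produced. */
    typedef size_t (*ReadFn)(void *source, float *buffer, size_t count);

    typedef struct {
        void  *source;      /* upstream stage in the pipeline */
        ReadFn sourceRead;  /* its read function              */
        void (*onSamples)(const float *buffer, size_t count); /* your tap */
    } TapStage;

    /* Read for the tap stage: pull from the source, let the callback inspect
     * or save the samples, then pass them on untouched. */
    size_t tapRead(TapStage *tap, float *buffer, size_t count)
    {
        size_t got = tap->sourceRead(tap->source, buffer, count);
        if (tap->onSamples && got > 0)
            tap->onSamples(buffer, got);
        return got;
    }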
Does anybody know where I can find a MIDI library with pre-recorded sound samples of standard instruments? I need to integrate that into my Objective-C application code.
Well... there's no such thing exactly as a "MIDI library". What you are looking for is a sampler, which is an instrument that takes MIDI notes and plays out audio samples based on the note number.
In any case, the Iowa Orchestra recorded a lovely set of samples for a number of instruments, which can be found here:
http://www.sonicspot.com/news/free-musical-instrument-samples.html
The samples are great quality, and broken down by type and note number.
I have a multimedia application that, among other things, converts video using FFmpeg. Video conversion being the pain that it is, I have in my test suites some tests that check our ability to convert various video formats, with emphasis on sample videos known not to work.
A common problem we've noticed from users is that some videos end up with their audio desynched after being processed, and I am looking for a way to check this in my tests.
Extracting the audio portion of the resulting videos is not a problem.
My best idea so far would be to check the offset of the first non-silence at both the beginning and end and compare each between the two videos, but I'm hoping someone smart has a better idea.
The application language/environment is Java, but since this is for testing, I'm free to use any toolset.
The basic problem is likely that the video and audio are different lengths. Extract the audio and test its length vs. the video length. If they are significantly different (more than maybe .05 sec, I'm not really sure what is detectable as "off"), then there's a problem.
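Since the pipeline already uses FFmpeg, one way to automate that check is with libavformat: open the converted file, read the audio and video stream durations, and fail the test when they differ by more than your tolerance. A minimal sketch in C (the 0.05 s tolerance just mirrors the guess above, and error handling is trimmed):

    #include <stdio.h>
    #include <math.h>
    #include <libavformat/avformat.h>

    /* Duration of the best stream of the given type, in seconds, or -1.0 if unknown. */
    static double stream_duration(AVFormatContext *fmt, enum AVMediaType type)
    {
        int idx = av_find_best_stream(fmt, type, -1, -1, NULL, 0);
        if (idx < 0) return -1.0;
        AVStream *st = fmt->streams[idx];
        if (st->duration == AV_NOPTS_VALUE) return -1.0;
        return st->duration * av_q2d(st->time_base);
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) return 2;
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) return 2;
        avformat_find_stream_info(fmt, NULL);

        double audio = stream_duration(fmt, AVMEDIA_TYPE_AUDIO);
        double video = stream_duration(fmt, AVMEDIA_TYPE_VIDEO);
        printf("audio %.3f s, video %.3f s\n", audio, video);

        int ok = audio > 0 && video > 0 && fabs(audio - video) <= 0.05;
        avformat_close_input(&fmt);
        return ok ? 0 : 1;   /* non-zero exit fails the test */
    }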
To fix it, re-encode the audio to match the video length, and then put the audio and video back into a container format.