How to record microphone and piano at the same time - Objective-C

I am trying to make an app in Swift 4 that can record the piano sound made by AudioKit's AKOscillatorBank and the vocals from the microphone at the same time. I tried AVFoundation's AVAudioRecorder, but it fails while I am playing the piano. I have also tried using AVAudioSessionCategoryPlayAndRecord, but it still does not seem to work:
func startRecording() {
    fileName = String(format: "AudioFile%d.m4a", k)
    k = k + 1
    let audioFilename = getDocumentsDirectory().appendingPathComponent(fileName)
    address.append("\(audioFilename)")
    print("address = \(address[num])")
    num = num + 1

    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 2,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]

    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord, with: AVAudioSessionCategoryOptions.defaultToSpeaker)
        try session.setActive(true)
        SoundRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
        SoundRecorder.isMeteringEnabled = true
        SoundRecorder.prepareToRecord()
        SoundRecorder.delegate = self
        SoundRecorder.record()
    } catch {
        // Errors are silently swallowed here
    }
}
I have no idea what else to try. Is there any way to do this in Swift or Objective-C? Any advice and suggestions will be greatly appreciated.

To record different AudioKit nodes, use AKNodeRecorder. There is a "Recorder" example project included with AudioKit:
https://github.com/AudioKit/AudioKit/blob/v4-master/Examples/iOS/Recorder/Recorder/ViewController.swift
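Roughly, the idea is to route both the oscillator bank and the microphone into a mixer and record that mixer node. Below is a minimal sketch against the AudioKit 4 API; the exact start-up calls vary slightly between 4.x releases (for example whether AudioKit.start() throws), so treat it as an outline rather than drop-in code:

import AudioKit

let piano = AKOscillatorBank()
let mic = AKMicrophone()
let mixer = AKMixer(piano, mic)          // both the synth and the mic feed this mixer
AudioKit.output = mixer

do {
    AKSettings.audioInputEnabled = true  // make the microphone available
    try AudioKit.start()
    let recorder = try AKNodeRecorder(node: mixer)   // records whatever the mixer outputs
    try recorder.record()
    // ... play the piano and sing ...
    recorder.stop()
    // recorder.audioFile now contains the combined recording
} catch {
    print("AudioKit setup/recording failed: \(error)")
}

AudioKit manages the AVAudioSession itself once audio input is enabled, so the manual AVAudioSessionCategoryPlayAndRecord / AVAudioRecorder code from the question should not be needed.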

Related

Improved / efficient code for multiple if - React Native

I'm a technical project manager coordinating developers. My coding skills are limited, so I align with my devs on logical structure. For a mobile app (React Native, to serve Android and iOS) we are currently developing the current-location logic. It turned out we have specific Android and iOS settings, which may require native development for the location-service handling.
I'm currently reviewing the following pseudocode and, among other things, I am concerned about the multiple IF statements and wondering whether there is a more efficient way, as here we seem to fall into every IF even when the location has already been set before.
I think there could be something like "until", "break" or similar, and maybe it is relevant to consider the specific dev language, React Native in this case?
Any help, as pseudocode or, if possible, already specific to React Native, is very much appreciated.
Many thanks in advance!
Code
//// Central location service to get the current location.
var currentLocation;
var locationType;

if Android {
    DEFINED_ACCURACY = QUALITY_BALANCED_POWER
} else if iOS {
    DEFINED_ACCURACY = kCLLocationAccuracyHundredMeters
}
DEFINED_DISTANCE = 3000

function startLocationService() {
    var locationService = LocationService()
    locationService.accuracy = DEFINED_ACCURACY
    locationService.distance = DEFINED_DISTANCE
    locationService.startListenerToLocationUpdate() {
        cacheData(location, currentTime);
        currentLocation = location
    }
}

function getLocationFromLocationService() {
    if locationServiceEnabled == false
        return NULL;
    if locationPermissionGranted == false
        return NULL;
    return currentLocation;
}

function getCurrentLocation() {
    var location = NULL;
    var accuracy = Not_defined;
    location = getLocationFromLocationService() // GPS, network
    locationType = locationService
    if location == NULL {
        locationData = getLastKnownLocation()
        if locationData.Age < DEFINED_DURATION {
            location = locationData.location
            locationType = locationService
        }
    }
    if location == NULL {
        location = getCachedLocationWithin24Hour()
        locationType = locationService
    }
    if location == NULL {
        location = getLocationFromTelephony()
        locationType = other
    }
    if location == NULL {
        if permissionToUseBilling == false {
            askForPermissionToUseBilling()
        }
        location = getLocationFromBilling()
        accuracy = other
    }
    if location == NULL {
        location = getLocationOfDefaultCountry()
        accuracy = other
    }
    return location, Other
}
I think you can use a switch statement to cover the different cases in JavaScript.
Probably your dev team knows this, but did not want to bring it into the pseudocode?
const location = getLocationFromLocationService();
switch (location) {
    case 'NULL':
        // DO SOMETHING WHEN NULL;
        break;
    case 'New York':
        // DO SOMETHING WHEN New York;
        // No break, so execution falls through into the next case
    case 'Bern':
        console.log('I love nice cities.');
        break;
    default:
        console.log(`Sorry, we are out of ${location}.`);
}
Interestingly, you seem to have the same if condition multiple times?
Maybe you will want to nest them, or use a switch with multiple expressions.
case ("Bern" && "ANOTHERExpression to be true"):
Link to switch documentation
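If the main concern is that every IF is evaluated even after a location has been found, the usual alternative to a switch is a chain of early returns: each fallback returns as soon as it succeeds, so the later ones are never reached. Here is a rough sketch of that shape, written in Swift-style pseudocode purely for illustration; the helper functions are the hypothetical ones from the question's pseudocode, and the same structure translates directly to JavaScript:

// Hypothetical sketch: try each source in order, return as soon as one succeeds.
func getCurrentLocation() -> (location: Location?, type: LocationType) {
    if let loc = getLocationFromLocationService() {
        return (loc, .locationService)               // GPS / network
    }
    if let last = getLastKnownLocation(), last.age < DEFINED_DURATION {
        return (last.location, .locationService)     // recent last-known fix
    }
    if let cached = getCachedLocationWithin24Hour() {
        return (cached, .locationService)            // cached within 24 h
    }
    if let telephony = getLocationFromTelephony() {
        return (telephony, .other)
    }
    if !permissionToUseBilling {
        askForPermissionToUseBilling()
    }
    if let billing = getLocationFromBilling() {
        return (billing, .other)
    }
    return (getLocationOfDefaultCountry(), .other)   // final fallback
}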
Please don't underestimate your dev team. ;-)

iOS 10 Local Notifications not showing

I've been working on setting up local notifications for my app in iOS 10, but when testing in the simulator I found that the notifications were successfully scheduled yet never actually appeared when their scheduled time came. Here's the code I've been using:
let UNcenter = UNUserNotificationCenter.current()

UNcenter.requestAuthorization(options: [.alert, .badge, .sound]) { (granted, error) in
    // Enable or disable features based on authorization
    if granted == true {
        self.testNotification()
    }
}
Then it runs this code (assume the date/time is in the future):
func testNotification() {
    let date = NSDateComponents()
    date.hour = 16
    date.minute = 06
    date.second = 00
    date.day = 26
    date.month = 1
    date.year = 2017

    let trigger = UNCalendarNotificationTrigger(dateMatching: date as DateComponents, repeats: false)

    let content = UNMutableNotificationContent()
    content.title = "TestTitle"
    content.body = "TestBody"
    content.subtitle = "TestSubtitle"

    let request = UNNotificationRequest(identifier: "TestID", content: content, trigger: trigger)

    UNUserNotificationCenter.current().add(request) { (error) in
        if let error = error {
            print("error: \(error)")
        } else {
            print("Scheduled Notification")
        }
    }
}
From this code, it will always print "Scheduled Notification", but when the notification is supposed to be triggered, it never triggers. I've been unable to find any fix for this.
Here are a few steps.
Make sure you have the permission. If not, use UNUserNotificationCenter.current().requestAuthorization to get it. Or follow the answer below if you want to show the request pop-up more than once.
If you want to show the notification while the app is in the foreground, you have to assign a UNUserNotificationCenterDelegate somewhere.
This answer might help.
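For the foreground case, here is a minimal sketch of the delegate hookup using the iOS 10 UserNotifications API; NotificationManager is just a hypothetical name for whatever object you keep alive (your app delegate works too):

import UserNotifications

final class NotificationManager: NSObject, UNUserNotificationCenterDelegate {
    // Called when a notification fires while the app is in the foreground.
    // Without a delegate returning presentation options, iOS 10 shows nothing in the foreground.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        completionHandler([.alert, .sound])
    }
}

// Assign the delegate early, e.g. in application(_:didFinishLaunchingWithOptions:).
// The delegate property is weak, so keep a strong reference to the manager.
let notificationManager = NotificationManager()
UNUserNotificationCenter.current().delegate = notificationManager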

Offline rendering with The Amazing Audio Engine

This post is also posted on The Amazing Audio Engine forum.
Hi everyone, I am new to The Amazing Audio Engine and iOS dev, and have been trying to figure out how to get the BPM of a track.
So far I have found two articles on offline rendering on the forum:
http://forum.theamazingaudioengine.com/discussion/comment/1743/#Comment_1743
http://forum.theamazingaudioengine.com/discussion/comment/649#Comment_649
As far as I know the AEAudioControllerRenderMainOutput function is only correctly implemented in this fork.
I am trying to do offline rendering to process a track and then use the algorithm described here (JavaScript) and implemented here.
So far I'm loading this fork, and I am using Swift (I am part of Make School Summer Academy at the moment, which teaches Swift).
When playing a track, this code works for me (no offline rendering!):
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "m4a")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleavedFloatStereoAudioDescription())

let receiver = AEBlockAudioReceiver { (source, time, frames, audioBufferList) -> Void in
    let leftSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData)
    // Advance the buffer sizeof(float) * 512
    let rightSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData) + 512
    println("leftSamples: \(leftSamples) rightSamples: \(rightSamples)")
}

audioController.addChannels([channel])
audioController.addOutputReceiver(receiver)
audioController.start()
Trying offline rendering
This is the code I am trying to run while I am using this fork
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleaved16BitStereoAudioDescription())

let file = NSBundle.mainBundle().URLForResource("track", withExtension: "mp3")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)

audioController.addChannels([channel])
audioController.start(nil)
audioController.stop()

var t = AudioTimeStamp()
let bufferLength: UInt32 = 4096
var buffer = AEAllocateAndInitAudioBufferList(audioController.audioDescription, Int32(bufferLength))
AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer)

var renderDuration: NSTimeInterval = channel.duration
var sampleRate: Float64 = audioController.audioDescription.mSampleRate
var lengthInFrames: UInt32 = UInt32(renderDuration * sampleRate)
var songBuffer: [Float64]
t.mFlags = UInt32(kAudioTimeStampSampleTimeValid)
var frequencyAnalyzer = FrequencyAnalyzer()

println("renderDuration \(renderDuration)")

var outIsOpen = Boolean()
AUGraphClose(audioController.audioGraph)
AUGraphIsOpen(audioController.audioGraph, &outIsOpen)
println("AUGraphIsOpen: \(outIsOpen)")

for (var i: UInt32 = 0; i < lengthInFrames; i += bufferLength) {
    AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer);
    t.mSampleTime += Float64(bufferLength)
    println(t.mSampleTime)

    let leftSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData)
    let rightSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData) + 512
    println("leftSamples: \(leftSamples.memory) rightSamples: \(rightSamples.memory)")
}

AEFreeAudioBufferList(buffer)
AUGraphOpen(audioController.audioGraph)
audioController.start(nil)
audioController.stop()
Offline rendering is not working for me at the moment. The second example does not work; it gives me a lot of mixed errors which I don't understand.
A very common one is inside the channelAudioProducer function on this line:
// Tell mixer/mixer's converter unit to render into audio
status = AudioUnitRender(group->converterUnit ? group->converterUnit : group->mixerAudioUnit, arg->ioActionFlags, &arg->originalTimeStamp, 0, *frames, audio);
It gives me EXC_BAD_ACCESS (code=EXC_I386_GPFLT). Among other errors this one is very common.
I am sorry, I am a total noob in this field, but there is some stuff I don't really understand. Should I use nonInterleaved16BitStereoAudioDescription or nonInterleavedFloatStereoAudioDescription? How does this affect the mData?
I would love to get some help on this since I'm kind of lost at the moment. When you answer, please try to explain as fully as you can; I am new to this stuff.
NOTE: Posting code in Objective-C is fine if you don't know Swift.

Is there a better way to fix my AS2 preloader?

I have a game with a preloader in scene 1, with the following code on the timeline.
stop();
loadingBar._xscale = 1;
var loadingCall:Number = setInterval(preloadSite, 50);
function preloadSite():Void {
var siteLoaded:Number = _root.getBytesLoaded();
var siteTotal:Number = _root.getBytesTotal();
var percentage:Number = Math.round(siteLoaded/siteTotal*100);
loadingBar._xscale = percentage;
bytesDisplay.text = percentage + "%";
if (siteLoaded >= siteTotal) {
clearInterval(loadingCall);
gotoAndPlay("StartMenu", 1);
}
}
The code works fine when there are no music files linked to frame 1. If there are music files linked, then everything loads before the preloader shows up.
I found this great webpage about preloaders, which speaks about the linkage issue, and suggests I put all the big files on frame 2, after the preloader, then skip them. I put my large files on frame 2 as suggested and the preloader worked again.
My question is: is there a better way to do this? This solution seems like a hack.
The only better option I can think of is to NOT store the MP3 file in your Flash file, but rather load it in your preloader along with your Flash file's content. This assumes you're storing your MP3 file somewhere else online (like on a server).
stop();
loadingBar._xscale = 1;

var sound:Sound = new Sound();
sound.loadSound("http://www.example.com/sound.mp3", false);

var loadingCall:Number = setInterval(preloadSite, 50);

function preloadSite():Void {
    var siteLoaded:Number = _root.getBytesLoaded() + sound.getBytesLoaded();
    var siteTotal:Number = _root.getBytesTotal() + sound.getBytesTotal();
    var percentage:Number = Math.round(siteLoaded / siteTotal * 100);
    loadingBar._xscale = percentage;
    bytesDisplay.text = percentage + "%";
    if (siteLoaded >= siteTotal) {
        clearInterval(loadingCall);
        gotoAndPlay("StartMenu", 1);
        sound.start();
    }
}

FFMpeg Out of sync audio/video in iOS application

The app saves the camera output into a .mov file, then turns it into FLV format and sends it via AVPacket to an RTMP server.
It switches every time between two files: one is being written by the camera output while the other one is sent.
My problem is that the audio/video gets out of sync after a while.
The first buffer sent is always 100% in sync, but after a while it gets messed up.
I believe it's a DTS/PTS problem:
if (isVideo)
{
    packet->stream_index = VIDEO_STREAM;
    packet->dts = packet->pts = videoPosition;
    videoPosition += packet->duration = FLV_TIMEBASE * packet->duration * videoCodec->ticks_per_frame * videoCodec->time_base.num / videoCodec->time_base.den;
}
else
{
    packet->stream_index = AUDIO_STREAM;
    packet->dts = packet->pts = audioPosition;
    audioPosition += packet->duration = FLV_TIMEBASE * packet->duration / audioRate;
    //NSLog(@"audio position = %lld", audioPosition);
}

packet->pos = -1;
packet->convergence_duration = AV_NOPTS_VALUE;

// This sometimes fails without being a critical error, so no exception is raised
if ((code = av_interleaved_write_frame(file, packet)))
{
    NSLog(@"Streamer::Couldn't write frame");
}

av_free_packet(packet);
You can study this sample: http://unick-soft.ru/art/files/ffmpegEncoder-vs2008.zip
But this sample is for Windows.
In this sample I use pts only for the audio stream:
if (pVideoCodec->coded_frame->pts != AV_NOPTS_VALUE)
{
    pkt.pts = av_rescale_q(pVideoCodec->coded_frame->pts,
                           pVideoCodec->time_base, pVideoStream->time_base);
}
I was experiencing a similar issue when switching out the AVAssetWriters, and noticed that it went away if I only started using the new AVAssetWriter once I got a video sample:
https://medium.com/@brandon.kobel/ios-seamless-video-chunks-4383a5a3a874
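Here is a rough sketch of that idea, assuming an AVCaptureSession whose audio and video data outputs share one delegate; the writer and its inputs are hypothetical names, set up elsewhere with startWriting() already called. The point is only the ordering: audio is dropped until the first video sample has started the writer's session, so early audio cannot shift the timeline relative to the video.

import AVFoundation

final class ChunkWriter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate,
                                   AVCaptureAudioDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput
    private var hasStartedSession = false

    init(writer: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.writer = writer
        self.videoInput = videoInput
        self.audioInput = audioInput
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let isVideo = output is AVCaptureVideoDataOutput

        // Hold back audio until the first video frame defines the session start time.
        if !hasStartedSession {
            guard isVideo else { return }
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            hasStartedSession = true
        }

        if isVideo, videoInput.isReadyForMoreMediaData {
            videoInput.append(sampleBuffer)
        } else if !isVideo, audioInput.isReadyForMoreMediaData {
            audioInput.append(sampleBuffer)
        }
    }
}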