headphone plug-in plug-out event when audio route doesn't change - iOS - objective-c

I'm working on iPad.
I would like to detect when the user unplugs the headphones. At first I used a listener on the kAudioSessionProperty_AudioRouteChange property, and everything worked well until I decided to add a button to switch to the speakers while the headphones were still plugged in. I'm now facing a problem; maybe someone has an idea how to fix it.
Here is the scenario:
I plug in the headphones -> my audio route change callback is called
then I switch the sound to the speakers (without unplugging the headphones) -> the audio route change callback is called
then I unplug the headphones (while the sound is still going to the speakers) -> the audio route change callback is NOT called, which seems logical.
And here is my problem! So my question is: do you see a way to detect that the headphones were unplugged in this last case?
Thanks for your help
EDIT:
OK, I found a workaround:
To detect whether or not headphones are plugged in, I execute a test function every time I need to know (instead of caching a boolean). This may be worse for performance, but it works. Here is my code for anyone who may need it:
//test method that returns whether headphones are currently plugged in
- (BOOL)isHeadphonePluggedIn
{
    //set back the default audio route
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
    //check if this default audio route is Headphones or Speaker
    CFStringRef newAudioRoute;
    UInt32 newAudioRouteSize = sizeof(newAudioRoute);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &newAudioRouteSize, &newAudioRoute);
    //__bridge_transfer hands ownership to ARC, so no explicit CFRelease is needed
    NSString *newAudioRouteString = (__bridge_transfer NSString *)newAudioRoute;
    //if this default audio route is not Headphones, it means no headphones are plugged in
    if ([newAudioRouteString rangeOfString:@"Headphones"].location != NSNotFound) {
        NSLog(@"Earphone available");
        return YES;
    } else {
        NSLog(@"No earphone available");
        return NO;
    }
}
Hope it helps someone!

I imagine a solution in the following way:
You create a boolean for the speakers in the AppDelegate, say BOOL isSpeakerOn, and every time the audio route callback is called you check what the current speaker situation is and decide what you want to do, as in the sketch below.
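A minimal sketch of that idea, assuming a callback registered with AudioSessionAddPropertyListener (the isSpeakerOn flag and the callback body are illustrative, not from the original answer):
//illustrative flag: has the user forced output to the speakers?
static BOOL isSpeakerOn = NO;

//route-change callback registered with AudioSessionAddPropertyListener
static void audioRouteChangeCallback(void *inUserData,
                                     AudioSessionPropertyID inPropertyID,
                                     UInt32 inDataSize,
                                     const void *inData)
{
    if (inPropertyID != kAudioSessionProperty_AudioRouteChange) return;
    if (isSpeakerOn) {
        //output was forced to the speakers: decide whether to keep the override
    } else {
        //normal case: react to the headphones being plugged in or unplugged
    }
}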

This is the best tutorial dealing with this issue:
http://www.techotopia.com/index.php/Detecting_when_an_iPhone_Headphone_or_Docking_Connector_is_Unplugged_(iOS_4)

Related

Mute iOS microphone input with ReactNative

I am using Agora's ReactNative library for a group-calling project, and I need the user to be able to mute/unmute the phone microphone. Currently there is only a muteLocalAudioStream function, which mutes the whole stream, including the background sounds, since the streamer can add background sound to the call.
For Android I managed to mute/unmute the microphone with the below hack:
@ReactMethod
public void muteMic() {
    AudioManager audioManager = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
    audioManager.setMode(AudioManager.MODE_IN_CALL);
    //toggle the microphone mute state
    if (!audioManager.isMicrophoneMute()) {
        audioManager.setMicrophoneMute(true);
    } else {
        audioManager.setMicrophoneMute(false);
    }
}
However, I couldn't do it for iOS; I'd appreciate your help on this.
You can change the recording level with a call to:
// volume: [0,100]
RtcEngine.adjustRecordingSignalVolume(volume)
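A possible mute/unmute toggle built on that call (a sketch; the isMuted flag and the toggleMic name are illustrative, assuming the same RtcEngine import used above):
let isMuted = false;

//mute by zeroing the recording signal, unmute by restoring full volume
function toggleMic() {
  isMuted = !isMuted;
  RtcEngine.adjustRecordingSignalVolume(isMuted ? 0 : 100);
}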

NAudio: How to accurately get the current play location when changing play position using AudioFileReader and WaveOutEvent

I'm creating an application that needs to allow the user to select the play position of an audio file while the file is playing. However, once the position is changed, I'm having trouble identifying the current position.
Here's an example program that shows how I'm going about it.
using NAudio.Utils;
using NAudio.Wave;
using System;

namespace NAudioTest
{
    class Program
    {
        static void Main()
        {
            var audioFileReader = new AudioFileReader("test.mp3");
            var waveOutEvent = new WaveOutEvent();
            waveOutEvent.Init(audioFileReader);
            waveOutEvent.Play();
            while (true)
            {
                var key = Console.ReadKey(true);
                if (key.Key == ConsoleKey.Enter)
                {
                    var playLocationSeconds =
                        waveOutEvent.GetPositionTimeSpan().TotalSeconds;
                    Console.WriteLine(
                        "Play location is " + playLocationSeconds + " seconds");
                }
                else if (key.Key == ConsoleKey.RightArrow)
                {
                    audioFileReader.CurrentTime =
                        audioFileReader.CurrentTime.Add(TimeSpan.FromSeconds(30));
                }
                else
                {
                    break;
                }
            }
        }
    }
}
Steps to reproduce the problem
Start the program: the audio file starts playing
Press the enter key: the current play time is written to the console
Press the right arrow key: the played audio jumps ahead (presumably) to the expected location
Press the Enter key again: a play time is written to the console, but it looks to be the amount of time since the audio first started playing, not the time of the current play position.
I have tried getting the value of AudioFileReader.CurrentTime instead of calling GetPositionTimeSpan on the WaveOutEvent. The problem with this approach is that the AudioFileReader.CurrentTime value proceeds in jumps, presumably because the underlying stream is buffered when used with WaveOutEvent, so CurrentTime reflects only the position in the underlying stream, not the actual play position.
How do I support arbitrary play positioning yet continue to get an accurate play position current time?
The "CurrentTime" property of your audio file reader is good enough to tell the current position of playback, especially if your latency is not very high. I found the difference between it and waveOutEvent.GetPositionTimeSpan() to be 100-200 ms. at most.
You are indeed using the setter of the CurrentTime property to reposition within the stream. It would be consistent to use the getter to then query the current position as well. If you are concerned with precision, you can use lower latency.
The extension method "GetPositionTimeSpan()" does seem to return the total length of playback so far and not the position within the stream. Admittedly I do not know why this is so.
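For example, a sketch of that lower-latency setup (DesiredLatency is in milliseconds; 100 is an illustrative value, the NAudio default being 300):
var audioFileReader = new AudioFileReader("test.mp3");

//smaller buffers keep CurrentTime closer to the audible position
var waveOutEvent = new WaveOutEvent { DesiredLatency = 100 };
waveOutEvent.Init(audioFileReader);
waveOutEvent.Play();

//read back the same property that is used for repositioning
Console.WriteLine("Play location is " + audioFileReader.CurrentTime.TotalSeconds + " seconds");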

How to mute/unmute mic in webrtc

I have read here how I can mute/unmute the mic for a local stream in WebRTC: WebRTC Tips & Tricks
When I start my local stream, the mic is enabled by default, so when I set audioTracks[0].enabled = false it mutes the mic in my local stream, but when I set it back to true it does not unmute. Here is my mute/unmute code for the local stream:
getLocalStream(function (stream, enable) {
    if (stream) {
        //toggle every audio track on the local stream
        var audioTracks = stream.getAudioTracks();
        for (var i = 0; i < audioTracks.length; i++) {
            audioTracks[i].enabled = enable;
        }
    }
});
Can someone suggest how I can unmute the mic on the local stream?
I assume that your method getLocalStream is actually calling navigator.getUserMedia. In that case, each call gives you another stream, not the original one. Using the original stream you should do
mediaStream.getAudioTracks()[0].enabled = true; // or false to mute it.
Alternatively you can check https://stackoverflow.com/a/35363284/1210071
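A sketch of keeping a single reference to the original stream instead of requesting a new one each time (the localStream variable and the constraints are illustrative):
var localStream = null;

//request the stream once and keep the reference
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function (stream) {
        localStream = stream;
    });

//later: mute or unmute that very same stream
function setMicEnabled(enabled) {
    if (localStream) {
        localStream.getAudioTracks()[0].enabled = enabled;
    }
}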
There are two properties, enabled and muted.
enabled is for setting, while muted is read-only on the remote side (the other person). I have tried it: setting muted does not work; the value simply cannot be changed.
stream.getAudioTracks()[0].enabled = true; // the remote side will see the mute change
Ahhh there is a good way to do this:
mediaStream.getVideoTracks()[0].enabled = !(mediaStream.getVideoTracks()[0].enabled);
You should read and set the "enabled" value; "enabled" is what does the 'muting'. The "muted" value is read-only and reflects whether the track is currently unable to play.
The enabled property on the MediaStreamTrack interface is a Boolean value which is true if the track is allowed to render the source stream or false if it is not. This can be used to intentionally mute a track. When enabled, a track's data is output from the source to the destination; otherwise, empty frames are output.
In the case of audio, a disabled track generates frames of silence (that is, frames in which every sample's value is 0). For video tracks, every frame is filled entirely with black pixels.
The value of enabled, in essence, represents what a typical user would consider the muting state for a track, whereas the muted property indicates a state in which the track is temporarily unable to output data, such as a scenario in which frames have been lost in transit.
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/enabled
Step 1) Include jquery.min.js
Step 2) Use the code below:
A) To mute
$("video").prop('muted', true);
B) To unmute
$("video").prop('muted', false);
Single-icon mute and unmute, like YouTube:
function enablemute(thisimag) {
    if ($(thisimag).attr('src') == 'images/mute.png') {
        $("video").prop('muted', false);
        $(thisimag).prop('src', 'images/unmute.png');
    } else {
        $("video").prop('muted', true);
        $(thisimag).prop('src', 'images/mute.png');
    }
}
The enablemute function above should be called from an onclick handler.

UWP custom toast notification sound doesn't play on mobile

So I have an xml notification body which includes an audio element with a source (src) attribute pointing to a preset Windows sound, and it doesn't play the sound I want; instead it plays the default system sound. My notification xml looks like this (I use this as a test message sent through Azure Notification Hubs' debug option):
<?xml version="1.0" encoding="utf-8"?>
<toast>
  <visual>
    <binding template="ToastText01">
      <text id="1">Test message</text>
    </binding>
  </visual>
  <audio src="ms-winsoundevent:Notification.Looping.Alarm" loop="false"/>
</toast>
I don't have any toast handling in my app (no background task is launched or anything). The funny thing is that my PC plays the sound it should when it receives the notification, but the phone plays the default sound every time.
I need to at least play a preset Windows sound, but playing a custom sound from local files would be ace (it doesn't work with custom sounds either). Also, if you know whether it's possible to start playing music from a background task triggered by a toast notification, let me know; I couldn't find any info on Google about this.
This is the Microsoft link that says my xml is valid (even though it doesn't work): https://msdn.microsoft.com/en-us/library/windows/apps/br230842.aspx
I don't have any toast handling in my app (no background task is launched or anything). The funny thing is that my PC plays the sound it should when it receives the notification, but the phone plays the default sound every time.
It looks like all values whose prefix is ms-winsoundevent:Notification.Looping are replaced by the default system sound when the loop attribute is set to false. Based on my understanding, this is an expected result: these values are for looping audio, so if you don't need looping, use one of the first 5 (non-looping) values, for example ms-winsoundevent:Notification.IM, as in the snippet below.
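For instance, the audio element from the question rewritten with a non-looping sound (a sketch based on the suggestion above):
<audio src="ms-winsoundevent:Notification.IM" loop="false"/>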
but playing a custom sound from local files would be ace (it doesn't work with custom sounds either)
This is a known issue which was mentioned in this article.
The reason is that the path parser has an issue resolving the ms-appx:/// path, so the audio src is regarded as invalid and the default sound is played.
The workaround is to copy your wav audio file programmatically to the LocalFolder and use the ms-appdata:///local/ protocol, for example:
private async void Button_Click(object sender, RoutedEventArgs e)
{
    Windows.Storage.StorageFile audioFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Assets/sound.wav"));
    Windows.Storage.StorageFolder localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
    await audioFile.CopyAsync(localFolder);
    AddNotification();
}

public void AddNotification()
{
    ToastTemplateType toastTemplate = ToastTemplateType.ToastText02;
    XmlDocument toastXml = ToastNotificationManager.GetTemplateContent(toastTemplate);
    XmlNodeList toastTextElements = toastXml.GetElementsByTagName("text");
    toastTextElements[0].AppendChild(toastXml.CreateTextNode("This is a Toast Message"));
    IXmlNode toastNode = toastXml.SelectSingleNode("/toast");
    ((XmlElement)toastNode).SetAttribute("launch", "MainPage.xaml");
    XmlElement audio = toastXml.CreateElement("audio");
    audio.SetAttribute("src", "ms-appdata:///local/sound.wav"); //Here
    toastNode.AppendChild(audio);
    ToastNotification toast = new ToastNotification(toastXml);
    ToastNotificationManager.CreateToastNotifier().Show(toast);
}

How to know if volume hardware control is up or down in objective-c

Is it possible to detect whether the hardware volume control is down or up? I need to play a sound on a button touch in my application, and I want to show the user a message that the volume is down and that he needs to turn the hardware volume up to use the app.
EDIT:
I need to do something like this:
BOOL volumeHardwareControl = getHardwareInfo();
if (volumeHardwareControl == NO) {
    //message: "Attention! To play sound you need to turn the hardware volume on!"
} else {
    playSound();
}
UInt32 routeSize = sizeof(CFStringRef);
CFStringRef route;
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute,
                        &routeSize,
                        &route);
//an empty route means audio is not currently being output
if (route == NULL) {
    NSLog(@"Silent switch is on");
}
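Note that the code above detects the audio route (the silent switch), not the volume level. If you need the actual hardware volume, here is a sketch using the newer AVAudioSession API (outputVolume is available since iOS 6; the volumeHardwareControl naming and playSound() follow the question's pseudocode):
#import <AVFoundation/AVFoundation.h>

//outputVolume is 0.0 with the hardware volume fully down and 1.0 at maximum
float volume = [AVAudioSession sharedInstance].outputVolume;
BOOL volumeHardwareControl = (volume > 0.0f);
if (volumeHardwareControl == NO) {
    NSLog(@"Attention! To play sound you need to turn the volume up!");
} else {
    playSound();
}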