How can I fix the local SDP and set rrtr with JavaScript? - webrtc

Here is my code. I create the peer connection, call createOffer, and get the session description object, but I want to set rrtr and a 30% packet loss rate in the local SDP.
pc.createOffer(this.mediaConstraints).then(function(sessionDescription) {
    let sdp = sessionDescription.sdp;
    // modify the sdp to set rrtr
    let a = sdp.split('\r\n');
    sessionDescription.sdp = a.map((v, i, aa) => {
        if (v.startsWith('a=rtpmap') && aa[i + 1] && aa[i + 1].endsWith('goog-remb')) {
            let arr = v.split(/\s+/);
            return v + '\r\n' + arr[0] + ' rrtr';
        }
        return v;
    }).join('\r\n');
    pc.setLocalDescription(sessionDescription);
    // here the error occurs
});
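For comparison, here is a sketch of the same munging that appends a media-level a=rtcp-xr:rcvr-rtt=all attribute, which is how RFC 3611 signals the receiver reference time report (rrtr), instead of adding a second a=rtpmap line. Whether this is what your receiver expects for rrtr is an assumption, a browser may still ignore or reject munged attributes it does not implement, and this sketch does not address the 30% packet-loss part.
pc.createOffer(this.mediaConstraints).then(function(sessionDescription) {
    const lines = sessionDescription.sdp.split('\r\n');
    let inVideo = false;
    let added = false;
    sessionDescription.sdp = lines.map((line) => {
        // track which media section we are in; add one rtcp-xr line per video section
        if (line.startsWith('m=')) {
            inVideo = line.startsWith('m=video');
            added = false;
        }
        // piggyback on the goog-remb feedback line so the new attribute
        // lands inside the right media section, after existing a= lines
        if (inVideo && !added && line.startsWith('a=rtcp-fb:') && line.endsWith('goog-remb')) {
            added = true;
            return line + '\r\na=rtcp-xr:rcvr-rtt=all';
        }
        return line;
    }).join('\r\n');
    return pc.setLocalDescription(sessionDescription);
});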

Related

Detect if MediaStreamTrack is black/blank

I'm creating videochat with peerjs.
I'm toggling camera (on/off) with the following function:
function toggleCamera() {
    localStream.getVideoTracks()[0].enabled = !(localStream.getVideoTracks()[0].enabled);
}
After calling this function, video goes black and receiver gets just black screen (which works as intended).
Now I want to detect black/blank screen so I can show user some message or icon that camera is disabled and there is no stream.
How do I detect that?
The common approach is to send a signaling message (either via the normal path or a datachannel). Polling getStats to detect the black frames is a valid approach but more expensive in terms of computation.
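For reference, a rough sketch of the data-channel variant, assuming pc is the underlying RTCPeerConnection (connection.peerConnection when using PeerJS); the channel name, message format, and UI helper are made up here:
// Sender: tell the other side whenever the camera track is toggled.
const signalChannel = pc.createDataChannel('camera-state');

function toggleCamera() {
    const track = localStream.getVideoTracks()[0];
    track.enabled = !track.enabled;
    if (signalChannel.readyState === 'open') {
        signalChannel.send(JSON.stringify({ cameraOn: track.enabled }));
    }
}

// Receiver: show or hide a "camera disabled" message.
pc.ondatachannel = (event) => {
    event.channel.onmessage = (msg) => {
        const { cameraOn } = JSON.parse(msg.data);
        showCameraDisabledIcon(!cameraOn);   // placeholder UI helper
    };
};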
After some time I managed to get a solution:
var previousBytes = 0;
var previousTS = 0;
var currentBytes = 0;
var currentTS = 0;
// peer - new Peer()
// stream - local camera stream (received from navigator.mediaDevices.getUserMedia(constraints))
let connection = peer.call(peerID, stream);
// peerConnection - reference to RTCPeerConnection (https://peerjs.com/docs.html#dataconnection-peerconnection)
connection.peerConnection.getStats(null).then(stats => {
    stats.forEach(report => {
        if (report.type === "inbound-rtp") {
            currentBytes = report.bytesReceived;
            currentTS = report.timestamp;
            if (previousBytes == 0) {
                previousBytes = currentBytes;
                previousTS = currentTS;
                return;
            }
            console.log({ previousBytes });
            console.log({ currentBytes });
            var deltaBytes = currentBytes - previousBytes;
            var deltaTS = currentTS - previousTS;
            console.log("Delta: " + (deltaBytes / deltaTS) + " kB/s");
            previousBytes = currentBytes;
            previousTS = currentTS;
        }
    });
});
This code is actually in a function which gets called every second. When the camera is turned on and not covered, the measured rate is between 100 and 250 kB/s; when the camera is turned off (programmatically) or covered (with a napkin or something), so the stream is black/blank, it drops to roughly 1.5-3 kB/s. After you turn the camera back on, there is a spike that reaches around 500 kB/s (a thresholding sketch follows the log below).
This is short console log:
124.52747252747253 kB/s
202.213 kB/s
194.64764764764766 kB/s
15.313 kB/s (this is where camera is covered)
11.823823823823824 kB/s
11.862137862137862 kB/s
2.164 kB/s
2.005 kB/s
2.078078078078078 kB/s
1.99 kB/s
2.059 kB/s
1.992992992992993 kB/s
159.89810189810188 kB/s (uncovered camera)
502.669 kB/s
314.7927927927928 kB/s
255.0909090909091 kB/s
220.042 kB/s
213.46353646353646 kB/s
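A sketch of turning that measurement into a flag; the 10 kB/s cut-off is just a guess based on the numbers above and would need tuning for your resolution and encoder:
// Decide whether the incoming video currently looks blank.
const BLANK_THRESHOLD_KB_PER_S = 10;

function looksBlank(deltaBytes, deltaTS) {
    const kBPerSecond = deltaBytes / deltaTS;   // timestamps are in ms, so bytes/ms ≈ kB/s
    return kBPerSecond < BLANK_THRESHOLD_KB_PER_S;
}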
EDIT:
So in the end I did what Philipp Hancke suggested. I created a master connection which is open from when the page loads until the user closes it. Over this connection I send commands for initiating a video call, canceling the video session, turning the camera on/off, and so on. On the other side I parse these commands and execute the corresponding functions.
function sendMutedMicCommand() { masterConnection.send(`${commands.MutedMic}`); }
function sendUnmutedMicCommand() { masterConnection.send(`${commands.UnmutedMic}`); }
function sendPromptVideoCallCommand() { masterConnection.send(`${commands.PromptVideoCall}`); }
function sendAcceptVideoCallCommand() { masterConnection.send(`${commands.AcceptVideoCall}`); }
function sendDeclineVideoCallCommand() { masterConnection.send(`${commands.DeclineVideoCall}`); }
Function which handles data:
function handleData(data) {
    let actionType = data;
    switch (actionType) {
        case commands.MutedMic: ShowMuteIconOnReceivingVideo(true); break;
        case commands.UnmutedMic: ShowMuteIconOnReceivingVideo(false); break;
        case commands.PromptVideoCall: showVideoCallModal(); break;
        case commands.AcceptVideoCall: startVideoConference(); break;
        case commands.DeclineVideoCall: showDeclinedCallAlert(); break;
        default: break;
    }
}
const commands = {
MutedMic: "mutedMic",
UnmutedMic: "unmutedMic",
PromptVideoCall: "promptVideoCall",
AcceptVideoCall: "acceptVideoCall",
DeclineVideoCall: "declineVideoCall",
}
And then when I receive the mutedMic command, I show an icon with a crossed-out mic. When I receive the acceptVideoCall command, I create another peer, videoCallPeer, with a random ID, which is then sent to the other side. The other side then creates another peer with a random ID and initiates the video session with the received ID (a sketch of this last step follows).
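A rough sketch of that last step with PeerJS; the random-ID helper, message format, and UI helpers are illustrative, and unlike the plain-string commands above this message needs to carry the extra peer ID:
// Side that accepted the call: create a dedicated peer for video
// and tell the other side which ID to call.
const videoCallPeer = new Peer('video-' + Math.random().toString(36).slice(2));
videoCallPeer.on('open', (id) => {
    masterConnection.send(JSON.stringify({ command: 'videoPeerReady', peerId: id }));
});
videoCallPeer.on('call', (incoming) => incoming.answer(localStream));   // answer with the local stream

// Other side, once that message arrives:
function startVideoConference(remoteVideoPeerId) {
    const myVideoPeer = new Peer('video-' + Math.random().toString(36).slice(2));
    myVideoPeer.on('open', () => {
        navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
            const call = myVideoPeer.call(remoteVideoPeerId, stream);
            call.on('stream', (remoteStream) => attachRemoteVideo(remoteStream));   // placeholder UI helper
        });
    });
}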

How to get recorded data with AudioKit without storing it in a file

I want to record audio with AudioKit. What I want is to get the audio as 512-byte buffers whenever a buffer fills up, and transmit those buffers over TCP. Currently I only know how to store the audio data in a file, as in the code below.
Can anyone provide some tips to achieve my goal?
Here is my code:
AKAudioFile.cleanTempDirectory()
AKSettings.bufferLength = .medium
do {
    try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
} catch {
    AKLog("Could not set session category.")
}
AKSettings.defaultToSpeaker = true
micMixer = AKMixer(mic)
micBooster = AKBooster(micMixer)
micBooster.gain = 0
recorder = try? AKNodeRecorder(node: micMixer)
if let file = recorder?.audioFile {
    print(file.directoryPath)
    player = AKPlayer(audioFile: file)
}
player.isLooping = true
moogLadder = AKMoogLadder(player)
mainMixer = AKMixer(moogLadder, micBooster)
AudioKit.output = mainMixer
let clipRecoder = AKClipRecorder(node: mainMixer)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start!")
}
try! recorder.record()
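One direction that might work, as a sketch rather than a tested recipe: skip the file entirely and install a tap on the underlying AVAudioNode, which delivers AVAudioPCMBuffers in memory that you can re-chunk and push over your TCP connection. This assumes AudioKit 4.x, where an AKNode exposes its avAudioNode; note that installTap's bufferSize is in frames, not bytes, and sendOverTCP is a placeholder for your own socket code.
// Install the tap after AudioKit.start() so the node is attached to the engine.
let tapNode = micMixer.avAudioNode
let format = tapNode.outputFormat(forBus: 0)

tapNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // buffer is an AVAudioPCMBuffer; take the raw bytes of the first channel.
    guard let channelData = buffer.floatChannelData else { return }
    let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
    let data = Data(bytes: channelData[0], count: byteCount)

    // Re-chunk into 512-byte pieces for the TCP layer.
    var offset = 0
    while offset + 512 <= data.count {
        sendOverTCP(data.subdata(in: offset..<(offset + 512)))   // placeholder
        offset += 512
    }
}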

How to make an 'I don't understand' story in wit.ai

I have a few stories, but if I type something like 'blah', wit.ai uses the hello story.
What I need is something like a *wildcard story that replies with "I don't understand".
This is easy with witbot and intents, but I don't know how to do it with node-wit and stories.
I had a similar problem and decided to tweak wit.js code to call a special lowConfidence method if the confidence for the next step is lower than a pre-defined threshold I set.
In my wit actions file:
// Setting up our wit client
const witClient = new Wit({
    accessToken: WIT_TOKEN,
    actions: witActions,
    lowConfidenceThreshold: LOW_CONFIDENCE_THRESHOLD,
    logger: new log.Logger(log.DEBUG)
});
and later
lowConfidence({context}) {
    console.log("lowConfidenceConversationResponse");
    return new Promise(function(resolve) {
        context.lowConfidence = true;
        context.done = true;
        // now create a low_confidence story in wit.ai
        // and have the bot response triggered always
        // when context.lowConfidence is set
        return resolve(context);
    });
}
And in wit.js
else if (json.type === 'action') {
    let action = json.action;
    // json.confidence is confidence of next step and NOT
    // wit.ai intent identification confidence
    const confidence = json.entities && json.entities.intent &&
        Array.isArray(json.entities.intent) &&
        json.entities.intent.length > 0 &&
        json.entities.intent[0].confidence;
    if (confidence && confidence < lowConfidenceThreshold)
        action = 'lowConfidence';
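For completeness, a sketch of the actions object referenced above; it follows the shape of the old node-wit converse/story API, and the threshold value is only an example:
const LOW_CONFIDENCE_THRESHOLD = 0.5;   // example value, tune for your app

const witActions = {
    // node-wit's built-in hook for delivering bot replies to the user
    send({sessionId, context}, {text}) {
        console.log('bot replies:', text);
        return Promise.resolve();
    },
    // the custom action triggered from the patched wit.js
    lowConfidence({context}) {
        context.lowConfidence = true;
        context.done = true;
        return Promise.resolve(context);
    },
};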

iOS 10 Local Notifications not showing

I've been working on setting up local notifications for my app in iOS 10, but when testing in the simulator I found that the notifications would be successfully scheduled, but would never actually appear when the time they were scheduled for came. Here's the code I've been using:
let UNcenter = UNUserNotificationCenter.current()
UNcenter.requestAuthorization(options: [.alert, .badge, .sound]) { (granted, error) in
    // Enable or disable features based on authorization
    if granted == true {
        self.testNotification()
    }
}
Then it runs this code (assume the date/time is in the future):
func testNotification() {
    let date = NSDateComponents()
    date.hour = 16
    date.minute = 06
    date.second = 00
    date.day = 26
    date.month = 1
    date.year = 2017
    let trigger = UNCalendarNotificationTrigger(dateMatching: date as DateComponents, repeats: false)
    let content = UNMutableNotificationContent()
    content.title = "TestTitle"
    content.body = "TestBody"
    content.subtitle = "TestSubtitle"
    let request = UNNotificationRequest(identifier: "TestID", content: content, trigger: trigger)
    UNUserNotificationCenter.current().add(request) { (error) in
        if let error = error {
            print("error: \(error)")
        } else {
            print("Scheduled Notification")
        }
    }
}
From this code, it will always print "Scheduled Notification", but when the notification is supposed to be triggered, it never triggers. I've been unable to find any fix for this.
Here are a few steps.
Make sure you have permission. If not, use UNUserNotificationCenter.current().requestAuthorization to get it, or follow the answer below if you want to show the request pop-up more than once.
If you want to show the notification while the app is in the foreground, you have to assign a UNUserNotificationCenterDelegate and implement userNotificationCenter(_:willPresent:withCompletionHandler:).
This answer might help.
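For the foreground case, a minimal sketch of such a delegate (iOS 10 presentation options; assign it early, e.g. in application(_:didFinishLaunchingWithOptions:), and keep a strong reference since the delegate property is weak):
import UserNotifications

class NotificationDelegate: NSObject, UNUserNotificationCenterDelegate {
    // Called when a notification fires while the app is in the foreground;
    // without this, foreground notifications are not shown.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        completionHandler([.alert, .sound])
    }
}

let notificationDelegate = NotificationDelegate()
UNUserNotificationCenter.current().delegate = notificationDelegate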

How to read the input stream from a magnetic stripe reader with keyboard support in a Windows 10 universal app

We have tried the approach suggested at:
https://msdn.microsoft.com/en-us/library/windows/hardware/dn312121(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/hardware/dn303343(v=vs.85).aspx
We are able to find the list of all magnetic stripe devices using the code snippet below:
var magneticDevices = await DeviceInformation.FindAllAsync(aqsFilter);
but we are not able to get the HidDevice object from the code below; it returns null.
HidDevice device = await HidDevice.FromIdAsync(magneticDevices[0].Id, Windows.Storage.FileAccessMode.Read);
We have also set the device capabilities in the app manifest file as shown below.
<DeviceCapability Name="humaninterfacedevice">
  <Device Id="vidpid:0ACD 0520">
    <Function Type="usage:0001 0006"/>
  </Device>
</DeviceCapability>
<DeviceCapability Name="usb">
  <Device Id="vidpid:0ACD 0520">
    <Function Type="winUsbId:4d1e55b2-f16f-11cf-88cb-001111000030"/>
  </Device>
</DeviceCapability>
Code for the complete function:
private async Task<bool> HasCardReader()
{
    bool hasCardReader = false;
    ushort usagePage = 0x0001;
    ushort usageId = 0x0006;
    ushort vendorId = 0x0ACD;
    ushort productId = 0x0520;
    var aqsFilter = HidDevice.GetDeviceSelector(usagePage, usageId, vendorId, productId);
    var magneticDevices = await DeviceInformation.FindAllAsync(aqsFilter);
    try
    {
        if (magneticDevices != null && magneticDevices.Count > 0)
        {
            HidDevice device = await HidDevice.FromIdAsync(magneticDevices[0].Id, Windows.Storage.FileAccessMode.Read);
            inputReportEventHandler = new TypedEventHandler<HidDevice, HidInputReportReceivedEventArgs>(this.OnInputReportEvent);
            device.InputReportReceived += inputReportEventHandler;
            var watcher = DeviceInformation.CreateWatcher(aqsFilter);
            watcher.Added += WatcherAdded;
            watcher.Removed += WatcherRemoved;
            watcher.Start();
            hasCardReader = true;
        }
        else
        {
        }
    }
    catch (Exception ex)
    {
        Logging.LoggingSessionScenario.LogMessageAsync(ex.Message, LoggingLevel.Error);
    }
    return hasCardReader;
}
There are several reasons for the null return value, but I don't think there is anything wrong with your code, since you can find the device by calling FindAllAsync. I suggest troubleshooting the issue with the official HidDevice sample on GitHub.
I successfully connected to my external HID device with that sample by changing the VID, PID, usage page, and usage ID to match my device.
In EventHandlerForDevice.cs, find the function OpenDeviceAsync, and you will see the possible reasons when FromIdAsync returns null:
else
{
    successfullyOpenedDevice = false;
    notificationStatus = NotifyType.ErrorMessage;
    var deviceAccessStatus = DeviceAccessInformation.CreateFromId(deviceInfo.Id).CurrentStatus;
    if (deviceAccessStatus == DeviceAccessStatus.DeniedByUser)
    {
        notificationMessage = "Access to the device was blocked by the user : " + deviceInfo.Id;
    }
    else if (deviceAccessStatus == DeviceAccessStatus.DeniedBySystem)
    {
        // This status is most likely caused by app permissions (did not declare the device in the app's package.appxmanifest)
        // This status does not cover the case where the device is already opened by another app.
        notificationMessage = "Access to the device was blocked by the system : " + deviceInfo.Id;
    }
    else
    {
        // Most likely the device is opened by another app, but cannot be sure
        notificationMessage = "Unknown error, possibly opened by another app : " + deviceInfo.Id;
    }
}
Try Scenario 1 of that sample and change the IDs in both the appxmanifest and SampleConfiguration.cs (class Device). If you cannot see your device in the device list, the configuration is incorrect for your device.
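If you want to check this in your own code without pulling in the whole sample, the same DeviceAccessInformation check can be done right after FromIdAsync returns null (a sketch using the APIs shown above):
HidDevice device = await HidDevice.FromIdAsync(magneticDevices[0].Id, Windows.Storage.FileAccessMode.Read);
if (device == null)
{
    // Mirrors the sample's OpenDeviceAsync diagnostics: was access denied by
    // the user, denied by the system (manifest problem), or is the device
    // most likely already opened by another app?
    var status = DeviceAccessInformation.CreateFromId(magneticDevices[0].Id).CurrentStatus;
    System.Diagnostics.Debug.WriteLine("FromIdAsync returned null, access status: " + status);
}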