I'm creating a videochat with PeerJS.
I'm toggling the camera (on/off) with the following function:
function toggleCamera() {
    localStream.getVideoTracks()[0].enabled = !(localStream.getVideoTracks()[0].enabled);
}
After calling this function, the video goes black and the receiver gets just a black screen (which works as intended).
Now I want to detect the black/blank screen so I can show the user a message or icon that the camera is disabled and there is no stream.
How do I detect that?
The common approach is to send a signaling message (either via the normal path or a datachannel). Polling getStats to detect the black frames is a valid approach but more expensive in terms of computation.
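For illustration, a minimal sketch of the signaling-message variant, extending the toggleCamera function above over a PeerJS data connection (dataConnection, the message strings, and the showCameraOffIcon helper are assumptions, not part of the original):

// Sender: toggle the track, then tell the remote peer about it
// (dataConnection = peer.connect(peerID) is assumed to exist already)
function toggleCamera() {
    const track = localStream.getVideoTracks()[0];
    track.enabled = !track.enabled;
    dataConnection.send(track.enabled ? "camera-on" : "camera-off");
}

// Receiver: react to the message instead of analyzing frames
dataConnection.on("data", msg => {
    if (msg === "camera-off") showCameraOffIcon(true);   // hypothetical UI helper
    if (msg === "camera-on") showCameraOffIcon(false);
});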
After some time I've managed to get a solution:
var previousBytes = 0;
var previousTS = 0;
var currentBytes = 0;
var currentTS = 0;

// peer - new Peer()
// stream - local camera stream (received from navigator.mediaDevices.getUserMedia(constraints))
let connection = peer.call(peerID, stream);

// peerConnection - reference to the underlying RTCPeerConnection (https://peerjs.com/docs.html#dataconnection-peerconnection)
connection.peerConnection.getStats(null).then(stats => {
    stats.forEach(report => {
        if (report.type === "inbound-rtp") {
            currentBytes = report.bytesReceived;
            currentTS = report.timestamp; // milliseconds
            // First sample: nothing to diff against yet, just remember it
            if (previousBytes == 0) {
                previousBytes = currentBytes;
                previousTS = currentTS;
                return;
            }
            console.log({ previousBytes });
            console.log({ currentBytes });
            var deltaBytes = currentBytes - previousBytes;
            var deltaTS = currentTS - previousTS;
            // bytes per millisecond is numerically equal to kB/s
            console.log("Delta: " + (deltaBytes / deltaTS) + " kB/s");
            previousBytes = currentBytes;
            previousTS = currentTS;
        }
    });
});
This code actually lives in a function which gets called every second. When the camera is turned on and not covered, the measured rate is between 100 and 250 kB/s; when the camera is turned off (programmatically) or covered (with a napkin or something), so the camera stream is black/blank, it drops to around 1.5-3 kB/s. After you turn the camera back on, there is a spike in the rate, which reaches around 500 kB/s.
This is short console log:
124.52747252747253 kB/s
202.213 kB/s
194.64764764764766 kB/s
15.313 kB/s (this is where camera is covered)
11.823823823823824 kB/s
11.862137862137862 kB/s
2.164 kB/s
2.005 kB/s
2.078078078078078 kB/s
1.99 kB/s
2.059 kB/s
1.992992992992993 kB/s
159.89810189810188 kB/s (uncovered camera)
502.669 kB/s
314.7927927927928 kB/s
255.0909090909091 kB/s
220.042 kB/s
213.46353646353646 kB/s
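For completeness, a sketch of how the check above might be wrapped in a one-second timer with a threshold; the 10 kB/s cutoff and the onCameraStateChange callback are assumptions based on the rates in the log, not part of the original:

// Poll once per second and flag the remote camera as off when the received
// bitrate drops below a threshold sitting between the observed ~3 and ~100 kB/s.
const BLANK_THRESHOLD = 10; // kB/s, assumed cutoff

setInterval(() => {
    connection.peerConnection.getStats(null).then(stats => {
        stats.forEach(report => {
            if (report.type !== "inbound-rtp") return;
            // Skip the very first sample, as in the code above
            if (previousBytes === 0) {
                previousBytes = report.bytesReceived;
                previousTS = report.timestamp;
                return;
            }
            const deltaBytes = report.bytesReceived - previousBytes;
            const deltaTS = report.timestamp - previousTS;
            previousBytes = report.bytesReceived;
            previousTS = report.timestamp;
            if (deltaTS > 0) {
                const rate = deltaBytes / deltaTS; // ~kB/s
                onCameraStateChange(rate < BLANK_THRESHOLD); // hypothetical callback
            }
        });
    });
}, 1000);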
EDIT:
So in the end I did as @Philipp Hancke suggested. I created a master connection which stays open from when the page loads until the user closes it. Over this connection I send commands for initiating a video call, canceling the video session, turning the camera on/off, and so on. On the other side I parse these commands and execute the corresponding functions.
function sendMutedMicCommand() { masterConnection.send(`${commands.MutedMic}`); }
function sendUnmutedMicCommand() { masterConnection.send(`${commands.UnmutedMic}`); }
function sendPromptVideoCallCommand() { masterConnection.send(`${commands.PromptVideoCall}`); }
function sendAcceptVideoCallCommand() { masterConnection.send(`${commands.AcceptVideoCall}`); }
function sendDeclineVideoCallCommand() { masterConnection.send(`${commands.DeclineVideoCall}`); }
The function which handles incoming data:
function handleData(data) {
    let actionType = data;
    switch (actionType) {
        case commands.MutedMic: ShowMuteIconOnReceivingVideo(true); break;
        case commands.UnmutedMic: ShowMuteIconOnReceivingVideo(false); break;
        case commands.PromptVideoCall: showVideoCallModal(); break;
        case commands.AcceptVideoCall: startVideoConference(); break;
        case commands.DeclineVideoCall: showDeclinedCallAlert(); break;
        default: break;
    }
}
const commands = {
    MutedMic: "mutedMic",
    UnmutedMic: "unmutedMic",
    PromptVideoCall: "promptVideoCall",
    AcceptVideoCall: "acceptVideoCall",
    DeclineVideoCall: "declineVideoCall",
}
And then when I receive a mutedMic command, I show an icon with a crossed-out mic. When I receive an AcceptVideoCall command I create another peer, videoCallPeer, with a random ID, which is then sent to the other side. The other side then creates another peer with a random ID and initiates a video session with the received ID.
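For reference, a minimal sketch of how such a master connection could be wired up with PeerJS; remotePeerId and the handler wiring are assumptions, not the poster's exact code:

// Caller side: open the long-lived command channel as soon as the page loads
const peer = new Peer();
const masterConnection = peer.connect(remotePeerId); // remotePeerId assumed to be known

// Receiver side: route every incoming message through the command parser above
peer.on("connection", conn => {
    conn.on("data", handleData);
});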
Related
I am trying to get Core Motion data from an Apple Watch 3 (watchOS 5.1), but although device motion is available (the isDeviceMotionAvailable property is true), the handler is never triggered. I get the following message in the console right after calling super.willActivate():
[Gyro] Manually set gyro-interrupt-calibration to 800
I am using the following function to get Device Motion updates:
func startQueuedUpdates() {
    if motion.isDeviceMotionAvailable {
        self.motion.deviceMotionUpdateInterval = 1.0 / 100.0
        self.motion.showsDeviceMovementDisplay = true
        self.motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: self.queue, withHandler: {
            (data, error) in
            // Make sure the data is valid before accessing it.
            if let validData = data {
                print(String(validData.userAcceleration.x))
            }
        })
    }
}
In the InterfaceController I have declared
let motion = CMMotionManager()
let queue : OperationQueue = OperationQueue.main
Has anyone encountered this message before and managed to resolve it?
Note: I have checked the isGyroAvailable property and it is false.
The trick here is to match the CMAttitudeReferenceFrame parameter of startDeviceMotionUpdates(using:) to your device's capabilities. If it has no magnetometer, it cannot relate to magnetic north, and even if it has a magnetometer, it cannot relate to true north unless it knows where you are (i.e. has latitude & longitude). If it hasn't got the capabilities to comply with the parameter you select, the update will be called, but the data will be nil.
If you start it up with the minimum, .xArbitraryZVertical, you will get updates from the accelerometer, but you won't get a meaningful heading, just a relative one, through the CMDeviceMotion.attitude property...
if motion.isDeviceMotionAvailable {
    print("Motion available")
    print(motion.isGyroAvailable ? "Gyro available" : "Gyro NOT available")
    print(motion.isAccelerometerAvailable ? "Accel available" : "Accel NOT available")
    print(motion.isMagnetometerAvailable ? "Mag available" : "Mag NOT available")

    motion.deviceMotionUpdateInterval = 1.0 / 60.0
    motion.showsDeviceMovementDisplay = true
    motion.startDeviceMotionUpdates(using: .xArbitraryZVertical) // *******

    // Configure a timer to fetch the motion data.
    self.timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { _ in
        if let data = self.motion.deviceMotion {
            print(data.attitude.yaw)
        }
    }
}
We're utilizing the vCloud API to interact with virtual machines (create machines, perform actions, switch media, etc.). One requested function is the ability to upload media (specifically ISOs) to a particular catalog. The API guide (pg 67) is fairly straightforward, and our multi-part requests to the URL that is provided when the upload starts go off without a hitch.
Note: We have to declare the file size before starting the upload.
The only thing that seems amiss during the upload itself is that the "transferred size" ends up being larger than the "file size" at the end of the process. This is somewhat odd because our content-range never exceeds the expected file size (we assume that metadata is being included without us having a say). Once this transferred size exceeds the file size, the status of the file upload changes to "Error" but still returns a 200 OK:
{
    "name": "J Small 4",
    "description": "",
    "files": [{
        "name": "file",
        "totalSize": 50696192,
        "status": "Error",
        "link": "https://cloud01.cs2cloud.com/transfer/27b8f93c-8319-419e-9e8c-15622097670b/file",
        "transferredSize": 54293177
    }],
    "id": "urn:vcloud:media:1cec68ef-f22e-4ec7-ae5d-dfbc4f7137d9",
    "catalogId": "urn:vcloud:catalogitem:19dbfdd8-ea70-4355-abc7-96e34dccb869"
}
Not sure where to even start debugging this, since all the API calls come back with 200 OK, the .ISO file seems to be fine, our content-range headers never go outside the established file size, and the metadata seems to be out of our control in terms of editing or measuring it.
Hoping some soul has experienced this issue before and can provide some insight into working towards a solution.
It turns out the issue wasn't with VMware at all, but with how we were chunking up the media file. We initially used FileReader() to chunk up the file and send it over to the VMware API.
Theoretically, we were choosing the chunk size and could then generate and set the content range, but in reality we were choosing the content-range while the content-length differed from the chunk size. We're still not entirely sure why it happened (maybe extra metadata being added on) but we found a solution.
The fix: we eliminated FileReader() altogether and just put the file slices directly into a Blob (see below):
$scope.parseMediaFile = function(url, file, catalogId) {
    $scope.uploadingMediaFile = true;
    var fileSize = file.size;
    var chunkSize = 1024 * 1024 * 5; // bytes
    var offset = 0;
    var self = this; // we need a reference to the current object
    var chunkReaderBlock = null;
    var chunkNum = 0;

    if (fileSize < chunkSize) {
        chunkSize = fileSize;
    }

    chunkReaderBlock = function(_offset, length, _file) {
        // Slice the File directly into a Blob; the browser derives the request's
        // content-length from blob.size, so it always matches the chunk size.
        var blob = _file.slice(_offset, length + _offset);
        var beginRange = _offset;
        var endRange = _offset + length;
        if (endRange > _file.size) {
            endRange = _file.size;
        }
        var contentRange = beginRange + "-" + endRange;

        vdcServices.uploadMediaFile(url, blob, fileSize, contentRange).then(
            function(resp) {
                // Poll the upload status to drive the progress indicator
                vdcServices.getUploadStatus($scope.company, catalogId).then(function(resp) {
                    var uploaded = resp.data.files[0].transferredSize;
                    $scope.mediaPercentLoaded = $scope.trunc((uploaded / fileSize) * 100);
                    if (endRange == _file.size) {
                        $scope.closeModal();
                        return;
                    }
                    // Upload the next chunk
                    chunkReaderBlock(_offset + length, chunkSize, file);
                }, function(err) {
                    // Status check failed: back up one chunk and resend
                    $scope.errorMsg = err;
                    chunkReaderBlock(_offset - length, chunkSize, file);
                });
            },
            function(err) {
                $scope.errorMsg = err;
            }
        );
    };

    // Starts the read with the first block
    if (offset < fileSize) {
        chunkReaderBlock(offset, chunkSize, file);
    }
};
Doing so allowed us to actually control the content-length, and since we can identify when the number of bytes transferred equals the file size, we can then complete the process.
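vdcServices.uploadMediaFile itself isn't shown in the post; here is a sketch of what it might look like with Angular's $http, assuming the vCloud transfer endpoint accepts PUT with a Content-Range header (the exact header format, and these names, are assumptions):

// Hypothetical sketch: upload one Blob slice with an explicit Content-Range.
// The browser derives Content-Length from blob.size, which is exactly why
// slicing the File directly keeps it equal to the chunk size.
function uploadMediaFile(url, blob, fileSize, contentRange) {
    return $http.put(url, blob, {
        headers: {
            "Content-Range": "bytes " + contentRange + "/" + fileSize,
            "Content-Type": "application/octet-stream"
        }
    });
}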
We have tried the approaches suggested at:
https://msdn.microsoft.com/en-us/library/windows/hardware/dn312121(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/hardware/dn303343(v=vs.85).aspx
We are able to find the list of all the magnetic devices using the code snippet below:
var magneticDevices = await DeviceInformation.FindAllAsync(aqsFilter);
but we are not able to get an HidDevice object from the code below; it is giving null:
HidDevice device = await HidDevice.FromIdAsync(magneticDevices[0].Id, Windows.Storage.FileAccessMode.Read);
We have also set the device capabilities in the app manifest file as below:
<DeviceCapability Name="humaninterfacedevice">
    <Device Id="vidpid:0ACD 0520">
        <Function Type="usage:0001 0006"/>
    </Device>
</DeviceCapability>
<DeviceCapability Name="usb">
    <Device Id="vidpid:0ACD 0520">
        <Function Type="winUsbId:4d1e55b2-f16f-11cf-88cb-001111000030"/>
    </Device>
</DeviceCapability>
Code for the complete function:
private async Task<bool> HasCardReader()
{
    bool hasCardReader = false;
    ushort usagePage = 0x0001;
    ushort usageId = 0x0006;
    ushort vendorId = 0x0ACD;
    ushort productId = 0x0520;
    var aqsFilter = HidDevice.GetDeviceSelector(usagePage, usageId, vendorId, productId);
    var magneticDevices = await DeviceInformation.FindAllAsync(aqsFilter);
    try
    {
        if (magneticDevices != null && magneticDevices.Count > 0)
        {
            HidDevice device = await HidDevice.FromIdAsync(magneticDevices[0].Id, Windows.Storage.FileAccessMode.Read);
            inputReportEventHandler = new TypedEventHandler<HidDevice, HidInputReportReceivedEventArgs>(this.OnInputReportEvent);
            device.InputReportReceived += inputReportEventHandler;
            var watcher = DeviceInformation.CreateWatcher(aqsFilter);
            watcher.Added += WatcherAdded;
            watcher.Removed += WatcherRemoved;
            watcher.Start();
            hasCardReader = true;
        }
        else
        {
        }
    }
    catch (Exception ex)
    {
        Logging.LoggingSessionScenario.LogMessageAsync(ex.Message, LoggingLevel.Error);
    }
    return hasCardReader;
}
There are several possible reasons for the null return value, but I don't think there is anything wrong with your code, since you can find the device by calling FindAllAsync. I suggest you troubleshoot this issue using the official HIDDevice sample on GitHub.
I successfully connected to my external HID device with that sample by changing the VID & PID & usage page & usage ID to my device's.
In EventHandlerForDevice.cs, find the function OpenDeviceAsync, and you will notice the following possible reasons why null is returned by FromIdAsync:
else
{
    successfullyOpenedDevice = false;
    notificationStatus = NotifyType.ErrorMessage;
    var deviceAccessStatus = DeviceAccessInformation.CreateFromId(deviceInfo.Id).CurrentStatus;
    if (deviceAccessStatus == DeviceAccessStatus.DeniedByUser)
    {
        notificationMessage = "Access to the device was blocked by the user : " + deviceInfo.Id;
    }
    else if (deviceAccessStatus == DeviceAccessStatus.DeniedBySystem)
    {
        // This status is most likely caused by app permissions (did not declare the device in the app's package.appxmanifest)
        // This status does not cover the case where the device is already opened by another app.
        notificationMessage = "Access to the device was blocked by the system : " + deviceInfo.Id;
    }
    else
    {
        // Most likely the device is opened by another app, but cannot be sure
        notificationMessage = "Unknown error, possibly opened by another app : " + deviceInfo.Id;
    }
}
Have a try with that sample (Scenario 1) and change the IDs in both the appxmanifest and SampleConfiguration.cs (class Device). If you cannot see your device in the device list, the configuration is incorrect for your device.
When user A without a camera calls user B with a camera, he will receive a stream without video tracks. In this case user B generates SDP containing the line a=group:BUNDLE audio, when normally it would contain mentions of video such as a=group:BUNDLE audio video and m=video 1 RTP/SAVPF 100 116 117 96.
Here is my code in CoffeeScript for accepting an offer:
acccept_offer: (sdp, success) ->
    sdp = new _RTCSessionDescription sdp
    @connection.setRemoteDescription sdp, =>
      if @candidates.length
        for candidate in @candidates
          @connection.addIceCandidate candidate
        @candidates = []
      @connection.createAnswer (description) =>
        description = new _RTCSessionDescription
          sdp: @set_bandwidth description.sdp
          type: description.type
        @local_description = description
        @connection.setLocalDescription @local_description, ->
          success()
        , (e) ->
          console.log e
      , (e) ->
        console.log e
    , (e) ->
      console.log e
Why this strange behaviour and how can I avoid it?
You need to put constraints on your RTCPeerConnection to tell the SDP negotiation what media you are willing to send/receive.
Example:
var sdpConstraints = { 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': false } };
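For context, a sketch of how such constraints were passed with the legacy callback-style API that this question's code uses (pc stands in for the existing RTCPeerConnection; the success/error handlers are illustrative):

var sdpConstraints = { 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': false } };

// Legacy callback API: constraints go in as the third argument, so the
// generated answer only negotiates the media you asked for.
pc.createAnswer(function(description) {
    pc.setLocalDescription(description);
}, function(e) {
    console.log(e);
}, sdpConstraints);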
The app saves the camera output into a .mov file, then converts it to FLV format and sends it (via AVPacket) to an RTMP server.
It alternates between two files: one is being written to by the camera output while the other one is sent.
My problem is that the audio/video is getting out of sync after a while.
The first buffer sent is always 100% in sync, but after a while it gets messed up.
I believe it's a DTS-PTS problem...
if (isVideo)
{
    packet->stream_index = VIDEO_STREAM;
    packet->dts = packet->pts = videoPosition;
    videoPosition += packet->duration = FLV_TIMEBASE * packet->duration * videoCodec->ticks_per_frame * videoCodec->time_base.num / videoCodec->time_base.den;
}
else
{
    packet->stream_index = AUDIO_STREAM;
    packet->dts = packet->pts = audioPosition;
    audioPosition += packet->duration = FLV_TIMEBASE * packet->duration / audioRate;
    //NSLog(@"audio position = %lld", audioPosition);
}
packet->pos = -1;
packet->convergence_duration = AV_NOPTS_VALUE;

// This sometimes fails without being a critical error, so no exception is raised
if ((code = av_interleaved_write_frame(file, packet)))
{
    NSLog(@"Streamer::Couldn't write frame");
}
av_free_packet(packet);
You can study this sample: http://unick-soft.ru/art/files/ffmpegEncoder-vs2008.zip
Note that this sample is for Windows.
In this sample I set pts only for the video stream:
if (pVideoCodec->coded_frame->pts != AV_NOPTS_VALUE)
{
    pkt.pts = av_rescale_q(pVideoCodec->coded_frame->pts,
                           pVideoCodec->time_base, pVideoStream->time_base);
}
I was experiencing a similar issue when switching out the AVAssetWriters, and noticed that it went away if I only started using the new AVAssetWriter when I got a video sample:
https://medium.com/@brandon.kobel/ios-seamless-video-chunks-4383a5a3a874