Using multiple USB cameras with WebRTC

I want to use multiple USB cameras with WebRTC.
For example:
https://apprtc.appspot.com/?r=93443359
This application is a WebRTC sample.
I can connect to another machine, but I have to disconnect once to change the camera.
What I want to do is:
1. Use two cameras at the same time on the same screen.
2. (If 1 is not possible) switch the camera without disconnecting the current connection.
Does anyone have information about how to use two cameras with WebRTC?

Call getUserMedia twice and change the camera input in between.
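A minimal sketch of that idea, in the same legacy Chrome style as the answer below (firstCamId and secondCamId are placeholders for the camera source ids, and localVideo is an assumed video element):

var currentStream = null;

function useCamera(sourceId) {
  // Stop the previous camera before grabbing the next one
  if (currentStream) {
    currentStream.getVideoTracks().forEach(function (t) { t.stop(); });
  }
  getUserMedia(
    { video: { mandatory: { sourceId: sourceId } } },
    function (stream) {
      currentStream = stream;
      document.getElementById('localVideo').src = URL.createObjectURL(stream);
    },
    function (err) { console.log(err); }
  );
}

useCamera(firstCamId);  // start with camera 1
// later, e.g. on a button click, switch without tearing down the connection:
useCamera(secondCamId);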

You can use constraints to specify which camera to use, and you can have both of them displayed on one page as well. To specify which camera to use, take a look at the following snippet (it only works on Chrome 30+):
getUserMedia({
  video: {
    mandatory: {
      sourceId: webcamId,
      ...
    }
  }
},
successCallback,
failCallback);
You can get the webcamId like this:
MediaStreamTrack.getSources(function (sources) {
  var cams = _.filter(sources, function (e) { // only return video elements
    return e.kind === 'video';
  });
  var camIds = _.map(cams, function (e) { // return only ids
    return e.id;
  });
});
In the snippet above I've used the underscore methods filter and map.
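Putting the two snippets together, here is a minimal sketch of showing both cameras on one page at the same time (it assumes two video elements with ids cam0 and cam1 exist in the page):

MediaStreamTrack.getSources(function (sources) {
  var camIds = sources
    .filter(function (e) { return e.kind === 'video'; })
    .map(function (e) { return e.id; });

  // One getUserMedia call per camera, each feeding its own video element
  camIds.slice(0, 2).forEach(function (id, i) {
    getUserMedia(
      { video: { mandatory: { sourceId: id } } },
      function (stream) {
        document.getElementById('cam' + i).src = URL.createObjectURL(stream);
      },
      function (err) { console.log(err); }
    );
  });
});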
More information on:
WebRTC video sources
constraints

Related

Switch front/back camera on Android while on WebRTC call using Circuit SDK

I am able to make a direct call between a Circuit WebClient and the example SDK app at https://output.jsbin.com/posoko.
When running the SDK example on a PC with a second (USB) camera, switching between the built-in camera and the USB camera works fine. But when I try the same on my Android device (Samsung Galaxy S6), the switching does not work.
My code uses navigator.mediaDevices.enumerateDevices() to get the cameras and then uses the Circuit SDK function setMediaDevices to switch to the other camera.
async function switchCam() {
  let availDevices = await navigator.mediaDevices.enumerateDevices();
  availDevices = availDevices.filter(si => si.kind === 'videoinput');
  let newDevice = availDevices[1]; // secondary camera
  await client.setMediaDevices({ video: newDevice.deviceId });
}
Can somebody explain why this doesn’t work on an Android device?
We have seen Android devices that don't allow calling navigator.getUserMedia while a video track (and therefore a stream) is still active. I tried your example above with a Pixel 2 without any issues though.
If you remove the video track from the stream and stop the track before calling client.setMediaDevices, the switch should work.
async function switchCam() {
  const stream = await client.getLocalAudioVideoStream();
  const currTrack = stream.getVideoTracks()[0];
  console.log(`Remove and stop current track: ${currTrack.label}`);
  stream.removeTrack(currTrack);
  currTrack.stop();
  let availDevices = await navigator.mediaDevices.enumerateDevices();
  availDevices = availDevices.filter(si => si.kind === 'videoinput');
  let newDevice = availDevices[1]; // secondary camera
  await client.setMediaDevices({ video: newDevice.deviceId });
}
There is a complete switch camera example on JSBin at https://output.jsbin.com/wuniwec/

Custom Soundcloud Widget (api) Player

I'm trying to create a custom player for some SoundCloud tracks. The idea is to hide the iframe and create a few players to play different tracks. The loading and playing all work fine, but I have two challenges:
How do I create a progress bar (SC.Widget.Events.PLAY_PROGRESS)?
How do I create a download link?
A snippet from the way I'm coding this:
(function () {
  var widgetIframe = document.getElementById('sc-widget'),
      widget = SC.Widget(widgetIframe);

  widget.bind(SC.Widget.Events.READY, function () {
    $('#play').click(function () {
      widget.play();
    });
  });
}());
Too bad the open API is closed...
If you are trying to stream tracks using a custom player, I recommend you do not use the widget at all. Rather, use the streaming SDK directly. There are methods there that can do everything you need to load, play, pause, seek, get the current time of the song and more.
To initialize the streaming player, you can do something like:
SC.initialize({
  client_id: "<client id>"
});

SC.stream("/tracks/" + song_id).then(function (player) {
  player.play();
});
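For the pause and seek methods mentioned above, a short sketch using the same player object (as far as I recall, seek takes milliseconds):

$("#pause").click(function () {
  player.pause();
});

// Jump to the 30-second mark
$("#seek").click(function () {
  player.seek(30 * 1000);
});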
To build the actual progress bar, you can do something inside your stream function like the following (this example uses jQuery, but you don't need to):
player.on("time", function () {
var current_time = player.currentTime();
var current_duration = player.options.duration;
$(".scrubber .scrubber_fill").css("width", ((current_time / current_duration) * 100) + "%");
});
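As for the download link asked about in the question, a hedged sketch, assuming the track exposes the downloadable and download_url fields the public API used to return (the #download anchor is a placeholder):

SC.get("/tracks/" + song_id).then(function (track) {
  if (track.downloadable) {
    // download_url needs the client_id appended to be directly usable
    $("#download")
      .attr("href", track.download_url + "?client_id=<client id>")
      .text("Download " + track.title);
  }
});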

Meteor (React): GPS-based tracking for checkout

I created an application (iOS) using the Meteor-React framework for car rental parking. I want to track the driver's mobile using GPS so that I can detect his checkout status. I need to calculate the distance between the parking space and the driver's location once he has checked in, so that I can send push messages to the driver when he is about to check out. Before starting actual development of the GPS-based checkout, I need a suggestion: is it really possible? What would be the pros and cons?
Right now I am going through the npm package "gps-tracking", which is a GPS listener.
Below is the code I have added in /server/gps.js:
var gps = require("gps-tracking");

var options = {
  'debug': true, // We are going to log everything manually so you can check what happens everywhere
  'port': 8000,
  'device_adapter': "TK103"
};

var server = gps.server(options, function (device, connection) {
  console.log(options);
  console.log(device);
  console.log(connection);

  device.on("connected", function (data) {
    console.log("I'm a new device connected");
    return data;
  });

  device.on("login_request", function (device_id, msg_parts) {
    console.log("Hey! I want to start transmitting my position. Please accept me. My name is " + device_id);
    this.login_authorized(true);
    console.log("Ok, " + device_id + ", you're accepted!");
  });

  device.on("ping", function (data) {
    // this = device
    console.log("I'm here: " + data.latitude + ", " + data.longitude + " (" + this.getUID() + ")");
    // Look at what information the device sends you (maybe velocity, gas level, etc.)
    //console.log(data);
    return data;
  });

  device.on("alarm", function (alarm_code, alarm_data, msg_data) {
    console.log("Help! Something happened: " + alarm_code + " (" + alarm_data.msg + ")");
  });

  // Also, you can listen on the native connection object
  connection.on('data', function (data) {
    // echo raw data package
    //console.log(data.toString());
  });
});
Thanks in advance!
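As for the distance calculation mentioned in the question, here is a minimal haversine sketch in plain JavaScript that could be called from the ping handler above (the parking coordinates and the 100 m threshold are made-up values):

// Great-circle distance between two lat/lng points, in meters (haversine formula)
function distanceMeters(lat1, lng1, lat2, lng2) {
  var R = 6371000; // mean Earth radius in meters
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLng = toRad(lng2 - lng1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

var PARKING = { lat: 52.5200, lng: 13.4050 }; // made-up parking coordinates
var CHECKOUT_RADIUS = 100;                    // meters, made-up threshold

// e.g. inside device.on("ping", ...):
function isAboutToCheckOut(data) {
  return distanceMeters(data.latitude, data.longitude, PARKING.lat, PARKING.lng) > CHECKOUT_RADIUS;
}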

Titanium - save remote image to filesystem

I'm building an app with Titanium and I would like to save the user's profile picture on the phone. In my login function, after the API response, I tried to do:
Ti.App.Properties.setString("user_picture_name", res.profil_picture);
var image_to_save = Ti.UI.createImageView({image:img_url}).toImage();
var picture = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, res.profil_picture); //As name, the same as the one in DB
picture.write(image_to_save);
And in the view in which I want to display the image:
var f = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, Ti.App.Properties.getString("user_picture_name"));
var image = Ti.UI.createImageView({
  image: f.read(),
  width: 200,
  height: 100,
  top: 20
});
main_container.add(image);
But the image doesn't appear. Could someone help me?
Thanks a lot :)
There are 2 issues with your code:
1 - You cannot use the toImage() method unless your image view is rendered on the UI stack, i.e. actually on display. You should use the toBlob() method instead.
2 - Point no. 1 will still not work the way you are using it, because you cannot call toBlob() until the image from the URL has completely loaded, i.e. until it is shown in the image view. To check when the image is loaded, use the Ti.UI.ImageView load event.
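A brief sketch of that event-based variant, reusing img_url and res.profil_picture from the question:

var tempView = Ti.UI.createImageView({ image: img_url });
tempView.addEventListener('load', function () {
  // Safe to grab the pixels now that the remote image has finished loading
  var blob = tempView.toBlob();
  var picture = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, res.profil_picture);
  picture.write(blob);
  Ti.App.Properties.setString("user_picture_name", res.profil_picture);
});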
But there's a better approach for this type of task.
Since you have the image URL from your login API response, you can fetch the image with an HTTP client call like this:
function fetchImage() {
  var xhr = Ti.Network.createHTTPClient({
    onerror: function () {
      alert('Error fetching profile image');
    },
    onload: function () {
      // this.responseData holds the binary data fetched from the url
      var image_to_save = this.responseData;

      // As name, use the same as the one in the DB
      var picture = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, res.profil_picture);
      picture.write(image_to_save);

      Ti.App.Properties.setString("user_picture_name", res.profil_picture);
      image_to_save = null;
    }
  });

  xhr.open("GET", img_url);
  xhr.send();
}
You don't need to manually cache remote images, because
Remote images are cached automatically on the iOS platform and, since Release 3.1.0, on the Android platform.
[see docs here & credit to Fokke Zandbergen]
Just use the remote image URL in your UI; on first access, Titanium will download and cache it for you, and subsequent accesses to the same image URL will use the automatically cached version on the local device (no code is best code).
Hth.

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different webpages:
one to do a video call, and
the other to do screen sharing.
Now I want to do both of them on one page.
Here is the scenario:
During a live call, a user wants to stop sharing his/her video and start sharing his/her screen.
Afterwards, he/she wishes to turn off screen sharing and resume video sharing.
For clarity, here are some questions I want to ask:
On Caller Side:
1) How can I change my local stream from video to screen and vice versa?
2) Once it is done, how can I assign it to the local video element?
On Callee Side:
1) How do I handle it if the stream I am receiving changes from video to screen?
2) How do I handle it if the stream I am receiving stops altogether? I mean, now I can receive neither video nor screen (just audio).
Kindly help me in this regard. If there is any open source code available, please share links to it too.
Just for your reference, I was trying to handle it using the following code (I know this is naive and won't work):
function handleUserMedia(newStream) {
  var localvideo = document.getElementById("localvideo");
  localvideo.src = URL.createObjectURL(newStream);
  localStream = newStream;
  sendMessage('got user media');
  if (isInitiator) {
    maybeStart();
  }
}

function handleUserMediaError(error) {
  console.log(error);
}

var video_constraints = { video: true, audio: true };
var screen_constraints = { video: { mandatory: { chromeMediaSource: 'screen' } } };

getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);

$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
  $scope.selected = !$scope.selected;
  if ($scope.selected) {
    getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
    $scope.btnLabel = 'Share Video';
  } else {
    getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
    $scope.btnLabel = 'Share Screen';
  }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connection on both users' sides!
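A minimal sketch of that renegotiation idea, reusing the legacy-style API and the sendMessage signaling helper from the question's code (pc stands for the assumed existing RTCPeerConnection):

function switchLocalStream(pc, oldStream, newStream) {
  // Swap what we are sending, then renegotiate so the callee learns about it
  pc.removeStream(oldStream); // legacy stream API, matching the question's code
  pc.addStream(newStream);
  pc.createOffer(function (offer) {
    pc.setLocalDescription(offer);
    sendMessage(offer); // the callee sets it as the remote description and answers
  }, function (err) {
    console.log(err);
  });
}

// e.g. inside toggleSelected, after getUserMedia succeeds:
// switchLocalStream(pc, localStream, newStream);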