I am trying to add a camera-switch option to a video call handled by KMS (Kurento Media Server). I have dug through their documentation and other sources, but found nothing useful.
var options = {
    localVideo: videoInput,
    remoteVideo: videoOutput,
    onicecandidate: onIceCandidate,
    mediaConstraints: {
        audio: isAudio || call_settings.isAudio,
        video: isVideo || call_settings.isVideo
    }
};
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function (error) {
    if (error) return onError(error);
    this.generateOffer(onOffer); // callback was truncated above; this is the usual Kurento tutorial continuation
});
This is my code, which connects through the peer. All media permissions are handled by Kurento itself, so I am not able to change the media source.
I am not sure how to do this with Kurento. Any kind of help is appreciated; thanks in advance.
You can pass custom mediaConstraints in the options, or create the stream yourself and pass it as videoStream in the options (skipping mediaConstraints), as mentioned in the kurento-utils-js docs.
For switching devices / getting a stream from a specific device, please refer to the sample below:
https://webrtc.github.io/samples/src/content/devices/input-output/
You can refer to the doc below for videoStream usage:
https://doc-kurento.readthedocs.io/en/stable/features/kurento_utils_js.html
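Putting the two together, a minimal sketch of starting the call with a chosen camera (startWithCamera, onError, and onOffer are illustrative names, not Kurento API; videoStream/audioStream are the options documented in the kurento-utils-js docs linked above):

async function startWithCamera(deviceId) {
    // Grab a stream from a specific camera ourselves...
    var stream = await navigator.mediaDevices.getUserMedia({
        audio: true,
        video: { deviceId: { exact: deviceId } }
    });
    var options = {
        localVideo: videoInput,
        remoteVideo: videoOutput,
        onicecandidate: onIceCandidate,
        videoStream: stream,  // ...and hand it to kurento-utils instead of mediaConstraints
        audioStream: stream
    };
    webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function (error) {
        if (error) return onError(error);
        this.generateOffer(onOffer);
    });
}

// Cameras can be listed with the standard enumerateDevices API:
navigator.mediaDevices.enumerateDevices().then(function (devices) {
    var cameras = devices.filter(function (d) { return d.kind === 'videoinput'; });
    // e.g. startWithCamera(cameras[1].deviceId) to use the second camera
});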
I am trying to play a video when developing locally with VueJS 2.
My code is the following:
<video class="back_video" :src="`../videos/Space${videoIndex}.mp4`" id="background-video"></video>
...
data: function() {
    return {
        videoIndex: 1
    }
}
...
const vid = document.getElementById("background-video");
vid.crossOrigin = 'anonymous';
let playPromise = vid.play();
if (playPromise !== undefined) {
    playPromise.then(function() {
        console.log("video playing");
    }).catch(function(error) {
        console.error(error);
    });
}
This code causes the exception given in the title. I tried several browsers, always with the same result.
If I change the src to:
:src="require(`../videos/Space${videoIndex}.mp4`)"
it works.
But in that case the build time is very long, because I have many videos in my videos directory and adding require() forces all of them to be copied into the output directory at build time (vue-cli serve), which is really annoying. In other words, I want to reference videos that live outside the build directory, both to avoid this and to keep the videos out of my git repository.
It is interesting to note that when I deploy server side, it works perfectly with my original code:
:src="`../videos/Space${videoIndex}.mp4`"
Note also that if I replace my code with simply
src="../videos/Space1.mp4"
it works too, so neither the video itself nor its location is the source of the problem.
Any clue?
You can host your videos on a CDN, which is faster and easier to debug and work with.
Otherwise, the bundler has to include them locally, which can take some time.
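For instance (cdn.example.com is a placeholder host), an absolute URL is left untouched by the bundler, so nothing is copied at build time:

<video class="back_video" :src="`https://cdn.example.com/videos/Space${videoIndex}.mp4`" id="background-video"></video>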
My RTC session was started as text only, and video is added by the user when needed (renegotiation).
navigator.getUserMedia({ video: true, audio: false }, function (myStream) {
    localVideo[0].srcObject = myStream;
    myConn.addStream(myStream);
}, function (error) {
    console.log(error);
});
When the user no longer needs the video, I remove it using:
var tracks = localVideo[0].srcObject.getTracks();
tracks.forEach(function (t) {
    t.stop();
});
myConn.removeStream(localVideo[0].srcObject);
localVideo[0].srcObject = null;
Everything works fine until I try to add the video again: I noticed that the createOffer() request keeps getting larger and larger.
It seems WebRTC doesn't forget about the previous stream and keeps adding it to the offer again and again. Or maybe my way of removing a video stream / track is wrong?
This is a known issue; see this thread on the W3C list.
The best way to get around this is to use replaceTrack, as suggested in the thread:
Note: It is still possible to prevent the list of transceivers from growing by *manually* recycling them using transceiver.sender.replaceTrack() and transceiver.direction, but that still wastes resources on transceivers currently not used, and implies you probably shouldn't use transceiver.stop() in most cases.
Also see the "Unified Plan" Transition Guide.
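A minimal sketch of that manual recycling, assuming pc is your RTCPeerConnection (attachVideo/detachVideo are illustrative names):

// Reuse the existing video transceiver instead of adding a new stream each time.
function attachVideo(pc, newVideoTrack) {
    var transceiver = pc.getTransceivers().find(function (t) {
        return t.receiver.track && t.receiver.track.kind === 'video';
    });
    if (transceiver) {
        transceiver.sender.replaceTrack(newVideoTrack); // no renegotiation needed
        transceiver.direction = 'sendrecv';             // re-enable sending
    } else {
        pc.addTrack(newVideoTrack);                     // first time only
    }
}

// To "remove" the video later, park the transceiver instead of stopping it:
function detachVideo(pc) {
    var transceiver = pc.getTransceivers().find(function (t) {
        return t.sender.track && t.sender.track.kind === 'video';
    });
    if (transceiver) {
        transceiver.sender.track.stop();
        transceiver.sender.replaceTrack(null);
        transceiver.direction = 'recvonly';
    }
}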
Safari doesn't support MediaRecorder for capturing the stream from the webcam as in the code below.
This works perfectly in Chrome, and I'm able to convert the blobs into a webm video file.
if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(stream => {
        videoRef.srcObject = stream
        mediaRecorder.value = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp8,opus' })
        mediaRecorder.value.addEventListener('dataavailable', function (e) {
            blobs.push(e.data)
        })
    })
}
I need to save the video streamed from the webcam on my server. What approach should I take to achieve the same in Safari?
I researched a lot and saw a similar question, but no proper solution was given there.
Could someone guide me to a tutorial on how to achieve this using WebRTC, if needed?
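A common starting point is to feature-detect the recording mimeType with the standard MediaRecorder.isTypeSupported, since Safari typically records an MP4 variant rather than webm; a minimal sketch (stream is assumed to come from getUserMedia as above):

// Pick the first mimeType this browser can actually record.
var candidates = [
    'video/webm; codecs=vp8,opus',
    'video/mp4'
];
var mimeType = candidates.find(function (t) {
    return window.MediaRecorder && MediaRecorder.isTypeSupported(t);
});
var recorder = new MediaRecorder(stream, mimeType ? { mimeType: mimeType } : undefined);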
Does SimpleWebRTC have a feature to get data (video/audio) without granting the browser permission to use my camera/microphone?
// create our webrtc connection
var webrtc = new SimpleWebRTC({
    // the id/element dom element that will hold "our" video
    localVideoEl: 'localVideo',
    // the id/element dom element that will hold remote videos
    remoteVideosEl: '',
    // immediately ask for camera access
    autoRequestMedia: true,
    debug: true,
    detectSpeakingEvents: true,
    autoAdjustMic: false,
    media: {
        video: false,
        audio: true
    },
});
It only works when autoRequestMedia and media.audio are set to true, as above; otherwise it doesn't.
Have you tried setting autoRequestMedia to true while having both video and audio of the media object set to false? You should receive the readyToCall event and can join the room, as shown on the SimpleWebRTC homepage.
First negotiate (accept the call / join the room) with video and audio, and then disable the video, something like webrtc.videoStreams.disable().
I'm starting with WebRTC and am trying to access my camera; however, the code doesn't work, although there are no mistakes in it.
The code is:
navigator.getUserMedia = (navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia ||
    navigator.msGetUserMedia);

if (navigator.getUserMedia) {
    var constraints = { video: true };
    function successCallback(localMediaStream) {
        var video = document.querySelector("video");
        window.stream = localMediaStream;
        video.src = window.URL.createObjectURL(localMediaStream);
        video.onloadedmetadata = function (e) {
            video.play();
        }
    }
    function errorCallback(error) {
        console.log("Error: ", error);
    }
    navigator.getUserMedia(constraints, successCallback, errorCallback);
} else {
    alert('Sorry, the browser you are using doesn\'t support getUserMedia');
}
Can you help me please?
I am guessing that the code above is put in an HTML file and opened directly by clicking on the file (the URL being like file:///...). That works in Firefox, but not in Chrome; for camera capture to work in Chrome, you need to serve the file from a server.
Also, on an unrelated note, you can replace
video.onloadedmetadata =function(e){
video.play();
}
with simply
video.play();
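On the same note, current browsers expose a promise-based API with srcObject that avoids both the vendor prefixes and createObjectURL; a minimal sketch (not part of the original answer):

navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
        var video = document.querySelector("video");
        video.srcObject = stream;  // replaces createObjectURL(stream)
        return video.play();       // play() returns a promise here
    })
    .catch(function (error) {
        console.log("Error: ", error);
    });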
It's not obvious whether you have a valid HTML5 video element to set the stream on. If you do, you can use the developer tools to verify the stream has been set on the source.
If you have a web server on your development machine, you can host your code that way and view it 'locally'.
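For instance, a minimal static server using only Node's built-in modules (serve.js and the port are arbitrary choices; any static server works):

// serve.js - run with: node serve.js, then open http://localhost:8080
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
    var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, function (err, data) {
        if (err) { res.writeHead(404); return res.end('Not found'); }
        res.writeHead(200);
        res.end(data);
    });
}).listen(8080);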