Stop Video Capture programmatically in WinJS - windows-8

I have a WinJS app where I am recording video. While I can make it work, I want the camera to stop recording automatically after 15 seconds. Currently the camera records for longer than 15 seconds and then trims the video down to 15 seconds; I want the camera turned off / recording stopped after 15 seconds automatically. I have the following code:
function captureVideo() {
    WinJS.log && WinJS.log("", "sample", "status");

    // Using the Windows.Media.Capture.CameraCaptureUI API to capture a video
    var dialog = new Windows.Media.Capture.CameraCaptureUI();
    dialog.videoSettings.allowTrimming = true;
    dialog.videoSettings.format = Windows.Media.Capture.CameraCaptureUIVideoFormat.mp4;
    dialog.videoSettings.maxDurationInSeconds = document.getElementById("txtDuration").value;

    dialog.captureFileAsync(Windows.Media.Capture.CameraCaptureUIMode.video).done(function (file) {
        if (file) {
            var videoBlobUrl = URL.createObjectURL(file, { oneTimeOnly: true });
            document.getElementById("capturedVideo").src = videoBlobUrl;
            localSettings.values[videoKey] = file.path;
        } else {
            WinJS.log && WinJS.log("No video captured.", "sample", "status");
        }
    }, function (err) {
        WinJS.log && WinJS.log(err, "sample", "error");
    });
}

The CameraCaptureUI API you are using trades power for ease of use and a standard interface. If you need more control, such as the ability to start and stop the recording yourself, you should use the MediaCapture object instead. See my mediacap demo in codeSHOW. In it I use MediaCapture to record audio, but you can likely adapt it to record video and add your timing logic.
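For reference, here is a minimal sketch of that approach (not the codeSHOW demo itself): MediaCapture records straight to a file, and a setTimeout stops it after 15 seconds. It assumes the webcam, microphone, and Videos Library capabilities are declared in the app manifest; the file name and the hard-coded 15 seconds are illustrative.

var capture = new Windows.Media.Capture.MediaCapture();

capture.initializeAsync().then(function () {
    var profile = Windows.Media.MediaProperties.MediaEncodingProfile.createMp4(
        Windows.Media.MediaProperties.VideoEncodingQuality.auto);
    return Windows.Storage.KnownFolders.videosLibrary.createFileAsync(
        "clip.mp4", Windows.Storage.CreationCollisionOption.generateUniqueName)
        .then(function (file) {
            // Start recording directly to the file.
            return capture.startRecordToStorageFileAsync(profile, file);
        });
}).then(function () {
    // Stop the recording automatically after 15 seconds -- no trimming step needed.
    setTimeout(function () {
        capture.stopRecordAsync().done(function () {
            WinJS.log && WinJS.log("Recording stopped after 15 seconds.", "sample", "status");
        });
    }, 15000);
});

Note that with MediaCapture you also have to provide your own preview UI, since it does not show the built-in capture dialog.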

Related

How to stop Microsoft Cognitive TTS audio playing?

I am using the JavaScript version of the Microsoft Cognitive Services Speech SDK from https://github.com/Azure-Samples/cognitive-services-speech-sdk.
The audio is played by the browser when synthesizer.speakTextAsync is called. When the audio is too long I want to stop the playback, but I couldn't find any documentation on how to do that.
Any help is appreciated!
synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig,
    SpeechSDK.AudioConfig.fromDefaultSpeakerOutput());

synthesizer.speakTextAsync(
    inputText,
    result => {
        if (result) {
            console.log(JSON.stringify(result));
        }
    },
    error => {
        console.log(error);
    }
);
Stopping audio playback is supported.
You need to create a SpeechSDK.SpeakerAudioDestination() object and use it to create the audioConfig, like this:
var player = new SpeechSDK.SpeakerAudioDestination();
var audioConfig = SpeechSDK.AudioConfig.fromSpeakerOutput(player);
var synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig, audioConfig);
synthesizer.speakTextAsync(
...
);
Then you can call player.pause() and player.resume() to pause and resume the playback.
You can find more info from the doc and sample.
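As a quick usage sketch (the "stopBtn" element is hypothetical and not part of the original code), wiring it up could look like:

document.getElementById("stopBtn").addEventListener("click", function () {
    player.pause();      // halts the audio coming out of the speaker
    // player.resume();  // call this later to continue from where it paused
});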

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
Request a MediaStream from the browser
Add it to the MediaRecorder
Add the recorded blobs to the MediaSource Buffer
The result looks very good, but there is one problem: there is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first step (requesting the MediaStream) as the cause.
Nevertheless, it seems like either the MediaRecorder or the MediaSource adds a delay of about 3 seconds to the stream.
this.screenRecording = await mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);

mediaRecorder.ondataavailable = async (event: any) => {
    if (this.screenReceiving.readyState === 'open') {
        if (this.screenReceivingBuffer == null) {
            this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
        }
        if (!this.screenReceivingBuffer.updating) {
            this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
        }
    }
};

mediaRecorder.start(16);
The above code is copied and pasted from the actual project, so please don't expect it to work as-is ;)
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?

Xamarin camera not on main navigation page

I've managed to get the camera going cross-platform using Xamarin and this tutorial:
Camera access with Xamarin.Forms
I'm now trying to get it working on a different navigation form (the camera functionality would be several forms away from the main page). However, the device-specific code accesses many things wired up to the App instance, which I'm struggling to wire up from another form. Does anyone know of a good camera example that isn't on the main page? I've been coding C# for years, but I'm new to Xamarin and the camera stuff seems to be the hardest to get going. Thanks in advance.
Jeff
Use the Media plugin:
takePhoto.Clicked += async (sender, args) =>
{
    await CrossMedia.Current.Initialize();

    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
    {
        DisplayAlert("No Camera", ":( No camera available.", "OK");
        return;
    }

    var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
    {
        Directory = "Sample",
        Name = "test.jpg"
    });

    if (file == null)
        return;

    await DisplayAlert("File Location", file.Path, "OK");

    image.Source = ImageSource.FromStream(() =>
    {
        var stream = file.GetStream();
        file.Dispose();
        return stream;
    });
};

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different webpages:
One was for video calling, and
The other was for screen sharing.
Now, I want to do both of them on one page.
Here is the scenario:
During Live call, a user wants to stop sharing his/her video and start sharing screen.
Afterwards, again he/she wishes to turn off screen sharing and start video sharing.
For clarity, here are some questions I want to ask:
On Caller Side:
1) How can I change my local stream from video to screen and vice versa?
2) Once it is done, how can I assign it to the local video element?
On Callee Side:
1) How do I handle it if the current stream I am receiving changes from video to screen?
2) How do I handle it if the stream I am receiving has stopped? I mean, now I can receive neither video nor screen (just audio).
Kindly help me in this regard. If there is any open-source code available, kindly share the links too.
Just for your reference, I was trying to handle it using the following code. (I know this is naive and won't work.)
function handleUserMedia(newStream) {
    var localvideo = document.getElementById("localvideo");
    localvideo.src = URL.createObjectURL(newStream);
    localStream = newStream;
    sendMessage('got user media');
    if (isInitiator) {
        maybeStart();
    }
}

function handleUserMediaError(error) {
    console.log(error);
}

var video_constraints = { video: true, audio: true };
var screen_constraints = { video: { mandatory: { chromeMediaSource: 'screen' } } };

getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);

$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
    $scope.selected = !$scope.selected;
    if ($scope.selected) {
        getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Video';
    } else {
        getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Screen';
    }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connections on both users' sides!
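For illustration only, here is a rough sketch of that renegotiation idea using the modern RTCPeerConnection API. It assumes an existing peer connection `pc` and reuses the `sendMessage` signaling helper from the question; the callee handles the incoming track in its ontrack handler and answers the new offer.

async function switchToScreen() {
    // Capture the screen and swap it in for the outgoing camera video track.
    const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });

    pc.getSenders()
        .filter(function (s) { return s.track && s.track.kind === 'video'; })
        .forEach(function (s) { pc.removeTrack(s); });
    screenStream.getVideoTracks().forEach(function (t) { pc.addTrack(t, screenStream); });

    // Renegotiate: create a new offer and deliver it over the signaling channel.
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendMessage({ sdp: pc.localDescription });

    // Show the new local stream in the local video element.
    document.getElementById("localvideo").srcObject = screenStream;
}

Switching back to the camera is the same dance with getUserMedia in place of getDisplayMedia.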

Youtube API event on a specified time

Is it possible through the YouTube API to fire an event when the video reaches a specified time? E.g., when the video reaches 20 seconds, fire the event.
Thanks,
Mauro
Not sure if you still need an answer to this (I'm finding it 4 months later), but here's how I accomplished it with YouTube's iframe embed API. It's ugly in that it requires a setInterval, but there really isn't any kind of "timeupdate" event in the YouTube API (at least not in the iframe API), so you have to fake it by checking the video time every so often. It seems to run just fine.
Let's say you want to start up your player as shown here with YT.Player(), and you want to implement your own onProgress() function that is called whenever the video time changes:
In HTML:
<div id="myPlayer"></div>
In JS:
// first, load the YouTube IFrame API:
var tag = document.createElement('script');
tag.src = "//www.youtube.com/iframe_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

// some variables (global here, but they don't have to be)
var player;
var videoId = 'SomeYoutubeIdHere';
var videotime = 0;
var timeupdater = null;

// load your player when the API becomes ready
// (the callback must be named exactly onYouTubeIframeAPIReady)
function onYouTubeIframeAPIReady() {
    player = new YT.Player('myPlayer', {
        width: '640',
        height: '390',
        videoId: videoId,
        events: {
            'onReady': onPlayerReady
        }
    });
}

// when the player is ready, start checking the current time every 100 ms.
function onPlayerReady() {
    function updateTime() {
        var oldTime = videotime;
        if (player && player.getCurrentTime) {
            videotime = player.getCurrentTime();
        }
        if (videotime !== oldTime) {
            onProgress(videotime);
        }
    }
    timeupdater = setInterval(updateTime, 100);
}

// when the time changes, this will be called.
function onProgress(currentTime) {
    if (currentTime > 20) {
        console.log("the video reached 20 seconds!");
    }
}
It's a little sloppy, requiring a few global variables, but you could easily refactor it into a closure and/or make the interval stop and start itself on play/pause by also including the onStateChange event when you initialize the player and writing an onPlayerStateChange function that checks for play and pause. You'd just need to separate the updateTime() function from onPlayerReady, and strategically call timeupdater = setInterval(updateTime, 100); and clearInterval(timeupdater); in the right event handlers. See here for more on using events with YouTube.
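A minimal sketch of that refactor might look like this, assuming updateTime() has been moved out of onPlayerReady into the shared scope (onPlayerReady would then no longer start the interval itself):

// start/stop the polling loop based on the player state
function onPlayerStateChange(event) {
    if (event.data === YT.PlayerState.PLAYING) {
        timeupdater = setInterval(updateTime, 100);   // poll only while playing
    } else {
        clearInterval(timeupdater);                   // paused, buffering, or ended
    }
}

// ...and register both handlers when creating the player:
// events: { 'onReady': onPlayerReady, 'onStateChange': onPlayerStateChange }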
You may also want to have a look at popcorn.js.
It's an interesting Mozilla project and it seems to solve your problem.
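For example (written from memory of the Popcorn.js 1.x API, so treat the exact calls as an assumption), a time-based cue could look like:

var pop = Popcorn.youtube("#myPlayer", "http://www.youtube.com/watch?v=SomeYoutubeIdHere");
pop.cue(20, function () {
    console.log("the video reached 20 seconds!");
});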
Not sure if anyone agrees here, but I might prefer setTimeout() over setInterval(), as it avoids leaving a repeating timer running that you then have to clear, e.g.:
function checkTime() {
    setTimeout(function () {
        videotime = getVideoTime();          // e.g. player.getCurrentTime()
        if (videotime < requiredTime) {
            checkTime();                     // not there yet -- schedule another check
        } else {
            // run whatever should happen at the required time;
            // do not call checkTime() again.
        }
    }, 100);
}
It's basic pseudocode, but hopefully it conveys the idea.