How to automatically capture an image using the device camera in Xamarin.Forms (XAML)

I am trying to implement face detection for authentication in my Xamarin application. I want to capture the image automatically, without user interaction. I am able to capture an image using a capture button, but my requirement is that the image capture should happen automatically. Please help me achieve this.

There are two parts to this. First, you will need to open a camera activity; there are multiple tutorials about this.
Android:
App.Instance.ShouldTakePicture += () => {
    var intent = new Intent(MediaStore.ActionImageCapture);
    intent.PutExtra(MediaStore.ExtraOutput, Uri.FromFile(file));
    StartActivityForResult(intent, 0);
};
iOS:
imagePicker.FinishedPickingMedia += (sender, e) => {
    var filepath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "tmp.png");
    var image = (UIImage)e.Info.ObjectForKey(new NSString("UIImagePickerControllerOriginalImage"));
    InvokeOnMainThread(() => {
        image.AsPNG().Save(filepath, false);
        App.Instance.ShowImage(filepath);
    });
    DismissViewController(true, null);
};
The next step is to use facial recognition; once you have detected the user's face you can store the image.
using (var stream = photo.GetStream())
{
    var faceServiceClient = new FaceServiceClient("{FACE_API_SUBSCRIPTION_KEY}");

    // Step 4a - Detect the faces in this photo.
    var faces = await faceServiceClient.DetectAsync(stream);
    var faceIds = faces.Select(face => face.FaceId).ToArray();

    // Step 4b - Identify the person in the photo, based on the face.
    var results = await faceServiceClient.IdentifyAsync(personGroupId, faceIds);
    var result = results[0].Candidates[0].PersonId;

    // Step 4c - Fetch the person from the PersonId and display their name.
    var person = await faceServiceClient.GetPersonAsync(personGroupId, result);
    UserDialogs.Instance.ShowSuccess($"Person identified is {person.Name}.");
}

Related

Unable to record again after stopping a recording in Kurento

I am working on a Kurento application and I am facing a strange problem: once I start recording the video and then stop the recording, I cannot start another recording. The event reaches the server but nothing seems to happen. Here is the code:
room.pipeline.create('WebRtcEndpoint', function (error, outgoingMedia) {
    if (error) {
        console.error('no participant in room');
        // no participants in room yet, release the pipeline
        if (Object.keys(room.participants).length == 0) {
            room.pipeline.release();
        }
        return callback(error);
    }
    outgoingMedia.setMaxVideoRecvBandwidth(256);
    userSession.outgoingMedia = outgoingMedia;

    // add ICE candidates that got sent before the endpoint was established
    var iceCandidateQueue = userSession.iceCandidateQueue[socket.id];
    if (iceCandidateQueue) {
        while (iceCandidateQueue.length) {
            var message = iceCandidateQueue.shift();
            console.error('user : ' + userSession.id + ' collect candidate for outgoing media');
            userSession.outgoingMedia.addIceCandidate(message.candidate);
        }
    }

    userSession.outgoingMedia.on('OnIceCandidate', function (event) {
        console.log("generate outgoing candidate : " + userSession.id);
        var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
        userSession.sendMessage({
            id: 'iceCandidate',
            sessionId: userSession.id,
            candidate: candidate
        });
    });

    // notify other users that a new user is joining
    var usersInRoom = room.participants;
    var data = {
        id: 'newParticipantArrived',
        new_user_id: userSession.id,
        receiveVid: receiveVid
    };

    // notify existing users
    for (var i in usersInRoom) {
        usersInRoom[i].sendMessage(data);
    }

    var existingUserIds = [];
    for (var i in room.participants) {
        existingUserIds.push({ id: usersInRoom[i].id, receiveVid: usersInRoom[i].receiveVid });
    }

    // send the list of current users in the room to the new participant
    userSession.sendMessage({
        id: 'existingParticipants',
        data: existingUserIds,
        roomName: room.name,
        receiveVid: receiveVid
    });

    // register user to room
    room.participants[userSession.id] = userSession;

    var recorderParams = {
        mediaProfile: 'WEBM',
        uri: "file:///tmp/Room_" + room.name + "_file" + userSession.id + ".webm"
    };

    // create the recorder endpoint
    room.pipeline.create('RecorderEndpoint', recorderParams, function (error, recorderEndpoint) {
        userSession.outgoingMedia.recorderEndpoint = recorderEndpoint;
        outgoingMedia.connect(recorderEndpoint);
    });
});
When I click the record button in the UI, the following function runs on the server:
function startRecord(socket) {
    console.log("in func");
    var userSession = userRegistry.getById(socket.id);
    if (!userSession) {
        return;
    }
    var room = rooms[userSession.roomName];
    if (!room) {
        return;
    }
    var usersInRoom = room.participants;
    var data = {
        id: 'startRecording'
    };
    for (var i in usersInRoom) {
        console.log("in loop");
        var user = usersInRoom[i];
        // start recording on this user's recorder endpoint
        user.outgoingMedia.recorderEndpoint.record();
        // notify all users in the room
        user.sendMessage(data);
        console.log(user.id);
    }
}
The thing is, the very first time it records properly, i.e. the file is created on the server and video and audio are recorded correctly.
When I press stop, the intended effect is seen, i.e. the recording stops.
Now, when I press record again, no video file is created, even though the event reaches the server (the console.log output confirms it).
Can anyone help me, please?
Thanks.

Titanium - Get image file from filesystem on Android

I have a problem getting an image from the filesystem. On iOS it works fine.
First of all, I save a remote image to the filesystem with this function, where img.imagen is the URL of the remote image (e.g. http://onesite.es/img2.jpeg):
function descargarImagen(img, callback) {
    var path = img.imagen;
    var filename = path.split("/").pop();
    var xhr = Titanium.Network.createHTTPClient({
        onload: function() {
            // first, grab a "handle" to the file where you'll store the downloaded data
            var f = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, filename);
            f.write(this.responseData); // write to the file
            Ti.API.debug("-- Imagen guardada: " + f.nativePath);
            callback({ path: f.nativePath });
        },
        timeout: 10000
    });
    xhr.open('GET', path);
    xhr.send();
}
Now, I want to share this image by creating an Android intent, where args.image is the f.nativePath from the previous function:
var intent = null;
var intentType = null;
intent = Ti.Android.createIntent({
    action: Ti.Android.ACTION_SEND
});
// add text status
if (args.status) {
    intent.putExtra(Ti.Android.EXTRA_TEXT, args.status);
}
// change type according to the content
if (args.image) {
    intent.type = "image/*";
    intent.putExtraUri(Ti.Android.EXTRA_STREAM, args.image);
} else {
    intent.type = "text/plain";
    intent.addCategory(Ti.Android.CATEGORY_DEFAULT);
}
// launch intent
Ti.Android.currentActivity.startActivity(Ti.Android.createIntentChooser(intent, args.androidDialogTitle));
What am I doing wrong?

Triggering clearIconTap callback on dynamically created fields

I am creating a simple app using Sencha Touch in which I dynamically create containers with textfields, textareafields, etc. whenever the user needs to add a new container with components. The problem is that when the clear icon on a textareafield is tapped it clears the text, but I would like to know which textareafield has been cleared. Can anyone help me with this, please?
This is how I create the container:
var childObj2 = {};
childObj2.xtype = 'container';

var type = 'vbox';
var layout = {};
layout.type = type;
childObj1.layout = layout;

var txtarea = {};
txtarea.xtype = 'textareafield';
txtarea.id = "txt51";
txtarea.flex = 3;
txtarea.maxRows = 7;
txtarea.placeHolder = 'Type here';
txtarea.value = value['notes'];
txtarea.inputCls = 'txtareaStyle';
txtarea.clearicontap = "clearText";
How do I add a clearicontap listener to this?
When you create a textfield, simply add a listener to it for the clearicontap event, and that callback will get executed for each of the fields.
For example:
var container = Ext.create('Ext.Container', {});
for (var i = 1; i <= 3; i++) {
    // capture the loop index in its own scope so each listener reports the right
    // field number (with a plain `var i`, every callback would see the final value of i)
    (function (fieldNumber) {
        var field = Ext.create('Ext.field.Text', {
            id: 'textfieldnumber' + fieldNumber,
            listeners: {
                clearicontap: function() {
                    alert("Tapped clear icon on text field number: " + fieldNumber + "!");
                }
            }
        });
        container.add(field);
    })(i);
}
[EDIT]
To answer the question you added after your edit:
I am using the standard way of creating Sencha components through Ext.create(), and I would suggest you switch to the same approach. It is not clear from the code you posted how those JavaScript objects are actually turned into Ext components. They are very likely component configurations, though, so I guess you could try:
txtarea.listeners = {
    clearicontap: function() {
        alert("Tapped clear icon on text field");
    }
};
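If you also need to know which field was cleared (as in the original question), you can read it from the handler's arguments: in Sencha Touch 2.x the clearicontap handler should receive the field component as its first argument, but verify the exact signature against your version's docs. A minimal sketch under that assumption:
txtarea.listeners = {
    // Assumption: the first handler argument is the text field component itself.
    clearicontap: function(field) {
        // field.getId() returns the id from the config (e.g. "txt51"),
        // which tells you exactly which textareafield was cleared.
        console.log('Clear icon tapped on: ' + field.getId());
    }
};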

Cannot view MicrophoneLevel and VolumeEvent session info using OpenTok

I have set up a basic test page for OpenTok using an API key, session, and token (for the publisher). It is based on the QuickStart, with code added to track the microphoneLevelChanged event. The page code is available here. The important lines are:
var apiKey = "API KEY HERE";
var sessionId = "SESSION ID HERE";
var token = "TOKEN HERE";

function sessionConnectedHandler(event) {
    session.publish(publisher);
    subscribeToStreams(event.streams);
}

function subscribeToStreams(streams) {
    for (var i = 0; i < streams.length; i++) {
        var stream = streams[i];
        if (stream.connection.connectionId != session.connection.connectionId) {
            session.subscribe(stream);
        }
    }
}

function streamCreatedHandler(event) {
    subscribeToStreams(event.streams);
    TB.log("test log stream created: " + event);
}

var pubProps = { reportMicLevels: true };
var publisher = TB.initPublisher(apiKey, null, pubProps);
var session = TB.initSession(sessionId);
session.publish(publisher);
session.addEventListener("sessionConnected", sessionConnectedHandler);
session.addEventListener("streamCreated", streamCreatedHandler);
session.addEventListener("microphoneLevelChanged", microphoneLevelChangedHandler);
session.connect(apiKey, token);

function microphoneLevelChangedHandler(event) {
    TB.log("The microphone level for stream " + event.streamId + " is: " + event.volume);
}
I know that the logging works, as the logs show up from streamCreatedHandler. However, I am not getting any events logged in the microphoneLevelChangedHandler function. I have tried this with both one and two clients loading the pages (videos show up just fine).
What do I need to do to get the microphoneLevelChanged events to show up?
OpenTok's WebRTC JS library does not have a microphoneLevelChanged event, so there is nothing you can do, sorry.

Cannot dynamically change a caption 'track' in Video.js

I'm coding a basic video marquee and one of the key requirements is that the videos need to be able to advance while keeping the player in full screen.
Using Video.js (4.1.0) I have been able to get everything to work correctly, except that I cannot get the captions to change when switching to another video.
Either inserting a "track" tag when the player HTML is first created or adding a track to the 'options' object when the player is initialized are the only ways I can get the player to display the "CC" button and show captions. However, I cannot re-initialize the player while in full screen so changing the track that way will not work.
I have tried addTextTrack and addTextTracks and both show that the tracks have been added - using something like console.log(videoObject.textTracks()) - but the player never shows them or the "CC" button.
Here is my code, any help is greatly appreciated:
;(function(window, undefined) {
    // VIDEOS OBJECT
    var videos = [
        {"volume":"70","title":"TEST 1","url":"test1.mp4","type":"mp4"},
        {"volume":"80","title":"TEST 2","url":"test2.mp4","type":"mp4"},
        {"volume":"90","title":"TEST 3","url":"test3.mp4","type":"mp4"}
    ];

    // CONSTANTS
    var VIDEO_BOX_ID = "jbunow_marquee_video_box", NAV_TEXT_ID = "jbunow_marquee_nav_text", NAV_ARROWS_ID = "jbunow_marquee_nav_arrows",
        VIDEO_OBJ_ID = "jbunow_marquee_video", NAV_PREV_ID = "jbunow_nav_prev", NAV_NEXT_ID = "jbunow_nav_next";

    // GLOBAL VARIABLES
    var videoObject;
    var currentTrack = 0;
    var videoObjectCreated = false;
    var controlBarHideTimeout;

    jQuery(document).ready(function() {
        // CREATE NAV ARROWS AND LISTENERS, THEN START MARQUEE
        var navArrowsHtml = "<div id='" + NAV_PREV_ID + "' title='Play Previous Video'></div>";
        navArrowsHtml += "<div id='" + NAV_NEXT_ID + "' title='Play Next Video'></div>";
        jQuery('#' + NAV_ARROWS_ID).html(navArrowsHtml);
        jQuery('#' + NAV_PREV_ID).on('click', function() { ChangeVideo(GetPrevVideo()); });
        jQuery('#' + NAV_NEXT_ID).on('click', function() { ChangeVideo(GetNextVideo()); });
        ChangeVideo(currentTrack);
    });

    var ChangeVideo = function(newIndex) {
        var videoBox = jQuery('#' + VIDEO_BOX_ID);
        if (!videoObjectCreated) {
            // LOAD PLAYER HTML
            videoBox.html(GetPlayerHtml());
            // INITIALIZE VIDEO-JS
            videojs(VIDEO_OBJ_ID, {}, function() {
                videoObject = this;
                // LISTENERS
                videoObject.on("ended", function() { ChangeVideo(GetNextVideo()); });
                videoObject.on("loadeddata", function() { videoObject.play(); });
                videoObjectCreated = true;
                PlayVideo(newIndex);
            });
        } else { PlayVideo(newIndex); }
    }

    var PlayVideo = function(newIndex) {
        // TRY ADDING MULTIPLE TRACKS
        videoObject.addTextTracks([{ kind: 'captions', label: 'English2', language: 'en', srclang: 'en', src: 'track2.vtt' }]);
        // TRY ADDING HTML
        //jQuery('#' + VIDEO_OBJ_ID + ' video').eq(0).append("<track kind='captions' src='track2.vtt' srclang='en' label='English' default />");
        // TRY ADDING SINGLE TRACK THEN SHOWING USING RETURNED ID
        //var newTrack = videoObject.addTextTrack('captions', 'English2', 'en', { kind: 'captions', label: 'English2', language: 'en', srclang: 'en', src: 'track2.vtt' });
        //videoObject.showTextTrack(newTrack.id_, newTrack.kind_);

        videoObject.volume(parseFloat(videos[newIndex]["volume"]) / 100); // SET START VOLUME
        videoObject.src({ type: "video/" + videos[newIndex]["type"], src: videos[newIndex]["url"] }); // SET NEW SRC
        videoObject.load();
        videoObject.ready(function () {
            videoObject.play();
            clearTimeout(controlBarHideTimeout);
            controlBarHideTimeout = setTimeout(function() { videoObject.controlBar.fadeOut(); }, 2000);
            jQuery('#' + NAV_TEXT_ID).fadeOut(150, function() {
                currentTrack = newIndex;
                var navHtml = "";
                navHtml += "<h1>Now Playing</h1><h2>" + videos[newIndex]["title"] + "</h2>";
                if (videos.length > 1) { navHtml += "<h1>Up Next</h1><h2>" + videos[GetNextVideo()]["title"] + "</h2>"; }
                jQuery('#' + NAV_TEXT_ID).html(navHtml).fadeIn(250);
            });
        });
    }

    var GetPlayerHtml = function() {
        var playerHtml = "";
        playerHtml += "<video id='" + VIDEO_OBJ_ID + "' class='video-js vjs-default-skin' controls='controls' preload='auto' width='560' height='315'>";
        playerHtml += "<source src='' type='video/mp4' />";
        //playerHtml += "<track kind='captions' src='track.vtt' srclang='en' label='English' default='default' />";
        playerHtml += "</video>";
        return playerHtml;
    }

    var GetNextVideo = function() {
        if (currentTrack >= videos.length - 1) { return 0; }
        else { return (currentTrack + 1); }
    }

    var GetPrevVideo = function() {
        if (currentTrack <= 0) { return videos.length - 1; }
        else { return (currentTrack - 1); }
    }
})(window);
The current Video.js implementation (4.4.2) loads every kind of text track (subtitles, captions, chapters) at initialization time of the player itself, so it only correctly picks up those defined between the <video> tags.
EDIT: I mean it does load them when calling addTextTrack, but the player UI never updates after initialization time and always shows the text tracks that existed at initialization.
One possible workaround is to destroy the complete videojs player and re-create it on every video source change, after you have refreshed the content between the <video> tags. This way you don't update the source via the videojs player, but by dynamically adding the required DOM elements and initializing a new player on them. This solution will probably cause some UI flashes and is quite non-optimal for the problem. Here is a link about destroying the videojs player; a rough sketch of this approach follows below.
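For illustration, here is a rough, untested sketch of that destroy-and-recreate approach, written against the marquee code above. The helper name is made up, the track file passed in is a placeholder, and it assumes dispose() (the Video.js 4.x teardown method) is available on the player:
// Sketch only: rebuild the player so the new <track> exists at initialization time.
function RecreatePlayerWithCaptions(video, trackSrc) {
    if (videoObject) {
        videoObject.dispose(); // tears down the old player and removes its DOM element
    }
    var html = "<video id='" + VIDEO_OBJ_ID + "' class='video-js vjs-default-skin'" +
        " controls preload='auto' width='560' height='315'>" +
        "<source src='" + video.url + "' type='video/" + video.type + "' />" +
        "<track kind='captions' src='" + trackSrc + "' srclang='en' label='English' default />" +
        "</video>";
    jQuery('#' + VIDEO_BOX_ID).html(html);
    videojs(VIDEO_OBJ_ID, {}, function() {
        videoObject = this; // new player instance; the CC button now reflects the new track
        videoObject.play();
    });
}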
The second option is to add dynamic text track handling to the existing code, which is not as hard as it sounds if you know where to look (I did it only for chapters, but it should be similar for the other text track kinds as well). The code below works with the latest official build, 4.4.2. Note that I'm using jQuery for removing the text track elements, so if anyone applies these changes as is, jQuery needs to be loaded before videojs.
Edit the video.dev.js file as follows:
1: Add a clearTextTracks function to the Player
vjs.Player.prototype.clearTextTracks = function() {
    var tracks = this.textTracks_ = this.textTracks_ || [];
    for (var i = 0; i != tracks.length; ++i)
        $(tracks[i].el()).remove();
    tracks.splice(0, tracks.length);
    this.trigger("textTracksChanged");
};
2: Add the new 'textTracksChanged' event trigger to the end of the existing addTextTrack method
vjs.Player.prototype.addTextTrack = function(kind, label, language, options) {
    ...
    this.trigger("textTracksChanged");
}
3: Handle the new event in the TextTrackButton constructor function
vjs.TextTrackButton = vjs.MenuButton.extend({
    /** @constructor */
    init: function(player, options) {
        vjs.MenuButton.call(this, player, options);
        if (this.items.length <= 1) {
            this.hide();
        }
        player.on('textTracksChanged', vjs.bind(this, this.refresh));
    }
});
4: Implement the refresh method on the TextTrackButton
// removes and recreates the texttrack menu
vjs.TextTrackButton.prototype.refresh = function () {
this.removeChild(this.menu);
this.menu = this.createMenu();
this.addChild(this.menu);
if (this.items && this.items.length <= this.kind_ == "chapters" ? 0 : 1) {
this.hide();
} else
this.show();
};
Sorry, but for now I cannot link to a real working example; I hope the snippets above are enough as a starting point for anyone interested in this.
You can use this when you update the source to a new video: just call the clearTextTracks method, add the new text tracks with addTextTrack, and the menus should now update themselves.
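For example, here is a minimal sketch of that call sequence against the patched player above; the function name and the URLs passed in are placeholders:
// Sketch only: assumes the clearTextTracks patch above has been applied.
function changeSource(player, videoUrl, captionUrl) {
    player.clearTextTracks();                         // remove old tracks and refresh the CC menu
    player.src({ type: 'video/mp4', src: videoUrl }); // switch to the new video
    player.addTextTrack('captions', 'English', 'en',  // add the new caption track
        { src: captionUrl });
    player.play();
}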
Doing the exact same thing (or rather NOT doing the exact same thing)... really need to figure out how to dynamically change / add a caption track.
This works to get it playing via the underlying HTML5, but it does not show the videojs CC button:
document.getElementById("HtmlFiveMediaPlayer_html5_api").innerHTML = '<track label="English Captions" srclang="en" kind="captions" src="http://localhost/media/captiontest/demo_Brian/demo_h264_1.vtt" type="text/vtt" default />';