Face recognition with Pepper robot in the simulator

How can I configure the Pepper robot in Choregraphe to detect a face and recognise it the next time, using the simulator?
I get the error below:
[ERROR] behavior.box :init:8 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1139710186718928:/Learn Face_1/Learn Face_2: ALProxy::ALProxy Can't find service: ALFaceDetection

Unfortunately, Choregraphe and its simulator don't include all of the robot's services; face recognition (ALFaceDetection) and speech recognition are among the missing ones. You will need a real robot to use these features.
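One way to make a behavior fail gracefully when a service is missing is to probe for it before use. A minimal sketch, assuming the Python NAOqi SDK (the IP and port below are placeholders for your robot's address):

```python
def get_service_proxy(name, ip="127.0.0.1", port=9559):
    """Return an ALProxy for `name`, or None when the service
    (or the NAOqi SDK itself) is unavailable, as in the simulator."""
    try:
        from naoqi import ALProxy  # only present when the NAOqi SDK is installed
        return ALProxy(name, ip, port)  # raises RuntimeError if the service is missing
    except (ImportError, RuntimeError) as exc:
        print("Service %s unavailable: %s" % (name, exc))
        return None

face = get_service_proxy("ALFaceDetection")
if face is None:
    print("Running without face detection (simulator?)")
```

On a real robot the proxy is returned and can be used normally; in the simulator (or on a machine without the SDK) the behavior can skip the face-recognition branch instead of crashing.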

Related

Pepper robot - Date_Dance.crg

http://doc.aldebaran.com/2-5/getting_started/samples/sample_dance1.html
Hello,
I am new to Pepper robot programming. As a programming exercise, I imported the Date_Dance.crg project from the link above and followed the directions in the “Try it!” section.
I can run the app by pressing the green right-arrow button or by selecting it from the App Launcher on the tablet. However, the app is not triggered by any of the trigger sentences: “date dance”, “Pepper’s date dance”, “dance I’m fresh you’re pretty”, “Fresh and pretty dance”. I also tried the application title, saying “start Date Dance”, but that doesn’t work either.
Could anyone explain why it fails to respond to my voice commands?
Some possibilities:
Autonomous Life isn't activated on Pepper
Autonomous Life is activated, but the robot isn't listening (blue eyes), maybe because it hasn't seen you
Pepper is just in the wrong language
Or you don't have the special dialog applauncher package that is required for this to work. It is usually installed on robots along with other apps like the tablet app launcher, but it may be missing on yours. Try an app update, or ask whoever provided you with the robot (there can be several different configurations depending on the robot's use case, though I believe most of them include the dialog applauncher).
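The first two possibilities can be checked programmatically. A sketch assuming the Python NAOqi SDK, using the standard ALAutonomousLife and ALSpeechRecognition services (the IP is a placeholder):

```python
def check_trigger_preconditions(ip="pepper.local", port=9559):
    """Report the Autonomous Life state and the active speech language.
    Returns None when NAOqi is unreachable (e.g. no robot on the network)."""
    try:
        from naoqi import ALProxy
        life = ALProxy("ALAutonomousLife", ip, port)
        asr = ALProxy("ALSpeechRecognition", ip, port)
        state = life.getState()        # expect "solitary" or "interactive"
        language = asr.getLanguage()   # must match the trigger sentences
        print("Life state: %s, speech language: %s" % (state, language))
        return state, language
    except (ImportError, RuntimeError):
        return None
```

If the state comes back as "disabled", Autonomous Life is off; if the language is not English, the English trigger sentences will never match.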

Error occurred while recording the user's voice in an iOS app

I am developing a voice app using the Nuance SpeechKit for voice processing. I read text aloud via TTS and then record the user's voice as input. While recording, I get an error like:
NMT CloudRecognizer Error [Session ID:035141cf-4d5e-45ba-b81a-e8121b657cd8] [Type:2] [Code:5] [Message:(null)] [ErrorParam:AUDIO_INFO] [Prompt:Sorry, speech not recognized. Please try again.]
I want to know why this error occurs. If it is caused by background noise, how can I reduce or remove the background noise while recording the user's voice? Please help me with this.
Thanks in advance!

Android TTS initializing slowly

We are developing an update to French English Dictionary + for Android, and TTS is giving us grief. The first time a user taps a speaker icon to hear a pronunciation, it takes upwards of 14 seconds to initialize TTS and deliver a result. This only happens for the secondary language (not the device's default language).
We tried a two-line code snippet in isolation: same result. We also tried initializing TTS on startup and speaking a blank string, but still no luck.

Verifying toasts using Appium, and getting the server's reaction to button clicks

I'm new to Android automation testing and recently started working with Appium.
I use it for Android native-app automation.
I have two questions:
1. Is there a way to verify toasts? The latest posts I found on this topic are from mid-2014, and I wanted to know whether anything has changed before I switch to another tool for my tests (as I understand it, Selendroid can verify toasts).
2. Is there a way to catch the HTTP request my app sends to the server when I press a button during an automated test? Something like a listener that runs in the background and waits for clicks?
Thanks!
To verify toast messages you can use one of the image-recognition libraries. For Ruby I use RTesseract together with the image processor RMagick.
The idea is simple:
make the toast message appear
take a few screenshots
iterate over the screenshots and look for the text
You may experiment with image processing to increase recognition quality.
Here is the code snippet I use in my framework: link
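The three steps above can be sketched with the OCR engine abstracted as a callable (RTesseract in Ruby, or pytesseract in Python, would fill that role; the function below is illustrative, not tied to either library):

```python
def toast_appeared(screenshot_paths, expected_text, ocr):
    """Return True if `expected_text` occurs in the OCR output of any
    screenshot. `ocr` is any callable mapping a file path to recognized text."""
    return any(expected_text in ocr(path) for path in screenshot_paths)

# Example with a stubbed OCR result standing in for a real engine:
fake_ocr = {"shot1.png": "Settings saved", "shot2.png": ""}.get
print(toast_appeared(["shot1.png", "shot2.png"], "saved", fake_ocr))  # → True
```

Taking several screenshots matters because the toast is visible only briefly; the check passes if the text is caught in any one of them.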

Disable the “Speak now” prompt in Android voice recognition

When I tap Voice Search on Android, a pop-up “Speak now” screen appears, and after I say something, e.g. “Hello”, a “Working” screen follows.
How can I disable the default “Speak now” and “Working” screens in Android voice recognition?
These screens are displayed because I use the RecognizerIntent.ACTION_RECOGNIZE_SPEECH API.
How can I show my own screens instead, and where does the actual processing take place?
I'm guessing the original poster has solved their problem by now. For anyone seeing this today, the answer is to use SpeechRecognizer: http://developer.android.com/reference/android/speech/SpeechRecognizer.html
SpeechRecognizer lets you use speech recognition from within your own Activity, with your own custom UI.