Voice command to Nao Webots simulator

I am trying to build an Android app to control a Nao robot using the Webots simulator, and I want to send voice commands to Choregraphe from my PC microphone. Any ideas on how to do that?

Virtual robots do not run the actual speech recognition engine because of licensing issues. Therefore, even though you can simulate audio input on virtual robots, they cannot recognize speech.
In theory you can replace ALSpeechRecognition with your own service and get the ALDialog APIs to work with it. You could try Vosk or Mozilla DeepSpeech, but it is quite some work!
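For a sense of what that route involves, here is a minimal recognition sketch using Vosk. It assumes the official vosk bindings from npm and a model unpacked to ./model; the file name command.wav is made up for illustration. The hard part, exposing the result as a NAOqi service that ALDialog can use in place of ALSpeechRecognition, is not shown.

```typescript
// Minimal Vosk sketch: npm install vosk, plus a model unpacked to ./model.
// Assumption: the official vosk Node bindings; paths and file names are
// illustrative only.
import * as fs from "fs";
// The vosk package ships without type definitions, hence require().
const vosk = require("vosk");

const model = new vosk.Model("model");
const rec = new vosk.Recognizer({ model: model, sampleRate: 16000 });

// command.wav: 16 kHz, 16-bit mono PCM recorded from the PC microphone
// (e.g. with arecord). subarray(44) skips the standard WAV header so that
// only raw PCM samples are fed to the recognizer.
const pcm = fs.readFileSync("command.wav").subarray(44);
rec.acceptWaveform(pcm);
console.log(rec.finalResult()); // e.g. { text: "stand up" }

rec.free();
model.free();
```

From there you would forward the recognized text to your own service so that the dialog layer reacts to it; that integration is the bulk of the work mentioned above.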

Related

Appium test shows "Automation Running" overlay on iOS15 - How can I get rid of it?

I am using a tool to automate tests on iPhones. The tool uses Appium as the test framework. In iOS 15, the screen shows a dark overlay with the text "Automation Running". I am aware that this does not affect the tests at all.
However, my problem is that I use a camera placed in front of the mobile phone to capture the screen and run some checks on the captured video. I have to use a camera because I run tests on OTT applications, and there is no way to capture the video through software because of DRM protection. This "Automation Running" overlay interferes with the checks that I run on the video captured through the camera.
Is there any way to get rid of this overlay in iOS 15 when running Appium-based tests?
After extensive exploration, I have come to the conclusion that there is no way to get rid of this overlay unless Apple decides to allow it.

Buildfire: How to use the Barcode Scanner

I have gotten to the point where I can launch the barcode mock mode.
I am trying to figure out how I can start scanning test codes in development.
Or would the app I am trying to test this on need to have camera permissions?
In that case, how do you overcome the use of the camera hardware in the PWA?
Or is there a way to scan QR codes using a different JavaScript API that would work in all cases?
Also, I had to move the camera and barcode service JavaScript files into my Widget folder, because when I tried to reference them as the instructions describe, the files wouldn't load.
Yes, when you are on the web it will mock the functionality because you're not on a device. There are HTML5 camera APIs; see https://www.html5rocks.com/en/tutorials/getusermedia/intro/.
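For the QR question specifically, here is a rough browser-only sketch built on getUserMedia, assuming the jsQR decoder from npm; the #preview element ID is an invention for the example. It draws camera frames onto a canvas and tries to decode each one:

```typescript
import jsQR from "jsqr"; // assumption: the jsQR decoding library from npm

const video = document.querySelector<HTMLVideoElement>("#preview")!;
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d")!;

async function startScanning(): Promise<void> {
  // Prefer the rear camera where one exists; on a laptop this simply
  // falls back to whatever camera is available.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
  });
  video.srcObject = stream;
  await video.play();
  requestAnimationFrame(scanFrame);
}

function scanFrame(): void {
  if (video.videoWidth > 0) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const code = jsQR(frame.data, frame.width, frame.height);
    if (code) {
      console.log("Decoded QR:", code.data);
      // Release the camera once a code has been found.
      (video.srcObject as MediaStream).getTracks().forEach((t) => t.stop());
      return;
    }
  }
  requestAnimationFrame(scanFrame);
}

startScanning().catch((err) => console.error("Camera unavailable:", err));
```

Because the page itself prompts for camera permission, this also covers the PWA case, with the caveat that getUserMedia is only available on secure origins (https or localhost).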
You can use the BuildFire Previewer app (contact customer support if you haven't downloaded it yet).
While you can take a copy of the services into your plugin, it is risky: if we issue any updates or bug fixes you will not receive them, and if a change ever breaks compatibility (rare, but possible) your plugin will stop working.

Android Things - Raspberry Pi - Google Mobile Vision Support

I'm trying to run an android app on Android Things OS.
The app uses facial detection as a first-pass filter ahead of facial recognition.
The recognition step is handled by a third-party (remote) API, so that part is not the concern, but the detection is carried out by the Google Mobile Vision API for Android. The problem I'm facing is that the app crashes every time a camera preview is about to start.
The code of the app is derived from this example: https://github.com/googlesamples/android-vision (Face tracking). So if that code runs, my app runs.
I also know that there is a known issue on the Raspberry Pi with the camera trying to create more than one output surface.
My questions are:
(1) Is there a way to successfully run the code in the example https://github.com/googlesamples/android-vision (Face tracking)?
(2) When is that known issue going to be resolved?
Thanks in advance.
Regards,
Gersain Castañeda.
The latest version of Android Things (DP6), which runs on API 27, supports the new Camera2 API, as explained here.
The Camera2 API supports more than one output surface and runs on Android Things.
For more inspiration on how to do this, check this tutorial (how to use Camera2) and this very useful sample (how to use Google Vision).

Microphone input to raspberry pi 3 to work in browser

I attached a PlayStation Eye USB mic/camera to my Pi 3 and was able to get the microphone working with arecord, using commands such as arecord -D plughw:0,0 -f cd ./test.wav. My ~/.asoundrc is identical to the file shown here.
However, when I try to use the mic through the browser it will not work. I have tried various websites with no luck. My end goal is to use this mic for speech recognition via some platform, but one step at a time. Any suggestions on how to begin troubleshooting this? Chromium does show the correct default devices in its settings.
I'm at a total loss, and there is little to nothing I can find out there. For example, this post does not apply, because the browser certainly sees the device. Could it be a latency problem of some sort?
If it works with the arecord command, you should be able to capture the audio in the web browser as well.
Check your microphone setting under Chromium's content privacy settings and make sure your USB microphone is selected in the drop-down options.
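A quick way to verify this is to paste a getUserMedia probe straight into the Chromium DevTools console; it uses only the standard browser API:

```typescript
// If this logs a track label, Chromium can open the mic; if it throws
// NotAllowedError or NotFoundError, the problem is permissions or
// device selection rather than ALSA.
navigator.mediaDevices
  .getUserMedia({ audio: true })
  .then((stream) => {
    const track = stream.getAudioTracks()[0];
    console.log("Capturing from:", track.label, track.getSettings());
    track.stop(); // release the device again
  })
  .catch((err) => console.error("getUserMedia failed:", err.name, err.message));
```

Also keep in mind that getUserMedia only works on secure origins (https or localhost), which is a common reason tests on arbitrary websites fail even though the device itself is fine.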
All the best.

Calabash-ios - Put the device in Airplane Mode

Is there any way to interact with an iPhone's settings when testing a device using calabash-ios?
Using Calabash for Android (calabash-android), I can make system calls in my step definitions using adb, the Android Debug Bridge. For example: system("adb shell am broadcast -a android.intent.action.AIRPLANE_MODE")
This will make a call right into the Android OS.
I don't want to have to manually set up a device and then run tests. I would like to automate it. Is this possible?
For example: I want to see if all my tests pass with airplane mode on. Then I would like to programmatically set airplane mode to off and see if all my tests still pass. I would prefer not to have to change a setting like this manually before running the tests.
Thanks
It is not possible to put your device into Airplane mode with Calabash iOS.
You could write a backdoor method in your app that simulates Airplane mode.
Be aware, however, that Calabash iOS embeds an HTTP server in your application; that is how the client gem communicates with your app.
http://calabashapi.xamarin.com/ios/Calabash/Cucumber/Core.html#backdoor-instance_method
The Xamarin Test Cloud has some options for testing apps in Airplane mode.