Android TTS initializing slowly - text-to-speech

We are developing an update to French English Dictionary + for Android, and TTS is giving us grief. The first time a user taps a speaker icon to hear a pronunciation, it takes upwards of 14 seconds to initialize TTS and deliver a result. This only happens for the secondary language (not the device's default language).
We have tried writing a two-line code snippet to test in isolation, with the same result. We also tried initializing TTS on startup and pronouncing a blank string, but still no luck.

Related

OS Data Hub not working on iPad or iPhone

I look after a website which displays UK Ordnance Survey maps for people going on walks in SW England. It relies on displaying a marker on an OS map, either in response to an alphanumeric OSGB grid reference appended as a query string to the name of the HTML file which displays the map, or, in the absence of a query string, in response to a click on the OS map produced by that file. This has all worked well for several years, on both Windows PCs and mobile devices, using OS OpenSpace, which is based on OpenLayers 2. Now, however, Ordnance Survey have decided to enforce a new system based on OpenLayers 6, which requires a complete re-code of the JavaScript. OS provide example code for various scenarios, from which I have re-written my code to run perfectly on a Windows PC with a mouse, but it fails on an iPad or an iPhone, particularly if the iPad is several years old.
Using a current Apple device, I cannot display a scalable vector graphic on an OS map, either at hard-coded co-ordinates or in response to a tap on the map. A tap will, however, bring up a pop-up at the tapped point, and a swipe on the map will pan it. Using an iPad several years old, in addition to the above problems, a two-finger pinch will not zoom the map and a swipe will not pan it; the only things that work are the + and - buttons on the map.
I have read that I may need to include a Controls section within my Map command, along the lines of:
controls: [
    new OpenLayers.Control.TouchNavigation({
        dragPanOptions: {
            enableKinetic: true
        }
    }),
    new OpenLayers.Control.Zoom()
],
but if I include this code, my JavaScript crashes, and the JavaScript error console in MS Edge gives the error message 'OpenLayers is not defined'.
Ordnance Survey tell me that my problem 'is an issue relating to the mapping library, OpenLayers, rather than our APIs', so my question is: how do I get code based on the OS examples to run on mobile devices of any age, in the same way my existing code based on OpenLayers 2 has run for several years?
I will gladly supply a copy of my code which works on a Windows PC, but not on an Apple device, on request.
OpenLayers needs polyfills on older devices/browsers. Try adding
<script src="https://unpkg.com/elm-pep"></script>
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=fetch,requestAnimationFrame,Element.prototype.classList,URL"></script>
before the ol.js script. The default controls should be sufficient (the code you have used is OpenLayers 2 format and will not work with OpenLayers 6).
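For comparison, here is a minimal sketch of what the marker-on-a-map setup looks like in OpenLayers 6, written in TypeScript against the ol npm package rather than the script-tag build. The tile source and coordinates below are placeholders (plain OSM and an arbitrary point in SW England), not the OS Data Hub layer; the point is that no controls array is needed at all, because new Map() adds the default controls and touch interactions (drag pan, pinch zoom) automatically:

// Minimal OpenLayers 6 map with a marker (sketch; OSM is a stand-in for the OS Data Hub layer).
import Map from 'ol/Map';
import View from 'ol/View';
import TileLayer from 'ol/layer/Tile';
import VectorLayer from 'ol/layer/Vector';
import OSM from 'ol/source/OSM';
import VectorSource from 'ol/source/Vector';
import Feature from 'ol/Feature';
import Point from 'ol/geom/Point';
import { fromLonLat } from 'ol/proj';
import { Style, Circle as CircleStyle, Fill } from 'ol/style';

// Marker at a placeholder location in SW England.
const marker = new Feature({ geometry: new Point(fromLonLat([-3.5, 50.7])) });
marker.setStyle(
    new Style({ image: new CircleStyle({ radius: 6, fill: new Fill({ color: 'red' }) }) })
);

const map = new Map({
    target: 'map', // id of the <div> that holds the map
    layers: [
        new TileLayer({ source: new OSM() }), // placeholder base map
        new VectorLayer({ source: new VectorSource({ features: [marker] }) }),
    ],
    // No `controls` or `interactions` option: the OpenLayers 6 defaults
    // (zoom buttons, drag pan, pinch zoom) are attached automatically.
    view: new View({ center: fromLonLat([-3.5, 50.7]), zoom: 12 }),
});

// Move the marker to wherever the user taps or clicks.
map.on('singleclick', (evt) => {
    marker.setGeometry(new Point(evt.coordinate));
});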

Strange behavior from VideoView on AndroidTV

Be ready for a really strange and unusual behavior! Basically, I have an application that plays different videos in a slideshow format. Each video/image has a duration, and once it's finished the next media file is queued up and played. When I run the application on any mobile phone or Android TV box (a Chinese TV box), it works fine. However, and here's the unusual part: when I run it on an official Android TV, the content plays correctly the first time; when it's finished, it loops back to the first media file (all good so far), but if the content has a video and it's located before the last media file, I only see a black screen for that file.
Here's an example just to explain it a bit more.
Files:
- Vid-1 (Duration 5sec)
- Vid-2 (Duration 10sec)
- Vid-3 (Duration 7sec)
- Img-4 (Duration 5sec)
The first time I start the app, all the media listed above play normally. After that, once it loops back to Vid-1, all media play normally except Vid-3: I get a black screen on Vid-3.
Has anyone ever faced a similar issue? Sadly, I have no errors or logs to share, as there is nothing there.

How to implement the seekTo functionality in Ionic 4?

How can I track how far the user has watched a video, in order to resume the video from that position when the user returns?
Like YouTube, which knows how far each video has been played.
I have checked AndroidExoPlayer, but the only supported platform is Android.
I need it on both Android and iOS.
Link for AndroidExoPlayer in Ionic 4:
https://ionicframework.com/docs/native/android-exoplayer/
Instance members for AndroidExoPlayer:
https://ionicframework.com/docs/v3/native/android-exoplayer/
I have also checked the Media plugin in Ionic 4, but it is only for audio:
https://ionicframework.com/docs/native/media
Videogular might be a good option.
Here are a couple of tips:
vgUpdateTime($currentTime, $duration): called when the progress time has been updated. Returns the current time and the total duration.
seekTime(value, byPercent): seeks to a specified time position. The value parameter must be an integer representing the target position, in seconds or as a percentage. By default seekTime seeks by seconds; if you want to seek by percentage, just set byPercent to true.
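Whichever player you end up using, the resume logic itself is the same on both platforms: persist the playback position while the video plays, then seek to it when the same video is opened again. Below is a minimal TypeScript sketch using a plain HTML5 video element and localStorage; the function names, storage key scheme, and element lookup are illustrative assumptions, not part of Videogular or any Ionic plugin.

// Sketch: remember the playback position per video and resume from it.
const POSITION_KEY_PREFIX = 'video-position:'; // assumed key scheme

function savePosition(videoId: string, seconds: number): void {
    localStorage.setItem(POSITION_KEY_PREFIX + videoId, String(seconds));
}

function loadPosition(videoId: string): number {
    const stored = localStorage.getItem(POSITION_KEY_PREFIX + videoId);
    return stored ? Number(stored) : 0;
}

export function attachResume(video: HTMLVideoElement, videoId: string): void {
    // When metadata is ready, jump to the last saved position (the "seekTo" step).
    video.addEventListener('loadedmetadata', () => {
        const resumeAt = loadPosition(videoId);
        if (resumeAt > 0 && resumeAt < video.duration) {
            video.currentTime = resumeAt;
        }
    });

    // Persist progress as the video plays; timeupdate fires several times per second.
    video.addEventListener('timeupdate', () => {
        savePosition(videoId, video.currentTime);
    });
}

// Usage, e.g. in an Ionic page once the view is ready (element id is hypothetical):
// attachResume(document.querySelector('video')!, 'lesson-42');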

Pepper robot - Date_Dance.crg

http://doc.aldebaran.com/2-5/getting_started/samples/sample_dance1.html
Hello,
I am a newbie at Pepper robot programming. As a programming exercise, I imported a project from Date_Dance.crg, which I found at the above link, and followed the directions described in the “Try it!” section.
I can run the app by pressing the green right-arrow button or by selecting the app from the App Launcher on the tablet; however, the app cannot be triggered by any of the trigger sentences: “date dance”, “Pepper’s date dance”, “dance I’m fresh you’re pretty”, “Fresh and pretty dance”. I also tried using the application title by saying “start Date Dance”, but that doesn’t work either.
Could anyone guess or explain why it fails to respond to my voice commands?
Some possibilities:
Autonomous Life isn't activated on Pepper
Autonomous Life is activated, but the robot isn't listening (blue eyes), maybe because it hasn't seen you
Pepper is just in the wrong language
Or you don't have the special dialog applauncher package that is required for this to work (it's usually installed on robots along with other apps like the tablet app launcher, but it may be missing on yours?). Maybe do an app update, or ask whoever provided you with the robot (there can be several different configurations depending on the robot's use case, though I believe most of them have the dialog applauncher).

English and Arabic Language Switching in Single BB 10 Cascades

I have developed a BB 10 application where I need to provide English and Arabic language switching. The app is currently working well in English; when the user selects the Arabic language, the whole application needs to be converted into Arabic. My app contains more than 18 QMLs,
and each QML has more than 10 containers.
If the language is English, we set the containers' orientation to LeftToRight; if it is Arabic, to RightToLeft, along with the label alignments, etc. I have tried changing the orientation based on the user's selection, but I don't think that is a good way to do it.
Now my app should switch between English and Arabic. What is the best way to do it?
Please guide me.
Thanks!
I think you are going the right way.
You can save the current language in persistent storage and use that value in all QMLs.
Get the current language from persistent storage in every QML and apply the orientation according to the language.
Some days ago I gave you this answer; just follow it for all QMLs.