I use React Native with Expo (both latest) and I am trying to use speech-to-text.
For the moment I have found two solutions:
react-native-voice, which requires ejecting the app, and I do not want to do that.
Calling the Google speech API, which is not free.
Neither of these solutions is ideal in my case.
The thing is that I can dictate into an input with the speech-to-text integrated into the native keyboard. But I would like to be able to trigger it at the press of a button, without having to open the keyboard and tap the microphone.
Is there a way to do this? The React Native keyboard docs don't mention it.
Thanks
I am currently evaluating the possibility of emitting keypresses (like left/right arrow or page up/page down) programmatically to the system with React Native on Windows or macOS.
The use case would be a remote clicker for PowerPoint presentations.
All I find online is the possibility of sending keypress events to the RN app, not to the system.
Is that something that is possible at all, or would I need to build this natively, e.g. in Java?
Thanks for your ideas in advance!
I'm trying to use Siri Shortcuts in a React Native app I am developing. I am using react-native-siri-shortcuts, and it allows me to donate and suggest shortcuts. If I then go to the Shortcuts app and add one, I am able to use Siri and the shortcut. I can even use the shortcut to print different things to the phone screen or put data into my app's database.
I am running into an issue with getting Siri to speak. If I want Siri to fetch data and then say a string to the user, can I do that in React Native, or do I need to use Swift for this?
I want to create functionality in a React Native app such that pressing a particular button takes the user to the "Open with" window, and that window should show the list of all installed applications on the phone that can play audio files. How can I get such a list?
On an Android system, that "Open with" screen pops up when:
The user has more than one application that can play that specific audio format (because not every app can play every audio format).
The user has not set up a default app to open the audio format.
So, you should just use a library like rn-fetch-blob to open the file for you, like so:
RNFetchBlob.android.actionViewIntent(fileLocation, mimeType)
And Android will take care of picking the right app for you.
In case this answer does not satisfy you, you can get a list of apps using react-native-android-installed-apps-unblocking, but there's no information about which ones can play an audio file, and iOS may have restrictions on what can be done, so your best option is to let the system do that for you!
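If you don't already have the MIME type at hand, you could derive it from the file extension before calling actionViewIntent. A minimal sketch, assuming a plain JS helper (the mapping below is illustrative, not exhaustive):

```javascript
// Hypothetical helper: map a file's extension to an audio MIME type
// before handing the path to RNFetchBlob.android.actionViewIntent.
const AUDIO_MIME_TYPES = {
  mp3: 'audio/mpeg',
  wav: 'audio/wav',
  ogg: 'audio/ogg',
  m4a: 'audio/mp4',
  flac: 'audio/flac',
};

function audioMimeType(fileLocation) {
  // Take the part after the last dot, case-insensitively.
  const ext = fileLocation.split('.').pop().toLowerCase();
  // Fall back to a wildcard so Android still offers audio players.
  return AUDIO_MIME_TYPES[ext] || 'audio/*';
}

// Usage inside a React Native app with rn-fetch-blob installed:
// RNFetchBlob.android.actionViewIntent(fileLocation, audioMimeType(fileLocation));
```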
First, thank you for taking the time to read and help.
We have a third-party barcode scanner on an Android device. The device reads the barcode and sends the data back as keyboard input. I have been looking for a way to capture all text input, but I haven't been able to find a global text-input listener.
Does anyone know a way I can do this without forcing the user to tap into the input box (ideally I would just capture the input and never present it to the user) before scanning the barcode?
Thank you!
What you want is to capture the raw keyboard events. It's the same as getting input from an external USB/Bluetooth keyboard. You are correct that this won't work without a native module to capture those system-level events.
This react native library can do the trick:
https://github.com/kevinejohn/react-native-keyevent
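On top of that library, the remaining work is plain JS: accumulate characters from the key-down callback and treat Enter as end-of-scan, which is how most keyboard-wedge scanners terminate a barcode. A sketch, assuming the callback hands you the pressed character (the exact event payload depends on the library version, so check its README):

```javascript
// Sketch: buffer key presses into a barcode string. Printable single
// characters are accumulated; 'Enter' flushes the buffer to the callback.
class BarcodeBuffer {
  constructor(onScan) {
    this.onScan = onScan; // called with the full barcode string
    this.chars = [];
  }

  // Feed this from the key-down listener, e.g.
  // KeyEvent.onKeyDownListener(e => buffer.push(e.pressedKey));
  push(pressedKey) {
    if (pressedKey === 'Enter' || pressedKey === '\n') {
      if (this.chars.length > 0) {
        this.onScan(this.chars.join(''));
        this.chars = [];
      }
    } else if (typeof pressedKey === 'string' && pressedKey.length === 1) {
      this.chars.push(pressedKey);
    }
  }
}
```

This keeps the scanned value entirely out of any visible TextInput, which matches the "capture and never present" requirement.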
I am trying to automate an Android application with Appium. I have to perform a click operation on a tooltip, but Appium does not identify the tooltip. I have attached a screenshot below in which I have to click on "Margaritaville-Grand Turk", but I am not able to perform this operation. How can we perform an action on a tooltip?
I'm new to Appium and mobile automation, but I can guess that the tooltip cannot be found in the activity structure (you can check with Appium Inspector or UIAutomatorViewer); the same problem occurs with toast notifications and AutocompleteTextView. I solved this with an OCR image-recognition engine.
Here you can find my implementation in Ruby: gist
I use two OCR libs because of limitations in both:
'rtesseract' can't find text coordinates
'tesseract' has problems with text recognition
The idea is simple:
get the required screen state (i.e. the tooltip is shown)
take a screenshot, or a few
process these screenshots and look for the required text
get the text coordinates and click on them.
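The last two steps can be sketched in plain JS. This assumes a hypothetical OCR result shape of { text, x, y, width, height } per recognized word (adapt the field names to whatever your OCR library actually emits); the returned point is the center of the match's bounding box, which is what you would pass to a tap action:

```javascript
// Given OCR output (array of words with bounding boxes), find the word
// matching `target` and return the center point of its box to tap on,
// or null if the text was not recognized on this screenshot.
function findTapPoint(ocrWords, target) {
  const word = ocrWords.find(
    w => w.text.toLowerCase().includes(target.toLowerCase())
  );
  if (!word) return null;
  return {
    x: word.x + word.width / 2,
    y: word.y + word.height / 2,
  };
}
```

A null result is your cue to take another screenshot and retry, since tooltips can disappear between the trigger and the capture.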
Hope it helps