Get FirebaseAuthPlatform for Flutter web RecaptchaVerifier - firebase-authentication

SITUATION:
I'm implementing Firebase phone sign-in on Flutter web. As part of this, I want to customize the reCAPTCHA invoked by the signInWithPhoneNumber function, as outlined in the Firebase documentation.
final ConfirmationResult confirmationResult = await auth.signInWithPhoneNumber(
  phoneNumber, verifier_should_go_here
);
COMPLICATION:
I am trying to implement the RecaptchaVerifier, but its constructor has a required parameter of type FirebaseAuthPlatform, and I can't figure out how to obtain this value in my app.
QUESTION:
How can I create a RecaptchaVerifier to pass to the signInWithPhoneNumber function on Flutter web?

3 easy steps:
1. Add the firebase_auth_platform_interface dependency to your pubspec.yaml file with flutter pub add firebase_auth_platform_interface.
2. Import the package: import 'package:firebase_auth_platform_interface/firebase_auth_platform_interface.dart' show FirebaseAuthPlatform;
3. Inside the constructor for RecaptchaVerifier, use FirebaseAuthPlatform.instance.
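Putting the three steps together, here is a minimal sketch (the container id and the RecaptchaVerifier parameter names reflect recent firebase_auth versions; double-check them against the version you are on):

import 'package:firebase_auth/firebase_auth.dart';
import 'package:firebase_auth_platform_interface/firebase_auth_platform_interface.dart'
    show FirebaseAuthPlatform;

Future<ConfirmationResult> signInWithPhone(String phoneNumber) async {
  final auth = FirebaseAuth.instance;

  // The required parameter is satisfied by the platform interface singleton.
  final verifier = RecaptchaVerifier(
    auth: FirebaseAuthPlatform.instance,
    container: 'recaptcha-container', // id of a DOM element in web/index.html (assumed)
    size: RecaptchaVerifierSize.compact,
    theme: RecaptchaVerifierTheme.light,
  );

  // Pass the verifier where the placeholder was in the snippet above.
  return auth.signInWithPhoneNumber(phoneNumber, verifier);
}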

Related

Can you specify your own attribute used to name a tap/click action in the DataDog React Native SDK?

The DataDog docs state that:
Starting with version 2.16.0, with the actionNameAttribute initialization parameter, you can specify your own attribute that is used to name the action.
Thus you can have something like this in the DD configuration constructor:
DD_RUM.init({
  ...
  trackInteractions: true,
  actionNameAttribute: 'itemID',
  ...
})
Now both attributes itemID and data-dd-action-name can be used to name tap/click actions; data-dd-action-name is favored when both attributes are present on an element.
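For example, with the configuration above, an element could carry either attribute; when both are present, data-dd-action-name wins:

<!-- named "checkout-button" via the custom itemID attribute -->
<button itemID="checkout-button">Checkout</button>

<!-- data-dd-action-name takes precedence, so this one is named "Checkout click" -->
<button itemID="checkout-button" data-dd-action-name="Checkout click">Checkout</button>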
However that seems to be the case only for the browser SDK, and not the React Native SDK. Is this really the case?
data-dd-action-name is currently the only way to specify the target name of an action using the RUM React Native SDK.
Depending on your use case, you may also be able to use the DdRum.addAction() and DdRum.startAction() methods for manual action tracking and provide the desired target name. There is an example of calling these methods in the Manual Instrumentation section of the RUM React Native SDK documentation.
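A minimal sketch of that manual route, assuming the @datadog/mobile-react-native package and its RumActionType enum (check the exact signature against the SDK version you use):

import { DdRum, RumActionType } from '@datadog/mobile-react-native';

// In a press handler: report a tap with the exact name you want,
// plus any extra context attributes.
function onItemPressed(itemID) {
  DdRum.addAction(RumActionType.TAP, `select:${itemID}`, { itemID }, Date.now());
}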

I need to create a multi-language chatbot in React Native with Dialogflow

I need to add multi-language support to my chatbot.
Here is some code and a link to the tutorial I followed:
https://blog.jscrambler.com/build-a-chatbot-with-dialogflow-and-react-native/
There, the language is specified as Dialogflow_V2.LANG_ENGLISH_US, but I need multiple languages.
componentDidMount() {
  Dialogflow_V2.setConfiguration(
    dialogflowConfig.client_email,
    dialogflowConfig.private_key,
    Dialogflow_V2.LANG_ENGLISH_US,
    dialogflowConfig.project_id
  );
}
Use react-native-localize to add support for multiple languages (see the sketch after the list of constants below).
You can use react-native-localize with I18n-js (but also with react-intl, react-i18next, etc. The choice is yours!)
⚠️ Deprecated:
We can use an internationalization module named react-native-i18n to add many languages in our React Native projects.
Install the module and link it with your project:
npm i react-native-i18n --save
For more details, please go through How to add localization (i18n, g11n) and RTL support to a React Native project.
Then set the language in your Dialogflow configuration; this is where you choose which language your chatbot supports:
componentDidMount() {
  Dialogflow_V2.setConfiguration(
    dialogflowConfig.client_email,
    dialogflowConfig.private_key,
    Dialogflow_V2.LANG_ENGLISH_US, // <-- set the language here
    dialogflowConfig.project_id
  );
}
The available language constants are:
LANG_CHINESE_CHINA
LANG_CHINESE_HONGKONG
LANG_CHINESE_TAIWAN
LANG_DUTCH
LANG_ENGLISH
LANG_ENGLISH_GB
LANG_ENGLISH_US
LANG_FRENCH
LANG_GERMAN
LANG_ITALIAN
LANG_JAPANESE
LANG_KOREAN
LANG_PORTUGUESE
LANG_PORTUGUESE_BRAZIL
LANG_RUSSIAN
LANG_SPANISH
LANG_UKRAINIAN
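Putting the two together, here is a sketch that picks the Dialogflow constant from the device locale via react-native-localize (the mapping table is illustrative; extend it with whichever constants from the list above you need):

import * as RNLocalize from 'react-native-localize';

// Map device language codes to Dialogflow language constants (partial, illustrative).
const LANGUAGE_BY_CODE = {
  en: Dialogflow_V2.LANG_ENGLISH_US,
  fr: Dialogflow_V2.LANG_FRENCH,
  de: Dialogflow_V2.LANG_GERMAN,
  pt: Dialogflow_V2.LANG_PORTUGUESE_BRAZIL,
};

componentDidMount() {
  const [locale] = RNLocalize.getLocales();
  const language = LANGUAGE_BY_CODE[locale.languageCode] || Dialogflow_V2.LANG_ENGLISH_US;

  Dialogflow_V2.setConfiguration(
    dialogflowConfig.client_email,
    dialogflowConfig.private_key,
    language,
    dialogflowConfig.project_id
  );
}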

What is the best way to integrate a react-native application into another react-native application?

I've implemented a React Native application and now I want to enhance it by adding another, separate React Native application alongside it.
What I want to achieve is to keep the two applications separate, so that I can continue to develop them as two distinct apps and avoid rewriting them completely as a single application.
Both applications use react-redux to handle their state. The first, brutal approach I tried was to wrap one of the two applications into an npm package and add it as a dependency of the other. Then I added a tab to the main application which, when clicked, navigates to the second application. This approach seems to work, but I don't think it is the best way to do it.
Do you think there could be any problems with doing this? Is there a more intelligent and elegant way to do it? I know it is a somewhat generic question, so I would also accept an article/link on the topic.
You can create a git tag of your second application and add it as a dependency in your first application.
You can also add it as a git submodule.
P.S. I prefer the first one.
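For the first option, the entry in the host app's package.json could look like this (the repository URL and tag are placeholders):

{
  "dependencies": {
    "second-app": "git+https://github.com/your-org/second-app.git#v1.0.0"
  }
}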
I think the best way to do it would be Linking, as described here: Basic usage. You can easily pass the needed parameters to the other app you want to open and read them as the app opens. Check this simple example:
Caller app:
Linking.openURL('calleeApp://app?param1=test&param2=test2')
Callee app:
componentDidMount() {
  Linking.getInitialURL().then((url) => {
    if (url) {
      console.log('Initial url is: ' + url);
    }
  }).catch(err => console.error('An error occurred', err));
}
Do not forget to import it first:
import { Linking } from "react-native";
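If you also need the parameters on the callee side, here is a small sketch that pulls them out of the initial URL (hand-rolled parsing, since older React Native environments may lack a URL polyfill):

function getParams(url) {
  // 'calleeApp://app?param1=test&param2=test2' -> { param1: 'test', param2: 'test2' }
  const query = url.split('?')[1] || '';
  return query.split('&').reduce((params, pair) => {
    const [key, value] = pair.split('=');
    if (key) params[key] = decodeURIComponent(value || '');
    return params;
  }, {});
}

Linking.getInitialURL().then((url) => {
  if (url) {
    const { param1, param2 } = getParams(url);
    console.log(param1, param2);
  }
});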
Let me know if it worked for you!

Is there a way I can detect text from an Image using Expo React Native?

I am working with Expo and React Native, and I want to be able to detect text from images. Is there a package I can work with to achieve this?
I am using the Expo camera module to snap the picture and supply the URI to the text detector.
I have tried using react-native-text-detector, but I am getting the error that the function detectFromUri is not defined. I have also tried tesseract.js, but it fails on import with "unable to resolve variable location".
await this.camera.takePictureAsync(options).then(async photo => {
  // The callback must be async, otherwise the await below is a syntax error.
  photo.exif.Orientation = 1;
  //console.log(photo.uri);
  const visionResp = await RNTextDetector.detectFromUri(photo.uri);
  if (!(visionResp && visionResp.length > 0)) {
    throw "UNMATCHED";
  }
  console.log(visionResp);
});
I am expecting visionResp to log the results returned from the detection, but instead I get: undefined is not an object (evaluating '_reactNativeTextDetector.default.detectFromUri')
Is your project created with expo-cli?
If yes: Expo does not currently support OCR. There is a feature request on canny.io, but you can't know for sure when it will become available.
Your only choice is to use an OCR service like this one. Internet connectivity will be required.
If not (and the project is created with react-native-cli), you should be able to use react-native-text-detector successfully. Just make sure you link the package correctly. Docs here.
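If you stay on the Expo-managed side and use a hosted OCR service, the round trip is just an HTTP request. A sketch with a placeholder endpoint (the URL, field names, and response shape depend entirely on the provider you pick):

// Hypothetical endpoint; substitute your OCR provider's real API.
const OCR_ENDPOINT = 'https://example.com/ocr';

async function detectText(photoUri) {
  const body = new FormData();
  // React Native's FormData accepts a { uri, name, type } file descriptor.
  body.append('image', { uri: photoUri, name: 'photo.jpg', type: 'image/jpeg' });

  const response = await fetch(OCR_ENDPOINT, { method: 'POST', body });
  if (!response.ok) throw new Error(`OCR request failed: ${response.status}`);

  const result = await response.json();
  return result.text; // response shape depends on the provider
}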

Add element to Firestore array

Currently, I have an empty array in the Firestore dashboard, and I'm trying to add an item to it. I've followed this, but with no result. I don't want to store and rewrite the whole element.
My Gradle file contains:
implementation 'com.google.firebase:firebase-firestore:17.0.4'
and my code:
import com.google.firebase.firestore.FieldValue
...
val documentReference = firestore.collection("events")
    .document(event.firebaseUserUid + "-" + event.title)
documentReference
    .update("participants", FieldValue.arrayUnion(FirebaseAuth.getInstance().currentUser!!.uid))
But FieldValue.arrayUnion doesn't exist.
I don't think support for that has actually been added to the Android API, despite its documentation as such on the page you linked.
It's missing from the Android API documentation, but is present in the Web API documentation as well as the Web changelog.
The newest version of Firestore now supports it:
implementation 'com.google.firebase:firebase-firestore:17.1.0'
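After bumping the dependency, the original call works as written. A slightly fuller sketch with completion listeners (Event is the asker's own model class; the collection and document id mirror the question):

import com.google.firebase.auth.FirebaseAuth
import com.google.firebase.firestore.FieldValue
import com.google.firebase.firestore.FirebaseFirestore

fun addParticipant(event: Event) {
    val uid = FirebaseAuth.getInstance().currentUser!!.uid
    FirebaseFirestore.getInstance()
        .collection("events")
        .document(event.firebaseUserUid + "-" + event.title)
        // arrayUnion appends the value only if it is not already in the array.
        .update("participants", FieldValue.arrayUnion(uid))
        .addOnSuccessListener { /* participant added */ }
        .addOnFailureListener { e -> /* log or surface the error */ }
}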