How to transform an image from the Cloudinary cloud using Meteor and React? - cloudinary

I am using this code:
import cloudinary from 'cloudinary';
{cloudinary.Image(
'http://res.cloudinary.com/dtkptr7rt/image/upload/v1512735944/kb4gvy7d4jskexkstcti.jpg'
)}
Which raises this error:
Cannot read property 'Image' of undefined

Please see Cloudinary's React package and follow the instructions there - https://github.com/cloudinary/cloudinary-react
Regardless, cloudinary.Image should be given a public ID (i.e., an image ID), not a URL.
Also, the URL is not accessible.
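With the cloudinary-react package, the Image component takes a cloud name and a public ID rather than a full URL. If all you have is the delivery URL, the public ID is the last path segment without the version prefix and file extension; a minimal sketch of extracting it (using the URL from the question):

```javascript
// Extract a Cloudinary public ID from a delivery URL (sketch).
function publicIdFromUrl(url) {
  const afterUpload = url.split('/upload/')[1];         // "v1512735944/kb4gvy7d4jskexkstcti.jpg"
  const noVersion = afterUpload.replace(/^v\d+\//, ''); // drop the "v<number>/" version prefix
  return noVersion.replace(/\.[^.]+$/, '');             // drop the file extension
}

const url = 'http://res.cloudinary.com/dtkptr7rt/image/upload/v1512735944/kb4gvy7d4jskexkstcti.jpg';
console.log(publicIdFromUrl(url)); // → "kb4gvy7d4jskexkstcti"

// With cloudinary-react you would then render:
// <Image cloudName="dtkptr7rt" publicId="kb4gvy7d4jskexkstcti" width="300" crop="scale" />
```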


Get FirebaseAuthPlatform for Flutter web RecaptchaVerifier

SITUATION:
I'm in the process of implementing Firebase phone sign-in on Flutter web. In doing so, I want to customize the reCAPTCHA that is invoked by the signInWithPhoneNumber function, as outlined in the Firebase documentation.
final ConfirmationResult confirmationResult = await auth.signInWithPhoneNumber(
  phoneNumber, verifier_should_go_here
);
COMPLICATION:
I am trying to implement the RecaptchaVerifier, but it has a required parameter called FirebaseAuthPlatform, and I can't figure out how to generate this parameter for my app.
QUESTION:
How can I create a RecaptchaVerifier to pass to the signInWithPhoneNumber function on flutter web?
3 easy steps:
1. Add the firebase_auth_platform_interface dependency to your pubspec.yaml file with flutter pub add firebase_auth_platform_interface
2. Import the package: import 'package:firebase_auth_platform_interface/firebase_auth_platform_interface.dart' show FirebaseAuthPlatform;
3. Inside the constructor for RecaptchaVerifier, use FirebaseAuthPlatform.instance
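Putting the three steps together, a minimal sketch in Dart (the named auth parameter matches recent firebase_auth versions; check the API of the version you have installed):

```dart
import 'package:firebase_auth/firebase_auth.dart';
import 'package:firebase_auth_platform_interface/firebase_auth_platform_interface.dart'
    show FirebaseAuthPlatform;

Future<ConfirmationResult> startPhoneSignIn(
    FirebaseAuth auth, String phoneNumber) async {
  // FirebaseAuthPlatform.instance supplies the required platform handle.
  final verifier = RecaptchaVerifier(
    auth: FirebaseAuthPlatform.instance,
  );
  return auth.signInWithPhoneNumber(phoneNumber, verifier);
}
```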

How to show Images with source as the file path stored in Database

I saved an image in an SQLite database on the smartphone as TEXT, in a column named Path.
I tried saving it in two ways:
require('../assets/icon.png')
'../assets/icon.png'
For the first method, when calling it on an Image component I use it like this:
<Image source={item.Path} />
Its value is, for example, 10.0 when I inspect it with console.log.
And I get the following error:
Warning: Failed prop type: Invalid prop `source` supplied to `Image`.
But after adding another image to the database, all the images load. It fails only when I first start the app.
For the second method:
<Image source={require(`${item.Path}`)} />
And I get the following error:
Invalid call at line 130: require("" + item.Path)
I have searched but can't find an optimal way to store an image path in the database and later use it in an Image component.
From what I understand, you're trying to use local paths of images stored in your database inside your React Native project. If I'm right, that can't work: you can't use the local path of a distant directory, because the API serves only the path, not the file itself.
What I would do is use an Amazon S3 bucket to upload static files (images, in your case) and save the URLs of the uploaded images in your database. Then you reference the images in your code like this:
source={{ uri: "http://file.jpg" }}
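For example, once the database rows store remote URLs instead of local paths, rendering is straightforward. A sketch (the column name Path comes from the question; the S3 URL is a placeholder):

```javascript
// Map a DB row holding a remote URL to an <Image> source prop (sketch).
function toImageSource(row) {
  return { uri: row.Path }; // "Path" is the column name from the question
}

const row = { Path: 'https://my-bucket.s3.amazonaws.com/icon.png' };
console.log(toImageSource(row)); // → { uri: 'https://my-bucket.s3.amazonaws.com/icon.png' }

// In the component: <Image source={toImageSource(item)} />
```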

React Kepler Mapbox API key

I am new to React and have used an example to put together a map that shows data using Kepler. I am using the default token in my .env.local file. I created that file in my project folder, and it contains only the following line:
REACT_APP_MAPBOX_API = pk.someletters.hello
// Keep in mind this token is the default token that I got when I created my account with mapbox. Is it ok to use the default token?
The way I am using it in my app.js is:
<KeplerGl
  id="Test"
  mapboxApiAccessToken={process.env.REACT_APP_MAPBOX_API}
  width={window.innerWidth}
  height={window.innerHeight}
/>
The page renders but with this error:
index.js:1 Warning: Failed prop type: The prop `mapboxApiAccessToken` is marked as required in `MapContainer`, but its value is `undefined`.
It's clearly not getting the Mapbox token from the .env.local file.
What am I not doing correctly here?
There is also a warning that says:
kepler-gl.js:179 Mapbox Token not valid. [Click here](https://github.com/keplergl/kepler.gl#mapboxapiaccesstoken-string-required)
When I click on that, it just takes me back to creating a Mapbox account and getting an access token. Do I need to create the token with any special settings, or is the default fine?
Anyone?
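As a general note on Create React App env files: variables must be prefixed with REACT_APP_, they are read only at build time, and the dev server must be restarted after editing .env.local. A typical file looks like this (the token value is a placeholder, not a real key):

```shell
# .env.local — read by Create React App at startup; restart `npm start` after edits
REACT_APP_MAPBOX_API=pk.your_mapbox_token_here
```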

React Native : Alternate way to get the Image require to work when array of image paths got dynamically from fetchapi

I tried the code below and am aware that require always needs a fixed path in quotes (''), which works well when the static image path is known. There should be an alternate way to map images received from a fetch API.
source={require('./assets/images/'+{item.image_path})}
An example fetch API JSON response is: [{user_name: 'DynamicUser1', image_path: 'DynamicUser1.jpg'},{user_name: 'TestUser1', image_path: 'TestUser1.jpg'}... ]
I need an alternate way to render the images, with the dynamic image path picked correctly from the fetch API response.
userData = this.state.resultsUserData.map((item) => {
  return (
    <View key={item.user_name} style={styles.container}>
      <Text style={styles.title}>
        {item.user_name}
      </Text>
      <TouchableOpacity goBack={() => this.backAction} onPress={() => this.homepage()}>
        <Image
          key={item.image_path}
          source={require('./assets/images/' + item.image_path)}
        />
      </TouchableOpacity>
    </View>
  );
});
require doesn't work dynamically like that. Since you already have the images in your assets, you should write the require calls for each of them in advance and use the appropriate one when needed. You can read more about this limitation here.
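The pre-declared require map described above might look like the following sketch. The filenames are taken from the sample API response in the question, and this only works inside a React Native project where those asset files actually exist:

```javascript
// Metro resolves require() paths statically at build time,
// so every bundled image must be listed with a literal path.
const images = {
  'DynamicUser1.jpg': require('./assets/images/DynamicUser1.jpg'),
  'TestUser1.jpg': require('./assets/images/TestUser1.jpg'),
};

// When rendering the fetched list:
// <Image source={images[item.image_path]} />
```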

Is there a way I can detect text from an Image using Expo React Native?

I am working with Expo and React Native, and I want to be able to detect text in images. Is there a package I can use to achieve this?
I am using Expo camera module to snap the picture and supply the URI to the text detector
I have tried using react-native-text-detector, but I get an error that the function detectFromUri is not defined. I have also tried Tesseract.js, but it fails on import with "unable to resolve variable location".
// await inside a non-async .then callback is invalid, so awaiting the promise directly:
const photo = await this.camera.takePictureAsync(options);
photo.exif.Orientation = 1;
// console.log(photo.uri);
const visionResp = await RNTextDetector.detectFromUri(photo.uri);
if (!(visionResp && visionResp.length > 0)) {
  throw "UNMATCHED";
}
console.log(visionResp);
I am expecting visionResp to log the results returned from the detection, but instead I get: undefined is not an object (evaluating '_reactNativeTextDetector.default.detectFromUri')
Is your project created with expo-cli?
If yes, Expo does not currently support OCR. There is a feature request on canny.io, but you can't know for sure when it will become available.
Your only option is to use an OCR service like this one. Internet connectivity will be required.
If not (i.e., the project was created with react-native-cli), you should be able to use react-native-text-detector successfully. Just make sure you link the package correctly. Docs here.
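In the expo-cli case, a call to a hosted OCR service might look like the following sketch. The endpoint URL, form fields, and response shape are all hypothetical, not a real API:

```javascript
// Hypothetical OCR-over-HTTP call (sketch — endpoint and fields are made up).
async function detectTextViaService(photoUri) {
  const body = new FormData();
  body.append('file', { uri: photoUri, name: 'photo.jpg', type: 'image/jpeg' });
  const res = await fetch('https://ocr.example.com/detect', { method: 'POST', body });
  const json = await res.json();
  return json.text; // assumed response field
}
```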