I used <Location.Address> in my Expo React Native app after seeing it in a YouTube tutorial. It now fails with an "unexpected token" error, and the editor reports that the property "Address" does not exist in the module. Can anyone help me solve this?
Difficulties: "unexpected token" and "the property 'Address' does not exist in the module"
Tried: about 10 times, including adding my own API key, but that way I was unable to show all of the location details
Status: unsolved; any help is appreciated
import React, { useState, useEffect } from "react"
import { View, Text, StyleSheet, Image, Dimensions } from "react-native"
import * as Location from "expo-location"

const screenWidth = Dimensions.get("screen").width

export const LandingScreen = () => {
  const [errorMsg, setErrorMsg] = useState("")
  const [address, setAddress] = useState<Location.Address>() // <-- "Address does not exist" is reported here
  const [displayAddress, setDisplayAddress] = useState("Waiting for current location")

  useEffect(() => {
    (async () => {
      let { status } = await Location.requestPermissionsAsync();
      if (status !== "granted") {
        setErrorMsg("Permissions to access location is denied")
      }

      let location: any = await Location.getCurrentPositionAsync()
      location.setGoogleApiKey(apiKeys) // apiKeys is not defined in this file, and the position object has no setGoogleApiKey method
      console.log(status)

      const { coords } = location
      if (coords) {
        const { longitude, latitude } = coords
        let addressResponse: any = await Location.reverseGeocodeAsync({ longitude, latitude })
        for (let item of addressResponse) {
          setAddress(item)
          let currentAddress = `${item.name}, ${item.street}, ${item.postalCode}, ${item.country}`
          setDisplayAddress(currentAddress)
          return;
        }
      } else {
      }
    }) // note: this async arrow function is created but never called
  },)

  return (
    <View style={styles.container}>
      <View style={styles.navigation} />
      <View style={styles.body}>
        <Image source={require("../images/delivery.png")} style={styles.deliveryIcon} />
        <View style={styles.addressContainer}>
          <Text style={styles.addressTitle}>Your deliver address</Text>
        </View>
        <Text style={styles.addressText}>{displayAddress}</Text>
      </View>
      <View style={styles.footer} />
    </View>
  )
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: 'rgba(242,242,242,1)'
  },
  navigation: {
    flex: 2,
  },
  body: {
    flex: 9,
    justifyContent: "center",
    alignItems: "center",
  },
  footer: {
    flex: 1,
  },
  deliveryIcon: {
    width: 120,
    height: 120
  },
  addressContainer: {
    width: screenWidth - 100,
    borderBottomColor: "red",
    borderBottomWidth: 0.5,
    padding: 5,
    marginBottom: 10,
    alignItems: "center"
  },
  addressTitle: {
    fontSize: 24,
    fontWeight: "700",
    color: "#7D7D7D"
  },
  addressText: {
    fontSize: 20,
    fontWeight: "200",
    color: "#4F4F4F"
  }
})
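For reference, below is a minimal fixed sketch of the effect above, assuming a recent expo-location SDK: there the geocoded-address type is exported as LocationGeocodedAddress (older SDKs exposed Location.Address) and the foreground-permission call is requestForegroundPermissionsAsync. Also note that the generic useState<...>() syntax is only valid in a TypeScript file (.tsx); in a plain .js file it produces exactly an "unexpected token" error. The AddressLabel component name is mine, and the exact type and method names are assumptions to check against your installed expo-location version.

import React, { useState, useEffect } from "react"
import { Text } from "react-native"
import * as Location from "expo-location"

export const AddressLabel = () => {
  // Typed state; kept to mirror the question even though only displayAddress is rendered.
  const [address, setAddress] = useState<Location.LocationGeocodedAddress>()
  const [displayAddress, setDisplayAddress] = useState("Waiting for current location")

  useEffect(() => {
    (async () => {
      const { status } = await Location.requestForegroundPermissionsAsync()
      if (status !== "granted") {
        setDisplayAddress("Permission to access location was denied")
        return
      }
      const { coords } = await Location.getCurrentPositionAsync({})
      const { latitude, longitude } = coords
      const results = await Location.reverseGeocodeAsync({ latitude, longitude })
      if (results.length > 0) {
        const item = results[0]
        setAddress(item)
        setDisplayAddress(`${item.name}, ${item.street}, ${item.postalCode}, ${item.country}`)
      }
    })() // the async function must actually be invoked
  }, [])

  return <Text>{displayAddress}</Text>
}

If you are on an older SDK where Location.Address still exists, renaming the file to .tsx may be enough to clear the "unexpected token" part of the problem.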
Related
I want to add a bar at the bottom of the screen, but I don't know how. Pressing an item in the bottom bar should open a page, and I want to put my own images in the bar. Please tell me what to do first; I need your help. I searched hard on Google, but there is no code similar to mine, so I keep getting it wrong or ending up with something weird.
import React, { useRef, useState, useCallback, useEffect } from 'react';
import {
  View,
  BackHandler,
  Platform,
  StyleSheet,
  ActivityIndicator,
  Image,
} from 'react-native';
import { WebView } from 'react-native-webview';

const DELAY_BEFORE_WEBVIEW = 10; // <--- seconds before webview load

export default function App() {
  // ref
  const webView = useRef();
  const [canGoBack, setCanGoBack] = useState(false);

  const handleBack = useCallback(() => {
    if (canGoBack && webView.current) {
      webView.current.goBack();
      return true;
    }
    return false;
  }, [canGoBack]);

  // effects
  useEffect(() => {
    BackHandler.addEventListener('hardwareBackPress', handleBack);
    return () => {
      BackHandler.removeEventListener('hardwareBackPress', handleBack);
    };
  }, [handleBack]);

  useEffect(() => {
    setTimeout(() => {
      setIsLoading(false);
    }, 1000 * DELAY_BEFORE_WEBVIEW);
  }, []);

  // states
  const [isLoading, setIsLoading] = useState(true);

  return (
    <View style={styles.container}>
      <WebView
        ref={webView}
        source={{ uri: 'https://www.talesrunnerbestguild.co.kr/' }}
        style={styles.webView}
        onLoadProgress={(event) => setCanGoBack(event.nativeEvent.canGoBack)}
      />
      {isLoading && <CenterLoader />}
    </View>
  );
}

const CenterLoader = () => (
  <View style={styles.loaderContainer}>
    {/* note: the trailing slash after .png will break this require */}
    <Image
      source={require('/workspace/talesrunner23/assets/js34.png/')}
      style={{ height: 115, width: 90 }}
    />
  </View>
);

const styles = StyleSheet.create({
  container: { flex: 1 },
  loaderContainer: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    position: 'absolute',
    width: '100%',
    height: '100%',
    backgroundColor: 'white', // <-- comment this to show webview while loading
  },
  webView:
    Platform.OS === 'ios'
      ? { marginTop: 30, marginBottom: 40 }
      : { marginTop: 30 },
});
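As far as I can tell, the goal is a bar fixed under the WebView whose image buttons open specific pages. A minimal sketch of that idea follows; the icon file names and the second target URL are placeholders I made up, so substitute your own assets and pages.

import React, { useState } from 'react';
import { View, TouchableOpacity, Image, StyleSheet } from 'react-native';
import { WebView } from 'react-native-webview';

export default function AppWithBottomBar() {
  // Changing the uri in state reloads the WebView at the new page.
  const [uri, setUri] = useState('https://www.talesrunnerbestguild.co.kr/');

  return (
    <View style={barStyles.container}>
      <WebView source={{ uri }} style={{ flex: 1 }} />
      <View style={barStyles.bottomBar}>
        {/* home.png / board.png and the second URL are placeholder assumptions */}
        <TouchableOpacity onPress={() => setUri('https://www.talesrunnerbestguild.co.kr/')}>
          <Image source={require('./assets/home.png')} style={barStyles.icon} />
        </TouchableOpacity>
        <TouchableOpacity onPress={() => setUri('https://www.talesrunnerbestguild.co.kr/guide')}>
          <Image source={require('./assets/board.png')} style={barStyles.icon} />
        </TouchableOpacity>
      </View>
    </View>
  );
}

const barStyles = StyleSheet.create({
  container: { flex: 1 },
  bottomBar: {
    height: 56,
    flexDirection: 'row',
    justifyContent: 'space-around',
    alignItems: 'center',
    backgroundColor: 'white',
  },
  icon: { width: 32, height: 32 },
});

If the bar should drive real in-app navigation rather than just reloading the WebView, a library such as @react-navigation/bottom-tabs is the more conventional route, but the sketch above keeps everything in one screen.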
I have connected my app to the Google Places API and it was working until a certain point (I'm not sure what changed), but the API is no longer receiving my requests. When I check the Google developers console, it shows neither 4XX nor 2XX responses (it did previously).
This is my code:
import React, { useState, useEffect } from "react";
import { View, Text, TextInput, SafeAreaView } from "react-native";
import { GooglePlacesAutocomplete } from 'react-native-google-places-autocomplete';
import { useNavigation } from "@react-navigation/core";
import Styles from "./style";
import Place from "./PlaceRow";

const HomePlace = {
  description: "Home",
  geometry: { location: { lat: 48.8152937, lng: 2.4597668 } }
};
const WorkPlace = {
  description: "Work",
  geometry: { location: { lat: 48.8152837, lng: 2.4597659 } }
};

const DestinationSearch = (props) => {
  const navigation = useNavigation();
  const [fromText, setFromText] = useState("");
  const [destinationText, setDestinationText] = useState("");

  useEffect(() => {
    if (fromText && destinationText) {
      navigation.navigate("SearchResults", {
        fromText,
        destinationText
      })
    }
  }, [fromText, destinationText]);

  return (
    <SafeAreaView>
      <View style={Styles.container}>
        <GooglePlacesAutocomplete
          placeholder="From"
          onPress={(data, details = null) => {
            // 'details' is provided when fetchDetails = true
            setFromText(data, details);
          }}
          currentLocation={true}
          currentLocationLabel='Current location'
          styles={{
            textInput: Styles.TextInput,
            container: {
              position: "absolute",
              top: 0,
              left: 10,
              right: 10,
            },
            listView: {
              position: "absolute",
              top: 100,
            }
          }}
          query={{
            key: 'API CREDENTIALS',
            language: 'en',
          }}
          predefinedPlaces={[HomePlace, WorkPlace]}
          renderRow={(data) => <Place data={data} />}
        />
        <GooglePlacesAutocomplete
          placeholder="Where to?"
          onPress={(data, details = null) => {
            // 'details' is provided when fetchDetails = true
            setDestinationText(data, details);
          }}
          styles={{
            textInput: Styles.TextInput,
            container: {
              position: "absolute",
              top: 55,
              left: 10,
              right: 10,
            }
          }}
          query={{
            key: 'API CREDENTIALS',
            language: 'en',
          }}
          predefinedPlaces={[HomePlace, WorkPlace]}
          renderRow={(data) => <Place data={data} />}
        />
        <View style={Styles.circle} />
        <View style={Styles.line} />
        <View style={Styles.square} />
      </View>
    </SafeAreaView>
  );
};

export default DestinationSearch;
I have tried:
using the testing code provided by react-native-google-places-autocomplete
creating a new API credential
waiting for several days in case the server was down
reinstalling the NPM package
re-enabling the Google Places API
I solved the problem: it turned out my Android Studio emulator had disconnected from its Wi-Fi.
(A dumb reason to get stuck on.)
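A side note on the onPress handlers in the question, unrelated to the connectivity fix: a React state setter takes a single value, so setFromText(data, details) silently ignores details. If the place description is what should end up in state, a sketch along these lines may be closer to the intent (fetchDetails and the field names come from react-native-google-places-autocomplete; the FromField wrapper component is just for illustration):

import React, { useState } from 'react';
import { GooglePlacesAutocomplete } from 'react-native-google-places-autocomplete';

export const FromField = () => {
  const [fromText, setFromText] = useState('');

  return (
    <GooglePlacesAutocomplete
      placeholder="From"
      fetchDetails={true}
      onPress={(data, details = null) => {
        // store only the description string in state
        setFromText(data.description);
        // details?.geometry?.location holds { lat, lng } when fetchDetails is true
      }}
      query={{
        key: 'API CREDENTIALS',
        language: 'en',
      }}
    />
  );
};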
I am using react-native-image-crop-tools for cropping an image, but CropView does not show the image to be cropped; only a blank screen appears. Any solution for this?
import { CropView } from 'react-native-image-crop-tools';

const [uri, setUri] = useState('https://cdn.pixabay.com/photo/2015/04/23/22/00/tree-736885__480.jpg');

{uri !== undefined && <CropView
  sourceUrl={uri}
  style={{ flex: 1 }}
  ref={cropViewRef}
  onImageCrop={(res) => console.warn(res)}
  keepAspectRatio
  aspectRatio={{ width: 16, height: 9 }}
/>}
Try this; I hope it helps.
app.js:
import React, { useState, useRef } from 'react';
import { Button, StyleSheet, View } from 'react-native';
import { CropView } from 'react-native-image-crop-tools';
import { launchImageLibrary } from 'react-native-image-picker';

export default function App() {
  const [uri, setUri] = useState();
  const cropViewRef = useRef();

  let options = {
    mediaType: 'photo',
    quality: 1,
  };

  return (
    <>
      <View style={styles.container}>
        <Button
          title={'Pick Image'}
          onPress={() => {
            launchImageLibrary(options, response => {
              setUri(response.assets[0].uri);
            });
          }}
        />
        {uri !== undefined && <CropView
          sourceUrl={uri}
          style={styles.cropView}
          ref={cropViewRef}
          onImageCrop={(res) => console.log(res)}
          keepAspectRatio
          aspectRatio={{ width: 16, height: 9 }}
        />}
        <Button
          title={'Get Cropped View'}
          onPress={() => {
            cropViewRef.current.saveImage(true, 100);
          }}
        />
      </View>
    </>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
  },
  cropView: {
    flex: 1,
    backgroundColor: '#000'
  },
});
The library doesn't support remote images. That it works on iOS is merely coincidental; the native iOS library may happen to support cropping network images.
If you want to crop a remote image, download it first using RNFetchBlob and then pass the local file path to the CropView.
Supporting remote images directly is a somewhat complicated task and out of scope for this project.
You can also check out the closed issue in the library:
https://github.com/hhunaid/react-native-image-crop-tools/issues/16
You can try the example below to crop network images on Android:
import React, { useCallback, useEffect, useState } from 'react';
import { View, Text } from 'react-native';
import { CropView } from 'react-native-image-crop-tools';
import RNFetchBlob from 'rn-fetch-blob';

export default () => {
  const [uri, setUri] = useState('');

  const getImage = useCallback(() => {
    try {
      RNFetchBlob.config({
        fileCache: true,
        // by adding this option, the temp files will have a file extension
        appendExt: 'png',
      })
        .fetch(
          'GET',
          'https://cdn.pixabay.com/photo/2015/04/23/22/00/tree-736885__480.jpg',
        )
        .then(res => {
          let status = res.info().status;
          if (status === 200) {
            setUri('file://' + res.path());
          } else {
            console.log(status);
          }
        })
        // Something went wrong:
        .catch((errorMessage, statusCode) => {
          // error handling
          console.log('Error : ', errorMessage);
        });
    } catch (err) {
      console.log('Error : ', err.message);
    }
  }, []);

  useEffect(() => {
    getImage();
  }, [getImage]);

  if (uri === '') {
    return (
      <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
        <Text>{'processing...'}</Text>
      </View>
    );
  }

  return (
    <CropView
      sourceUrl={uri}
      style={{ flex: 1, height: '100%', width: '100%' }}
      // ref={cropViewRef}
      onImageCrop={res => console.warn(res)}
      keepAspectRatio
      aspectRatio={{ width: 16, height: 9 }}
    />
  );
};
I have a basic component like this:
const BasicComponent = (props) => {
  return <Text style={styles.text}>Hi!</Text>
}

const styles = StyleSheet.create({
  text: {
    fontSize: 20
  }
})
I want to take a prop that changes the color of the text. I could either change the return to
<Text style={[styles.text, {color: props.color}]}>Hi!</Text>
or move const styles inside the BasicComponent function and set color: props.color there, which in my opinion looks cleaner.
I have never seen const styles inside the function/class before, but it works just fine. Is one method preferred over the other? Which is best practice? Or is it up to individual opinion?
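For concreteness, the "styles inside the component" variant described above might look like the sketch below (the optional color prop type is my assumption). Note that this recreates the StyleSheet object on every render, which is the usual argument for the array form [styles.text, { color: props.color }] instead.

import React from 'react';
import { Text, StyleSheet } from 'react-native';

const BasicComponent = (props: { color?: string }) => {
  // styles defined inside the component so props are in scope
  const styles = StyleSheet.create({
    text: {
      fontSize: 20,
      color: props.color ?? 'black',
    },
  });

  return <Text style={styles.text}>Hi!</Text>;
};

export default BasicComponent;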
This is helpful when it comes to handling multiple themes. What I usually do is use functions inside the StyleSheet, something like this:
import React, { useState } from 'react';
import { Text, Button, View, StyleSheet } from 'react-native';

const Demo = ({ theme = 'light', toggleTheme }) => (
  <View style={styles.container(theme)}>
    <Text style={styles.title(theme)}>{theme}</Text>
    <Button title="Change Theme" onPress={toggleTheme} />
  </View>
);

export default () => {
  const [theme, setTheme] = useState('light');
  const toggleTheme = () => setTheme(theme === 'light' ? 'dark' : 'light');
  return <Demo theme={theme} toggleTheme={toggleTheme} />;
};

const styles = StyleSheet.create({
  container: theme => ({
    flex: 1,
    justifyContent: 'space-around',
    alignItems: 'center',
    backgroundColor: theme === 'light' ? '#fff' : '#000',
  }),
  title: theme => ({
    fontSize: 25,
    color: theme === 'light' ? '#000' : '#fff',
  }),
});
See a working snack here https://snack.expo.io/#abranhe/61e11d
I have an idea like this; I hope it helps you:
const BasicComponent = (props) => {
  const { color } = props
  return <Text style={styles.text(color)}>Hi!</Text>
}

const styles = StyleSheet.create({
  text: (color = "black") => ({
    fontSize: 20,
    color: color
  })
})
I'm new to React Native and still learning React and JavaScript. I'm practicing on Expo Snack with Expo's FaceDetector (SDK 37) and managed to generate data about detected faces. However, I couldn't figure out how to extract that data. My goal for now is to render the rollAngle value in a Text component.
Here is the code I used in Expo Snack and tested with my Android cellphone:
import React, { useState, useEffect } from 'react';
import { Text, View } from 'react-native';
import { Camera } from 'expo-camera';
import * as FaceDetector from 'expo-face-detector'

export default function App() {
  const [hasPermission, setHasPermission] = useState(null);
  const [faces, setFaces] = useState([])

  const faceDetected = ({ faces }) => {
    setFaces({ faces })
    console.log({ faces })
  }

  useEffect(() => {
    (async () => {
      const { status } = await Camera.requestPermissionsAsync();
      setHasPermission(status === 'granted');
    })();
  }, []);

  if (hasPermission !== true) {
    return <Text>No access to camera</Text>
  }

  return (
    //<View style={{ flex: 1 }}>
    <Camera
      style={{ flex: 1 }}
      type='front'
      onFacesDetected={faceDetected}
      FaceDetectorSettings={{
        mode: FaceDetector.Constants.Mode.fast,
        detectLandmarks: FaceDetector.Constants.Landmarks.all,
        runClassifications: FaceDetector.Constants.Classifications.none,
        minDetectionInterval: 5000,
        tracking: false
      }}
    >
      <View
        style={{
          flex: 1,
          backgroundColor: 'transparent',
          flexDirection: 'row',
        }}>
        <Text style={{ top: 200 }}> is {faces[0].rollAngle} </Text>
      </View>
    </Camera>
    //</View>
  );
}
In the Snack console, I see results like this:
[screenshot: results in the Snack console]
I tried to replace the faceDetected function with the following code:
const faceDetected = (faces) => {
  setFaces(faces)
  console.log(faces)
}
Then, the console shows slightly different results:
[screenshot: results in the Snack console]
I tried both ways to render rollAngle, but an error message says that faces[0].rollAngle is undefined and is not an object.
Please help; any suggestion is appreciated. Thank you.
You may have resolved this problem already, but "faces.faces" worked for me:
const faceDetected = (faces) => {
  setFaces(faces.faces)
}
I am new to React Native, so if you resolved it some other way, please let us know.
I believe I have fixed your problem:
import React, { useState, useEffect } from 'react';
import { Text, View } from 'react-native';
import { Camera } from 'expo-camera';
import * as FaceDetector from 'expo-face-detector'

export default function App() {
  const [hasPermission, setHasPermission] = useState(null);
  const [faces, setFaces] = useState([])

  const faceDetected = ({ faces }) => {
    setFaces(faces) // instead of setFaces({faces})
    console.log({ faces })
  }

  useEffect(() => {
    (async () => {
      const { status } = await Camera.requestPermissionsAsync();
      setHasPermission(status === 'granted');
    })();
  }, []);

  if (hasPermission !== true) {
    return <Text>No access to camera</Text>
  }

  return (
    //<View style={{ flex: 1 }}>
    <Camera
      style={{ flex: 1 }}
      type='front'
      onFacesDetected={faceDetected}
      FaceDetectorSettings={{
        mode: FaceDetector.Constants.Mode.fast,
        detectLandmarks: FaceDetector.Constants.Landmarks.all,
        runClassifications: FaceDetector.Constants.Classifications.none,
        minDetectionInterval: 5000,
        tracking: false
      }}
    >
      <View
        style={{
          flex: 1,
          backgroundColor: 'transparent',
          flexDirection: 'row',
        }}>
        {/* only render the text if faces[0] exists */}
        {faces[0] && <Text style={{ top: 200 }}> is {faces[0].rollAngle} </Text>}
      </View>
    </Camera>
    //</View>
  );
}
I think your main problem was that you were using setFaces({faces}) instead of setFaces(faces).