I want to convert voice to text and get the text as a result using voice recognition.
I'm using @react-native-community/voice (Example).
After building the project, installing the APK on my phone, and pressing the record button, I get the following error:
Error: {"message":"5/Client side error"}
NOTE: I added the following permission in AndroidManifest:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
This is my code:
import React, { Component, useState, useEffect } from 'react';
import { StyleSheet, Text, View, Image, TouchableHighlight } from 'react-native';
import Voice from '@react-native-community/voice';
const App = (props) => {
const [voiceState, setVoiceState] = useState({
recognized: '',
pitch: '',
error: '',
end: '',
started: '',
results: [],
partialResults: [],
})
useEffect(() => {
Voice.onSpeechStart = onSpeechStart;
Voice.onSpeechRecognized = onSpeechRecognized;
Voice.onSpeechEnd = onSpeechEnd;
Voice.onSpeechError = onSpeechError;
Voice.onSpeechResults = onSpeechResults;
Voice.onSpeechPartialResults = onSpeechPartialResults;
Voice.onSpeechVolumeChanged = onSpeechVolumeChanged;
// Run the cleanup on unmount, otherwise the handlers registered above are removed immediately
return () => {
Voice.destroy().then(Voice.removeAllListeners);
};
}, [])
const onSpeechStart = (e) => {
console.log('onSpeechStart: ', e);
setVoiceState({ ...voiceState, started: '√', })
};
const onSpeechRecognized = (e) => {
console.log('onSpeechRecognized: ', e);
setVoiceState({
...voiceState, recognized: '√',
})
};
const onSpeechEnd = (e) => {
console.log('onSpeechEnd: ', e);
setVoiceState({
...voiceState, end: '√',
})
};
const onSpeechError = (e) => {
console.log('onSpeechError: ', e);
setVoiceState({
...voiceState, error: JSON.stringify(e.error)
})
};
const onSpeechResults = (e) => {
console.log('onSpeechResults: ', e);
setVoiceState({
...voiceState, results: e.value,
})
};
const onSpeechPartialResults = (e) => {
console.log('onSpeechPartialResults: ', e);
setVoiceState({
...voiceState, partialResults: e.value,
})
};
const onSpeechVolumeChanged = (e) => {
console.log('onSpeechVolumeChanged: ', e);
setVoiceState({
...voiceState, pitch: e.value,
})
};
const _startRecognizing = async () => {
setVoiceState({
...voiceState,
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
})
try {
await Voice.start('en-US');
} catch (e) {
console.error(e);
}
};
const _stopRecognizing = async () => {
try {
await Voice.stop();
} catch (e) {
console.error(e);
}
};
const _cancelRecognizing = async () => {
try {
await Voice.cancel();
} catch (e) {
console.error(e);
}
};
const _destroyRecognizer = async () => {
try {
await Voice.destroy();
} catch (e) {
console.error(e);
}
setVoiceState({
...voiceState,
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
})
};
return (
<View style={styles.container}>
<Text style={styles.welcome}>Welcome to React Native Voice!</Text>
<Text style={styles.instructions}>
Press the button and start speaking.
</Text>
<Text style={styles.stat}>{`Started: ${voiceState.started}`}</Text>
<Text style={styles.stat}>{`Recognized: ${
voiceState.recognized
}`}</Text>
<Text style={styles.stat}>{`Pitch: ${voiceState.pitch}`}</Text>
<Text style={styles.stat}>{`Error: ${voiceState.error}`}</Text>
<Text style={styles.stat}>Results</Text>
{voiceState.results.map((result, index) => {
return (
<Text key={`result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>Partial Results</Text>
{voiceState.partialResults.map((result, index) => {
return (
<Text key={`partial-result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>{`End: ${voiceState.end}`}</Text>
<TouchableHighlight onPress={_startRecognizing}>
<Image style={styles.button} source={require('../assets/voice-recording.png')} />
</TouchableHighlight>
<TouchableHighlight onPress={_stopRecognizing}>
<Text style={styles.action}>Stop Recognizing</Text>
</TouchableHighlight>
<TouchableHighlight onPress={_cancelRecognizing}>
<Text style={styles.action}>Cancel</Text>
</TouchableHighlight>
<TouchableHighlight onPress={_destroyRecognizer}>
<Text style={styles.action}>Destroy</Text>
</TouchableHighlight>
</View>
);
}
I've seen this happen on an Android emulator before; however, it didn't happen on a real device. It could be that the emulator/device you're using doesn't have speech recognition support.
Before starting the listener, you should use the isAvailable method from the library to make sure the device is able to handle speech recognition.
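For example, that check could be folded into the question's _startRecognizing handler. A minimal sketch, assuming the same Voice import as above (isAvailable resolves to a truthy value when a recognition service exists, so it is only treated as a boolean here):
const _startRecognizing = async () => {
  try {
    // Bail out early if the device has no speech recognition service
    const available = await Voice.isAvailable();
    if (!available) {
      console.warn('Speech recognition is not available on this device');
      return;
    }
    await Voice.start('en-US');
  } catch (e) {
    console.error(e);
  }
};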
Related
I am building a React Native app on Windows and testing it on a physical Android device. The basic purpose of the app is to convert speech into text. I am using the @react-native-voice/voice library. First I created the app with
npx react-native init myapp
But on pressing the record button, it gives an error/warning of
Cannot read properties of undefined (reading 'startSpeech')
Similarly, I get the same warning for stopSpeech, destroySpeech, etc.
After a hectic search, I moved to Expo CLI to create the React Native app, but I'm facing the same error again. An Expo community expert says that Expo has added @react-native-voice/voice to its SDK (> 41); you just have to add the plugins section to your app.json. The plugin setup is explained in this Stack Overflow post.
I have tried everything but nothing is working.
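For reference, the plugins entry described above looks roughly like the following sketch (the permission strings are placeholders). A config plugin like this only takes effect after the native project is rebuilt, for example with expo prebuild / expo run:android or an EAS build; it does not run inside Expo Go, which would leave the native startSpeech method undefined.
{
  "expo": {
    "plugins": [
      [
        "@react-native-voice/voice",
        {
          "microphonePermission": "Allow $(PRODUCT_NAME) to access the microphone",
          "speechRecognitionPermission": "Allow $(PRODUCT_NAME) to securely recognize user speech"
        }
      ]
    ]
  }
}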
The code is attached below
import React, { Component } from 'react';
import {
StyleSheet,
Text,
View,
Image,
TouchableHighlight,
} from 'react-native';
import Voice, {
SpeechRecognizedEvent,
SpeechResultsEvent,
SpeechErrorEvent,
} from '@react-native-voice/voice';
type Props = {};
type State = {
recognized: string;
pitch: string;
error: string;
end: string;
started: string;
results: string[];
partialResults: string[];
};
class App extends Component<Props, State> {
state = {
recognized: '',
pitch: '',
error: '',
end: '',
started: '',
results: [],
partialResults: [],
};
constructor(props: Props) {
super(props);
Voice.onSpeechStart = this.onSpeechStart.bind(this);
Voice.onSpeechRecognized = this.onSpeechRecognized;
Voice.onSpeechEnd = this.onSpeechEnd;
Voice.onSpeechError = this.onSpeechError;
Voice.onSpeechResults = this.onSpeechResults;
Voice.onSpeechPartialResults = this.onSpeechPartialResults;
Voice.onSpeechVolumeChanged = this.onSpeechVolumeChanged;
}
componentWillUnmount() {
Voice.destroy().then(Voice.removeAllListeners);
}
onSpeechStart = (e: any) => {
console.log('onSpeechStart: ', e);
this.setState({
started: '√',
});
};
onSpeechRecognized = (e: SpeechRecognizedEvent) => {
console.log('onSpeechRecognized: ', e);
this.setState({
recognized: '√',
});
};
onSpeechEnd = (e: any) => {
console.log('onSpeechEnd: ', e);
this.setState({
end: '√',
});
};
onSpeechError = (e: SpeechErrorEvent) => {
console.log('onSpeechError: ', e);
this.setState({
error: JSON.stringify(e.error),
});
};
onSpeechResults = (e: SpeechResultsEvent) => {
console.log('onSpeechResults: ', e);
this.setState({
results: e.value,
});
};
onSpeechPartialResults = (e: SpeechResultsEvent) => {
console.log('onSpeechPartialResults: ', e);
this.setState({
partialResults: e.value,
});
};
onSpeechVolumeChanged = (e: any) => {
console.log('onSpeechVolumeChanged: ', e);
this.setState({
pitch: e.value,
});
};
_startRecognizing = async () => {
this.setState({
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
});
try {
await Voice.start('en-US');
} catch (e) {
console.error(e);
}
};
_stopRecognizing = async () => {
try {
await Voice.stop();
} catch (e) {
console.error(e);
}
};
_cancelRecognizing = async () => {
try {
await Voice.cancel();
} catch (e) {
console.error(e);
}
};
_destroyRecognizer = async () => {
try {
await Voice.destroy();
} catch (e) {
console.error(e);
}
this.setState({
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
});
};
render() {
return (
<View style={styles.container}>
<Text style={styles.welcome}>Welcome to React Native Voice!</Text>
<Text style={styles.instructions}>
Press the button and start speaking.
</Text>
<Text style={styles.stat}>{`Started: ${this.state.started}`}</Text>
<Text style={styles.stat}>{`Recognized: ${
this.state.recognized
}`}</Text>
<Text style={styles.stat}>{`Pitch: ${this.state.pitch}`}</Text>
<Text style={styles.stat}>{`Error: ${this.state.error}`}</Text>
<Text style={styles.stat}>Results</Text>
{this.state.results.map((result, index) => {
return (
<Text key={`result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>Partial Results</Text>
{this.state.partialResults.map((result, index) => {
return (
<Text key={`partial-result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>{`End: ${this.state.end}`}</Text>
<TouchableHighlight onPress={this._startRecognizing}>
<Image style={styles.button} source={require('./button.png')} />
</TouchableHighlight>
<TouchableHighlight onPress={this._stopRecognizing}>
<Text style={styles.action}>Stop Recognizing</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._cancelRecognizing}>
<Text style={styles.action}>Cancel</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._destroyRecognizer}>
<Text style={styles.action}>Destroy</Text>
</TouchableHighlight>
</View>
);
}
}
const styles = StyleSheet.create({
button: {
width: 50,
height: 50,
},
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
backgroundColor: '#F5FCFF',
},
welcome: {
fontSize: 20,
textAlign: 'center',
margin: 10,
},
action: {
textAlign: 'center',
color: '#0000FF',
marginVertical: 5,
fontWeight: 'bold',
},
instructions: {
textAlign: 'center',
color: '#333333',
marginBottom: 5,
},
stat: {
textAlign: 'center',
color: '#B0171F',
marginBottom: 1,
},
});
export default App;
Thank you in advance for the help :)
Does anyone know what could be the issue?
Previously, I was using @react-native-community/voice and it was working. But as I continued developing the application and came back to test the function, it stopped working. When I click the button that initiates Voice.start(), nothing happens. I thought it could be because the package is obsolete, so I changed to @react-native-voice/voice, but it is still not working. I tried reinstalling node_modules and cleaning the Gradle build (./gradlew clean), but it still doesn't work.
Below is the code I used, which was working but now isn't. I got it from the @react-native-community/voice example code:
import Voice, {
SpeechRecognizedEvent,
SpeechResultsEvent,
SpeechErrorEvent,
} from '@react-native-voice/voice';
class VoiceTest extends Component{
constructor(props) {
super(props);
this.state = {
pitch: '',
error: '',
end: '',
started: '',
results: [],
partialResults: [],
recognized: '',
}
Voice.onSpeechStart = this.onSpeechStart;
Voice.onSpeechEnd = this.onSpeechEnd;
Voice.onSpeechError = this.onSpeechError;
Voice.onSpeechResults = this.onSpeechResults;
Voice.onSpeechPartialResults = this.onSpeechPartialResults;
Voice.onSpeechVolumeChanged = this.onSpeechVolumeChanged;
}
componentWillUnmount() {
Voice.destroy().then(Voice.removeAllListeners);
}
onSpeechStart = (e) => {
console.log('onSpeechStart: ', e);
};
onSpeechEnd = (e) => {
console.log('onSpeechEnd: ', e);
};
onSpeechError = (e) => {
console.log('onSpeechError: ', e);
};
onSpeechResults = (e) => {
console.log('onSpeechResults: ', e);
};
onSpeechPartialResults = (e) => {
console.log('onSpeechPartialResults: ', e);
};
onSpeechVolumeChanged = (e) => {
console.log('onSpeechVolumeChanged: ', e);
};
_startRecognizing = async () => {
this.setState({
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
});
try {
await Voice.start('en-US');
} catch (e) {
console.error(e);
}
};
_stopRecognizing = async () => {
await Voice.stop();
};
_cancelRecognizing = async () =>{
await Voice.cancel();
};
_destroyRecognizer = async () => {
await Voice.destroy();
this.setState({
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
});
};
render() {
return (
<View style={styles.container}>
<Text style={styles.welcome}>Welcome to React Native Voice!</Text>
<Text style={styles.instructions}>
Press the button and start speaking.
</Text>
<Text style={styles.stat}>{`Started: ${this.state.started}`}</Text>
<Text style={styles.stat}>{`Recognized: ${
this.state.recognized
}`}</Text>
<Text style={styles.stat}>{`Pitch: ${this.state.pitch}`}</Text>
<Text style={styles.stat}>{`Error: ${this.state.error}`}</Text>
<Text style={styles.stat}>Results</Text>
{this.state.results.map((result, index) => {
return (
<Text key={`result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>Partial Results</Text>
{this.state.partialResults.map((result, index) => {
return (
<Text key={`partial-result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>{`End: ${this.state.end}`}</Text>
<TouchableHighlight onPress={this._startRecognizing}>
<Text style={{fontSize:36, backgroundColor: '#ccc'}}>Button</Text>
{/* <Image style={styles.button} source={require('./button.png')} /> */}
</TouchableHighlight>
<TouchableHighlight onPress={this._stopRecognizing}>
<Text style={styles.action}>Stop Recognizing</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._cancelRecognizing}>
<Text style={styles.action}>Cancel</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._destroyRecognizer}>
<Text style={styles.action}>Destroy</Text>
</TouchableHighlight>
</View>
);
}
}
Versions Used:
"react-native": "0.63.4",
buildToolsVersion = "29.0.2"
minSdkVersion = 19
compileSdkVersion = 29
targetSdkVersion = 31
kotlinVersion = "1.6.0"
What I'm trying to do is implement voice recognition in the project.
I'm using Expo.
To do that I'm using the https://github.com/react-native-voice/voice library.
I researched this error, but it seems nothing works.
When I installed it, it showed me this error:
Voice.js
import React, { Component } from 'react';
import {
StyleSheet,
Text,
View,
Image,
TouchableHighlight,
} from 'react-native';
import Voice, {
SpeechRecognizedEvent,
SpeechResultsEvent,
SpeechErrorEvent,
} from '@react-native-voice/voice';
type Props = {};
type State = {
recognized: string;
pitch: string;
error: string;
end: string;
started: string;
results: string[];
partialResults: string[];
};
class VoiceTest extends Component<Props, State> {
state = {
recognized: '',
pitch: '',
error: '',
end: '',
started: '',
results: [],
partialResults: [],
};
constructor(props: Props) {
super(props);
Voice.onSpeechStart = this.onSpeechStart;
Voice.onSpeechRecognized = this.onSpeechRecognized;
Voice.onSpeechEnd = this.onSpeechEnd;
Voice.onSpeechError = this.onSpeechError;
Voice.onSpeechResults = this.onSpeechResults;
Voice.onSpeechPartialResults = this.onSpeechPartialResults;
Voice.onSpeechVolumeChanged = this.onSpeechVolumeChanged;
}
componentWillUnmount() {
Voice.destroy().then(Voice.removeAllListeners);
}
onSpeechStart = (e: any) => {
console.log('onSpeechStart: ', e);
this.setState({
started: '√',
});
};
onSpeechRecognized = (e: SpeechRecognizedEvent) => {
console.log('onSpeechRecognized: ', e);
this.setState({
recognized: '√',
});
};
onSpeechEnd = (e: any) => {
console.log('onSpeechEnd: ', e);
this.setState({
end: '√',
});
};
onSpeechError = (e: SpeechErrorEvent) => {
console.log('onSpeechError: ', e);
this.setState({
error: JSON.stringify(e.error),
});
};
onSpeechResults = (e: SpeechResultsEvent) => {
console.log('onSpeechResults: ', e);
this.setState({
results: e.value,
});
};
onSpeechPartialResults = (e: SpeechResultsEvent) => {
console.log('onSpeechPartialResults: ', e);
this.setState({
partialResults: e.value,
});
};
onSpeechVolumeChanged = (e: any) => {
console.log('onSpeechVolumeChanged: ', e);
this.setState({
pitch: e.value,
});
};
_startRecognizing = async () => {
this.setState({
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
});
try {
await Voice.start('en-US');
} catch (e) {
console.error(e);
}
};
_stopRecognizing = async () => {
try {
await Voice.stop();
} catch (e) {
console.error(e);
}
};
_cancelRecognizing = async () => {
try {
await Voice.cancel();
} catch (e) {
console.error(e);
}
};
_destroyRecognizer = async () => {
try {
await Voice.destroy();
} catch (e) {
console.error(e);
}
this.setState({
recognized: '',
pitch: '',
error: '',
started: '',
results: [],
partialResults: [],
end: '',
});
};
render() {
return (
<View style={styles.container}>
<Text style={styles.welcome}>Welcome to React Native Voice!</Text>
<Text style={styles.instructions}>
Press the button and start speaking.
</Text>
<Text style={styles.stat}>{`Started: ${this.state.started}`}</Text>
<Text style={styles.stat}>{`Recognized: ${
this.state.recognized
}`}</Text>
<Text style={styles.stat}>{`Pitch: ${this.state.pitch}`}</Text>
<Text style={styles.stat}>{`Error: ${this.state.error}`}</Text>
<Text style={styles.stat}>Results</Text>
{this.state.results.map((result, index) => {
return (
<Text key={`result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>Partial Results</Text>
{this.state.partialResults.map((result, index) => {
return (
<Text key={`partial-result-${index}`} style={styles.stat}>
{result}
</Text>
);
})}
<Text style={styles.stat}>{`End: ${this.state.end}`}</Text>
<TouchableHighlight onPress={this._startRecognizing}>
<Text>START</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._stopRecognizing}>
<Text style={styles.action}>Stop Recognizing</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._cancelRecognizing}>
<Text style={styles.action}>Cancel</Text>
</TouchableHighlight>
<TouchableHighlight onPress={this._destroyRecognizer}>
<Text style={styles.action}>Destroy</Text>
</TouchableHighlight>
</View>
);
}
}
const styles = StyleSheet.create({
button: {
width: 50,
height: 50,
},
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
backgroundColor: '#F5FCFF',
},
welcome: {
fontSize: 20,
textAlign: 'center',
margin: 10,
},
action: {
textAlign: 'center',
color: '#0000FF',
marginVertical: 5,
fontWeight: 'bold',
},
instructions: {
textAlign: 'center',
color: '#333333',
marginBottom: 5,
},
stat: {
textAlign: 'center',
color: '#B0171F',
marginBottom: 1,
},
});
export default VoiceTest;
app.json
{
"expo": {
"plugins": [
[
"#react-native-voice/voice",
{
"microphonePermission": "CUSTOM: Allow $(PRODUCT_NAME) to access the microphone",
"speechRecognitionPermission": "CUSTOM: Allow $(PRODUCT_NAME) to securely recognize user speech"
}
]
],
How can I solve this?
Problem:
I have created an image and video picker with react-native-image-picker. This is what my code looks like:
import React, {useState, createRef} from 'react';
import {
View,
TouchableOpacity,
Image,
TouchableWithoutFeedback,
} from 'react-native';
import AppText from '_components/appText';
import Icon from 'react-native-vector-icons/AntDesign';
import {strings} from '_translations/i18n';
import styles from './newpatientassetmentstyle';
import PlayerControls from '_components/playerControls';
import Video from 'react-native-video';
import ImagePicker from 'react-native-image-picker';
const showControls = (state, setState) => {
state.showControls
? setState({...state, showControls: false})
: setState({...state, showControls: true});
};
const handlePlayPause = (state, setState) => {
if (state.play) {
setState({...state, play: false, showControls: true});
return;
}
setState({...state, play: true});
setTimeout(() => setState((s) => ({...s, showControls: false})), 2000);
};
function onLoadEnd(data, state, setState) {
setState({
...state,
duration: data.duration,
currentTime: data.currentTime,
});
}
function onProgress(data, state, setState) {
setState({
...state,
currentTime: data.currentTime,
});
}
const onEnd = (state, setState, player) => {
setState({
...state,
showControls: false,
play: false,
currentTime: 0,
duration: 0,
});
player.current.seek(0);
};
const openPicker = async (type, setFileObject) => {
let options;
if (type === 4) {
options = {
title: 'Upload Image',
quality: 1,
mediaType: 'photo',
storageOptions: {
skipBackup: true,
path: 'images',
},
};
} else if (type === 5) {
options = {
title: 'Upload Video',
videoQuality: 'high',
mediaType: 'video',
storageOptions: {
skipBackup: true,
path: 'images',
},
};
}
ImagePicker.showImagePicker(options, (response) => {
if (response.didCancel) {
console.log('User cancelled image picker');
} else if (response.error) {
console.log('ImagePicker Error: ', response.error);
} else if (response.customButton) {
console.log('User tapped custom button: ', response.customButton);
} else {
setFileObject(response);
}
});
};
const DocumentUpload = (props) => {
const {type} = props;
const [fileObject, setFileObject] = useState(null);
const [state, setState] = useState({
fullscreen: false,
play: false,
currentTime: 0,
duration: 0,
showControls: true,
});
const player = createRef();
return (
<View>
{type === 5 && fileObject && (
<View style={styles.videoContainer}>
<View style={styles.videoInnerContainer}>
<TouchableWithoutFeedback
onPress={() => showControls(state, setState)}>
<View style={{flex: 1}}>
<Video
source={{
uri: fileObject.uri,
}}
controls={false}
style={styles.backgroundVideo}
ref={player}
resizeMode={'contain'}
paused={!state.play}
onEnd={() => onEnd(state, setState, player)}
onLoad={(data) => onLoadEnd(data, state, setState)}
onProgress={(data) => onProgress(data, state, setState)}
/>
{state.showControls && (
<View style={styles.controlsOverlay}>
<PlayerControls
play={state.play}
playVideo={handlePlayPause}
state={state}
setState={setState}
pauseVideo={handlePlayPause}
/>
</View>
)}
</View>
</TouchableWithoutFeedback>
</View>
</View>
)}
{fileObject
? console.log(`data:${fileObject.type},${fileObject.data}`, 'fileOb')
: null}
{type === 4 && fileObject && (
<View>
<Image
source={{uri: 'data:' + fileObject.type + ',' + fileObject.data}}
/>
</View>
)}
{!fileObject ? (
<>
<TouchableOpacity onPress={() => openPicker(type, setFileObject)}>
<Image
source={require('_assets/img/cmerap.png')}
resizeMode="center"
style={styles.camPImage}
/>
</TouchableOpacity>
<AppText styles={styles.camPText}>
{strings('assetsment.capture')}
</AppText>
</>
) : (
<View style={styles.videoBottomText}>
<TouchableOpacity onPress={() => openPicker(type, setFileObject)}>
<View style={styles.updateAgainContainer}>
<Icon name="reload1" style={styles.reloadIcon} />
<AppText styles={styles.reloadText}>
{strings('assetsment.reload')}
</AppText>
</View>
</TouchableOpacity>
</View>
)}
</View>
);
};
export default DocumentUpload;
But when I try to show the picked image in the view, nothing is displayed there. I tried a lot to find out what is wrong with this, but I was unable to do so. Can someone help me find a solution to this problem? Thank you very much.
Once you have the image selected, you'll have a response object containing the URI of the chosen image (you can console.log the response object to confirm what the key for the URI is). Use that URI to set a piece of state, and make sure you use that state in your Image tag, like this: source={{ uri: imageUriFromState }}
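A minimal sketch of that flow, assuming the older react-native-image-picker showImagePicker API used in the question (where the response exposes a uri field); the component and state names here are only illustrative:
import React, { useState } from 'react';
import { View, Image, Text, TouchableOpacity } from 'react-native';
import ImagePicker from 'react-native-image-picker';

const PickedImage = () => {
  const [imageUri, setImageUri] = useState(null);

  const openPicker = () => {
    ImagePicker.showImagePicker({ mediaType: 'photo' }, (response) => {
      console.log(response); // check which key actually holds the uri in your picker version
      if (!response.didCancel && !response.error) {
        setImageUri(response.uri);
      }
    });
  };

  return (
    <View>
      <TouchableOpacity onPress={openPicker}>
        <Text>Pick image</Text>
      </TouchableOpacity>
      {/* An Image with a uri source has no intrinsic size, so give it explicit dimensions */}
      {imageUri && (
        <Image source={{ uri: imageUri }} style={{ width: 200, height: 200 }} />
      )}
    </View>
  );
};

export default PickedImage;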
I am trying to create a searchable FlatList in this new app I was working on over quarantine. I followed an article on the internet on how to create it, and my code so far looks like this:
import React, { Component } from 'react';
import { View, Text, FlatList, ActivityIndicator } from 'react-native';
import { ListItem, SearchBar } from 'react-native-elements';
class FlatListDemo extends Component {
constructor(props) {
super(props);
this.state = {
loading: false,
data: [],
error: null,
value: '',
};
this.arrayholder = [];
}
componentDidMount() {
this.makeRemoteRequest();
}
makeRemoteRequest = () => {
const url = `https://randomuser.me/api/?&results=20`;
this.setState({ loading: true });
fetch(url)
.then(res => res.json())
.then(res => {
this.setState({
data: res.results,
error: res.error || null,
loading: false,
});
this.arrayholder = res.results;
})
.catch(error => {
this.setState({ error, loading: false });
});
};
renderSeparator = () => {
return (
<View
style={{
height: 1,
width: '86%',
backgroundColor: '#CED0CE',
marginLeft: '14%',
}}
/>
);
};
searchFilterFunction = text => {
this.setState({
value: text,
});
const newData = this.arrayholder.filter(item => {
const itemData = `${item.name.title.toUpperCase()} ${item.name.first.toUpperCase()} ${item.name.last.toUpperCase()}`;
const textData = text.toUpperCase();
return itemData.indexOf(textData) > -1;
});
this.setState({
data: newData,
});
};
renderHeader = () => {
return (
<SearchBar
placeholder="Type Here..."
lightTheme
round
onChangeText={text => this.searchFilterFunction(text)}
autoCorrect={false}
value={this.state.value}
/>
);
};
render() {
if (this.state.loading) {
return (
<View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
<ActivityIndicator />
</View>
);
}
return (
<View style={{ flex: 1 }}>
<FlatList
data={this.state.data}
renderItem={({ item }) => (
<ListItem
leftAvatar={{ source: { uri: item.picture.thumbnail } }}
title={`${item.name.first} ${item.name.last}`}
subtitle={item.email}
/>
)}
keyExtractor={item => item.email}
ItemSeparatorComponent={this.renderSeparator}
ListHeaderComponent={this.renderHeader}
/>
</View>
);
}
}
export default FlatListDemo;
This works and all, but how could I alter the makeRemoteRequest function so that I get the data from a local JSON file instead of a URL? For example:
makeRemoteRequest = () => {
const url = `../data/userData.json`;
this.setState({ loading: true });
fetch(url)
.then(res => res.json())
.then(res => {
this.setState({
data: res.results,
error: res.error || null,
loading: false,
});
this.arrayholder = res.results;
})
.catch(error => {
this.setState({ error, loading: false });
});
};
I tried this with no success, as none of the JSON data rendered or appeared in the FlatList.
The Fetch API is a promise-based JavaScript API for making asynchronous HTTP requests, similar to XMLHttpRequest, but it takes a URL, not a project-relative file path, so it can't load a bundled JSON file that way. If you want to load data from a local JSON file, import it instead.
First, import the file:
import Data from '../data/userData.json'
Then assign the imported value to your state inside your makeRemoteRequest function:
makeRemoteRequest = () => {
this.setState({
data: Data,
});
}
Hope this helps. Feel free to ask if you have any doubts.
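One thing to keep in mind (an assumption based on the question's own searchFilterFunction, which filters this.arrayholder): the imported data should also be copied into arrayholder, otherwise the search will filter an empty array. A sketch of the full method under that assumption:
import Data from '../data/userData.json'; // path taken from the question

makeRemoteRequest = () => {
  // No network request needed; Metro bundles the JSON at build time.
  this.setState({ data: Data });
  this.arrayholder = Data; // keep the search filter's source array in sync
};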