React Native: how to play a sound after an "onPress"?

Here is some of my code:
<TouchableOpacity
  style={styles.button}
  onPress={this.onPress}
>
  <Text> Play Sound </Text>
</TouchableOpacity>
I want to write a function "onPress" which will play an .mp3 sound.
I have already imported react-native-sound and have my .mp3 file ready to go, I just don't know how to play the sound once the onPress function is called.

If you want to play the sound, you can try this:

import Sound from 'react-native-sound';

Sound.setCategory('Playback');

const whoosh = new Sound('whoosh.mp3', Sound.MAIN_BUNDLE, (error) => {
  if (error) {
    console.log('failed to load the sound', error);
    return;
  }
  whoosh.play((success) => {
    if (success) {
      console.log('successfully finished playing');
    } else {
      console.log('playback failed due to audio decoding errors');
      // reset the player to its uninitialized state (android only)
      whoosh.reset();
    }
  });
});

The easiest way is to first create a new instance like the following.
Create this in the constructor so it loads when the component mounts.
Note: please put your .mp3 or .wav file in android/app/src/main/res/raw
const whoosh = new Sound('swoosh.mp3', Sound.MAIN_BUNDLE);
Then just call it in your onPress function:
whoosh.play()
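A rough sketch of that approach, assuming a class component and that swoosh.mp3 lives in android/app/src/main/res/raw (or the iOS bundle), might look like this:

import React from 'react';
import { TouchableOpacity, Text } from 'react-native';
import Sound from 'react-native-sound';

class PlayButton extends React.Component {
  constructor(props) {
    super(props);
    // load once, when the component is created
    this.whoosh = new Sound('swoosh.mp3', Sound.MAIN_BUNDLE, (error) => {
      if (error) console.log('failed to load the sound', error);
    });
  }

  onPress = () => {
    // play the preloaded instance
    this.whoosh.play();
  };

  render() {
    return (
      <TouchableOpacity onPress={this.onPress}>
        <Text> Play Sound </Text>
      </TouchableOpacity>
    );
  }
}

export default PlayButton;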

Run npm install react-native-sound and link it with your project.
Import Sound from react-native-sound:
import Sound from 'react-native-sound';
Put your .mp3 file inside your project folder and reference its path:
const requireAudio = require('./xyz.mp3');
Place this code inside your "onPress" function.
const s = new Sound(requireAudio, (e) => {
  if (e) {
    console.log('Error in SOUND', e);
    return;
  }
  s.play(() => s.release());
});
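Tying this back to the question's button, a minimal sketch of the whole handler (written as a function component, reusing the asker's ./xyz.mp3 path) could be:

import React from 'react';
import { TouchableOpacity, Text } from 'react-native';
import Sound from 'react-native-sound';

const requireAudio = require('./xyz.mp3');

export default function PlaySoundButton() {
  const onPress = () => {
    const s = new Sound(requireAudio, (e) => {
      if (e) {
        console.log('Error in SOUND', e);
        return;
      }
      // release native resources once playback finishes
      s.play(() => s.release());
    });
  };

  return (
    <TouchableOpacity onPress={onPress}>
      <Text> Play Sound </Text>
    </TouchableOpacity>
  );
}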

Related

How to load local audio files dynamically with expo av

I have a bunch of audio files local to my app, and I want to load them dynamically based on a component's state. The only way I found to load audio with expo-av is to use require, but this method keeps returning "invalid call" whenever I try to use a variable of any sort, or any template literals, in the path string.
I even tried storing the paths in a JSON file and then referring to the path directly from there, and still got the invalid call error.
const { sound } = await Audio.Sound.createAsync(require(audioPaths['paths'][fileKey]), {}, playbackStatusUpdate);
How do you go about this issue? My files are local, so I can't take advantage of streaming/loading them from the network. Does expo-av offer any alternative to the require method? I need any tips or advice you might have.
PS: if you need any more details about the situation, please ask and I will fill you in.
Edit: this is what my paths JSON looks like:
{
  "paths": [
    "../assets/Records/1.mp3",
    "../assets/Records/2.mp3",
    "../assets/Records/3.mp3",
    "../assets/Records/4.mp3"
  ]
}
The issue is that the audio paths are not statically declared, so the bundler cannot register them dynamically.
You should define the paths with require like this:
{
  "paths": [
    require('./assets/one.mp3'),
    require('./assets/two.mp3'),
    require('./assets/three.mp3'),
  ]
}
and call it without require:
const { sound } = await Audio.Sound.createAsync(audioPaths['paths'][fileKey], {}, playbackStatusUpdate);
Here is a snack I used.
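As a sketch of how this can be wired up end to end: since require() calls cannot appear in a .json file, one option is to keep the list in a small JS module (the file name audioPaths.js below is only an illustration) and hand the already-required asset to createAsync:

// audioPaths.js -- require() must be static, so every file is listed explicitly
export default {
  paths: [
    require('../assets/Records/1.mp3'),
    require('../assets/Records/2.mp3'),
    require('../assets/Records/3.mp3'),
    require('../assets/Records/4.mp3'),
  ],
};

// elsewhere in the component
import { Audio } from 'expo-av';
import audioPaths from './audioPaths';

async function playFile(fileKey) {
  // fileKey indexes into the pre-required assets; no require() call is needed here
  const { sound } = await Audio.Sound.createAsync(audioPaths.paths[fileKey]);
  await sound.playAsync();
}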
Theoretically, when you want to upload files in a React Native app, you will use FormData, a file-upload library, react-native-fs, or expo-file-system.
I recommend expo-file-system since you use Expo.
See the complete implementation here.
But saying "I have a bunch of audio files local to my app" means that your audio files are already in a directory of your project folder, and you just want those audio files to be played dynamically using expo-av's Audio.Sound.createAsync() with require(). This is how I would do that:
import * as React from 'react';
import { Text, View, StyleSheet, Button } from 'react-native';
import { Audio } from 'expo-av';

export default function App() {
  const [sound, setSound] = React.useState();

  async function playSound() {
    console.log('Loading Sound');
    const { sound } = await Audio.Sound.createAsync(
      require('./assets/Hello.mp3')
    );
    setSound(sound);

    console.log('Playing Sound');
    await sound.playAsync();
  }

  React.useEffect(() => {
    return sound
      ? () => {
          console.log('Unloading Sound');
          sound.unloadAsync();
        }
      : undefined;
  }, [sound]);

  return (
    <View style={styles.container}>
      <Button title="Play Sound" onPress={playSound} />
    </View>
  );
}

// minimal styles so the example runs as-is
const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: 'center' },
});
This sample is for playing one audio file, but in your question you want audio to be played dynamically. For that you can use React's useEffect hook to create a kind of repeatable action. I would first create a method playSound like this:
playSound = async () => {
  await Audio.Sound.createAsync(require('' + source));
};
Here source is the path to an audio file passed in as a variable, and you may want to use functions like goToNext() and resumePlayList() to change the source variable, for example:
const goToNext = () => {
  for (let i = 0; i < noGuest; i++) {
    source = JsonPath[i];
  }
};
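Given the static-require constraint above, a more concrete sketch of playing files dynamically could keep the current index in state and reload whenever it changes (trackIndex, usePlaylist, and the audioPaths module are illustrative names, not part of the original answer):

import * as React from 'react';
import { Audio } from 'expo-av';
import audioPaths from './audioPaths'; // the JS module sketched above

export function usePlaylist() {
  const [trackIndex, setTrackIndex] = React.useState(0);
  const soundRef = React.useRef(null);

  React.useEffect(() => {
    let cancelled = false;

    (async () => {
      // unload the previous track before loading the next one
      if (soundRef.current) await soundRef.current.unloadAsync();
      const { sound } = await Audio.Sound.createAsync(audioPaths.paths[trackIndex]);
      if (cancelled) {
        sound.unloadAsync();
        return;
      }
      soundRef.current = sound;
      await sound.playAsync();
    })();

    return () => {
      cancelled = true;
    };
  }, [trackIndex]);

  const goToNext = () => setTrackIndex((i) => (i + 1) % audioPaths.paths.length);

  return { goToNext };
}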

React Native - Cannot use Speech To Text and Text To Speech together

I am using both the react-native-voice and expo-speech libraries to transcribe my voice and to convert text to speech. The problem is, when I finish recording my voice and start a speech with expo-speech, there is no sound. It seems like react-native-voice completely mutes the audio once the voice recording ends. The speech starts, but I must press the mic button (activating voice recognition) to hear it.
The only way I found to make everything work together is by stopping the voice recording after the text-to-speech has ended. Here is a part of the code:
const startRecognizing = async () => {
  setButtonColor('#38b000');
  // Starts listening for speech for a specific locale
  try {
    await Voice.start('en-EN');
  } catch (e) {
    console.error(e);
  }
};

const destroyRecognizer = async () => {
  // Destroys the current SpeechRecognizer instance
  try {
    await Voice.destroy();
  } catch (e) {
    console.error(e);
  }
};

const stopRecognizing = async () => {
  setButtonColor('#344E41');
  setTimeout(() => {
    Speech.speak("I did not understand, can you repeat please ?", {
      language: 'en-EN',
      onDone: () => {
        setTimeout(async () => {
          await destroyRecognizer();
        }, 1000);
      },
    });
  }, 1000);
};

return (
  <View style={styles.microphoneButtonContainer}>
    <TouchableOpacity
      onPressIn={startRecognizing}
      onPressOut={stopRecognizing}>
      <Image
        style={styles.microphoneButton}
        source={require('../img/microphone-icon.png')}
      />
    </TouchableOpacity>
  </View>
);
This solution brings a lot of edge cases, so I can't work with it. None of the methods given to stop the recording solve the issue, and I did not find any help in the libraries' docs. Is there a solution to this? Thank you for your help!
Because there was no way to solve the issue with react-native-voice methods, I had the idea to look directly in the library and see whether I could modify the native code. For iOS the code is in ios/voice/Voice.m. I found this:
- (void) teardown {
  self.isTearingDown = YES;
  [self.recognitionTask cancel];
  self.recognitionTask = nil;

  // Set back audio session category
  [self resetAudioSession];
  // ...
So I tried commenting out [self resetAudioSession];, then rebuilt the pods with npx pod-install (I use CocoaPods), and it worked!
Doing this may cause edge cases; I have not fully tested all the methods yet, and I did not try it on Android.

How to upload recording file from Expo-av via Axios?

I am trying to upload a file recorded using expo-av via an Axios POST request to my server. Forgive me, I am a Python backend dev, so I am not that familiar with JS or Axios. I have tried various solutions but can't get anything working.
The code I am using for the recording is below, mostly straight from the Expo docs (the recording element), and it is working fine (the audio file gets created by the simulator, e.g. a .caf file for the iOS simulator).
import * as React from 'react';
import { Text, View, StyleSheet, Button } from 'react-native';
import { Audio } from 'expo-av';
import axios from 'axios';

export default function App() {
  const [recording, setRecording] = React.useState();

  async function startRecording() {
    try {
      console.log('Requesting permissions..');
      await Audio.requestPermissionsAsync();
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: true,
        playsInSilentModeIOS: true,
      });
      console.log('Starting recording..');
      const { recording } = await Audio.Recording.createAsync(
        Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY
      );
      setRecording(recording);
      console.log('Recording started');
    } catch (err) {
      console.error('Failed to start recording', err);
    }
  }

  async function stopRecording() {
    console.log('Stopping recording..');
    setRecording(undefined);
    await recording.stopAndUnloadAsync();
    const uri = recording.getURI();
    console.log('Recording stopped and stored at', uri);

    // My code for the upload
    const formData = new FormData();
    formData.append("attribute_x", "123");
    formData.append("attribute_y", "456");
    // let soundFile = ...  NOT SURE WHAT THE ACTUAL SOUND FILE IS. The uri is just a string
    formData.append("recording", { uri: uri, name: 'soundfile.caf', type: 'audio/x-caf' });
    const response = await axios.post(my_url, formData, {
      headers: {
        'Content-Type': 'multipart/form-data',
        'X-CSRFToken': 'csrftoken',
      },
    });
    // ... deal with the response ...
  }

  return (
    <View style={styles.container}>
      <Button
        title={recording ? 'Stop Recording' : 'Start Recording'}
        onPress={recording ? stopRecording : startRecording}
      />
    </View>
  );
}

// minimal styles so the example runs as-is
const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: 'center' },
});
The request body I need to send should be:
{
  "attribute_x": 333,
  "attribute_y": 291,
  "recording": THE SOUND FILE
}
It's .caf for iOS and will be .mp4 for Android (defaults) but I can figure out how to deal with that myself. Just really pulling my hair out now with the upload/post request.
Thanks in advance.
I was unable to get it to work with Axios in React Native; the main cause is that the multipart/form-data header wasn't set correctly, and formData.getHeaders() isn't defined in React Native.
I tried everything that was tested here, but without success: https://github.com/axios/axios/issues/318
Solution: use https://github.com/sindresorhus/ky
const uri = recording.getURI();
const filetype = uri.split(".").pop();
const filename = uri.split("/").pop();

const fd = new FormData();
fd.append("audio-record", {
  uri,
  type: `audio/${filetype}`,
  name: filename,
});

await ky.post("audio/upload", {
  body: fd,
});
The headers will be set automatically, like with the Fetch API, and the boundary part will be provided!
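For completeness, here is a sketch of the whole upload step with ky; the endpoint, prefixUrl, and field names are assumptions, so adjust them to match your server:

import ky from 'ky';

async function uploadRecording(recording) {
  const uri = recording.getURI();
  const filetype = uri.split('.').pop();
  const filename = uri.split('/').pop();

  const fd = new FormData();
  fd.append('attribute_x', '333');
  fd.append('attribute_y', '291');
  fd.append('recording', {
    uri,
    type: `audio/${filetype}`,
    name: filename,
  });

  // ky fills in the multipart boundary automatically when given FormData
  return ky
    .post('upload/', {
      prefixUrl: 'https://example.com/api', // hypothetical base URL
      body: fd,
    })
    .json();
}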

How can I play a sound when we get new data in React Native?

I want to play a sound when I get new data from Firebase Firestore. It doesn't matter whether the screen is on or off, but the sound must play in the background when I get data (like a telephone ring); the app will be on. Which library would be best to achieve this? Please help me choose. My project is in React Native.
Firestore's collection provides an event listener when new data is added, updated, or deleted.
collection.onSnapshot(
  (querySnapshot) => {
    querySnapshot.docChanges().forEach((change) => {
      if (change.type === "added") {
        // Handle new added data
        // Play sound logic here
      }
      if (change.type === "modified") {
      }
      if (change.type === "removed") {
      }
    });
  },
  (err) => {
    console.log(`Encountered error: ${err}`);
  }
);
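As a sketch of the "play sound logic here" part, assuming expo-av and a notification.mp3 bundled with the app (both are assumptions, not from the original answer):

import { Audio } from 'expo-av';

// Preload once so the sound can start immediately when a document arrives
let notificationSound;
async function loadNotificationSound() {
  const { sound } = await Audio.Sound.createAsync(require('./assets/notification.mp3'));
  notificationSound = sound;
}

const unsubscribe = collection.onSnapshot((querySnapshot) => {
  querySnapshot.docChanges().forEach((change) => {
    if (change.type === 'added' && notificationSound) {
      // replayAsync restarts playback from the beginning each time
      notificationSound.replayAsync();
    }
  });
});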

Is it possible to download an audio file and play it using React Native Expo?

I have audio files hosted on a server that I'd like my app to access after authenticating. Users send a GET request which includes an authentication token, and the server returns the binary audio data.
As far as I can see there is no way to save this 'blob' as an audio file to the filesystem. The current implementation of fetch in react-native doesn't support blobs: link
... and the ideally-suited react-native-fetch-blob library isn't supported in expo either: link
Additionally I can see no way of streaming the audio file from the server. The included audio library with expo allows streaming of audio from a url (e.g. http://example.com/myaudio.mp3) however I can't see any way to attach an authorisation header to the request (e.g. "Authorization": "Bearer [my-token]").
Is there a way of achieving this, either by downloading and saving the audio blob, or streaming from a url with an authorisation header included in the request? I could detach my project from Expo but I'd like to leave that as a last-resort.
Yes, it is. You need to use the Audio module exposed by expo to do it. Below are the steps that you have to follow to load and play an audio file from a given URL. I've also copied over the code for my component that is doing the same for me.
Load Audio module exposed by expo
import { Audio } from 'expo'
Create a new sound Object from it
soundObject = new Audio.Sound()
Asynchronously load your file
await this.soundObject.loadAsync({ uri: this.props.source })
Once loaded play the loaded file using
this.soundObject.playAsync()
Below is a simple component that I wrote for doing it -
import React, { Component } from 'react';
import { View, TouchableNativeFeedback } from 'react-native';
import { Audio } from 'expo';

class AudioPlayer extends Component {
  constructor(props) {
    super(props);
    this.state = { isPlaying: false };
    this.loadAudio = this.loadAudio.bind(this);
    this.toggleAudioPlayback = this.toggleAudioPlayback.bind(this);
  }

  componentDidMount() {
    this.loadAudio();
  }

  componentWillUnmount() {
    this.soundObject.stopAsync();
  }

  async loadAudio() {
    this.soundObject = new Audio.Sound();
    try {
      await this.soundObject.loadAsync({ uri: this.props.source /* url for your audio file */ });
    } catch (e) {
      console.log('ERROR Loading Audio', e);
    }
  }

  toggleAudioPlayback() {
    this.setState({
      isPlaying: !this.state.isPlaying,
    }, () => (this.state.isPlaying
      ? this.soundObject.playAsync()
      : this.soundObject.stopAsync()));
  }

  render() {
    return (
      <TouchableNativeFeedback onPress={this.toggleAudioPlayback}>
        <View style={this.props.style}>
          {this.props.children}
        </View>
      </TouchableNativeFeedback>
    );
  }
}

export default AudioPlayer;
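A usage example for this component could look like the following (the URL is only a placeholder):

import React from 'react';
import { Text } from 'react-native';
import AudioPlayer from './AudioPlayer';

export default function Screen() {
  return (
    <AudioPlayer source="https://example.com/myaudio.mp3" style={{ padding: 16 }}>
      <Text>Play / Stop</Text>
    </AudioPlayer>
  );
}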
I figured it out. I should have loaded the sound in componentDidMount using async. In case someone meets this problem:
componentDidMount() {
  this.loadAudio();
}

// async function to load the audio
async loadAudio() {
  const { navigation } = this.props;
  const id = navigation.getParam("id");
  this.sound = new Audio.Sound();
  for (let i = 0; i < soundArray.length; i++) {
    if (soundArray[i].id === id) {
      this.currentSound = soundArray[i];
      console.log(this.currentSound);
      break;
    }
  }
  try {
    await this.sound.loadAsync({
      uri: this.currentSound.sound /* url for your audio file */
    });
    await this.sound.setOnPlaybackStatusUpdate(
      this._updateScreenForSoundStatus
    );
  } catch (e) {
    console.log("ERROR Loading Audio", e);
  }
}