How to export an Expo audio recording as .mp3 - react-native

I am using the Audio component of Expo and need to export the recording in MP3 format. I am encountering this error: Error Domain=NSOSStatusErrorDomain Code=1718449215 "(null)"
This is my code:
const recording = new Audio.Recording();
await recording.prepareToRecordAsync({
  isMeteringEnabled: true,
  android: {
    extension: '.m4a',
    outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
    audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
  },
  ios: {
    extension: '.mp3',
    outputFormat: Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEGLAYER3,
    audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MEDIUM,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
});

Based on this documentation (found via the Expo docs), iOS does not support MP3 recording:
...neither MP3 nor AAC recording is available. This is due to the high CPU overhead, and consequent battery drain, of these formats.
If you're curious about the original error, you can find more info here about the corresponding native code that likely produced it.
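Given that limitation, a common workaround is to record AAC audio in an .m4a container on both platforms and convert to MP3 server-side if MP3 is truly required. A minimal sketch, assuming the expo-av constants shown below exist on the Audio module you already import (the module is passed in as a parameter so the helper stays testable outside of Expo):

```javascript
// Build recording options that iOS can actually honor: AAC in .m4a
// on both platforms, instead of the unsupported MP3 output format.
function makeRecordingOptions(Audio) {
  return {
    isMeteringEnabled: true,
    android: {
      extension: '.m4a',
      outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
      audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
      sampleRate: 44100,
      numberOfChannels: 2,
      bitRate: 128000,
    },
    ios: {
      extension: '.m4a',
      outputFormat: Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC,
      audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MEDIUM,
      sampleRate: 44100,
      numberOfChannels: 2,
      bitRate: 128000,
    },
  };
}
```

You would then call `await recording.prepareToRecordAsync(makeRecordingOptions(Audio));` in place of the inline options object.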

Related

imagemin-pngquant cannot execute binary

I am trying to use imagemin with imagemin-pngquant to reduce images in an AWS Lambda function. Here is the code that I've made using the docs as a reference:
(image.data is the Buffer I'm creating from an axios .get() with arrayBuffer as the responseType)
// We want to minify the image before we upload it
let converted_image = await imagemin.buffer(await image.data, {
  plugins: [
    imageminPngquant({
      quality: [0.55, 0.65],
      speed: 8,
      strip: true
    })
  ],
});
However, when I try to use the pngquant plugin to compress PNGs, it does not work. Here is the exact error I'm getting:
Error: Error: /var/task/node_modules/pngquant-bin/vendor/pngquant: /var/task/node_modules/pngquant-bin/vendor/pngquant: cannot execute binary file

Expo-AV not loading audio from server on iOS

I am building an audio player for my react native expo project. I am using expo-av to play the sound that I am pulling from my AWS s3 bucket. I am using expo SDK 44. Everything works fine on Android, but I am receiving an error on iOS:
[Unhandled promise rejection: Error: This media format is not supported. - The AVPlayerItem instance has failed with the error code -11828 and domain "AVFoundationErrorDomain".]
- I have tried this with both .mp3 and .m4a files
- everything works on Android
- local files work on iOS (require('./myaudiotrack.mp3')) but not files from my bucket
- this seems to be a problem with the URL returned from the S3 bucket; it does work with a URL that has an audio file extension
async function PlayPause() {
  await Audio.setAudioModeAsync({
    staysActiveInBackground: true,
    interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    shouldDuckAndroid: false,
    playThroughEarpieceAndroid: false,
    allowsRecordingIOS: false,
    interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
    playsInSilentModeIOS: true,
  });
  const { sound } = await Audio.Sound.createAsync(
    { uri: AudioUri }, // <-- this works on Android, not iOS
    // require('../assets/zelda.mp3'), // <-- this works
    {
      shouldPlay: true,
      rate: 1.0,
      shouldCorrectPitch: false,
      volume: 1.0,
      isMuted: false,
      isLooping: false,
    },
  );
  setSound(sound);
  await sound.playAsync();
}
I was able to fix the problem by declaring a file type using contentType: "audio/mp3" during the bucket upload.
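To make that fix concrete, here is a small sketch of a helper that picks the Content-Type from the file extension at upload time. The `audio/mp3` value comes from the fix above; the other extension-to-type entries are my assumptions:

```javascript
// Map an audio file name to a Content-Type header value, so players
// (notably AVPlayer on iOS) can identify the format from the response
// headers even when the object URL lacks a file extension.
function audioContentType(filename) {
  const dot = filename.lastIndexOf('.');
  const ext = dot === -1 ? '' : filename.slice(dot).toLowerCase();
  const types = {
    '.mp3': 'audio/mp3',
    '.m4a': 'audio/m4a',
    '.wav': 'audio/wav',
    '.ogg': 'audio/ogg',
  };
  return types[ext] || 'application/octet-stream';
}
```

With the AWS SDK the upload parameter is capitalized, e.g. `s3.putObject({ Bucket, Key, Body, ContentType: audioContentType(Key) })`; the bucket and key names here are placeholders.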

Video durationLimit is not working on iOS 14, and image editing is also not working, in React Native

Plugin URL
The code below doesn't apply the durationLimit filter.
Image Picker version: 4.6.0
React Native version: 0.66.3
Platform: iOS
Development Operating System: macOS
Dev tools: Xcode, iOS
const options = {
  mediaType: 'video',
  videoQuality: 'medium',
  durationLimit: 30,
  allowsEditing: true,
};
launchImageLibrary(options, async res => {
});
durationLimit only works for videos taken from the camera.
For a video duration limit on iOS:
ImagePicker.launchCamera(
  {
    mediaType: 'video',
    durationLimit: 30,
  },
);
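Since the library picker ignores durationLimit, one workaround is to validate the selected video yourself. A sketch, assuming the picker response includes each asset's duration in seconds (react-native-image-picker v4 reports `asset.duration` for videos):

```javascript
// Return true when a picked video asset is longer than the allowed limit.
function exceedsDurationLimit(asset, limitSeconds) {
  // Missing duration metadata is treated as acceptable rather than rejected.
  return typeof asset.duration === 'number' && asset.duration > limitSeconds;
}
```

In the launchImageLibrary callback you could then reject over-long picks with `if (exceedsDurationLimit(res.assets[0], 30)) { /* show an alert */ }`.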

electron: app icon resolution goes down after running electron-builder

I'm building an Electron app. The image above on the left is how my icon looks when I'm testing the app with Electron, and on the right is how it looks after I've compiled it into an executable using "yarn dist".
(These are screenshots of the icons from the Windows taskbar at the bottom of the screen.)
It seems that the resolution of the icon in the executable is worse than in the raw Electron app. The file itself is quite high resolution:
The icon is called during development by the "main.js" file:
mainWindow = new BrowserWindow({
  // frame: false,
  title: "Collector: Kitten " + app.getVersion(),
  icon: __dirname + "/logos/collector.png", // <-- This line
  webPreferences: {
    // contextIsolation: true, // has to be false with the way I've designed this
    enableRemoteModule: true,
    preload: path.join(__dirname, 'preload.js'),
    worldSafeExecuteJavaScript: true
  }
});
And it is identified by the builder in the package.json "win" section:
"build": {
  "win": {
    "target": "nsis",
    "icon": "logos/collector.png"
  }
}
Is there a way to prevent this loss of resolution when using electron-builder?
I solved this by converting the .png into a .ico file and using that instead.
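With that change, the build section points at the .ico instead; a sketch (electron-builder expects a Windows .ico that embeds multiple sizes, ideally up to 256x256, so the conversion tool should include all of them rather than a single small frame):

```json
"build": {
  "win": {
    "target": "nsis",
    "icon": "logos/collector.ico"
  }
}
```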

Google Speech API Empty Answer

For testing I used the Google example of the Speech API (https://cloud.google.com/speech-to-text/docs/reference/rest/v1/speech/recognize).
There I tried a .ogg file:
This one (https://www.dropbox.com/s/lw66x3g143mtnsl/SpeechToText.ogg?dl=0)
I converted the audio file to 16000 Hz.
Here is the full request:
{
  "audio": {
    "content": " content "
  },
  "config": {
    "encoding": "OGG_OPUS",
    "languageCode": "de-DE",
    "sampleRateHertz": 16000
  }
}
I then converted the audio file with a Base64 encoder (https://www.giftofspeed.com/base64-encoder/), as the content is too long to include here.
Now my problem: I just get an empty answer. The response code is 200, but nothing else.
Thanks for all answers !
The .ogg file at the URL you referenced was encoded with the Vorbis codec, not Opus. You can use opus-tools to encode your audio as an Opus file before you provide it to Google's service.
Here's the debugging I used to identify your file as Vorbis:
opusinfo
$ opusinfo SpeechToText.ogg
Processing file "SpeechToText.ogg"...
Use ogginfo for more information on this file.
New logical stream (#1, serial: ffe6c0ca): type Vorbis
Logical stream 1 ended
ffmpeg
$ ffmpeg -i SpeechToText.ogg
ffmpeg version 3.4.2 Copyright (c) 2000-2018 the FFmpeg developers
Input #0, ogg, from 'SpeechToText.ogg':
Duration: 00:00:03.41, start: 0.000000, bitrate: 116 kb/s
Stream #0:0: Audio: vorbis, 16000 Hz, stereo, fltp, 160 kb/s
Metadata:
ENCODER : Lavc58.18.100 libvorbis
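The same identification can be done programmatically before re-encoding with opus-tools. A sketch in Node that reads the first Ogg page's payload and checks for the "OpusHead" or "\x01vorbis" magic bytes (offsets follow the Ogg page header layout; a real file would be read from disk with fs.readFileSync first):

```javascript
// Identify the codec of an Ogg container from its first page.
// Ogg page header: "OggS" (4) + version (1) + header type (1) +
// granule position (8) + serial (4) + sequence (4) + checksum (4) +
// segment count (1) + segment table (n), then the first packet.
function oggCodec(buf) {
  if (buf.length < 28 || buf.toString('ascii', 0, 4) !== 'OggS') return 'not-ogg';
  const nSegments = buf[26];
  const payload = buf.subarray(27 + nSegments);
  if (payload.toString('ascii', 0, 8) === 'OpusHead') return 'opus';
  if (payload[0] === 1 && payload.toString('ascii', 1, 7) === 'vorbis') return 'vorbis';
  return 'unknown';
}
```

A Vorbis result means the file must be re-encoded (e.g. decode with ffmpeg, then encode with opusenc from opus-tools) before OGG_OPUS will work.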