Agora.io Record Audio from LiveStream - react-native

I am trying to make a group audio recorder with Agora.io, so I first need to create an empty .aac audio file so that I can record the audio into this file.
I use the react-native-fetch-blob library to handle the file system.
Here is my code for recording:
const handleAudio = async () => {
  const fs = RNFetchBlob.fs;
  const dirs = fs.dirs;
  if (!startAudio) {
    fs.createFile(dirs.DocumentDir + '/record.aac', 'foo', 'utf8').then(() => {
      _engine?.startAudioRecording(
        dirs.DocumentDir + '/record.aac',
        AudioSampleRateType.Type44100,
        AudioRecordingQuality.Medium,
      );
      setStartAudio(true);
    });
  } else {
    _engine?.stopAudioRecording();
  }
};
The problem is that the file 'record.aac' always stays the same: the Agora.io recorder never updates this new file, and its content remains 'foo'...

The startAudioRecording function expects a directory instead of a file.
Example: /sdcard/emulated/0/audio/aac.
It also returns a promise that you can check for the result.
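Based on that, a minimal sketch of what the call could look like (the directory path is just the example above, and whether any pre-creation is still needed is untested):
// Sketch only: pass a directory-style path and await the returned promise.
const startRecording = async () => {
  try {
    await _engine?.startAudioRecording(
      '/sdcard/emulated/0/audio/aac', // directory path from the example above
      AudioSampleRateType.Type44100,
      AudioRecordingQuality.Medium,
    );
    setStartAudio(true);
  } catch (err) {
    console.log('startAudioRecording failed', err);
  }
};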

How to use the sharp library to resize a Parse file img?

I have a Parse Cloud afterSave trigger from where I can access the obj, and inside the obj there is a field that holds a stored Parse file img.
I want to use sharp to resize it and save it in another field, but I'm struggling and getting an error when I use sharp. Here is a summary of the code I already have inside the cloud trigger:
let file = obj.get("photo");
sharp(file)
  .resize(250, 250)
  .then((data) => {
    console.log("img-----", data);
  })
  .catch((err) => {
    console.log("--Error--", err);
  });
After some research, I managed to figure out how to create a Parse Cloud afterSave trigger which resizes and then saves the img. I couldn't find much information on it, so I'll post my solution so others can use it if it's helpful.
Parse.Cloud.afterSave("Landmarks", async (req) => {
  const obj = req.object;
  const objOriginal = req.original;
  const file = obj.get("photo");
  const condition = file && !file.equals(objOriginal.get("photo"));
  if (condition) {
    Parse.Cloud.httpRequest({ url: file.url() })
      .then((res) => {
        sharp(res.buffer)
          .resize(250, 250, {
            fit: "fill",
          })
          .toBuffer()
          .then(async (dataBuffer) => {
            const data = { base64: dataBuffer.toString("base64") };
            const parseFile = new Parse.File("photo_thumbnail", data);
            await parseFile.save();
            await obj.save({ photo_thumb: parseFile });
          })
          .catch((err) => {
            console.log("--Sharp-Error--", err);
          });
      })
      .catch((err) => {
        console.log("--HTTP-Request-Error--", err);
      });
  } else {
    console.log("--Photo was deleted or did not change--");
  }
});
So to break this down a bit, what I did first was get the obj and the objOriginal so I can compare them and check for a change in a specific field. This condition is necessary since in my case I wanted to save the resized img back to Parse, which would otherwise cause an infinite loop.
After that I did a Parse.Cloud.httpRequest({ url: file.url() }).then(), which is the way I found to get the buffer from the photo. The buffer is stored inside res.buffer and we need it for sharp.
Next I use sharp(res.buffer), since sharp also accepts buffers, and resize it to the desired dimensions (I used the fit config for it). Then we turn the resulting img into another buffer using .toBuffer(). Furthermore, I use .then().catch() blocks, and if sharp is successful I turn the output buffer into a base64 string and pass it to Parse.File(); note that the specific syntax { base64: 'insert buffer here' } is important.
And finally I just save the file and the obj. Is this the best way to do it? Absolutely not, but it's the one I found that works. Another possible solution, instead of using buffers and base64, is to create a temporary dir where you save the images, use them, and then delete the directory. I tried this as well but had issues making it work (a rough sketch of that alternative is below).
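For reference, a rough sketch of that temporary-directory alternative inside the same trigger (untested; the tmp paths, the byte-array form of Parse.File, and the cleanup call are assumptions, not part of my working solution):
// Sketch only: write the resized image to a temp dir, load it as a byte array, then clean up.
const fs = require("fs");
const os = require("os");
const path = require("path");

const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "thumbs-"));
const tmpFile = path.join(tmpDir, "photo_thumbnail.jpg");

await sharp(res.buffer).resize(250, 250, { fit: "fill" }).toFile(tmpFile);

// Parse.File also accepts an array of byte values instead of { base64: ... }.
const bytes = Array.from(fs.readFileSync(tmpFile));
const parseFile = new Parse.File("photo_thumbnail", bytes);
await parseFile.save();
await obj.save({ photo_thumb: parseFile });

fs.rmSync(tmpDir, { recursive: true, force: true }); // delete the temp directory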

A better way to handle async saving to backend server and cloud storage from React Native app

In my React Native 0.63.2 app, after the user uploads images of artwork, the app will do 2 things:
1. save artwork record and image records on backend server
2. save the images into cloud storage
Those 2 things are related and both have to succeed together. Here is the code:
const clickSave = async () => {
  console.log("save art work");
  try {
    // save artwork to backend server
    let art_obj = {
      _device_id,
      name,
      description,
      tag: (tagSelected.map((it) => it.name)),
      note: '',
    };
    let img_array = [], oneImg;
    imgs.forEach(ele => {
      oneImg = {
        fileName: "f" + helper.genRandomstring(8) + "_" + ele.fileName,
        path: ele.path,
        width: ele.width,
        height: ele.height,
        size_kb: Math.ceil(ele.size / 1024),
        image_data: ele.image_data,
      };
      img_array.push(oneImg);
    });
    art_obj.img_array = [...img_array];
    art_obj = JSON.stringify(art_obj);
    // assemble images
    let url = `${GLOBAL.BASE_URL}/api/artworks/new`;
    await helper.getAPI(url, _result, "POST", art_obj); //<<==#1. send artwork and image record to backend server
    // save image to cloud storage
    var storageAccessInfo = await helper.getStorageAccessInfo(stateVal.storageAccessInfo);
    if (storageAccessInfo && storageAccessInfo !== "upToDate")
      // update the context value
      stateVal.updateStorageAccessInfo(storageAccessInfo);
    //
    let bucket_name = "oss-hz-1"; //<<<
    const configuration = {
      maxRetryCount: 3,
      timeoutIntervalForRequest: 30,
      timeoutIntervalForResource: 24 * 60 * 60
    };
    const STSConfig = {
      AccessKeyId: accessInfo.accessKeyId,
      SecretKeyId: accessInfo.accessKeySecret,
      SecurityToken: accessInfo.securityToken
    };
    const endPoint = 'oss-cn-hangzhou.aliyuncs.com'; //<<<
    const last_5_cell_number = _myself.cell.substring(myself.cell.length - 5);
    let filePath, objkey;
    img_array.forEach(item => {
      console.log("init sts");
      AliyunOSS.initWithSecurityToken(STSConfig.SecurityToken, STSConfig.AccessKeyId, STSConfig.SecretKeyId, endPoint, configuration);
      //console.log("before upload", AliyunOSS);
      objkey = `${last_5_cell_number}/${item.fileName}`; // virtual subdir and file name
      filePath = item.path;
      AliyunOSS.asyncUpload(bucket_name, objkey, filePath).then((res) => { //<<==#2 send images to cloud storage with callback. But no action required after success.
        console.log("Success : ", res); //<<==not really necessary to have console output
      }).catch((error) => {
        console.log(error);
      });
    });
  } catch (err) {
    console.log(err);
    return false;
  }
};
The concern with the code above is that those 2 async calls may take a long time to finish, while the user may be waiting for too long. After clicking the save button, the user may just want to move to the next page of the user interface and leave all of this behind. Is there a way to do so? Is removing await (#1) and the callback (#2) enough to achieve that?
If you want to do both tasks in the background, then you can't use await. I see that you are using await on sending the images to the backend, so remove that and use .then().catch(); you don't need to remove the callback on #2.
If you need to make sure #1 finishes before doing #2, then you will need to move the code for #2 into #1's promise resolution code (inside the .then()); see the sketch below.
Now, for catching errors: you will need some sort of error handling that alerts the user that an error occurred and that another upload should be triggered. One thing you can do is a red banner. I'm sure there are packages out there that can do that for you.
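A rough sketch of that idea (uploadImagesToOSS and showErrorBanner are hypothetical helpers wrapping the AliyunOSS loop and the error banner, not existing APIs):
// Sketch only: kick off the whole save without awaiting it, so the UI can move on.
const clickSave = () => {
  helper.getAPI(url, _result, "POST", art_obj)   // #1: backend save, no await
    .then(() => uploadImagesToOSS(img_array))    // #2: runs only after #1 resolves
    .catch((err) => {
      console.log(err);
      showErrorBanner("Saving failed, please try the upload again"); // e.g. a red banner
    });
  // return immediately; the user can navigate away while the work continues
};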

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
Request a MediaStream from the browser
Add it to the MediaRecorder
Add the recorded blobs to the MediaSource Buffer
The result looks very good, but there is one problem: there is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first bullet point as the problem.
Nevertheless, it seems like either the MediaRecorder or the MediaSource is adding a delay of about 3 seconds to the stream.
this.screenRecording = await mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);
mediaRecorder.ondataavailable = async (event: any) => {
  if (this.screenReceiving.readyState === 'open') {
    if (this.screenReceivingBuffer == null) {
      this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
    }
    if (!this.screenReceivingBuffer.updating) {
      this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
    }
  }
};
mediaRecorder.start(16);
The above code is just copied & pasted from the actual project, so please don't expect it to work as-is ;)
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?

setSinkId changes multiple audio outputs

Here is the problem:
First I enumerate all the devices that I have available in select elements:
navigator.mediaDevices.enumerateDevices()
When I change one output, the sound plays on the device that I choose.
HTMLMediaElement.setSinkId(deviceId)
Afterwards, if I play another audio element and change its output device (setSinkId), it also changes the first one to the last deviceId, so I have both sounds on the same device.
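For context, a minimal sketch of the setup being described (run inside an async function; the two audio elements and their ids are assumptions, not my actual code):
// Sketch only: two audio elements, each supposedly routed to its own output device.
const devices = await navigator.mediaDevices.enumerateDevices();
const outputs = devices.filter((d) => d.kind === 'audiooutput');

const audio1 = document.getElementById('audio1');
const audio2 = document.getElementById('audio2');

await audio1.setSinkId(outputs[0].deviceId);
audio1.play();

await audio2.setSinkId(outputs[1].deviceId); // expected: only audio2 moves,
audio2.play();                               // observed: audio1 ends up on the same device too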
Do I need the latest adapter.js version to handle this properly?
********* EDITED **********
Following the above comment, I tried Web Audio, but without success. With getUserMedia everything is fine.
navigator.getUserMedia({ audio: true, video: false },
  function (mediaStream) {
    // Create an audio context for the audio
    var ac = new (window.AudioContext || window.webkitAudioContext)();
    // Create a clone of the stream, if not the id of all the stream is default
    //var streamClone = stream.clone();
    var ss = ac.createMediaStreamSource(mediaStream);
    // Create a destination
    var sd = ac.createMediaStreamDestination();
    ss.connect(sd);
    element.srcObject = sd.stream;
    // Play the sound
    element.play();
    element.setSinkId(deviceId).then(function () {
      console.log('Set deviceId (' + deviceId + ') in the selected audio element');
    });
  },
  function (error) {
    console.log(error);
  }
);
But using my remote stream, I cannot get any sound:
var ac = new (window.AudioContext || window.webkitAudioContext)();
// Create a clone of the stream, if not the id of all the stream is default
var streamClone = stream.clone();
var ss = ac.createMediaStreamSource(stream);
// Create a destination
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
// Element is my HTMLMediaElement
element.srcObject = sd.stream;
// Play the sound
element.play();
element.setSinkId(deviceId).then(function () {
  console.log('Set deviceId (' + deviceId + ') in the selected audio element');
});
This is most likely caused by how Chrome renders audio. See here for a description, which also suggests using Web Audio to work around the problem.
adapter.js cannot fix this.

WebRTC mix local and remote audio streams and record

So far I've only found a way to record either the local or the remote stream using the MediaRecorder API, but is it possible to mix and record both streams and get a blob?
Please note it's audio streams only, and I don't want to mix/record on the server side.
I have an RTCPeerConnection as pc.
var local_stream = pc.getLocalStreams()[0];
var remote_stream = pc.getRemoteStreams()[0];
var audioChunks = [];
var rec = new MediaRecorder(local_stream);
rec.ondataavailable = e => {
  audioChunks.push(e.data);
  if (rec.state == "inactive") {
    // Play audio using new blob
  }
};
rec.start();
I even tried adding multiple tracks with the MediaStream API, but it still gives only the first track's audio. Any help or insight would be appreciated!
The WebAudio API can do mixing for you. Consider this code if you want to record all the audio tracks in the array audioTracks:
const ac = new AudioContext();
// WebAudio MediaStream sources only use the first track.
const sources = audioTracks.map(t => ac.createMediaStreamSource(new MediaStream([t])));
// The destination will output one track of mixed audio.
const dest = ac.createMediaStreamDestination();
// Mixing
sources.forEach(s => s.connect(dest));
// Record 10s of mixed audio as an example
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = e => console.log("Got data", e.data);
recorder.onstop = () => console.log("stopped");
setTimeout(() => recorder.stop(), 10000);
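Since the original question asks for a blob, here is a small follow-up sketch that collects the recorded chunks instead of just logging them (the chunk handling is illustrative, not part of the answer above):
// Sketch only: gather each dataavailable chunk and build a single Blob on stop.
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: recorder.mimeType });
  console.log("Mixed recording blob", blob); // play it, upload it, etc.
};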