Why does readFile of react-native-fs only read part of a text file? - react-native

I was trying to read a text file with the react-native-fs readFile function, but the resulting string always stops at a certain point.
I've changed the contents of the text file, and it still ends after 4094 chars.
await RNFS.readFile(p, 'utf8')
  .then((readresult) => {
    console.log(readresult);
    res = {success: true, contents: readresult};
  })
  .catch((err) => {
    console.log('ERROR' + err.message);
    res = {success: false, errorMsg: err.message};
  });
The log always shows only the first 4094 chars.

You could try reading the file piece by piece using read:
// We'll call this function multiple times to read the file in chunks.
// Feel free to add error handling and logging to this function.
const readChunk = (file, length, position) => {
  return RNFS.read(file, length, position, 'utf8');
};

// Set the number of characters to read in each chunk.
const length = 4094;
let chunk = '';
let total = '';

// Concatenate the chunks as you read them.
// When the next chunk is empty, you've reached the end of the file.
do {
  chunk = await readChunk(p, length, total.length);
  total += chunk;
} while (chunk.length > 0);
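Since the loop uses await, it has to live inside an async function. A minimal sketch of a complete helper (hypothetical name readWholeFile; assumes RNFS is imported from react-native-fs):
const readWholeFile = async (path) => {
  const length = 4094; // chunk size, as above
  let total = '';
  let chunk = '';
  do {
    // Caveat: read() positions are byte offsets, while total.length counts
    // UTF-16 code units, so this simple approach assumes mostly single-byte
    // (ASCII) content and can drift on multi-byte text.
    chunk = await RNFS.read(path, length, total.length, 'utf8');
    total += chunk;
  } while (chunk.length > 0);
  return total;
};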


Is there a way to modify the buffer of a png in a pdf document?

For a few days now I have been trying to extract images from a PDF, modify them, and then replace them in my PDF document. It works perfectly with JPEG images, but not with PNG: I'm able to modify the image and save it correctly, but when I change the buffer in the PDF, the image turns all black. Here's my code:
try {
  (async () => {
    let imageData = imgTab.type === 'jpg' ? imgTab.data : await savePng(imgTab);
    console.log(imgTab.type);
    let typeJimp = "image/png";
    if (imgTab.type === 'jpg') {
      typeJimp = "image/jpeg";
    }
    const img = await Jimp.read(imageData).then(async function (image) {
      image.color([
        { apply: 'hue', params: [90] }
      ]);
      // HERE, the image saved is okay! It is modified as I want it to be.
      image.write("testimage" + objIdx + "." + imgTab.type);
      let data = fs.readFileSync("testimage" + objIdx + "." + imgTab.type);
      let bufferData = Buffer.from(data);
      // The problem is when I do this to replace the buffer of my original image:
      pdfObject.contents = bufferData;
      fs.writeFileSync('client/public/test_modified.pdf', await pdfDoc.save());
    }).catch(function (err) {
      // return Promise.reject(err);
      console.log(err);
    });
  })();
} catch (e) {
  console.log(e);
}
The main code to extract the images can be found here: pdf-lib extract images
I don't know if there's something special to do to make the buffer work, but if you do, feel free to share :)
Thanks in advance!
Okay, so for now, I just use it this way:
const img = await Jimp.read(imageData).then(async function (image) {
  image.color([
    { apply: 'hue', params: [90] }
  ]);
  let data = await image.getBufferAsync('image/jpeg');
  var res = await pdfDoc.embedJpg(data);
  res.ref = pdfRef;
  var test = await res.embed();
  fs.writeFileSync('client/public/test_modified.pdf', await pdfDoc.save());
});
My pdfRef is the ref of the initial object I was trying to modify.
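The JPEG route likely works because PDF stores JPEG data essentially as-is under a DCTDecode filter, whereas pdf-lib stores PNG pixel data as a raw flate-compressed stream (with alpha in a separate SMask), so writing a .png file's bytes into contents corrupts the stream. A minimal sketch of the workaround as a reusable helper (hypothetical name replaceImageWithJpeg; assumes pdfDoc is a pdf-lib PDFDocument, image is a Jimp image, and pdfRef is the ref of the image object being replaced, as above):
const replaceImageWithJpeg = async (pdfDoc, pdfRef, image) => {
  // Re-encode the (possibly PNG) Jimp image as JPEG data.
  const data = await image.getBufferAsync('image/jpeg');
  // Embed it, then point the embedded image at the original ref so it
  // overwrites the old image object instead of adding a new one.
  const embedded = await pdfDoc.embedJpg(data);
  embedded.ref = pdfRef;
  await embedded.embed();
};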

I am trying to share a file over WebRTC, but after some time it stops and logs "RTCDataChannel send queue is full"

let file = fileUpload.files[0];
let offset = 0;
let chunkSize = 1024 * 1024 * 16;
file.arrayBuffer().then((buffer) => {
  while (buffer.byteLength) {
    const chunk = buffer.slice(0, chunkSize);
    buffer = buffer.slice(chunkSize, buffer.byteLength);
    dataChannel.send(chunk);
  }
});
It works fine for small files but stops with large files.
A DataChannel has a bufferedAmount property which tells you how many bytes are still waiting to be sent. It also has a property called bufferedAmountLowThreshold.
The RTCDataChannel property bufferedAmountLowThreshold is used to specify the number of bytes of buffered outgoing data that is considered "low."
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel/bufferedAmountLowThreshold
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel/bufferedAmount
You can keep sending data as normal as long as bufferedAmount is below bufferedAmountLowThreshold. Once it grows larger, stop queuing more data until you receive a bufferedamountlow event.
const send = () => {
  while (buffer.byteLength) {
    if (dataChannel.bufferedAmount > dataChannel.bufferedAmountLowThreshold) {
      dataChannel.onbufferedamountlow = () => {
        dataChannel.onbufferedamountlow = null;
        send();
      };
      return;
    }
    const chunk = buffer.slice(0, chunkSize);
    buffer = buffer.slice(chunkSize, buffer.byteLength);
    dataChannel.send(chunk);
  }
};
send();
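A short usage sketch tying the pieces together (the threshold and chunk size are illustrative assumptions, not from the original post): bufferedAmountLowThreshold defaults to 0, so set it explicitly, and keep chunks small; 16 KiB messages are the widely interoperable choice, and many implementations cap a single send() at around 256 KiB.
dataChannel.bufferedAmountLowThreshold = 64 * 1024; // fire bufferedamountlow below 64 KiB
chunkSize = 16 * 1024; // 16 KiB chunks instead of 16 MiB
file.arrayBuffer().then((fileBuffer) => {
  buffer = fileBuffer; // the buffer consumed by send()
  send();
});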

How to know when monitorCharacteristic is done with react-native-ble-plx?

I use react-native-ble-plx in my project to communicate with my tool, and I'm trying to monitor one of its characteristics.
The result is rather long, so the monitorCharacteristicForService callback keeps firing until the tool has sent everything, but I don't know how to tell when it is done.
Here is my monitoring function:
const scanMyDevice = async (device) => {
  const serviceUUID = '569a1-****'
  const writeUUID = '569a2-****'
  const readUUID = '569a2-****'
  await device.discoverAllServicesAndCharacteristics()
  await device.characteristicsForService(serviceUUID)
  await device.writeCharacteristicWithResponseForService(serviceUUID, writeUUID, 'IzEwCg==')
  var tempTrame = ''
  const subscription = device.monitorCharacteristicForService(serviceUUID, readUUID, (error, a) => {
    if (error) {
      console.log(error)
      console.log(error.message)
    }
    else {
      tempTrame = tempTrame + base64.decode(a.value)
      setTrameResult(tempTrame)
      console.log('// INSIDE ///', tempTrame)
    }
  }, 'idMonitor')
  console.log('DONE')
  return () => {
    subscription.remove();
  }
}
In my code, 'DONE' is printed before '// INSIDE ///'.
I tried to put console.log('DONE') inside the returned function, but then it was never printed.
I tried to put .then(console.log('DONE')) just before the return, but it gave me an error saying it has nothing to do there...
Can someone help me please?
Finally, I worked around the problem by defining an ending pattern for the info I wanted to send/read via BLE, and then I wrote a loop that keeps reading until the ending pattern is encountered:
// (inside the monitor callback; endTrame is the ending pattern)
const endTrame = '#109F'
tempTrame = tempTrame + base64.decode(a.value)
let finalTrame = ''
// Check if my variable has the end symbols (to be sure that I have all the
// information) and if it is the first time I get this information.
if (tempTrame.includes(endTrame)) {
  stop = true
  // Strip the end symbols.
  tempTrame = tempTrame.replace(endTrame, '')
  finalTrame = tempTrame
}
console.log(finalTrame)
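A minimal sketch of the same idea wrapped in a Promise (hypothetical helper name readFullTrame; assumes the device terminates its payload with '#109F' as above), so the caller can simply await the complete frame and know the monitoring is done:
const readFullTrame = (device, serviceUUID, readUUID) =>
  new Promise((resolve, reject) => {
    let tempTrame = ''
    const subscription = device.monitorCharacteristicForService(serviceUUID, readUUID, (error, a) => {
      if (error) {
        subscription.remove()
        reject(error)
        return
      }
      tempTrame = tempTrame + base64.decode(a.value)
      if (tempTrame.includes('#109F')) {
        subscription.remove()
        resolve(tempTrame.replace('#109F', ''))
      }
    }, 'idMonitor')
  })

// Usage: 'DONE' now really means done.
// const trame = await readFullTrame(device, serviceUUID, readUUID)
// console.log('DONE', trame)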

HCL Domino AppDevPack - Problem with writing Rich Text

I use the code proposed as an example in the documentation for Domino AppDev Pack 1.0.4; the only difference is that I read a text file (body.txt) into a buffer, this file containing only simple long text (40 KB).
When it is executed, the document is created in the database and the rest of the code does not return an error, but in the end the rich text field is not added to the document.
Here is the response returned:
response: {"fields":[{"fieldName":"Body","unid":"8EA69129BEECA6DEC1258554002F5DCD","error":{"name":"ProtonError","code":65577,"id":"RICH_TEXT_STREAM_CORRUPT"}}]}
My goal is to write very long text (more than 64 KB) into a rich text field. In the example I use a text file for the buffer, but later it could be something like const buffer = Buffer.from('very long text ...').
Is this the right way, or does it have to be done differently?
I'm using a Windows system with IBM Domino (R) Server (64 Bit), Release 10.0.1FP4 and AppDevPack 1.0.4.
Thank you in advance for your help.
Here's the code:
const write = async (database) => {
  let writable;
  let result;
  try {
    // Create a document with subject write-example-1 to hold rich text.
    const unid = await database.createDocument({
      document: {
        Form: 'RichDiscussion',
        Title: 'write-example-1',
      },
    });
    writable = await database.bulkCreateRichTextStream({});
    result = await new Promise((resolve, reject) => {
      // Set up event handlers.
      // Reject the Promise if there is a connection-level error.
      writable.on('error', (e) => {
        reject(e);
      });
      // Return the response from writing when resolving the Promise.
      writable.on('response', (response) => {
        console.log("response: " + JSON.stringify(response));
        resolve(response);
      });
      // Indicates which document and item name to use.
      writable.field({ unid, fieldName: 'Body' });
      let offset = 0;
      // Assume for purposes of this example that we buffer the entire file.
      const buffer = fs.readFileSync('/driver/body.txt');
      // When writing large amounts of data, it is necessary to
      // wait for the client side to complete the previous write
      // before writing more data.
      const writeData = () => {
        let draining = true;
        while (offset < buffer.length && draining) {
          const remainingBytes = buffer.length - offset;
          let chunkSize = 16 * 1024;
          if (remainingBytes < chunkSize) {
            chunkSize = remainingBytes;
          }
          draining = writable.write(buffer.slice(offset, offset + chunkSize));
          offset += chunkSize;
        }
        if (offset < buffer.length) {
          // Buffer is not draining. Whenever the drain event is emitted,
          // call this function again to write more data.
          writable.once('drain', writeData);
        }
      };
      writeData();
      writable = undefined;
    });
  } catch (e) {
    console.log(`Unexpected exception ${e.message}`);
  } finally {
    if (writable) {
      writable.end();
    }
  }
  return result;
};
As of AppDev Pack 1.0.4, the rich text stream only accepts data in valid rich text CD format, in the LMBCS character set; that is why writing a buffer of plain text such as body.txt fails with RICH_TEXT_STREAM_CORRUPT. We are currently working on a library to help you write valid rich text data to the stream.
I'd love to hear more about your use cases, and we're excited you're already poking around the feature! If you can join the OpenNTF Slack channel, I usually hang out there.

How to create an AudioBuffer from a Blob?

I have an audio file/blob that has been created using the MediaRecorder API:
let recorder = new MediaRecorder(this.stream)
let data = [];
recorder.ondataavailable = event => data.push(event.data);
and then later, when the recording is finished:
let superBlob = new Blob(data, { type: "video/webm" });
How can I use this blob to create an AudioBuffer? I need to either:
Transform the Blob object into an ArrayBuffer, which I could use with AudioContext.decodeAudioData (returns an AudioBuffer), or
Transform the Blob object into a Float32Array, which I could copy into the AudioBuffer with AudioBuffer.copyToChannel().
Any tips on how to achieve that are appreciated. Cheers!
To convert a Blob object to an ArrayBuffer, use FileReader.readAsArrayBuffer:
let fileReader = new FileReader();
let arrayBuffer;
fileReader.onloadend = () => {
  arrayBuffer = fileReader.result;
};
fileReader.readAsArrayBuffer(superBlob);
The accepted answer is great, but it only gives an ArrayBuffer, which is not an AudioBuffer. You need to use the audio context to convert the ArrayBuffer into an AudioBuffer.
const audioContext = new AudioContext()
const fileReader = new FileReader()

// Set up file reader on loaded end event
fileReader.onloadend = () => {
  const arrayBuffer = fileReader.result as ArrayBuffer

  // Convert array buffer into audio buffer
  audioContext.decodeAudioData(arrayBuffer, (audioBuffer) => {
    // Do something with audioBuffer
    console.log(audioBuffer)
  })
}

// Load blob
fileReader.readAsArrayBuffer(blob)
I wish the answer had included an example using decodeAudioData. I had to find it somewhere else, and since this is the top search result for "Blob to Audio Buffer", I thought I would add some helpful information for the next person who comes down this rabbit hole.
All the answers are true. However, in modern web browsers like Chrome 76+ and Firefox 69+, there is a much simpler way: Blob.arrayBuffer().
Since Blob.arrayBuffer() returns a Promise, you can do either
superBlob.arrayBuffer().then(arrayBuffer => {
  // Do something with arrayBuffer
});
or
async function doSomethingWithAudioBuffer(blob) {
  var arrayBuffer = await blob.arrayBuffer();
  // Do something with arrayBuffer
}
A simplified version using an async function:
async function blobToAudioBuffer(audioContext, blob) {
  const arrayBuffer = await blob.arrayBuffer();
  return await audioContext.decodeAudioData(arrayBuffer);
}
I put audioContext as a param because I recommend reusing instances.
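For example (a small usage sketch, assuming the superBlob from the question and a shared AudioContext instance):
const sharedCtx = new AudioContext();
blobToAudioBuffer(sharedCtx, superBlob).then((audioBuffer) => {
  console.log(audioBuffer.duration); // length of the decoded audio in seconds
});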
Both answers are true, with some minor changes. This is the function I finally used:
function convertBlobToAudioBuffer(myBlob) {
  const audioContext = new AudioContext();
  const fileReader = new FileReader();

  fileReader.onloadend = () => {
    let myArrayBuffer = fileReader.result;
    audioContext.decodeAudioData(myArrayBuffer, (audioBuffer) => {
      // Do something with audioBuffer
    });
  };

  // Load blob
  fileReader.readAsArrayBuffer(myBlob);
}