HCL Domino AppDevPack - writeAttachments - file-upload

The new V1.0.2 adds the capability to upload attachments to a Domino document. My upload code works as long as I use files <= 48 KB. As soon as I try to upload a larger file, the upload takes place and I find an attachment with the right size in the Domino document, but the file is corrupt!
Here's my code (it corresponds to the example code for larger files from the AppDev Pack documentation):
for (var x = 0; x < files["tskFile"].length; x++) {
  let sFilename = files["tskFile"][x].originalname;
  let sPath = files["tskFile"][x].path;
  let buffer = fs.readFileSync(sPath);
  const writable = await db.bulkCreateAttachmentStream({});
  writable.on('error', e => {
    // An error occurred and the stream is closed
    console.error("Error on write ", e);
  });
  writable.on('response', response => {
    // The attachment content was written to the document and a
    // response has arrived from the server
    console.log(">> File " + sFilename + " saved to doc ");
  });
  let error;
  // Write the image in n chunks
  let offset = 0;
  const writeRemaining = () => {
    if (error) {
      return;
    }
    let draining = true;
    while (offset < buffer.length && draining) {
      const remainingBytes = buffer.length - offset;
      let chunkSize = 16 * 1024;
      if (remainingBytes < chunkSize) {
        chunkSize = remainingBytes;
      }
      const chunk = new Uint8Array(
        buffer.slice(offset, offset + chunkSize),
      );
      draining = writable.write(chunk);
      offset += chunkSize;
    }
    if (offset < buffer.length) {
      // Buffer is not draining. Write some more once it drains.
      writable.once('drain', writeRemaining);
    } else {
      writable.end();
    }
  };
  writable.file({
    unid: unid,
    fileName: sFilename,
  });
  writeRemaining();
} // end for all attachments
Here are my notes.ini variables for my server:
PROTON_MAX_WRITE_ATTACHMENT_MB=30
PROTON_MAX_ATTACHMENT_CHUNK_KB=50
PROTON_MIN_ATTACHMENT_CHUNK_KB=8
Is this my error, or a bug in the AppDevPack? Has anyone tried this new feature?
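Aside, not part of the original post: the loop above starts a new attachment stream per file without waiting for the previous one to finish. One way to serialize the uploads is to wrap each one in a Promise that resolves on the 'response' event, as the rich text example further down in this document does. A minimal sketch, assuming the same db, unid and files objects as above:
const uploadOne = async (sFilename, buffer) => {
  // Open one attachment stream per file and wait for the server response
  // before resolving, so the caller can await each upload in turn.
  const writable = await db.bulkCreateAttachmentStream({});
  return new Promise((resolve, reject) => {
    writable.on('error', reject);
    writable.on('response', resolve);
    writable.file({ unid: unid, fileName: sFilename });
    let offset = 0;
    const writeRemaining = () => {
      let draining = true;
      while (offset < buffer.length && draining) {
        const chunkSize = Math.min(16 * 1024, buffer.length - offset);
        draining = writable.write(buffer.slice(offset, offset + chunkSize));
        offset += chunkSize;
      }
      if (offset < buffer.length) {
        writable.once('drain', writeRemaining);
      } else {
        writable.end();
      }
    };
    writeRemaining();
  });
};
// Usage inside the loop: await uploadOne(sFilename, fs.readFileSync(sPath));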

I am able to reproduce a similar issue with Proton on 64-bit Windows. I cannot reproduce with Proton running on Linux. I am using different client code than you are, but I'm 99% sure this is a Windows-only bug in Proton. We will update this answer when we have more information. Meanwhile, are you able to try Proton on Linux?

We have found a fix and it will be included in our next drop. Thank you for this report!

Related

I am trying to share a file over WebRTC, but after some time it stops and logs "RTCDataChannel send queue is full".

let file = fileUpload.files[0];
let offset = 0;
let chunkSize = 1024 * 1024 * 16;
file.arrayBuffer().then((buffer) => {
  while (buffer.byteLength) {
    const chunk = buffer.slice(0, chunkSize);
    buffer = buffer.slice(chunkSize, buffer.byteLength);
    dataChannel.send(chunk);
  }
});
It works fine for small files but stops with bigger files.
A DataChannel has a bufferedAmount property which tells you how many bytes are still waiting to be sent. It also has a property called bufferedAmountLowThreshold.
The RTCDataChannel property bufferedAmountLowThreshold is used to specify the number of bytes of buffered outgoing data that is considered "low."
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel/bufferedAmountLowThreshold
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel/bufferedAmount
You could keep sending data as normal as long as bufferedAmount is below bufferedAmountLowThreshold. Once it grows larger, stop queuing more data until you receive a bufferedamountlow event.
const send = () => {
  while (buffer.byteLength) {
    if (dataChannel.bufferedAmount > dataChannel.bufferedAmountLowThreshold) {
      dataChannel.onbufferedamountlow = () => {
        dataChannel.onbufferedamountlow = null;
        send();
      };
      return;
    }
    const chunk = buffer.slice(0, chunkSize);
    buffer = buffer.slice(chunkSize, buffer.byteLength);
    dataChannel.send(chunk);
  }
};
send();
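As a usage note on top of that answer: bufferedAmountLowThreshold defaults to 0, so it is worth setting it explicitly before starting the transfer, and keeping chunks small enough to stay under per-message size limits. The values below are illustrative assumptions, not taken from the posts above:
dataChannel.bufferedAmountLowThreshold = 64 * 1024; // fire bufferedamountlow once the queue drops below 64 KB
chunkSize = 16 * 1024; // smaller chunks keep each send() well under typical message-size limits
send();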

HCL Domino AppDevPack - Problem with writing Rich Text

I am using the code proposed as an example in the documentation for Domino AppDev Pack 1.0.4. The only difference is that I read a text file (body.txt) into a buffer; the file contains only simple long text (about 40 KB).
When it is executed, the document is created in the database and the rest of the code does not return an error.
But in the end, the rich text field is not added to the document.
Here is the response returned:
response: {"fields":[{"fieldName":"Body","unid":"8EA69129BEECA6DEC1258554002F5DCD","error":{"name":"ProtonError","code":65577,"id":"RICH_TEXT_STREAM_CORRUPT"}}]}
My goal is to write very long text (more than 64 KB) into a rich text field. In this example I use a text file for the buffer, but later it could be something like const buffer = Buffer.from('very long text ...').
Is this the right way, or does it have to be done differently?
I'm using a Windows system with IBM Domino (r) Server (64 Bit), Release 10.0.1FP4 and AppDevPack 1.0.4.
Thank you in advance for your help.
Here's the code:
const write = async (database) => {
  let writable;
  let result;
  try {
    // Create a document with subject write-example-1 to hold rich text
    const unid = await database.createDocument({
      document: {
        Form: 'RichDiscussion',
        Title: 'write-example-1',
      },
    });
    writable = await database.bulkCreateRichTextStream({});
    result = await new Promise((resolve, reject) => {
      // Set up event handlers.
      // Reject the Promise if there is a connection-level error.
      writable.on('error', (e) => {
        reject(e);
      });
      // Return the response from writing when resolving the Promise.
      writable.on('response', (response) => {
        console.log("response: " + JSON.stringify(response));
        resolve(response);
      });
      // Indicates which document and item name to use.
      writable.field({ unid, fieldName: 'Body' });
      let offset = 0;
      // Assume for purposes of this example that we buffer the entire file.
      const buffer = fs.readFileSync('/driver/body.txt');
      // When writing large amounts of data, it is necessary to
      // wait for the client-side to complete the previous write
      // before writing more data.
      const writeData = () => {
        let draining = true;
        while (offset < buffer.length && draining) {
          const remainingBytes = buffer.length - offset;
          let chunkSize = 16 * 1024;
          if (remainingBytes < chunkSize) {
            chunkSize = remainingBytes;
          }
          draining = writable.write(buffer.slice(offset, offset + chunkSize));
          offset += chunkSize;
        }
        if (offset < buffer.length) {
          // Buffer is not draining. Whenever the drain event is emitted
          // call this function again to write more data.
          writable.once('drain', writeData);
        }
      };
      writeData();
      writable = undefined;
    });
  } catch (e) {
    console.log(`Unexpected exception ${e.message}`);
  } finally {
    if (writable) {
      writable.end();
    }
  }
  return result;
};
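For context, here is a usage sketch showing how a function like write() can be driven from the domino-db module. The host name, port and database file path are placeholders, not values from the original post:
const { useServer } = require('@domino/domino-db');

useServer({
  hostName: 'proton.example.com',   // placeholder Proton host
  connection: { port: '3002' },     // placeholder Proton port
}).then(async (server) => {
  const database = await server.useDatabase({ filePath: 'richtext.nsf' }); // placeholder NSF
  const response = await write(database);
  console.log('write() resolved with ' + JSON.stringify(response));
});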
As of AppDev Pack 1.0.4, the rich text stream only accepts data that is already in valid rich text CD format, in the LMBCS character set. We are currently working on a library to help you write valid rich text data to the stream.
I'd love to hear more about your use cases, and we're excited you're already poking around the feature! If you can join the OpenNTF Slack channel, I usually hang out there.

PhantomJS crashes after 150-180 urls

My script works fine so far, loading every page from the text file line by line in sequential order (page.open is asynchronous and the page object is global, so it gets overwritten on new requests; running multiple page.open() calls at once is a big mess), matching every request for a specific domain and printing JSON values from it.
But if I use a .txt file with more than ~150 links, it just crashes every time, mostly with no error message and with no crash dump, like this:
PhantomJS has crashed. Please read the crash reporting guide at
http://phantomjs.org/crash-reporting.html and file a bug report at
https://github.com/ariya/phantomjs/issues/new.
Unfortunately, no crash dump is available.
(Is %TEMP% (C:\Users\XXX\AppData\Local\Temp) a directory you cannot write?)
I can reproduce it easily if I run the script multiple times; it doesn't matter whether I run them at once or one after another.
How can I prevent the crashes? My script is useless if PhantomJS can't handle that.
But sometimes I get a crash dump:
PhantomJS has crashed. Please read the crash reporting guide at
http://phantomjs.org/crash-reporting.html and file a bug report at
https://github.com/ariya/phantomjs/issues/new.
Please attach the crash dump file:
C:\Users\XXX\AppData\Local\Temp\a4fd6af6-1244-44d3-8938-3aabe298c2fa.dmp
https://www.dropbox.com/s/i3qi5ed33mbblie/500%20links%20-a4fd6af6-1244-44d3-8938-3aabe298c2fa.dmp?dl=1
https://www.dropbox.com/s/najdz9fhdexvav1/500%20links-%2095ebab5c-859b-40e9-936b-84967471779b.dmp?dl=1
https://www.dropbox.com/s/1d2t8rtev85yf96/500%20links%20-%20d450c8e1-9728-41c7-ba52-dfef466f0222.dmp?dl=1
And in rare cases I even get an error message; Process Explorer says the process has a maximum of 21 threads at once:
QThread::start: Failed to create thread ()
console.log('Hello, world!');
var fs = require('fs');
var stream = fs.open('500sitemap.txt', 'r');
var webPage = require('webpage');
var i = 1;
var hasFound = Array();
var hasonLoadFinished = Array();

function handle_page(line) {
    var page = webPage.create();
    page.settings.loadImages = false;
    page.open(line, function() {});
    page.onResourceRequested = function(requestData, request) {
        var match = requestData.url.match(/example.de\/ac/g);
        if (match != null) {
            hasFound[line] = true;
            var targetString = decodeURI(JSON.stringify(requestData.url));
            var klammerauf = targetString.indexOf("{");
            var jsonobjekt = targetString.substr(klammerauf, (targetString.indexOf("}") - klammerauf) + 1);
            var targetJSON = (decodeURIComponent(jsonobjekt));
            var t = JSON.parse(targetJSON);
            console.log(i + " " + t + " " + t['id']);
            request.abort();
        } else {
            //hasFound = false;
            return;
        }
    };
    page.onLoadFinished = function(status) {
        if (!hasonLoadFinished[line]) {
            hasonLoadFinished[line] = true;
            if (!hasFound[line]) {
                console.log(i + " :NOT FOUND: " + line);
                console.log("");
            }
            i++;
            setTimeout(page.close, 200);
            nextPage();
        }
    };
}

function nextPage() {
    var line = stream.readLine();
    if (!line) {
        end = Date.now();
        console.log("");
        console.log(((end - start) / 1000) + " seconds");
        phantom.exit(0);
    }
    hasFound[line] = false;
    hasonLoadFinished[line] = false;
    handle_page(line);
}

start = Date.now();
nextPage();
Edit: crashed with 1.9.8 after 3836 links ... back to square one.
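Aside, and only an assumption about reducing memory pressure on long runs (the crash reported above turned out to be a PhantomJS 2.0 issue, see the answer below): closing each page from a wrapper function makes the teardown explicit. A minimal variation of the onLoadFinished handler:
page.onLoadFinished = function(status) {
    if (!hasonLoadFinished[line]) {
        hasonLoadFinished[line] = true;
        if (!hasFound[line]) {
            console.log(i + " :NOT FOUND: " + line);
        }
        i++;
        setTimeout(function() {
            // call close() on the page object itself instead of passing the
            // unbound page.close to setTimeout
            page.close();
        }, 200);
        nextPage();
    }
};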
It seems the problem lies in the 2.0 version. I tested 1.9.8 out of frustration and it works: 60% less RAM used and no crashes with 1000 URLs.
The crash report on GitHub is filed. What a relief, it works.

Why are GainNode connections not working?

I'm using the Web Audio API to get the frequency data from a sound file. Basically, I have implemented the code shown in this example. What I want to add is a gainNode so I can control the volume from anywhere in my code, but something is wrong with the connections I have made; everything else works just fine.
The volume part is the only thing I've changed from the original code:
request.onload = function() {
  context.decodeAudioData(
    request.response,
    function(buffer) {
      if (!buffer) {
        $('#info').text('Error decoding file data');
        return;
      }
      sourceJs = context.createJavaScriptNode(2048);
      sourceJs.buffer = buffer;
      sourceJs.connect(context.destination);
      analyser = context.createAnalyser();
      analyser.smoothingTimeConstant = 0.6;
      analyser.fftSize = 512;
      source = context.createBufferSource();
      source.buffer = buffer;
      source.loop = true;
      source.connect(analyser);
      analyser.connect(sourceJs);
      source.connect(context.destination);
      ////////////////////////////////////
      //////////VOLUME////////////////////
      gainNode = context.createGain();
      source.connect(gainNode);
      gainNode.connect(context.destination);
      //////////////////////////////////////
      sourceJs.onaudioprocess = function(e) {
        array = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(array);
        boost = 0;
        for (var i = 0; i < array.length; i++) {
          boost += array[i];
        }
        boost = boost / array.length;
      };
      // popup
      // notify here that the buffer has finished loading
    },
    function(error) {
      $('#info').text('Decoding error:' + error);
    }
  );
};
Then I used this to turn off the volume, but it's not working: gainNode.gain.value = 0;
You need the gain node to be the only thing that the output of source connects to.
Right now, source has multiple connections and your gain node is only one of them, which means it is not affecting your whole signal.
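A minimal sketch of the rerouted connections following that advice, reusing the variable names from the question (this is one possible wiring, not the only correct one):
// source feeds only the gain node; everything else hangs off the gain node
gainNode = context.createGain();
source.connect(gainNode);              // source -> gain: the only output of source
gainNode.connect(analyser);            // gain -> analyser: frequency data now reflects the post-gain signal
analyser.connect(sourceJs);            // analyser -> script processor, as in the original example
gainNode.connect(context.destination); // gain -> speakers
// With this wiring, gainNode.gain.value = 0 silences everything the user hears.
Note that the analyser now sits after the gain node, so the frequency data it reports also drops to zero when the volume is turned all the way down.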

Access SD card in Android for uploading a file to my PHP server using PhoneGap

I want to select a file from the SD card and upload it to a server. Is it possible to access the SD card on Android via PhoneGap, the same way we pick an image from the gallery and upload it? I went through samples, but they all specify the file name as well, e.g. mnt/sdcard/read.txt. I want to browse just the SD card so the user can select his own file. Is that possible?
You can easily do that; it's very easy:
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, onFileSystemSuccessUpload, fail);

function onFileSystemSuccessUpload(fileSystem) {
    // get the directory entry through root and access all the folders
    var directoryReader = fileSystem.root.createReader();
    // Get a list of all the entries in the directory
    directoryReader.readEntries(successReader, fail);
}

function successReader(entries) {
    var i;
    for (i = 0; i < entries.length; i++) {
        //alert(entries[i].name);
        if (entries[i].isDirectory == true) {
            var directoryReaderIn = entries[i].createReader();
            directoryReaderIn.readEntries(successReader, fail);
        }
        if (entries[i].isFile == true) {
            entries[i].file(uploadFile, fail);
        }
    }
}

function uploadFile(file) {
    var target = ""; // the url to upload to on the server
    var ft = new FileTransfer(),
        path = "file://" + file.fullPath,
        name = file.name;
    ft.upload(path, target, win, fail, { fileName: name });
    // var ft = new FileTransfer();
    // ft.upload(file.fullPath, target, win, fail, options);

    function win(r) {
        alert("Code = " + r.responseCode);
        alert("Response = " + r.response);
        alert("Sent = " + r.bytesSent);
    }

    function fail(error) {
        alert("An error has occurred: Code = " + error.code);
    }
}
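As a follow-up sketch: if the server expects a particular form field name or MIME type, the upload can pass full options via FileUploadOptions, which is part of the PhoneGap/Cordova file-transfer API. The URL and option values below are hypothetical:
function uploadFileWithOptions(file) {
    var target = "https://example.com/upload.php"; // hypothetical endpoint
    var options = new FileUploadOptions();
    options.fileKey = "file";              // form field name the server-side script reads
    options.fileName = file.name;
    options.mimeType = "text/plain";       // adjust to the type of file actually sent
    options.params = { source: "sdcard" }; // extra POST fields, if the server wants any
    var ft = new FileTransfer();
    ft.upload("file://" + file.fullPath, encodeURI(target), function(r) {
        alert("Uploaded " + r.bytesSent + " bytes, response code " + r.responseCode);
    }, function(error) {
        alert("Upload failed: code = " + error.code);
    }, options);
}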