"Failed - Blocked" messaged popped up when downloading .eml file from servers but it is working locally in localhost:4200 - createobjecturl

It's an Angular application. When I click the "download" link locally on localhost:4200, the file downloads without error. But when it is downloaded from an R&D server, Chrome pops up a "Failed - Blocked" error. I am using the same Chrome browser in both situations. I found a suggested solution of setting "Launching applications and unsafe files" to "Prompt"; I checked, and the setting was already set that way, so apparently it does not fix the "Failed - Blocked" error.
Here is the Angular code:
this.http.post(this.END_POINT + "/download/" + emailMessageId, null).subscribe((res) => {
  // Decode the base64 payload from the response into raw bytes
  const byteCharacters = atob(res['something']);
  const byteNumbers = new Array(byteCharacters.length);
  for (let i = 0; i < byteCharacters.length; i++) {
    byteNumbers[i] = byteCharacters.charCodeAt(i);
  }
  const byteArray = new Uint8Array(byteNumbers);
  const blob = new Blob([byteArray], { type: "application/eml" });
  // Attach the link to the DOM before clicking it
  const link = document.createElement('a');
  link.href = window.URL.createObjectURL(blob);
  link.download = "email.eml";
  document.body.appendChild(link);
  link.click();
});
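For what it's worth, a simpler path that avoids the base64 round-trip entirely is to ask HttpClient for a Blob directly. This is only a sketch and assumes the endpoint can return the raw file bytes; the responseType option, the message/rfc822 MIME type (the registered type for .eml files, which may matter since Chrome can block downloads of unrecognized types), and the revokeObjectURL cleanup are my additions, not part of the original code:

// Sketch (assumes the endpoint can return raw bytes): request the file as a
// Blob so there is no base64 decoding step at all.
downloadEml(emailMessageId: string): void {
  this.http
    .post(this.END_POINT + "/download/" + emailMessageId, null, {
      responseType: "blob", // HttpClient hands back a Blob directly
    })
    .subscribe((blob) => {
      // "message/rfc822" is the registered MIME type for .eml files;
      // an unknown type like "application/eml" can trip download filters.
      const typed = new Blob([blob], { type: "message/rfc822" });
      const url = window.URL.createObjectURL(typed);
      const link = document.createElement("a");
      link.href = url;
      link.download = "email.eml";
      document.body.appendChild(link);
      link.click();
      // Clean up the temporary element and the object URL afterwards.
      document.body.removeChild(link);
      window.URL.revokeObjectURL(url);
    });
}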

Related

How to download files through development vscode extension on code-server?

I wrote a vscode extension. Now I want to download a file from the vscode working directory through the extension, running on code-server. But no file is obtained through vscode.Uri.file.
import * as path from "path";
import * as vscode from "vscode";

// Inside activate(context: vscode.ExtensionContext):
const downloadPanel = vscode.window.createWebviewPanel(
    "view",
    "Download",
    vscode.ViewColumn.Two,
    {
        enableScripts: true,
        retainContextWhenHidden: true,
    }
);
if (vscode.workspace.workspaceFolders === undefined) {
    throw new Error("not found!");
}
const filePath = vscode.workspace.workspaceFolders[0].uri.fsPath;
let downloadContent = vscode.commands.registerCommand('download.click', () => {
    console.log("filePath = " + filePath);
    const onDiskPath = vscode.Uri.file(
        path.join(context.extensionPath, "resources", "blockchain.svg")
    );
    // And get the special URI to use with the webview
    const catGifSrc = downloadPanel.webview.asWebviewUri(onDiskPath) + "";
    downloadPanel.webview.html = getWebviewContent(catGifSrc);
});

function getWebviewContent(src: string) {
    return '<html><head></head><body><div><a href="' + src + '" download>download</a></div></body></html>';
}
When clicking the link, the file is not found! Currently the only thing that works is proxying the full path through nginx for the download. Is there any other plan or solution?
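Not an authoritative answer, but one detail that stands out: a webview can only load resources from the locations listed in its localResourceRoots, and the URI must come from asWebviewUri. A minimal sketch under that assumption, run inside activate(context); the file name report.txt is made up for illustration:

// Sketch: expose a workspace file to the webview via asWebviewUri.
// Assumes the target file (report.txt, a made-up name) sits in the
// first workspace folder; the guard above already ensured it exists.
const folder = vscode.workspace.workspaceFolders![0];
const panel = vscode.window.createWebviewPanel("view", "Download", vscode.ViewColumn.Two, {
    enableScripts: true,
    // Without this entry the webview is not allowed to read workspace files.
    localResourceRoots: [folder.uri, vscode.Uri.file(context.extensionPath)],
});
const fileOnDisk = vscode.Uri.joinPath(folder.uri, "report.txt");
const fileSrc = panel.webview.asWebviewUri(fileOnDisk);
panel.webview.html =
    '<html><body><a href="' + fileSrc + '" download="report.txt">download</a></body></html>';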

Jsreport.aspnetcore is not working on iis server

We have implemented jsreport in an ASP.NET Core project to save a PDF file to a physical location. It works very well in the local setup, but after publishing, when we upload the source files to the server, the jsreport feature does not work.
Startup Code:
services.AddJsReport(new LocalReporting().UseBinary(JsReportBinary.GetBinary()).KillRunningJsReportProcesses().AsUtility().Create());
Controller Code:
string filepath = Path.Combine(env.WebRootPath, "Upload/license/Unsigned" + id + ".pdf");
HttpContext.JsReportFeature().Recipe(Recipe.ChromePdf)
    .Configure((r) => r.Template.Chrome = new Chrome
    {
        HeaderTemplate = "",
        DisplayHeaderFooter = true,
        MarginTop = ".5cm",
        MarginLeft = "0cm",
        MarginBottom = "2cm",
        MarginRight = "0cm"
    })
    .OnAfterRender((r) =>
    {
        using (var file = System.IO.File.Open(filepath, FileMode.Create))
        {
            r.Content.CopyTo(file);
        }
        r.Content.Seek(0, SeekOrigin.Begin);
    });
return View("print_accepted_license", result);
Kindly provide help.

HCL Domino AppDevPack - writeAttachments

The new V1.0.2 has new capabilities to upload attachments to a Domino document. My upload code is successful as long as I use files <= 48KB. As soon as I try to upload a larger file, the upload takes place, and in the Domino document I find an attachment with the right size - but the file is corrupt!
Here's my code (it corresponds to the example code from the AppDev Pack documentation for larger files):
for (var x = 0; x < files["tskFile"].length; x++) {
    let sFilename = files["tskFile"][x].originalname;
    let sPath = files["tskFile"][x].path;
    let buffer = fs.readFileSync(sPath);
    const writable = await db.bulkCreateAttachmentStream({});
    writable.on('error', e => {
        // An error occurred and the stream is closed
        console.error("Error on write ", e);
    });
    writable.on('response', response => {
        // The attachment content was written to the document and a
        // response has arrived from the server
        console.log(">> File " + sFilename + " saved to doc");
    });
    let error;
    // Write the image in n chunks
    let offset = 0;
    const writeRemaining = () => {
        if (error) {
            return;
        }
        let draining = true;
        while (offset < buffer.length && draining) {
            const remainingBytes = buffer.length - offset;
            let chunkSize = 16 * 1024;
            if (remainingBytes < chunkSize) {
                chunkSize = remainingBytes;
            }
            const chunk = new Uint8Array(
                buffer.slice(offset, offset + chunkSize),
            );
            draining = writable.write(chunk);
            offset += chunkSize;
        }
        if (offset < buffer.length) {
            // Buffer is not draining. Write some more once it drains.
            writable.once('drain', writeRemaining);
        } else {
            writable.end();
        }
    };
    writable.file({
        unid: unid,
        fileName: sFilename,
    });
    writeRemaining();
} // end forall attachments
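As a sanity check, the drain/backpressure pattern above can be exercised in isolation against an ordinary Node.js writable stream. This sketch substitutes fs.createWriteStream for the attachment stream (file names are made up), so the output can be compared byte-for-byte with the input to rule out a client-side chunking error:

// Sketch: same chunked-write/drain pattern, but against fs.createWriteStream,
// so output.bin can be diffed against input.bin. File names are illustrative.
const fs = require('fs');

const buffer = fs.readFileSync('input.bin');
const writable = fs.createWriteStream('output.bin');
let offset = 0;

const writeRemaining = () => {
    let draining = true;
    while (offset < buffer.length && draining) {
        const chunkSize = Math.min(16 * 1024, buffer.length - offset);
        const chunk = new Uint8Array(buffer.slice(offset, offset + chunkSize));
        draining = writable.write(chunk); // false means: wait for 'drain'
        offset += chunkSize;
    }
    if (offset < buffer.length) {
        writable.once('drain', writeRemaining);
    } else {
        // If output.bin matches input.bin, the chunking logic is sound and
        // the corruption happens on the server side.
        writable.end();
    }
};

writeRemaining();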
Here are my notes.ini variables for my server:
PROTON_MAX_WRITE_ATTACHMENT_MB=30
PROTON_MAX_ATTACHMENT_CHUNK_KB=50
PROTON_MIN_ATTACHMENT_CHUNK_KB=8
Is this my error, or a bug in the AppDev Pack? Did anyone try this new feature?
I am able to reproduce a similar issue with Proton on 64-bit Windows. I cannot reproduce with Proton running on Linux. I am using different client code than you are, but I'm 99% sure this is a Windows-only bug in Proton. We will update this answer when we have more information. Meanwhile, are you able to try Proton on Linux?
We have found a fix and it will be included in our next drop. Thank you for this report!

Titanium - Get image file from filesystem on Android

I have a problem getting an image from the filesystem. On iOS it works fine.
First of all, I save a remote image in the filesystem with this function:
img.imagen is the URL of the remote image (e.g. http://onesite.es/img2.jpeg)
function descargarImagen(img, callback) {
    var path = img.imagen;
    var filename = path.split("/").pop();
    var xhr = Titanium.Network.createHTTPClient({
        onload: function() {
            // first, grab a "handle" to the file where you'll store the downloaded data
            var f = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, filename);
            f.write(this.responseData); // write to the file
            Ti.API.debug("-- Image saved: " + f.nativePath);
            callback({path: f.nativePath});
        },
        timeout: 10000
    });
    xhr.open('GET', path);
    xhr.send();
}
Now, I want to share this image by creating an Android intent:
args.image = f.nativePath (from the previous function)
var intent = null;
var intentType = null;
intent = Ti.Android.createIntent({
    action: Ti.Android.ACTION_SEND
});
// add text status
if (args.status) {
    intent.putExtra(Ti.Android.EXTRA_TEXT, args.status);
}
// change type according to the content
if (args.image) {
    intent.type = "image/*";
    intent.putExtraUri(Ti.Android.EXTRA_STREAM, args.image);
} else {
    intent.type = "text/plain";
    intent.addCategory(Ti.Android.CATEGORY_DEFAULT);
}
// launch intent
Ti.Android.currentActivity.startActivity(Ti.Android.createIntentChooser(intent, args.androidDialogTitle));
What am I doing wrong?
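One possible culprit, offered as a guess rather than a confirmed answer: on Android, applicationDataDirectory is private to the app, so the activity receiving the share intent may simply be unable to read the file. A sketch of the download function writing to external storage instead (same structure as above; the function name is made up, externalStorageDirectory swapped in):

// Sketch (assumption: the target app cannot read applicationDataDirectory).
// Save the downloaded image somewhere world-readable before sharing it.
function descargarImagenCompartible(img, callback) {
    var filename = img.imagen.split("/").pop();
    var xhr = Titanium.Network.createHTTPClient({
        onload: function() {
            // externalStorageDirectory is readable by other apps, unlike
            // applicationDataDirectory, which is private to this app.
            var f = Ti.Filesystem.getFile(Ti.Filesystem.externalStorageDirectory, filename);
            f.write(this.responseData);
            callback({path: f.nativePath}); // pass this path to EXTRA_STREAM
        },
        timeout: 10000
    });
    xhr.open('GET', img.imagen);
    xhr.send();
}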

PhantomJS crashes after 150-180 urls

My script works fine so far: it loads every page from the text file line by line in sequential order (page.open is asynchronous and the page object is global, i.e. overwritten on new requests, so running multiple page.open() calls at once is a big mess), matches every request for a specific domain and prints JSON values from it.
But if I use a .txt file with more than ~150 links, it just crashes every time, mostly with no error message and with no crash dump, like this:
PhantomJS has crashed. Please read the crash reporting guide at
http://phantomjs.org/crash-reporting.html and file a bug report at
https://github.com/ariya/phantomjs/issues/new.
Unfortunately, no crash dump is available.
(Is %TEMP% (C:\Users\XXX\AppData\Local\Temp) a directory you cannot write?)
I can reproduce it easily by running the script multiple times; it doesn't matter whether I run them at once or one after another.
How can I prevent the crashes? My script is useless if Phantom can't handle that.
But sometimes I get a crash dump:
PhantomJS has crashed. Please read the crash reporting guide at
http://phantomjs.org/crash-reporting.html and file a bug report at
https://github.com/ariya/phantomjs/issues/new.
Please attach the crash dump file:
C:\Users\XXX\AppData\Local\Temp\a4fd6af6-1244-44d3-8938-3aabe298c2fa.dmp
https://www.dropbox.com/s/i3qi5ed33mbblie/500%20links%20-a4fd6af6-1244-44d3-8938-3aabe298c2fa.dmp?dl=1
https://www.dropbox.com/s/najdz9fhdexvav1/500%20links-%2095ebab5c-859b-40e9-936b-84967471779b.dmp?dl=1
https://www.dropbox.com/s/1d2t8rtev85yf96/500%20links%20-%20d450c8e1-9728-41c7-ba52-dfef466f0222.dmp?dl=1
And in rare cases I even get an error message; Process Explorer says the process has a maximum of 21 threads at once:
QThread::start: Failed to create thread ()
console.log('Hello, world!');
var fs = require('fs');
var stream = fs.open('500sitemap.txt', 'r');
var webPage = require('webpage');
var i = 1;
var hasFound = Array();
var hasonLoadFinished = Array();

function handle_page(line) {
    var page = webPage.create();
    page.settings.loadImages = false;
    page.open(line, function() {});
    page.onResourceRequested = function(requestData, request) {
        var match = requestData.url.match(/example.de\/ac/g);
        if (match != null) {
            hasFound[line] = true;
            var targetString = decodeURI(JSON.stringify(requestData.url));
            var klammerauf = targetString.indexOf("{");
            var jsonobjekt = targetString.substr(klammerauf, (targetString.indexOf("}") - klammerauf) + 1);
            var targetJSON = decodeURIComponent(jsonobjekt);
            var t = JSON.parse(targetJSON);
            console.log(i + " " + t + " " + t['id']);
            request.abort(); // abort is a function, call it
        } else {
            //hasFound = false;
            return;
        }
    };
    page.onLoadFinished = function(status) {
        if (!hasonLoadFinished[line]) {
            hasonLoadFinished[line] = true;
            if (!hasFound[line]) {
                console.log(i + " :NOT FOUND: " + line);
                console.log("");
            }
            i++;
            setTimeout(page.close, 200);
            nextPage();
        }
    };
}

function nextPage() {
    var line = stream.readLine();
    if (!line) {
        var end = Date.now();
        console.log("");
        console.log(((end - start) / 1000) + " seconds");
        phantom.exit(0);
        return;
    }
    hasFound[line] = false;
    hasonLoadFinished[line] = false;
    handle_page(line);
}

var start = Date.now();
nextPage();
/edit: crashed with 1.9.8 after 3836 links .... back to square one ...........
It seems the problem lies in the 2.0 version. I tested 1.9.8 out of frustration and it works: 60% less RAM used, no crashes with 1000 URLs.
The crash report on GitHub is done. What a relief, it works.
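Since the crashes look like resource exhaustion inside PhantomJS itself, one workaround (sketched here, not taken from the thread) is to split the URL list into batches and relaunch PhantomJS per batch from a small Node.js wrapper, so any leaked memory is reclaimed between runs; the batch size and the scraper.js file name are illustrative:

// Sketch: relaunch PhantomJS for each batch of URLs so leaked memory is
// reclaimed between runs. Batch size and file names are made up.
const { execFileSync } = require('child_process');
const fs = require('fs');

const BATCH_SIZE = 100;
const urls = fs.readFileSync('500sitemap.txt', 'utf8').split('\n').filter(Boolean);

for (let i = 0; i < urls.length; i += BATCH_SIZE) {
    const batch = urls.slice(i, i + BATCH_SIZE);
    fs.writeFileSync('batch.txt', batch.join('\n'));
    // scraper.js is the PhantomJS script above, reading batch.txt instead of
    // 500sitemap.txt; a fresh process per batch caps memory growth.
    execFileSync('phantomjs', ['scraper.js'], { stdio: 'inherit' });
}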