I'm working on a site hosted in Azure that has download functionality. To reduce the load on our servers, downloads are done using Shared Access Signatures. However, when downloading a file in Safari, the filename is wrapped in single quotes: myFile.txt downloads as 'myFile.txt'. As a result, downloaded zips have to be renamed by the client before their contents can be extracted.
The code for generating the Shared Access Signature is as follows:
CloudBlockBlob blob = container.GetBlockBlobReference(Helpers.StringHelper.TrimIfNotNull(blobName));
if (!blob.Exists())
{
return string.Empty;
}
var sasConstraints = new SharedAccessBlobPolicy();
sasConstraints.SharedAccessStartTime = DateTime.UtcNow.AddSeconds(-5);
sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.Add(duration);
sasConstraints.Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write;
var headers = new SharedAccessBlobHeaders();
string filename = blobName;
if (filename.Contains("/"))
{
filename = blobName.Substring(blobName.LastIndexOf("/") + 1);
}
headers.ContentDisposition = "attachment; filename='" + filename + "'";
//Generate the shared access signature on the blob, setting the constraints directly on the signature.
string sasBlobToken = blob.GetSharedAccessSignature(sasConstraints, headers);
//Return the URI string for the container, including the SAS token.
return blob.Uri + sasBlobToken;
This code has worked fine in Chrome, Firefox, and IE. Is there something I'm missing with the headers? The only one I'm modifying is Content-Disposition.
You should use double quotes for quoted strings in HTTP headers, as outlined in RFC 2616. Single quotes have no special meaning there, so Safari treats them as part of the filename.
So replace
headers.ContentDisposition = "attachment; filename='" + filename + "'";
with
headers.ContentDisposition = "attachment; filename=\"" + filename + "\"";
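If the filename itself can contain a double quote or a backslash, those characters also need escaping inside the quoted string (RFC 2616's quoted-pair). A minimal sketch in Java of building such a header value; the C# equivalent is the same pair of Replace calls:
// Builds a quoted Content-Disposition value, escaping embedded
// backslashes and double quotes per the quoted-string grammar.
public class ContentDispositionUtil {
    static String attachmentHeader(String filename) {
        String escaped = filename.replace("\\", "\\\\").replace("\"", "\\\"");
        return "attachment; filename=\"" + escaped + "\"";
    }
}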
I am using AWS Lambda to decompress and traverse tar.gz files, then uploading the extracted contents back to S3 while retaining the original directory structure.
I am running into an issue streaming a TarArchiveEntry to an S3 bucket via a PutObjectRequest. While the first entry is successfully streamed, calling getNextTarEntry() on the TarArchiveInputStream then throws a NullPointerException because the underlying gzip inflater is null; it held an appropriate value prior to the s3Client.putObject(new PutObjectRequest(...)) call.
I have not been able to find documentation on how or why the gzip input stream's inflater is being set to null after the stream has been partially sent to S3.
EDIT: Further investigation has revealed that the AWS call appears to be closing the input stream after completing the upload of the specified content length. I haven't been able to find how to prevent this behavior.
Below is essentially what my code looks like. Thanks in advance for your help, comments, and suggestions.
public String handleRequest(S3Event s3Event, Context context) {
TarArchiveInputStream tarInput = null;
try {
S3Event.S3EventNotificationRecord s3EventRecord = s3Event.getRecords().get(0);
String bucketName = s3EventRecord.getS3().getBucket().getName();
// Object key may have spaces or unicode non-ASCII characters.
String srcKeyInput = s3EventRecord.getS3().getObject().getKey();
System.out.println("Received valid request from bucket: " + bucketName + " with srckey: " + srcKeyInput);
String bucketFolder = srcKeyInput.substring(0, srcKeyInput.lastIndexOf('/') + 1);
System.out.println("File parent directory: " + bucketFolder);
final AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
tarInput = new TarArchiveInputStream(new GzipCompressorInputStream(getObjectContent(s3Client, bucketName, srcKeyInput))); // getObjectContent: helper returning the S3 object's content stream
TarArchiveEntry currentEntry = tarInput.getNextTarEntry();
while (currentEntry != null) {
String fileName = currentEntry.getName();
System.out.println("For path = " + fileName);
// checking if looking at a file (vs a directory)
if (currentEntry.isFile()) {
System.out.println("Copying " + fileName + " to " + bucketFolder + fileName + " in bucket " + bucketName);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(currentEntry.getSize());
s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + fileName, tarInput, metadata)); // contents are properly and successfully sent to s3
System.out.println("Done!");
}
currentEntry = tarInput.getNextTarEntry(); // NPE here: the underlying gzip inflater is null
}
} catch (Exception e) {
e.printStackTrace();
} finally {
IOUtils.closeQuietly(tarInput);
}
return null; // added so the String-returning handler compiles
}
That's true: the AWS SDK closes an InputStream provided to PutObjectRequest, and I don't know of a way to instruct it not to do so.
However, you can wrap the TarArchiveInputStream with a CloseShieldInputStream from Commons IO, like this:
InputStream shieldedInput = new CloseShieldInputStream(tarInput);
s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + fileName, shieldedInput, metadata));
When AWS closes the provided CloseShieldInputStream, the underlying TarArchiveInputStream will remain open.
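Put together, the upload loop from the question would look roughly like this (a sketch assuming Commons IO 2.9+, where the static CloseShieldInputStream.wrap factory replaced the now-deprecated constructor; bucketName, bucketFolder, tarInput, and s3Client are the question's own variables):
// Shield the tar stream on every putObject so the S3 client's
// internal close() cannot reach the underlying TarArchiveInputStream.
TarArchiveEntry entry;
while ((entry = tarInput.getNextTarEntry()) != null) {
    if (!entry.isFile()) continue;
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(entry.getSize()); // length must be known up front
    s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + entry.getName(),
            CloseShieldInputStream.wrap(tarInput), metadata));
}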
PS. I don't know what ByteArrayInputStream(tarInput.getCurrentEntry()) does but it looks very strange. I ignored it for the purpose of this answer.
I am using org.apache.commons.net.ftp.FTPClient to retrieve files from an FTP server. It is crucial that I preserve the file's last-modified timestamp when it is saved on my machine. Does anyone have a suggestion for how to solve this?
This is how I solved it:
public boolean retrieveFile(String path, String filename, long lastModified) throws IOException {
File localFile = new File(path + "/" + filename);
OutputStream outputStream = new FileOutputStream(localFile);
boolean success = client.retrieveFile(filename, outputStream);
outputStream.close();
localFile.setLastModified(lastModified);
return success;
}
I wish the Apache team would implement this feature.
This is how you can use it:
List<FTPFile> ftpFiles = Arrays.asList(client.listFiles());
for(FTPFile file : ftpFiles) {
retrieveFile("/tmp", file.getName(), file.getTimestamp().getTime());
}
You can modify the timestamp after downloading the file.
The timestamp can be retrieved through the LIST command, or the (non-standard) MDTM command.
You can see how to modify the timestamp here: http://www.mkyong.com/java/how-to-change-the-file-last-modified-date-in-java/
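In short, once the file is on disk you set its last-modified time to the value the server reported. A minimal Java sketch (remoteTimestampMillis is a hypothetical variable holding the server's timestamp as milliseconds since the epoch, and the path is made up):
// Apply the remote timestamp to the freshly downloaded local file.
File localFile = new File("/tmp/downloaded.bin"); // hypothetical path
if (!localFile.setLastModified(remoteTimestampMillis)) {
    System.err.println("Could not update timestamp for " + localFile);
}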
When downloading a list of files, like all files returned by FTPClient.mlistDir or FTPClient.listFiles, use the timestamp returned with the listing to update the timestamps of the local downloaded files:
String remotePath = "/remote/path";
String localPath = "C:\\local\\path";
FTPFile[] remoteFiles = ftpClient.mlistDir(remotePath);
for (FTPFile remoteFile : remoteFiles) {
File localFile = new File(localPath + "\\" + remoteFile.getName());
OutputStream outputStream = new BufferedOutputStream(new FileOutputStream(localFile));
if (ftpClient.retrieveFile(remotePath + "/" + remoteFile.getName(), outputStream))
{
System.out.println("File " + remoteFile.getName() + " downloaded successfully.");
}
outputStream.close();
localFile.setLastModified(remoteFile.getTimestamp().getTimeInMillis());
}
When downloading a single specific file only, use FTPClient.mdtmFile to retrieve the remote file timestamp and update timestamp of the downloaded local file accordingly:
File localFile = new File("C:\\local\\path\\file.zip");
FTPFile remoteFile = ftpClient.mdtmFile("/remote/path/file.zip");
if (remoteFile != null)
{
OutputStream outputStream = new BufferedOutputStream(new FileOutputStream(localFile));
if (ftpClient.retrieveFile(remoteFile.getName(), outputStream))
{
System.out.println("File downloaded successfully.");
}
outputStream.close();
localFile.setLastModified(remoteFile.getTimestamp().getTimeInMillis());
}
UPDATE 1: I've created a GIST with actual running code in a test jig to show exactly what I'm running up against. I've included working bot tokens (to a throw-away bot) and access to a Telegram chat that the bot is already in, in case anyone wants to take a quick peek:
https://gist.github.com/pleasantone/59efe5f9d7f0bf1259afa0c1ae5a05fe
UPDATE 2: I've looked at the following articles for answers already (and a ton more):
https://github.com/francois2metz/html5-formdata/blob/master/formdata.js
PhantomJS - Upload a file without submitting a form
https://groups.google.com/forum/#!topic/casperjs/CHq3ZndjV0k
How to instantiate a File object in JavaScript?
How to create a File object from binary data in JavaScript
I've got a program written in CasperJS (PhantomJS) that successfully sends messages to Telegram via the Bot API, but I'm pulling my hair out trying to figure out how to send up a photo.
I can access my photo either as a file off the local filesystem, or as a base64-encoded string (it's a casper screen capture).
I know my photo is good, because I can post it via CURL using:
curl -X POST "https://api.telegram.org/bot<token>/sendPhoto" -F chat_id=<id> -F photo=@/tmp/photo.png
I know my code for connecting to the Bot API from within casperjs is working, as I can do a sendMessage, just not a sendPhoto.
function sendMultipartResponse(url, params) {
var boundary = '-------------------' + Math.floor(Math.random() * Math.pow(10, 8));
var content = [];
for (var index in params) {
content.push('--' + boundary + '\r\n');
var mimeHeader = 'Content-Disposition: form-data; name="' + index + '";';
if (params[index].filename)
mimeHeader += ' filename="' + params[index].filename + '";';
content.push(mimeHeader + '\r\n');
if (params[index].type)
content.push('Content-Type: ' + params[index].type + '\r\n');
var data = params[index].content || params[index];
// if (data.length !== undefined)
// content.push('Content-Length: ' + data.length + '\r\n');
content.push('' + '\r\n');
content.push(data + '\r\n');
};
content.push('--' + boundary + '--' + '\r\n');
utils.dump(content);
var xhr = new XMLHttpRequest();
xhr.open("POST", url, false);
var body;
if (true) {
/*
* Heck, try making the whole thing a Blob to avoid string conversions
*/
body = new Blob(content, {type: "multipart/form-data; boundary=" + boundary});
utils.dump(body);
} else {
/*
* this didn't work either, but both work perfectly for sendMessage
*/
body = content.join('');
xhr.setRequestHeader("Content-Type", "multipart/form-data; boundary=" + boundary);
// xhr.setRequestHeader("Content-Length", body.length);
}
xhr.send(body);
casper.log(xhr.responseText, 'error');
};
Again, this is in a CasperJS environment, not a Node.js environment, so I don't have things like fs.createReadStream or the File() constructor.
Trying to construct a Shared Access Signature URI for blob access in a container.
BlobHelper BlobHelper = new BlobHelper(StorageAccount, StorageKey);
string signature = "";
string signedstart = DateTime.UtcNow.AddMinutes(-1).ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'Z'");
string signedexpiry = DateTime.UtcNow.AddMinutes(2).ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'Z'");
//// SET CONTAINER LEVEL ACCESS POLICY
string accessPolicyXml = "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" +
"<SignedIdentifiers>\n" +
" <SignedIdentifier>\n" +
" <Id>twominutepolicy</Id>\n" +
" <AccessPolicy>\n" +
" <Start>" + signedstart + "</Start>\n" +
" <Expiry>" + signedexpiry + "</Expiry>\n" +
" <Permission>r</Permission>\n" +
" </AccessPolicy>\n" +
" </SignedIdentifier>\n" +
"</SignedIdentifiers>\n";
BlobHelper.SetContainerAccessPolicy("xxxxxxx", "container", accessPolicyXml);
string canonicalizedresource = "/xxxxxxx/501362787";
string StringToSign = String.Format("{0}\n{1}\n{2}\n{3}\n{4}\n{5}\n{6}\n{7}\n{8}\n{9}\n{10}",
"r",
signedstart,
signedexpiry,
canonicalizedresource,
"twominutepolicy",
"2013-08-15",
"rscc",
"rscd",
"rsce",
"rscl",
"rsct"
);
using (HMACSHA256 hmacSha256 = new HMACSHA256(Convert.FromBase64String(StorageKey)))
{
Byte[] dataToHmac = System.Text.Encoding.UTF8.GetBytes(StringToSign);
signature = Convert.ToBase64String(hmacSha256.ComputeHash(dataToHmac));
}
StringBuilder sasToken = new StringBuilder();
sasToken.Append(BlobHelper.DecodeFrom64(e.Item.ToolTip).ToString().Replace("http","https") + "?");
//signedversion
sasToken.Append("sv=2013-08-15&");
sasToken.Append("sr=b&");
//
sasToken.Append("si=twominutepolicy&");
sasToken.Append("sig=" + signature + "&");
//
sasToken.Append("st=" + HttpUtility.UrlEncode(signedstart).ToUpper() + "&");
//
sasToken.Append("se=" + HttpUtility.UrlEncode(signedexpiry).ToUpper() + "&");
//
sasToken.Append("sp=r");
string url = sasToken.ToString();
I am getting the following error:
<Error>
<Code>AuthenticationFailed</Code>
<Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:e424e1ac-fd96-4557-866a-992fc8c41841 Time:2014-05-22T18:46:15.3436786Z</Message>
<AuthenticationErrorDetail>Signature did not match. String to sign used was r 2014-05-22T18:45:06Z 2014-05-22T18:48:06Z /xxxxxxx/501362787/State.SearchResults.pdf twominutepolicy 2013-08-15 </AuthenticationErrorDetail>
</Error>
rscc, rscd, rsce, rscl, and rsct are placeholders for overridden response headers (cache-control, content-disposition, and so on). Your sasToken does not override any response headers, so you should sign empty strings (keeping the new-line separators) in their place. Moreover, your canonicalized resource does not match the server's: the error detail shows the server signed the full blob path /xxxxxxx/501362787/State.SearchResults.pdf, not just /xxxxxxx/501362787.
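For illustration, here is a sketch (in Java) of the string-to-sign the server expects for version 2013-08-15: empty strings where the rscc through rsct placeholders were, and the full blob path from the error detail as the canonicalized resource:
// Sketch of the corrected string-to-sign (Java 8 String.join).
String stringToSign = String.join("\n",
    "r",                                           // signedpermissions
    signedstart,                                   // signedstart
    signedexpiry,                                  // signedexpiry
    "/xxxxxxx/501362787/State.SearchResults.pdf",  // canonicalizedresource
    "twominutepolicy",                             // signedidentifier
    "2013-08-15",                                  // signedversion
    "", "", "", "", "");                           // rscc, rscd, rsce, rscl, rsct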
By the way, did you look at the Azure Storage Client Library for creating Shared Access Signature tokens? It provides lots of features and is the official SDK for accessing Microsoft Azure Storage.
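As a sketch of what that looks like in the library's Java flavor (the .NET flavor appears in the first question on this page; blob here is an already-resolved CloudBlockBlob, and the types come from com.microsoft.azure.storage.blob, java.util, and java.time):
// Generate a short-lived read-only SAS URL without hand-rolling
// the string-to-sign; the library signs it for you.
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy();
policy.setPermissions(EnumSet.of(SharedAccessBlobPermissions.READ));
policy.setSharedAccessStartTime(Date.from(Instant.now().minusSeconds(60)));
policy.setSharedAccessExpiryTime(Date.from(Instant.now().plus(2, ChronoUnit.MINUTES)));
String token = blob.generateSharedAccessSignature(policy, null);
String sasUrl = blob.getUri() + "?" + token;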
I have an application that uses a Flex form to capture user input. When the user has entered the form data (which includes a drawing area), the application creates a jpg image of the form and sends it back to the server. Since the data is sensitive, it has to use https. Also, the client requires both jpg and pdf versions of the form to be stored on the server.
The application sends data back in three steps:
1 - send the jpg snapshot with ordernumber
2 - send the form data fields as post data so it is not visible in the address bar
3 - send the pdf data
I am sending the jpg snapshot first using a URLLoader and waiting for the server to respond before performing operations 2 and 3, to ensure that the server has created the record associated with the new orderNumber.
This code works fine in IE over http. But if I try to use the application over https, IE blocks the response from the store-jpg step and the complete event of the URLLoader never fires. The application works fine in Firefox over http or https.
Here is the crossdomain.xml (I have replaced the actual domain with <mydomain>):
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*.<mydomain>.com" to-ports="*" secure="false"/>
<allow-http-request-headers-from domain="*.<mydomain>.com" headers="*"/>
</cross-domain-policy>
Here is the code that is executed when the user presses the submit button:
private function loaderCompleteHandler(event:Event):void {
sendPDF();
sendPatientData();
}
private function submitOrder(pEvt:MouseEvent):void
{
//disable submit form so the order can't be submitted twice
formIsValid = false;
waitVisible = true;
//submit the jpg image first with the order number, userID, provID
//and order type. The receiving asp will create the new order record
//and save the jpg file. jpg MUST be sent first.
orderNum = userID + "." + provID + "." + Date().toString() + "." + orderType;
var jpgURL:String = "https://orders.mydomain.com/orderSubmit.asp?sub=jpg&userID=" + userID + "&provID=" + provID + "&oNum=" + orderNum + "&oType=" + orderType;
var jpgSource:BitmapData = new BitmapData (vbxPrint.width, vbxPrint.height);
jpgSource.draw(vbxPrint);
var jpgEncoder:JPEGEncoder = new JPEGEncoder(100);
var jpgStream:ByteArray = jpgEncoder.encode(jpgSource);
var header:URLRequestHeader = new URLRequestHeader ("content-type", "application/octet-stream");
//Make sure to use the correct path to jpg_encoder_download.php
var jpgURLRequest:URLRequest = new URLRequest (jpgURL);
jpgURLRequest.requestHeaders.push(header);
jpgURLRequest.method = URLRequestMethod.POST;
jpgURLRequest.data = jpgStream;
//navigateToURL(jpgURLRequest, "_blank");
var jpgURLLoader:URLLoader = new URLLoader();
try
{
jpgURLLoader.load(jpgURLRequest);
}
catch (error:ArgumentError)
{
trace("An ArgumentError has occurred.");
}
catch (error:SecurityError)
{
trace("A SecurityError has occurred.");
}
jpgURLLoader.addEventListener(Event.COMPLETE, loaderCompleteHandler);
}
private function sendPatientData ():void
{
var dataURL:String = "https://orders.mydomain.com/orderSubmit.asp?sub=data&oNum=" + orderNum + "&oType=" + orderType;
//Make sure to use the correct path to jpg_encoder_download.php
var dataURLRequest:URLRequest = new URLRequest (dataURL);
dataURLRequest.method = URLRequestMethod.POST;
var dataUrlVariables:URLVariables = new URLVariables();
dataUrlVariables.userID = userID
dataUrlVariables.provID = provID
dataUrlVariables.name = txtPatientName.text
dataUrlVariables.dob = txtDOB.text
dataUrlVariables.contact = txtPatientContact.text
dataUrlVariables.sex=txtSex.text
dataUrlVariables.ind=txtIndications.text
dataURLRequest.data = dataUrlVariables
navigateToURL(dataURLRequest, "_self");
}
private function sendPDF():void
{
var url:String = "https://orders.mydomain.com/pdfOrderForm.asp"
var fileName:String = "orderPDF.pdf&sub=pdf&oNum=" + orderNum + "&oType=" + orderType + "&f=2&t=1" + "&mid=" + ModuleID.toString()
var jpgSource:BitmapData = new BitmapData (vbxPrint.width, vbxPrint.height);
jpgSource.draw(vbxPrint);
var jpgEncoder:JPEGEncoder = new JPEGEncoder(100);
var jpgStream:ByteArray = jpgEncoder.encode(jpgSource);
myPDF = new PDF( Orientation.LANDSCAPE,Unit.INCHES,Size.LETTER);
myPDF.addPage();
myPDF.addImageStream(jpgStream,0,0, 0, 0, 1,ResizeMode.FIT_TO_PAGE );
myPDF.save(Method.REMOTE,url,Download.ATTACHMENT,fileName);
}
The target asp page is not sending back any data, except the basic site page template.
Can anyone help me figure out how to get around this IE cross-domain issue? I have turned off the XSS filter in IE's security settings, but that still didn't solve the problem.
THANKS
Do everything over https: load the swf from an https url, send the initial form post via https, and send the images via https. IE is stricter than Firefox about mixing http and https content, so keeping every request on the same https origin avoids the blocked response.