I'm trying to use the FileSystem API to write a file uploaded to an SPA into a local sandboxed file system.
The file is uploaded with a drop action, and I can get the File object array in the callback.
From the File I can get a ReadableStream by calling the stream method (yes, it returns only a readable stream).
Considering that the uploaded file could be quite big, I would rather stream it than load it entirely into a Blob and then write it to the FileSystem API.
So, following the docs, the steps are:
get a FileSystem (DOMFileSystem) through the async webkitRequestFileSystem call
get the root property, which is a FileSystemDirectoryEntry
create a file through getFile (with the flag create: true), which returns (async) a FileSystemFileEntry
Now, from the FileEntry I can get a FileWriter using createWriter, but that is marked obsolete (on MDN), and in any case it is a FileWriter, while I would like to obtain a WritableStream instead, so I can use pipeTo from the uploaded file's ReadableStream.
So, I see that in the console the class (interface) FileSystemFileHandle is defined, but I cannot understand how to get an instance of it from the FileSystemFileEntry. If I can obtain a FileSystemFileHandle, I can call createWritable to obtain a FileSystemWritableFileStream that I can pipe from the ReadableStream.
Can anyone clarify this mess?
references:
https://web.dev/file-system-access/
https://wicg.github.io/file-system-access/#filesystemhandle
https://developer.mozilla.org/en-US/docs/Web/API/FileSystemFileEntry
You have the solution in your "references" links at the bottom. Specifically, this is the section to read. You can create files or directories like so:
// In an existing directory, create a new directory named "My Documents".
const newDirectoryHandle = await existingDirectoryHandle.getDirectoryHandle('My Documents', {
create: true,
});
// In this new directory, create a file named "My Notes.txt".
const newFileHandle = await newDirectoryHandle.getFileHandle('My Notes.txt', { create: true });
Once you have a file handle, you can then pipe to it or write to it:
async function writeFile(fileHandle, contents) {
// Create a FileSystemWritableFileStream to write to.
const writable = await fileHandle.createWritable();
// Write the contents of the file to the stream.
await writable.write(contents);
// Close the file and write the contents to disk.
await writable.close();
}
…or…
async function writeURLToFile(fileHandle, url) {
// Create a FileSystemWritableFileStream to write to.
const writable = await fileHandle.createWritable();
// Make an HTTP request for the contents.
const response = await fetch(url);
// Stream the response into the file.
await response.body.pipeTo(writable);
// pipeTo() closes the destination pipe by default, no need to close it.
}
I'm trying to update my app to Alamofire 5 and I'm having difficulties, I guess because of the hack-ish way I'm using it.
Anyhow, I need background uploads, and Alamofire is not really designed for this. Even so, I was using it to create a properly formatted file containing the multipart form data, so I can hand it to the OS to upload in the background later.
I'll post the code doing this with Alamofire 4; my question is how I can get the URL of the file I was previously getting from encodingResult.
// We're not actually going to upload photo via alamofire. It does not offer support for background uploads.
// Still we can use it to create a request and more importantly properly formatted file containing multipart form
Api.alamofire.upload(
multipartFormData: { multipartFormData in
multipartFormData.append(imageData, withName: "photo[image]", fileName: filename, mimeType: "image/jpg")
},
to: "http://", // if we give it a real url sometimes alamofire will attempt the first upload. I don't want to let it get to our servers but it fails if I feed it ""
usingThreshold: UInt64(0), // force alamofire to always write to file no matter how small the payload is
method: .post,
headers: Api.requestHeaders,
encodingCompletion: { encodingResult in
switch encodingResult {
case .success(let alamofireUploadTask, _, let url):
alamofireUploadTask.suspend()
defer { alamofireUploadTask.cancel() }
if let alamofireUploadFileUrl = url {
// we want to own the multipart file to avoid alamofire deleting it when we tell it to cancel its task
let fileUrl = ourFileUrl
do {
try FileManager.default.copyItem(at: alamofireUploadFileUrl, to: fileUrl)
// use the file we just created for a background upload
} catch {
}
}
case .failure:
// alamofire failed to encode the request file for some reason
break
}
)
Multipart encoding is fully integrated into the now-asynchronous request pipeline in Alamofire 5. That means there's no separate step to use. However, you can use the MultipartFormData type directly, just like you would in the request closure.
let data = MultipartFormData()
data.append(Data(), withName: "dataName")
let encodedData = try data.encode()
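If what you need is the URL of an encoded file, like the one encodingResult used to hand back, one option is to write that encoded data to a file you own yourself. A minimal sketch, reusing imageData and filename from the question; the destination fileURL is an assumption (any location your app controls):
import Alamofire
import Foundation

func writeMultipartFile(imageData: Data, filename: String, to fileURL: URL) throws {
    let form = MultipartFormData()
    form.append(imageData, withName: "photo[image]", fileName: filename, mimeType: "image/jpg")
    // Encode the multipart body and persist it ourselves, so the OS background
    // session can pick the file up later. No Alamofire request is ever started.
    let encoded = try form.encode()
    try encoded.write(to: fileURL)
    // form.contentType holds the "multipart/form-data; boundary=..." value you
    // will need to set as the Content-Type header of the background URLRequest.
}
If the payloads can get large, it is also worth checking whether the MultipartFormData in your Alamofire version exposes writeEncodedData(to:), which writes to disk without building the whole body in memory.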
I am trying to use the Microsoft Azure Storage SDK for Node.js and JavaScript for Browsers (https://github.com/Azure/azure-storage-node) to display PDF contents stored in an Azure blob in the browser. So far I couldn't find any examples of how to do it.
I tried to follow the suggestion from https://github.com/Azure/azure-storage-node/issues/440, but couldn't make it work. I am using an Azure Function.
module.exports = async function (context, req) {
let accessToken = await getAccessToken();
let container = req.params.container;
let filename = req.params.filename;
let tokenCredential = new azure.TokenCredential(accessToken);
let storageAccountName = process.env.StorageAccountName;
let blobService = azure.createBlobServiceWithTokenCredential(`https://${storageAccountName}.blob.core.windows.net/`, tokenCredential);
return new Promise((resolve, reject) => {
let readStream = blobService.createReadStream(container, filename, function (error, result, response) {
if (error) {
context.log(error);
context.log(response);
context.res = {
status: 400,
body: response
};
resolve(context.res);
}
});
let body = '';
readStream.on('data', (chunk) => {
body += chunk;
});
readStream.on('end', () => {
context.res = {
headers: {
'Content-Type': "application/pdf"
},
body: body
};
resolve(context.res);
});
});
};
But I got a "Couldn't open PDF" error message in the browser, or a timeout error.
For downloading a blob in a browser environment, using a URL with a SAS token is recommended. In the framework you are using, would an accessible URL pointing to the PDF be enough?
Please follow this example:
Download Blob
BlobService provides interfaces for downloading a blob into browser memory. Because of the browser's sandbox limitation, we cannot save the downloaded data chunks to disk until we get all the data chunks of a blob into browser memory. The browser's memory size is also limited, especially when downloading huge blobs, so it's recommended to download a blob in the browser with a SAS-token-authorized link directly.
Shared access signatures (SAS) are a secure way to provide granular access to blobs and containers without providing your storage account name or keys. Shared access signatures are often used to provide limited access to your data, such as allowing a mobile app to access blobs.
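As a rough sketch of that approach in the Azure Function: instead of streaming the PDF through the function, generate a short-lived read SAS and send the browser to the blob directly. Note that generateSharedAccessSignature needs an account key rather than the AAD TokenCredential from the question, so this assumes a StorageAccountKey app setting exists alongside the StorageAccountName one you already use:
const azure = require('azure-storage');

module.exports = async function (context, req) {
    const container = req.params.container;
    const filename = req.params.filename;

    // SAS generation requires the account key (or a stored access policy).
    const blobService = azure.createBlobService(
        process.env.StorageAccountName,
        process.env.StorageAccountKey
    );

    const startDate = new Date();
    const expiryDate = new Date(startDate.getTime() + 15 * 60 * 1000); // valid for 15 minutes

    const sasToken = blobService.generateSharedAccessSignature(container, filename, {
        AccessPolicy: {
            Permissions: azure.BlobUtilities.SharedAccessPermissions.READ,
            Start: startDate,
            Expiry: expiryDate
        }
    });

    // Redirect the browser straight to the blob; it will render the PDF itself.
    context.res = {
        status: 302,
        headers: { Location: blobService.getUrl(container, filename, sasToken) }
    };
};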
I created an API Gateway API to work with S3. I imported the Swagger template. After deployment, I tested it from a Node.js project using the npm module aws-api-gateway-client.
It works well for getting the bucket list, getting bucket info, getting one item, putting a bucket, and putting a plain-text object; however, I am blocked on putting a binary file.
Firstly, I made sure the ACL grants all permissions on S3. Secondly, binary support is also added for:
image/gif
application/octet-stream
The code snippet is below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time the Node.js project stops responding, with no error message at all. The file (such as an image) is very small.
2) On just two occasions the upload seemed to work, but the resulting file was bigger (around 2 MB bigger) than the original, so the file was corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';
fs.stat(filepathname+filename, function (err, stats) {
var fileSize = stats.size ;
fs.readFile(filepathname+filename,'binary',function(err,data){
var len = data.length;
console.log('file len' + len);
var pathTemplate = '/my-test-bucket/' +filename ;
var method = 'PUT';
var params = {
folder: '',
item:''
};
var additionalParams = {
headers: {
'Content-Type': 'application/octet-stream',
//'Content-Type': 'image/gif',
'Content-Length': len
}
};
var result1 = apigClient.invokeApi(params,pathTemplate,method,additionalParams,data)
.then(function(result){
//never hit :(
console.log(result);
}).catch( function(result){
//never hit :(
console.log(result);
});;
});
});
We encountered the same problem. API Gateway is meant for limited payload sizes (10 MB as of now); the limits are shown here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Pre-signed URL to S3:
Create an S3 pre-signed URL from the Lambda or the endpoint where you are trying to post.
How do I put object to amazon s3 using presigned url?
Now POST the image directly to S3.
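As a rough sketch of the pre-signed URL approach from the linked question (AWS SDK for JavaScript v2; the bucket name, key, and expiry below are placeholders, not values from your setup): the Lambda returns a pre-signed URL for putObject, and the client then sends the raw bytes straight to that URL with the same Content-Type, bypassing API Gateway's payload limit entirely.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // Short-lived pre-signed URL allowing a single PUT of this object.
    const url = s3.getSignedUrl('putObject', {
        Bucket: 'my-test-bucket',            // placeholder
        Key: 'uploads/image1.png',           // placeholder
        ContentType: 'application/octet-stream',
        Expires: 300                         // seconds
    });
    return { statusCode: 200, body: JSON.stringify({ url: url }) };
};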
Pre-signed POST:
Apart from posting the image, if you want to post additional properties you can also post it in multipart form format.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or whichever endpoint needs to do the processing.
Hope it helps.
I'm using Heroku, which means I have to upload multiple large files directly to S3. I'm using Rails 3.2.11 and Ruby 1.9.3. I do not wish to use the carrierwave or paperclip gems, or really change much at this point; I just need to get what I have working.
Before trying to move to S3, if I ran my app locally, I could upload multiple large files to the local file system. When I ran it on Heroku, small files uploaded but large ones failed. Hence the switch to S3.
I tried several tweaks, and also the link below, but it's just too much of a change from what I already have working against the local server's file system (and on Heroku as well, except that Heroku just can't handle large files).
Tried: https://devcenter.heroku.com/articles/direct-to-s3-image-uploads-in-rails
I've tried some of the other examples here on Stack Overflow, but they are too much of a change from what works locally, and, well, I don't grasp everything they are doing.
Now, what happens when I do try to upload images?
It's as if the file upload works: the preview images are successfully created, but nothing is ever uploaded to Amazon S3, and I don't receive any kind of error message (like an S3 authentication failure or anything; nothing at all).
What do I need to change in order to get the files over to my S3 storage, and what can I write out to the console to detect problems, if any, connecting to S3?
My form:
<%= form_for @status do |f| %>
{A FEW HTML FIELDS USED FOR A DESCRIPTION OF THE FILES - NOT IMPORTANT FOR THE QUESTION}
File:<input id="fileupload" multiple="multiple" name="image"
type="file" data-form-data = <%= @s3_direct_post.fields%>
data-url= <%= @s3_direct_post.url %>
data-host =<%=URI.parse(@s3_direct_post.url).host%> >
<%= link_to 'submit', "#", :id=>'submit' , :remote=>true%>
<% end %>
My jquery is:
....
$('#fileupload').fileupload({
formData: {
batch: createUUID(),
authenticity_token:$('meta[name="csrf-token"]').attr('content')
},
dataType: 'json',
acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i,
maxFileSize: 5000000, // 5 MB
previewMaxWidth: 400,
previewMaxHeight: 400,
previewCrop: true,
add: function (e, data) {
tmpImg.src = URL.createObjectURL(data.files[0]) ; // create image preview
$('#'+ fn + '_inner' ).append(tmpImg);
...
My controller:
def index
# it's in the index just to simplify getting it working
@s3_direct_post = S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read')
end
The element that is generated for the form is (via Inspect Element):
<input id="fileupload" multiple="multiple" name="image"
data-form-data="{"key"=>"uploads/34a64607-8d1b-4704-806b-159ecc47745e/${filename}"," "success_action_status"="
>"201"," "acl"=">"public-read"," "policy"=">"[encryped stuff - no need to post]","
"x-amz-credential"=">"
[AWS access key]/[some number]/us-east-1/s3/aws4_request"
," "x-amz-algorithm"=">"AWS4-HMAC-SHA256"
," "x-amz-date"=">"20150924T234656Z"
," "x-amz-signature"=">"[some encrypted stuff]"}"
data-url="https://nunyabizness.s3.amazonaws.com" data-host="nunyabizness.s3.amazonaws.com" type="file">
Help!
With S3 there actually is no easy out-of-the-box solution for uploading files, because Amazon is a rather complex instrument.
I had a similar issue back in the day and spent two weeks trying to figure out how S3 works, and now use a working solution for uploading files to S3. I can tell you the solution that works for me; I never tried the one proposed by Heroku.
The plugin of choice for me is Plupload, since it is the only component I actually managed to get working, apart from simple direct S3 uploads via XHR, and it offers percentage indicators and in-browser image resizing, which I find completely mandatory for production applications, where some users have 20 MB images that they want to upload as their avatar.
Some basics in S3:
Step 1
The Amazon bucket needs the correct configuration in its CORS file to allow external uploads in the first place. The Heroku tutorial already told you how to put that configuration in the right place.
http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
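For reference, a minimal configuration of that kind might look like this (the allowed origin is a placeholder for your app's domain; the XML shape below is the one the S3 console accepted at the time):
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>https://your-app.example.com</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>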
Step 2
Policy data is needed, otherwise your client will not be able to access the corresponding bucket. I find that generating policies is better done via Ajax calls, so that, for example, an admin gets the ability to upload files into the folders of different users.
In my example, cancan is used to manage security for the given user and figaro is used to manage ENV variables.
def aws_policy_image
user = User.find_by_id(params[:user_id])
authorize! :upload_image, current_user
options = {}
bucket = Rails.configuration.bucket
access_key_id = ENV["AWS_ACCESS_KEY_ID"]
secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
options[:key] ||= "users/" + params[:user_id] # folder on AWS to store file in
options[:acl] ||= 'private'
options[:expiration_date] ||= 10.hours.from_now.utc.iso8601
options[:max_filesize] ||= 10.megabytes
options[:content_type] ||= 'image/' # Videos would be binary/octet-stream
options[:filter_title] ||= 'Images'
options[:filter_extentions] ||= 'jpg,jpeg,gif,png,bmp'
policy = Base64.encode64(
"{'expiration': '#{options[:expiration_date]}',
'conditions': [
{'x-amz-server-side-encryption': 'AES256'},
{'bucket': '#{bucket}'},
{'acl': '#{options[:acl]}'},
{'success_action_status': '201'},
['content-length-range', 0, #{options[:max_filesize]}],
['starts-with', '$key', '#{options[:key]}'],
['starts-with', '$Content-Type', ''],
['starts-with', '$name', ''],
['starts-with', '$Filename', '']
]
}").gsub(/\n|\r/, '')
signature = Base64.encode64(
OpenSSL::HMAC.digest(
OpenSSL::Digest::Digest.new('sha1'),
secret_access_key, policy)).gsub("\n", "")
render :json => {:access_key_id => access_key_id, :policy => policy, :signature => signature, :bucket => bucket}
end
I went as far as putting this method into the application controller, although you could find a better place for it.
A route pointing to this action needs to be added as well, of course.
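For example, something like this in routes.rb (this assumes you leave the action in the application controller as described above; adjust the controller name if you move it):
get 'aws_policy_image' => 'application#aws_policy_image'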
Step 3
Frontend: get Plupload from http://www.plupload.com/ and make a link to act as the upload button:
<a id="upload_button" href="#">Upload</a>
Make a script that configures the plupload initialization.
function Plupload(config_x, access_key_id, policy, signature, bucket) {
var $this = this;
$this.config = $.extend({
key: 'error',
acl: 'private',
content_type: '',
filter_title: 'Images',
filter_extentions: 'jpg,jpeg,gif,png,bmp',
select_button: "upload_button",
multi_selection: true,
callback: function (params) {
},
add_files_callback: function (up, files) {
},
complete_callback: function (params) {
}
}, config_x);
$this.params = {
runtimes: 'html5',
browse_button: $this.config.select_button,
max_file_size: $this.config.max_file_size,
url: 'https://' + bucket + '.s3.amazonaws.com/',
flash_swf_url: '/assets/plupload/js/Moxie.swf',
silverlight_xap_url: '/assets/plupload/js/Moxie.xap',
init: {
FilesRemoved: function (up, files) {
/*if (up.files.length < 1) {
$('#' + config.select_button).fadeIn('slow');
}*/
}
},
multi_selection: $this.config.multi_selection,
multipart: true,
// resize: {width: 1000, height: 1000}, // currently causes "blob" problem
multipart_params: {
'acl': $this.config.acl,
'Content-Type': $this.config.content_type,
'success_action_status': '201',
'AWSAccessKeyId': access_key_id,
'x-amz-server-side-encryption': "AES256",
'policy': policy,
'signature': signature
},
// Resize images on clientside if we can
resize: {
preserve_headers: false, // (!)
width: 1200,
height: 1200,
quality: 70
},
filters: [
{
title: $this.config.filter_title,
extensions: $this.config.filter_extentions
}
],
file_data_name: 'file'
};
$this.uploader = new plupload.Uploader($this.params);
$this.uploader.init();
$this.uploader.bind('UploadProgress', function (up, file) {
$('#' + file.id + ' .percent').text(file.percent + '%');
});
// before upload
$this.uploader.bind('BeforeUpload', function (up, file) {
// optional: regen the filename, otherwise the user will upload image.jpg that will overwrite each other
var extension = file.name.split('.').pop();
var file_name = extension + "_" + (+new Date);
up.settings.multipart_params.key = $this.config.key + '/' + file_name + '.' + extension;
up.settings.multipart_params.Filename = $this.config.key + '/' + file_name + '.' + extension;
file.name = file_name + '.' + extension;
});
// shows error object in the browser console (for now)
$this.uploader.bind('Error', function (up, error) {
console.log('Expand the error object below to see the error. Use WireShark to debug.');
alert_x(".validation-error", error.message);
});
// files added
$this.uploader.bind('FilesAdded', function (up, files) {
$this.config.add_files_callback(up, files, $this.uploader);
// p(uploader);
// uploader.start();
});
// when file gets uploaded
$this.uploader.bind('FileUploaded', function (up, file) {
$this.config.callback(file);
up.refresh();
});
// when all files are uploaded
$this.uploader.bind('UploadComplete', function (up, file) {
$this.config.complete_callback(file);
up.refresh();
});
}
Plupload.prototype.init = function () {
//
}
Step 4
The implementation of the general multi-purpose file uploader function:
ImageUploader = {
init: function (user_id, config, callback) {
$.ajax({
type: "get",
url: "/aws_policy_image",
data: {user_id: user_id},
error: function (request, status, error) {
alert(request.responseText);
},
success: function (msg) {
// set aws credentials
callback(config, msg);
}
});
},
// local functions
photo_uploader: function (user_id) {
var container = "#photos .unverified_images"; // for example
var can_render = false;
this.init(user_id,
{
select_button: "upload_photos",
callback: function (file) {
file.aws_id = file.id;
file.id = "0";
file.album_title = "userpics"; // I use this param to manage photo directory
file.user_id = user_id;
//console.log(file);
// [** your ajax code here that saves the image object in the database via the file variable you get here **]
},
add_files_callback: function (up, files, uploader) {
$.each(files, function (index, value) {
// do something like adding a progress bar html
});
uploader.start();
},
complete_callback: function (files) {
can_render = true;
}
}, function (config, msg) {
config.key = "users/" + user_id;
// Most important part:
window.photo_uploader = new Plupload(config, msg.access_key_id, msg.policy, msg.signature, msg.bucket);
});
}
};
The can_render variable is useful so that the application re-renders the page only when the uploader is actually done.
To make the button work from somewhere else, call:
ImageUploader.photo_uploader(user_id);
And the button will act as a Plupload uploader button.
What is important is that the policy is constructed so that no one can upload a photo into someone else's directory.
It would be great to have a version that does the same not via Ajax callbacks but with webhooks; this is something I want to do in the future.
Again, this is not a perfect solution, but something that, in my experience, works well enough for the purpose of uploading images and videos to Amazon.
Note, in case someone asks why I have this complex object-oriented structure of uploader objects: the reason is that my application has many different kinds of uploaders that behave differently, and they need an initializer with common behavior. The way I did it, I can write an initializer for, say, videos with a minimal amount of code that does similar things to the existing image uploader.