Setting metadata on S3 multipart upload

I'd like to upload a file to S3 in parts, and set some metadata on the file. I'm using boto to interact with S3. I'm able to set metadata with single-operation uploads like so:
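For reference, a minimal boto 2 sketch of that single-operation pattern (bucket, key, and file names are placeholders):
import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')   # placeholder bucket name
key = Key(bucket, 'my-key')             # placeholder key name
key.set_metadata('some-key', 'value')   # set before uploading the contents
key.set_contents_from_filename('local-file.txt')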
Is there a way to set metadata with a multipart upload? I've tried this method of copying the key to change the metadata, but it fails with the error: InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: <size>
I've also tried doing the following:
key = bucket.new_key(key_name)
key.set_metadata('some-key', 'value')
<multipart upload>
...but the multipart upload overwrites the metadata.
I'm using code similar to this to do the multipart upload.

Sorry, I just found the answer:
Per the docs:
If you want to provide any metadata describing the object being uploaded, you must provide it in the request to initiate multipart upload.
So in boto, the metadata can be set in the initiate_multipart_upload call. Docs here.
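For example, a minimal sketch (key and file names are placeholders):
# pass the metadata when initiating; parts uploaded afterwards keep it
mp = bucket.initiate_multipart_upload('my-key', metadata={'some-key': 'value'})
with open('local-file.bin', 'rb') as f:
    mp.upload_part_from_file(f, part_num=1)
mp.complete_upload()   # the completed object carries the metadata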

I faced this issue earlier today and discovered that there is no clear information on how to do it right.
The code example showing how we solved it is provided below.
use Aws\S3\MultipartUploader;

$uploader = new MultipartUploader($client, $source, [
    'bucket' => $bucketName,
    'key' => $filename,
    // metadata has to be attached to the CreateMultipartUpload command
    'before_initiate' => function (\Aws\Command $command) {
        $command['ContentType'] = 'application/octet-stream';
        $command['ContentDisposition'] = 'attachment';
    },
]);
Unfortunately, the documentation (https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/s3-multipart-upload.html#customizing-a-multipart-upload) doesn't make it clear that if you want to provide custom metadata with a multipart upload, you have to go this way.
I hope this helps.

Related

How to rename a file in blob storage using the Azure Data Lake Gen2 REST API

I've tried to follow the instructions in this document: LINK
I used SAS authentication and added the "x-ms-rename-source" request header, but I kept getting the error "403-AuthorizationPermissionMismatch". All the other API methods work fine, but this one seems really tricky. Has anyone successfully renamed a file or directory with it?
Instead of using SAS authentication, I used authorization headers. You can check it here.
My request headers:
DateTime now = DateTime.UtcNow;
requestMessage.Headers.Add("x-ms-date", now.ToString("R", CultureInfo.InvariantCulture));
requestMessage.Headers.Add("x-ms-version", "2018-11-09");
// the source path of the file you want to rename
requestMessage.Headers.Add("x-ms-rename-source", renameSourcePath);
// the rename operation only accepts authorization by shared key via this header
requestMessage.Headers.Authorization = AzureStorageAuthenticationHelper.GetAuthorizationHeader(
    StorageGen2AccountName, StorageGen2AccountKey, now, requestMessage);
You can also try renaming the file in Blob Storage by using the Storage Explorer tool.
Kindly let us know if the above helps or if you need further assistance on this issue.

Unable to upload multipart file in karate, Required request part '' not present

[Screenshots: ActualAPIRequest, OutputFromKarate]
I'm trying to upload a JSON file to an API using Karate. Since the API takes multipart input, I am passing multipart configurations in Karate.
But I'm getting the error "Required request part 'inputData' not present". Is there any solution for this?
I have attached screenshots of the actual API request and the Karate output for reference.
Just make sure that the data type of inputData (and maybe swaggerFile) is JSON. It looks like you are sending a string.
Please refer to this section of the doc: https://github.com/intuit/karate#type-conversion
If the server does not like the charset being sent for each multipart, try * configure charset = null
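For example, a feature snippet along these lines (the URL and file names are illustrative, not from the original question) sends inputData as JSON rather than as a string:
# illustrative names; adjust to your API
* json inputData = read('inputData.json')
Given url apiUrl
And multipart field inputData = inputData
And multipart file swaggerFile = { read: 'swagger.json', filename: 'swagger.json', contentType: 'application/json' }
When method post
Then status 200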

Sails Skipper: how to read and validate a csv file and exclude the invalid file types during upload?

I'm trying to write a controller that uploads a file to an S3 location. However, before uploading I need to validate whether the incoming file type is a CSV or not, and then read the file to check for header columns, etc. I got the type of the file as per the snippet below:
req.file('foo')._files[0].stream
But how do I read the entire file stream and check for headers, data, etc.? There are other similar questions, like (Sails.js Skipper: How to read the uploaded file stream during upload?), but the solution mentioned there is to use the skipper-csv adapter (which I cannot use, as I already use skipper-s3 to upload to S3).
Can someone please post an example of how to read the upstream and perform validations before the upload?
Here is how I solved my problem: I make a copy of the stream to validate before the actual upload. I then run my validations on the original stream and, once they pass, upload the copied stream to my desired location.
For reading the CSV stream, I found an npm package, csv-parser (https://github.com/mafintosh/csv-parser), which makes it easy to handle events like headers and data.
For creating the copy of the stream, I used the following logic:
const { PassThrough } = require('stream');
const _ = require('lodash');

const upstream = req.file('file');
const fileStreamMap = {};
const fileStreamMapCopy = {};
_.each(upstream._files, (file) => {
  // skipper attaches the original file name to the stream
  const fileName = file.stream.filename;
  const stream = new PassThrough();
  const streamCopy = new PassThrough();
  file.stream.pipe(stream);
  file.stream.pipe(streamCopy);
  fileStreamMap[fileName] = stream;
  fileStreamMapCopy[fileName] = streamCopy;
});

// validate and upload files to S3, if valid
validateAndUploadFile(fileStreamMap, fileStreamMapCopy);
validateAndUploadFile() contains my custom validation logic for my csv upload.
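For reference, the header check with csv-parser can look roughly like this (the helper name and required columns are just illustrations, not from my actual code):
const csv = require('csv-parser');

// resolves if the CSV stream's headers contain every required column
function validateCsvHeaders(stream, requiredHeaders) {
  return new Promise((resolve, reject) => {
    stream
      .pipe(csv())
      .on('headers', (headers) => {
        const missing = requiredHeaders.filter((h) => !headers.includes(h));
        missing.length === 0
          ? resolve()
          : reject(new Error('Missing headers: ' + missing.join(', ')));
      })
      .on('error', reject);
  });
}

// e.g. validateCsvHeaders(fileStreamMap[fileName], ['col1', 'col2'])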
Also, you can use the aws-sdk package (https://www.npmjs.com/package/aws-sdk) for the S3 upload.
Hope this helps someone.

VB.NET Gmail API: Send Message with attachment > 5 MB

In the Gmail API documentation I read that I have to make an HTTP request to "/upload/gmail/v1/users/userId/messages/send" when sending a message larger than 5 MB, but I can't find any example of how to implement this using the .NET client library.
All the examples on the site refer to the "messages.Send" function, which takes a raw message and the user id as parameters, but I saw that there is another overload that also takes the stream of the content to upload and its content type.
The problem is that I have no clue how to call it correctly.
Has anyone successfully achieved this?
Thanks for your answers,
Simone
Simone, that means you are using simple upload (uploadType=media), which is intended for quick transfer of smaller files, for example 5 MB or less.
You must use multipart upload or resumable upload instead (https://developers.google.com/gmail/api/guides/uploads).
You can send a POST request with a payload (see CURLOPT_POSTFIELDS if you use cURL) to https://www.googleapis.com/gmail/v1/users/me/messages/send?access_token=your_access_token&uploadType=multipart. The payload must contain the JSON-encoded message. The structure of this message, for example:
$message = [
    'message' => [
        'raw' => str_replace(['+', '/'], ['-', '_'], base64_encode($mimeString)),
        'threadId' => $yourThreadId
    ]
];
The variable $mimeString must contain a correct MIME string.

Receiving "Invalid policy document or request headers!"

I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
    $('#fine-uploader').fineUploaderS3({
        request: {
            endpoint: "http://mybucket.s3.amazonaws.com",
            accessKey: "changeme"
        },
        signature: {
            endpoint: "endpoint.php"
        },
        uploadSuccess: {
            endpoint: "success.html"
        },
        template: 'qq-template'
    });
</script>
(Please note that I changed the keys/bucket names for security sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where do I generate a key pair in S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should create two distinct IAM users, one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.