Cannot do multipart upload to S3 bucket with SSE-KMS encryption (using .NET SDK)

I can successfully send the InitiateMultipartUploadRequest and get an InitiateMultipartUploadResponse back, but then get an Access Denied error when sending the first UploadPartRequest.
Note that all of the below cases upload the document successfully:
Exactly the same code (i.e. using multipart upload), but to a different bucket that uses SSE-S3 encryption.
Using the low-level API and uploading the document in one go, i.e. creating a PutObjectRequest and then calling amazonS3Client.PutObjectAsync(putObjectRequest).
Using the high-level TransferUtility class.

Maybe the encryption key was not forwarded in the call properly. Note also that a multipart upload to an SSE-KMS-encrypted bucket requires kms:Decrypt permission on the key in addition to kms:GenerateDataKey, while a single-shot PutObject does not, which would explain why only the multipart path gets Access Denied.
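For reference, a minimal sketch of how the SSE-KMS settings are passed with the AWS SDK for .NET (the bucket name, object key, and KMS key ARN below are placeholders): the encryption parameters belong on the InitiateMultipartUploadRequest only, and the part requests carry nothing but the UploadId.

using Amazon.S3;
using Amazon.S3.Model;

var s3Client = new AmazonS3Client(); // credentials/region from the default chain

// SSE-KMS is declared once, when the multipart upload is initiated.
var initRequest = new InitiateMultipartUploadRequest
{
    BucketName = "my-bucket",          // placeholder
    Key = "large-document.pdf",        // placeholder
    ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS,
    ServerSideEncryptionKeyManagementServiceKeyId = "arn:aws:kms:..." // your key ARN
};
var initResponse = await s3Client.InitiateMultipartUploadAsync(initRequest);

// Individual parts need no encryption settings, only the UploadId.
var partRequest = new UploadPartRequest
{
    BucketName = "my-bucket",
    Key = "large-document.pdf",
    UploadId = initResponse.UploadId,
    PartNumber = 1,
    FilePath = "large-document.pdf",
    PartSize = 5 * 1024 * 1024 // 5 MB minimum for every part except the last
};
var partResponse = await s3Client.UploadPartAsync(partRequest);

If code like this still returns Access Denied on the part upload while the same key works for PutObject, the kms:Decrypt permission mentioned above is the likely culprit.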

Related

How to make an upload method for a large file so that a Yandex Cloud Serverless function can be called on it?

So I want to have no personal server infrastructure. I want to have an HTTP API route that a user can upload a file into (2GB+) so that:
the file would be stored in object storage for 3 days
a serverless function would be called on it
So how do I make an upload method for a large file so that a Yandex.Cloud Serverless function can be called on it?
I need something similar to this AWS sample, but for YC.
There is a limit on the request size that a Yandex Cloud Serverless Function can handle: 3.5 MB. So you won't be able to upload 2 GB directly.
There is a workaround — upload the file directly to Object Storage using a pre-signed link. To generate the link, you'll need AWS-like credentials from Yandex Cloud.
Passing them to the client side is not safe, so it would be better to generate the link on the server (or Serverless Function) and return it to the client.
Here is the tutorial covering the topic.
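As a rough sketch of that server-side step (using the AWS SDK for .NET against Yandex's S3-compatible endpoint; the endpoint URL is real, but the bucket, key, and credential values are placeholders):

using System;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;

// Static access keys issued by Yandex Cloud; these stay on the server.
var credentials = new BasicAWSCredentials("YC_ACCESS_KEY_ID", "YC_SECRET_ACCESS_KEY");

var config = new AmazonS3Config
{
    ServiceURL = "https://storage.yandexcloud.net" // Yandex Object Storage endpoint
};
var s3Client = new AmazonS3Client(credentials, config);

// Pre-signed PUT link the browser can upload the 2 GB file to directly.
string uploadUrl = s3Client.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "my-bucket",      // placeholder
    Key = "uploads/big-file.bin",  // placeholder
    Verb = HttpVerb.PUT,
    Expires = DateTime.UtcNow.AddMinutes(15)
});
// Return uploadUrl to the client; it PUTs the file straight to Object Storage.

The 3-day retention and the function call on upload would then be handled separately: a bucket lifecycle rule for the former and an Object Storage trigger for the latter.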

FineUploader with Lambda for signing S3 requests

I am trying to upload large files to S3 in chunks. This needs to be done via the browser. I was looking at FineUploader as a solution. https://fineuploader.com/
It would need a Signing function which gives me the correct Authorization header.
This is specified in FineUploader as:
signature: {
    endpoint: "/s3/signatureHandler",
    version: 4
}
I saw the current examples use a Servlet and PHP. I wanted to know if the same can be achieved by a Lambda function exposed via an API Gateway.
Has anyone tried it before or know of any potential pitfalls?
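For what it's worth, the signature endpoint is just an HTTP POST handler that receives FineUploader's JSON payload and returns JSON, so a Lambda proxy integration behind API Gateway should be able to do it. Below is a sketch of the v4 headers-signing branch in C# (the class and environment variable names are mine; the policy-document branch, Lambda serializer registration, and error handling are omitted):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class S3SignatureFunction
{
    // Keep the AWS secret server-side (env var / Secrets Manager), never in the browser.
    static readonly string Secret = Environment.GetEnvironmentVariable("SIGNING_SECRET_KEY");

    public APIGatewayProxyResponse Handle(APIGatewayProxyRequest request, ILambdaContext context)
    {
        using var doc = JsonDocument.Parse(request.Body);

        // FineUploader sends { "headers": "<string to sign>" } for REST/multipart calls.
        string stringToSign = doc.RootElement.GetProperty("headers").GetString();

        // The v4 string-to-sign carries its credential scope on the third line,
        // "<yyyymmdd>/<region>/s3/aws4_request"; derive the signing key from it.
        string[] scope = stringToSign.Split('\n')[2].Split('/');
        byte[] key = Hmac(Encoding.UTF8.GetBytes("AWS4" + Secret), scope[0]); // date
        key = Hmac(key, scope[1]);       // region
        key = Hmac(key, "s3");           // service
        key = Hmac(key, "aws4_request"); // terminator

        string signature = Convert.ToHexString(Hmac(key, stringToSign)).ToLowerInvariant();

        return new APIGatewayProxyResponse
        {
            StatusCode = 200,
            Headers = new Dictionary<string, string> { ["Content-Type"] = "application/json" },
            Body = JsonSerializer.Serialize(new { signature })
        };
    }

    static byte[] Hmac(byte[] key, string data)
    {
        using var hmac = new HMACSHA256(key);
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
    }
}

One pitfall to watch for: enable CORS on the API Gateway endpoint, since the browser posts the signature request cross-origin.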

Setting different S3 read permissions based on uploader

I'm trying to arrive at a situation where
one class of users can upload files that are subsequently not publicly available
another class of users can upload files that are publicly available.
I think I need to use two IAM users:
the first, which has putObject permissions only and whose secret key I bake into the JavaScript (I use the AWS SDK putObject here, baking in the first secret key)
the other, whose secret key I keep on the server, providing signatures for uploads to signed-in users of the right category. (I ended up using a POST command with multipart form-data for this, as I could not see how to do it with the SDK other than baking in the second secret key, which would be bad, as files could then be both uploaded and downloaded.)
But I'm struggling to set up bucket permissions that support some files being publicly available while others are not at all.
Is there a way, or do I need to use separate buckets?
Update
Based on the first comment, I tried adding "acl": "public-read" to my policy and to the POST form data fields. The signatures are matching correctly, but I now get a Forbidden response from AWS, which I don't get when this field is absent (but then the uploads are not publicly visible).
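Two things commonly cause exactly this symptom (guessing, since the policy itself isn't shown): the POST policy must contain a condition for every form field sent, so adding the acl field to the form requires a matching condition in the policy, and the credentials that sign the policy must be allowed s3:PutObjectAcl, not just s3:PutObject. The policy condition would look something like (bucket name and key prefix are placeholders):

{
    "expiration": "...",
    "conditions": [
        { "bucket": "my-bucket" },
        ["starts-with", "$key", "uploads/"],
        { "acl": "public-read" }
    ]
}

With that in place, per-object ACLs let public and private files coexist in one bucket, so separate buckets shouldn't be necessary.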

FineUploader - Error on multi-part upload to S3

I am using FineUploader to upload to S3. I have everything working including deletes. However, when I upload larger files that get broken into multi-part uploads, I get the following error in the console (debugging turned on):
Specific problem detected initiating multipart upload request for 0: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.'.
Can someone point me in the right direction as to what settings I should check, or what additional info you might need?
Since you haven't included anything really specific to your setup, code, or the failing request, my best guess is that your server isn't returning a proper signature response for uploads made to the S3 REST API (which is used for larger files). You'll need to review that procedure for generating a response to this type of signature request.
Here's the relevant section from Fine Uploader's S3 documentation:
Fine Uploader S3 uses Amazon S3’s REST API to initiate, upload,
complete, and abort multipart uploads. The REST API handles
authentication by signing canonically formatted headers. This signing
is something you need to implement server-side. All your server needs
to do to authenticate and support chunked uploads direct to Amazon
S3 is sign a string representing the headers of the request that Fine
Uploader sends to S3. This string is found in the payload of the
signature request:
{ "headers": /* string to sign */ }
The presence of this property indicates to your server that this is, in
fact, a request to sign a REST/multipart request and not a policy
document.
This signature for the headers string differs slightly from the policy
document signature. You should NOT base64 encode the headers string
before signing it. All you must do, server-side, is generate an HMAC
SHA1 signature of the string using your AWS secret key and then base64
encode the result. Your server should respond with the following in
the body of an ‘application/json’ response:
{ "signature": /* signed headers string */ }

Amazon S3 authentication model

What is the proper way of delegating file access authentication from S3 to our authentication service?
For example: a web site user (who has our session id in his request headers) sends a request to S3 to get a file by URL. S3 sends a request to our authentication service, asking whether the user with those headers can access that file, and if our auth service allows it, the file is downloaded.
There is a lot of information about presigned requests, but absolutely nothing about querying S3 with "hidden" authentication like this.
If a file has been made public on S3, then of course anyone can download it, using a direct link to the file.
If the file is not public, then there needs to be some type of authentication. There are really only two ways a file from S3 can be obtained if it is not public: one is via a pre-signed URL, and the other is to be an Amazon user who has access to S3. Obviously this is how it works when you yourself want to access an object on S3; you must provide your access key and a signature in the header of the GET request. You can grant other users access to S3 via Amazon IAM, which is more like the 'hidden' authentication you mentioned. Via the IAM route, there are different ways of providing access, including Federated Users. Visit this link to learn more:
http://docs.aws.amazon.com/AmazonS3/latest/dev/MakingAuthenticatedRequests.html
If you are simply trying to provide an authenticated user access to a file, the best and easiest way to do that would be to create a pre-signed URL with an expiration time. The expiration time can be something short, like 10 minutes or even 1 minute, to prevent the user from passing the link to others.
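For illustration, generating such a short-lived link with the AWS SDK for .NET is nearly a one-liner (bucket and key are placeholders):

using System;
using Amazon.S3;
using Amazon.S3.Model;

var s3Client = new AmazonS3Client(); // your AWS credentials stay server-side

string url = s3Client.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "my-bucket",        // placeholder
    Key = "private/report.pdf",      // placeholder
    Verb = HttpVerb.GET,
    Expires = DateTime.UtcNow.AddMinutes(10) // link expires after 10 minutes
});
// Hand url to the authenticated user; after 10 minutes the link stops
// working, even if it has been passed along.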