Our requirement is to upload objects to Amazon S3 using a browser-based interface. For this we're using the query string authentication mechanism (we don't have the end user's credentials during the upload process), and we're using ASP.NET to write the code. I'm running into issues when trying to do multipart uploads.
I'm using the AWS SDK for .NET (version 2.0.2.5) to create a query string with the following code:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
{
    BucketName = bucketName,
    Key = objectKey,
    Expires = expiryTime,
    Protocol = Amazon.S3.Protocol.HTTP,
    Verb = Amazon.S3.HttpVerb.PUT,
    ContentType = "application/octet-stream",
};
var url = AWSClientFactory.CreateAmazonS3Client(credentials, region).GetPreSignedURL(request);
This works great if I don't do a multipart upload. However, I haven't been able to figure out how to do a multipart upload using query string authentication.
Problems that I'm running into are:
In order to do a multipart upload, I need to get an UploadId first. However, the request that initiates a multipart upload is an HTTP POST, and GetPreSignedUrlRequest does not accept POST as an HTTP verb.
I ended up adding a bucket policy that grants my own Amazon S3 account permission on the bucket in question (which does not belong to me), and with that account I issue an HTTP POST to obtain the UploadId.
Now, based on my understanding, the query string must contain the UploadId and PartNumber parameters when creating the query string authentication URL, but looking at the properties of GetPreSignedUrlRequest, I can't figure out how to specify them.
I am inclined to believe that the .NET SDK does not support this scenario and that I have to resort to the native REST API to create the query string. Is my understanding correct?
Any insights into this would be highly appreciated.
Happy New Year in advance.
You are correct in your belief. The AWS SDK for .NET does not currently support creating presigned URLs for multipart uploads.
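Since the SDK won't generate these URLs for you, the part URLs can be signed by hand against the REST API. Below is a rough Python sketch of query-string signing (Signature Version 2, the scheme SDKs of that era used); the bucket, key, upload ID, and credentials are all placeholders, not values from the question:

```python
import base64
import hashlib
import hmac
import urllib.parse

def presign_upload_part(access_key, secret_key, bucket, key,
                        upload_id, part_number, expires):
    """Build a SigV2 query-string-authenticated URL for UploadPart.

    The canonicalized resource must include the partNumber and uploadId
    sub-resources, sorted lexicographically."""
    resource = f"/{bucket}/{key}?partNumber={part_number}&uploadId={upload_id}"
    string_to_sign = f"PUT\n\n\n{expires}\n{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    query = urllib.parse.urlencode({
        "partNumber": part_number,
        "uploadId": upload_id,
        "AWSAccessKeyId": access_key,
        "Expires": expires,
        "Signature": signature,
    })
    return f"https://{bucket}.s3.amazonaws.com/{key}?{query}"
```

A URL like this can then be handed to the browser for an HTTP PUT of a single part; note that CompleteMultipartUpload is again a POST and would have to be signed the same way.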
Related
I've got a Kotlin application that retrieves publicly available PDFs stored on Google Drive. To download the PDFs, I do the following:
@Throws(IOException::class)
fun download(url: String?, destination: File?) {
    val connection: URLConnection = URL(url).openConnection()
    connection.connectTimeout = 60000
    connection.readTimeout = 60000
    connection.addRequestProperty("User-Agent", "Mozilla/5.0")
    val output = FileOutputStream(destination, false)
    val buffer = ByteArray(2048)
    var read: Int
    val input: InputStream = connection.getInputStream()
    while (input.read(buffer).also { read = it } > -1) output.write(buffer, 0, read)
    output.flush()
    output.close()
    input.close()
}
My url is of the form https://www.googleapis.com/drive/v3/files/${fileId}?key=<MY_KEY>&alt=media.
Google seems to start rejecting requests after serving about 10 of them. I checked the API usage, and it says I get 20,000 requests per 100 seconds (https://developers.google.com/drive/api/guides/limits). I can see my requests on the API usage chart, so the API key is being recognized. I'm making 10-15 requests and then getting a 403. It doesn't come back as JSON, so here is the detailed message:
We're sorry...
... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.
See Google Help for more information.
I assume I'm missing something obvious. That HTML blob says "but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.", which is exactly what I'm trying to do.
Do I need to use a different method to pull a couple hundred PDFs from Drive?
You should be using OAuth2 to request that much data, to be honest. However, if you insist on using an API key, try adding quotaUser and userIp as part of your request.
Standard Query Parameters
Note: Per-user quotas are always enforced by the Drive API, and the user's identity is determined from the access token passed in the request. The quotaUser and userIp parameters can only be used for anonymous requests against public files.
If all the files are in the same directory, you could use a service account and not have to worry about this error.
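For illustration, adding quotaUser to the download URL from the question might look like the following sketch (the file ID and API key are placeholders):

```python
import urllib.parse

def drive_download_url(file_id, api_key, quota_user):
    """Build a Drive v3 media-download URL carrying a quotaUser parameter,
    so anonymous requests are counted per logical user instead of being
    lumped together under one caller."""
    params = urllib.parse.urlencode({
        "key": api_key,
        "alt": "media",
        "quotaUser": quota_user,
    })
    return f"https://www.googleapis.com/drive/v3/files/{file_id}?{params}"
```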
OAuth tokens.
An API key is created in the Google Cloud console. API keys are used to access public API endpoints only; they identify your application to Google and no more. You can only access public data, not private user data. How to create an api key
Access token + refresh token: these are the result of an OAuth2 authorization request by a user. Access tokens are short-lived; they work for an hour and then expire. They give you access to a user's data when you send an authorization header containing the access token along with your request. Refresh tokens are long-lived and can be used to request a new access token on behalf of the user when the one you have has expired. Understand Oauth2 with curl
Our application has only a front end (FE) and a back end (BE). Each user can have a lot of documents, which are uploaded by the back end but need to be displayed on the FE. We are trying to use pre-signed URLs to let the FE download documents directly from S3.
The BE returns the document paths and exposes an API that can be used to generate a pre-signed URL for each document.
POST: /app/v1/generate-pre-signed-url/path-to-document
The issue is that the generate-pre-signed-url API above needs to verify whether the current user has access to the document, and it's really hard to map user permissions to a document path.
Any suggested design or pattern would be appreciated!
As per the AWS docs:
When you create a presigned URL for your object, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (GET to download the object), and an expiration date and time.
So, in summary, the object key alone would not suffice (as in your case).
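One way to make that permission mapping tractable, assuming you control the key layout, is to namespace each user's documents under a per-user prefix so the check before generating the pre-signed URL is a simple prefix match. A minimal sketch (the key scheme is an assumption, not something from the question):

```python
def can_access(user_id: str, document_key: str) -> bool:
    """Authorize a pre-sign request by key prefix: each user's documents
    live under users/<user_id>/, so ownership is encoded in the path."""
    return document_key.startswith(f"users/{user_id}/")
```

The generate-pre-signed-url endpoint would call this with the authenticated user's ID before signing anything.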
I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://<bucket>.s3.amazonaws.com/?AWSAccessKeyId=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but willing to try something else if it works), and I couldn't find a way to write anything like this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grant me access to my bucket, but how do I pass the "source-side" access_key_id, signature, and expires parameters?
To make the problem a bit simpler: I can't even do a GET request for the object using the presigned parameters (not with regular HTTP; I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using parameters that were already given to me (I obviously don't have my data provider's secret_key).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting: the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and to get the source object. Permissions can be granted across accounts... though in many cases, that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
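As a sketch of the client side of such a copy, here is a PUT request built (but not sent) with Python's standard library; the presigned URL is a placeholder, and it must have been signed with these two headers included:

```python
import urllib.request

def build_copy_request(presigned_put_url, source_bucket, source_key):
    """Build (but don't send) a PUT request asking S3 to copy the object
    server-side from another bucket/key, with fresh metadata."""
    req = urllib.request.Request(presigned_put_url, method="PUT")
    # Tell S3 where to copy from instead of expecting a request body.
    req.add_header("x-amz-copy-source", f"/{source_bucket}/{source_key}")
    # REPLACE discards the source object's metadata in favor of new values.
    req.add_header("x-amz-metadata-directive", "REPLACE")
    return req
```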
Scenario: I place some files on Google Cloud Storage.
I want only paid users to be able to download these files. So my question is: how do I hide the files from paid users, to prevent them from sharing the URL with unpaid users?
So, is there a way to hide the real file location? Single-use or time-restricted URLs, or anything else?
Maybe hiding the URL is possible with other CDN providers - Microsoft Azure Storage or Amazon S3?
Amazon S3 provides query string authentication (usually referred to as pre-signed URLs) for this purpose, see Using Query String Authentication:
Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request. Query string authentication requests require an expiration date. [...]
All AWS Software Development Kits (SDKs) provide support for this, here is an example using the GetPreSignedUrlRequest Class from the AWS SDK for .NET, generating a pre-signed URL expiring 42 minutes from now:
using (var s3Client = AWSClientFactory.CreateAmazonS3Client("AccessKey", "SecretKey"))
{
    GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
        .WithBucketName("BucketName")
        .WithKey("Key")
        .WithProtocol(Protocol.HTTP)
        .WithExpires(DateTime.Now.AddMinutes(42));
    string url = s3Client.GetPreSignedURL(request);
}
Azure Storage has the concept of a Shared Access Signature. It's basically the URL for a BLOB (file) with parameters that limit access. I believe it's nearly identical to the Amazon S3 query string authentication mentioned in Steffen Opel's answer.
Microsoft provides a .NET library for handling Shared Access Signatures. They also provide the documentation you would need to roll your own library.
You can use Signed URLs in Google Cloud Storage to do this:
https://developers.google.com/storage/docs/accesscontrol#Signed-URLs
One way would be to create a Google Group containing only your paid users. Then, for the objects of interest, grant read permission to the group's email address (via the object's Access Control List). With that arrangement, only your paid members will be able to download these protected objects. If someone outside that group tries to access the URL, they'll get an access-denied error.
After you set this up, you'll be able to control who can access your objects by editing your group membership, without needing to mess with object ACLs.
Here's an alternative that truly hides the S3 URL. Instead of creating a query-string-authenticated URL with a limited validity, this approach takes a user's request, authorizes the user, fetches the S3 data, and finally returns the data to the requestor.
The advantage of this approach is that the user has no way of knowing the S3 URL and cannot pass the URL along to anyone else, as is the case in the query string authenticated URL during its validity period. The disadvantages to this approach are: 1) there is an extra intermediary in the middle of the S3 "get", and 2) it's possible that extra bandwidth charges will be incurred, depending on where the S3 data physically resides.
public void streamContent( User requestor, String contentFilename, OutputStream outputStream ) throws Exception {
    // is the requestor entitled to this content?
    boolean isAuthorized = authorizeUser( requestor, contentFilename );
    if( isAuthorized ) {
        AWSCredentials myCredentials = new BasicAWSCredentials( s3accessKey, s3secretKey );
        AmazonS3 s3 = new AmazonS3Client( myCredentials );
        S3Object object = s3.getObject( s3bucketName, contentFilename );
        FileCopyUtils.copy( object.getObjectContent(), outputStream );
    }
}
I've created a group with read-only access to S3 objects and then added a new user within that group.
I'm having trouble understanding what the URL to the file will be. So far I have the link as:
https://s3.amazonaws.com/my-bucket/
Do I need to pass in some sort of ID and key to let people in that group get access? What are those params, and where do I find their values?
To let people access your files, you can make the bucket public and then share the URL of each object, which you can find by checking the object's properties in the AWS Management Console. The catch is that anyone who knows this URL can access the files. To keep it read-only, use an ACL: go to Permissions, add a new permission for "Everyone", and check only the "Open/Download" box.
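For reference, the bucket-policy equivalent of that "Everyone"/"Open/Download" ACL is a public-read policy like the sketch below (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```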
If you want to limit access to only a few users, then you will have to use IAM policies. With an IAM policy you get an access key ID and a secret access key. Now, you CANNOT append the secret access key to the end of your URL to give users access; the secret access key is meant to be "SECRET". But what you can do is provide presigned URLs to your users through code. Here is a C# example:
private void GetWebUrl()
{
    var request = new GetPreSignedUrlRequest()
        .WithBucketName(yourBucketName)
        .WithKey(yourObjectKey);
    request.WithExpires(DateTime.Now.Add(new TimeSpan(0, 0, 0, 50))); // how long you want the URL to be active
    var url = S3.GetPreSignedURL(request);
}
The catch is that the URL will only be active for the given time, though you can increase it.
Also, in the above function, "S3" is an instance of the Amazon S3 client that I created:
private static AmazonS3Client S3;

public static AmazonS3Client CreateS3Client()
{
    var appConfig = ConfigurationManager.AppSettings;
    var accessKeyId = appConfig["AWSAccessKey"];
    var secretAccessKeyId = appConfig["AWSSecretKey"];
    S3 = new AmazonS3Client(accessKeyId, secretAccessKeyId);
    return S3;
}
Each IAM user will have their own cert and key that they set up with whatever S3 tools they are using. Just give them the URL to log in to their IAM AWS account, plus their credentials, and they should be good to go. The bucket URL will be the same.
The URL to a file on S3 does not change.
Usually you would not use the form that you have, but rather:
https://my-bucket.s3.amazonaws.com/path/to/file.png - where path/to/file.png is the key (see virtual-hosted-style bucket names).
The issue is that this URL will result in an error unless the person asking for it has the right credentials or the bucket is publicly readable.
You can take a URL like that and sign it (using a single call in some SDKs) with any credentials that have read-only access to the file, which will result in a time-limited URL that anyone can use. When the URL is signed, some extra query arguments are added so that AWS can tell it was an authorized person that created it.
OR
You can use the AWS API, and use the credentials that you have to directly download the file.
It depends on what you are doing.
For instance, to make a web page with links to files, you can create a bunch of time-limited URLs, one for each file. This page would be generated by code on your server, after a user logs in with some sort of IAM credentials.
Or, if you wanted to write a tool to manage an S3 repo, you would perhaps just download/upload, etc., directly using the API.