I am using the MinIO client to access S3. The S3 storage I am using has two endpoints: one (say EP1) accessible from a private network and the other (say EP2) from the internet. My application creates a presigned URL for downloading an S3 object using EP1, since it cannot access EP2. This URL is used by another application which is not on the private network and hence has access only to EP2. The URL (obviously) does not work when used by the application outside the network, since it has EP1 in it.
I have gone through the MinIO documentation but did not find anything that helps me specify alternate endpoints.
So my questions are:
1. Is there anything I have missed in MinIO that can help me?
2. Is there any S3 feature which allows generating a presigned URL for an object with EP2 in it?
3. Or is this not solvable without changing the current network layout?
You can use minio-js to manage this. Here is an example you can use:
var Minio = require('minio')

// Point the client at the public endpoint (EP2); the URL is signed locally,
// so this client never needs to be able to reach EP2 itself.
var s3Client = new Minio.Client({
    endPoint: "EP2",
    port: 9000,
    useSSL: false,
    accessKey: "minio",
    secretKey: "minio123",
    region: "us-east-1"
})

// The expiry is given in seconds; the presigned URL is delivered via the callback.
s3Client.presignedPutObject('my-bucketname', 'my-objectname', 1000, function(e, presignedUrl) {
    if (e) return console.log(e)
    console.log(presignedUrl)
})
This will not contact the server at all. The only thing here is that you need to know the region the bucket belongs to. If you have not set any location in MinIO, you can use us-east-1, which is the default.
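Since the question is about downloading, the same idea works with presignedGetObject; a minimal sketch, reusing the client above with placeholder bucket and object names:

// Generate a download (GET) URL against EP2; the expiry is in seconds (here 24 hours).
s3Client.presignedGetObject('my-bucketname', 'my-objectname', 24 * 60 * 60, function(e, presignedUrl) {
    if (e) return console.log(e)
    console.log(presignedUrl)
})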
Does anyone know how to enable EventBridge notifications via the S3 API? The documentation is not very helpful: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-notification-configuration.html.
The key is EventBridgeConfiguration. All events get sent to the default bus in the account, so there is nothing to configure inside it; this makes the documentation and the call structure confusing. It makes more sense if you translate the API call into XML.
await s3Client.putBucketNotificationConfiguration({
    Bucket: 'my-bucket-name',
    NotificationConfiguration: {
        // An empty EventBridgeConfiguration is all that is needed to enable delivery to EventBridge.
        EventBridgeConfiguration: {},
    }
}).promise();
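If you are on v3 of the AWS SDK for JavaScript, the equivalent call should look roughly like this (a sketch, assuming the @aws-sdk/client-s3 package):

import { S3Client, PutBucketNotificationConfigurationCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Same idea: an empty EventBridgeConfiguration turns on EventBridge delivery for the bucket.
await client.send(new PutBucketNotificationConfigurationCommand({
    Bucket: "my-bucket-name",
    NotificationConfiguration: {
        EventBridgeConfiguration: {},
    },
}));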
I'm using the AWS CDK to set up S3 and CloudFront static website hosting. All works well until I want to redirect "http[s]://www.mydomain.com" to "https://mydomain.com". I do not want to make the S3 buckets public; rather, I want to grant bucket permission to the CloudFront "Origin Access Identity". The relevant snippet of my CDK code is as follows:
const wwwbucket = new s3.Bucket(this, "www." + domainName, {
    websiteRedirect: {
        hostName: domainName,
        protocol: s3.RedirectProtocol.HTTPS
    },
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL
})

const oaiWWW = new cloudfront.OriginAccessIdentity(this, 'CloudFront-OriginAccessIdentity-WWW', {
    comment: 'Allows CloudFront to access the bucket'
})

wwwbucket.grantRead(oaiWWW)
const cloudFrontRedirect = new cloudfront.CloudFrontWebDistribution(this, 'https://www.' + domainName + '.com redirect', {
    aliasConfiguration: {
        acmCertRef: certificateArn,
        names: [ "www." + domainName ],
        sslMethod: cloudfront.SSLMethod.SNI,
        securityPolicy: cloudfront.SecurityPolicyProtocol.TLS_V1_1_2016,
    },
    defaultRootObject: "",
    originConfigs: [
        // {
        //     customOriginSource: {
        //         domainName: wwwbucket.bucketWebsiteDomainName
        //     },
        //     behaviors: [ { isDefaultBehavior: true } ],
        // },
        {
            s3OriginSource: {
                s3BucketSource: wwwbucket,
                originAccessIdentity: oaiWWW
            },
            behaviors: [ { isDefaultBehavior: true } ],
        }
    ]
});
Unfortunately the result is that rather than redirecting, browsing to www.mydomain.com results in the browser showing an S3 XML bucket listing result. I can fix the problem manually by using the AWS console to edit CloudFront's "Origin Domain Name" within "origin settings" from:
bucketname.s3.eu-west-2.amazonaws.com
to:
bucketname.s3-website.eu-west-2.amazonaws.com
Then all works as expected. I have tried changing my CDK script to use a customOriginSource rather than an s3OriginSource (commented-out code above), which results in the correct address in CloudFront's "Origin Domain Name", but then the CloudFront distribution does not have an "Origin Access Identity" and so can't access the S3 bucket.
Does anyone know a way to achieve the redirect without having to make the redirect bucket public or edit the "Origin Domain Name" manually via the AWS console?
I thought I'd found an answer using a CDK escape hatch. After creating the CloudFront distribution for my redirect, I modified the CloudFormation JSON behind the CDK class as follows (in TypeScript):
type ChangeDomainName = {
    origins: {
        domainName: string
    }[]
}

const cfnCloudFrontRedirect = cloudFrontRedirect.node.defaultChild as cloudfront.CfnDistribution
var distributionConfig = cfnCloudFrontRedirect.distributionConfig as cloudfront.CfnDistribution.DistributionConfigProperty & ChangeDomainName
distributionConfig.origins[0].domainName = wwwbucket.bucketWebsiteDomainName
cfnCloudFrontRedirect.distributionConfig = distributionConfig
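For what it's worth, the same override can also be expressed with the CfnDistribution escape-hatch helper rather than by mutating the config object. This is only a sketch (reusing cfnCloudFrontRedirect from above) and, since it produces the same template, it fails with the same deployment error described below:

// Override the synthesized template property directly instead of mutating distributionConfig.
cfnCloudFrontRedirect.addPropertyOverride(
    'DistributionConfig.Origins.0.DomainName',
    wwwbucket.bucketWebsiteDomainName
)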
Unfortunately, although this appeared to generate the CloudFormation template I was aiming for (checked using cdk synthesize), when deploying (cdk deploy) the following error was generated by CloudFormation:
UPDATE_FAILED | AWS::CloudFront::Distribution |
The parameter Origin DomainName does not refer to a valid S3 bucket.
It appears that even though it's possible to set a website endpoint of the form ${bucketname}.s3-website.${region}.amazonaws.com manually in the Origin Domain Name field within the CloudFront console, this isn't possible using CloudFormation. This leads me to conclude either:
This is a bug with CloudFormation.
It's a bug in the console, in that the console shouldn't allow this setup.
However, although modifying Origin Domain Name in the console currently works, I don't know whether this is a "legal" configuration, and it could be changed in the future, in which case my setup might stop working. The current solutions are:
Make the redirect bucket public, in which case the customOriginSource will work.
Rather than using a redirect bucket, use a Lambda to perform the redirect and deploy it within CloudFront using Lambda@Edge.
I would prefer not to make my redirect bucket public, as it results in warnings when using security-checking tools. The option of deploying Lambda@Edge using the CDK outside of us-east-1 currently looks painful, so for the moment I'll continue manually editing Origin Domain Name in the console. A sketch of what the redirect handler itself could look like is below.
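This is a minimal sketch of the Lambda@Edge route, assuming a viewer-request trigger on the distribution's default behaviour; the function itself must still be deployed in us-east-1, and the host handling is an assumption:

// Viewer-request handler that 301-redirects www.<domain> to <domain>, preserving the path.
exports.handler = async (event) => {
    const request = event.Records[0].cf.request;
    const host = request.headers.host[0].value;

    if (host.startsWith('www.')) {
        return {
            status: '301',
            statusDescription: 'Moved Permanently',
            headers: {
                location: [{ key: 'Location', value: 'https://' + host.slice(4) + request.uri }],
            },
        };
    }
    return request;
};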
For reference, the AWS documentation appears to imply that the API prohibits this use, though the console permits it; see Using Amazon S3 Buckets Configured as Website Endpoints for Your Origin. See also:
Key differences between a website endpoint and a REST API endpoint
My code uses the AWS Javascript SDK to upload to S3 directly from a browser. Before the upload happens, my server sends it a value to use for 'Authorization'.
But I see no way in the AWS.S3.upload() method to add this header.
I know that underneath the .upload() method, AWS.S3.ManagedUpload is used, but that likewise doesn't seem to return a Request object anywhere for me to add the header.
It works successfully in my dev environment when I hardcode my credentials in the S3() object, but I can't do that in production.
How can I get the Authorization header into the upload() call?
Client Side
This post explains how to POST from an HTML form with a pre-generated signature:
How do you upload files directly to S3 over SSL?
Server Side
When you initialise the S3 client, you can pass the access key and secret.
const s3 = new AWS.S3({
    apiVersion: '2006-03-01',
    accessKeyId: '[value]',
    secretAccessKey: '[value]'
});

// params must include at least Bucket, Key and Body for an upload.
const params = {};

s3.upload(params, function (err, data) {
    console.log(err, data);
});
Reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
Alternatively, if you are running this code inside AWS services such as EC2, Lambda, ECS, etc., you can assign an IAM role to the service you are using and attach the required permissions to that role.
I suggest that you use presigned URLs.
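A minimal sketch of that approach, assuming the v2 SDK's getSignedUrl on the server; the bucket, key and expiry are placeholders, and the browser never sees any credentials:

// Server side: sign a PUT for a specific key.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

const uploadUrl = s3.getSignedUrl('putObject', {
    Bucket: 'my-bucket',           // placeholder
    Key: 'uploads/my-file.png',    // placeholder
    Expires: 300                   // URL validity in seconds
});
// ...return uploadUrl to the browser, e.g. as JSON from your API.

// Browser side: upload directly to S3 with a plain PUT; no Authorization header is needed
// because the signature is carried in the URL's query string.
// fetch(uploadUrl, { method: 'PUT', body: file });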
I'm creating an ASP .Net Core 2.1 Web API. The front end (being written in Angular) will consume this API, which is used for a number of things, one of which is saving and retrieving files (pictures, PDF and Word docs, etc.)
We are storing all these files on Amazon S3. I was watching a tutorial video (https://www.youtube.com/watch?v=eRUjPrMMhCc) where the guy shows how to create a bucket, as well as upload and download a file from Amazon S3 from an ASP .Net Core 2.0 Web API, which I thought was fantastic as it's exactly what I needed.
But then I realized that, although the uploading functionality could be useful, the downloading might not be. The reason being that, if the user requests a file (stored on Amazon S3) via the client web app, and this request goes to the API (as was my original intention), then the API would have to first download this file from S3 (which might take a few seconds) and then send it to the client (another few seconds). So the file is being transmitted twice, and therefore unnecessarily slowing down the process of getting a file from S3 to the client.
Is my thinking correct here? Would it be better if the Angular client retrieved the file directly from S3 instead of going via the API? In terms of speed?
The Amazon SDK has methods to handle all your scenarios. The principle here is to get a signed URL from Amazon S3 using the SDK and then pass it to your front end:
import * as AWS from "aws-sdk/global";
import * as S3 from "aws-sdk/clients/s3";

AWS.config.update({
    region: env.bucketRegion,
});

let clientParams: any = {
    region: env.bucketRegion,
    apiVersion: '2006-03-01',
    params: { Bucket: env.rekognitionBucket }
};

// Use a custom endpoint (e.g. a local S3-compatible store) when one is configured.
if (env.s3_endpoint) {
    clientParams.endpoint = env.s3_endpoint;
}

let s3 = new S3(clientParams);

// The signed URL can be returned to the front end and fetched directly from S3.
let url = s3.getSignedUrl('getObject', {
    Bucket: env.rekognitionBucket,
    Key: '1234.txt',
});
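The front end then only needs the URL itself, for example (assuming url has been returned from your API):

// Download directly from S3 using the presigned URL; no AWS credentials in the browser.
fetch(url)
    .then(response => response.blob())
    .then(blob => {
        // e.g. trigger a download or display the file
        console.log('downloaded', blob.size, 'bytes');
    });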
I am using Fine Uploader to upload files to S3.
Based on my experience, Fine Uploader forces you to hard-code the S3 bucket name in the JavaScript itself, or it may be my misunderstanding! My challenge is that I have a different bucket per environment. Does that mean I have to use a separate JavaScript snippet (like below) per environment, such as local, dev, test, etc.? Is there any option in which I can pass the bucket name from the server-side configuration?
Local
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    autoUpload: false,
    debug: true,
    request: {
        endpoint: "http://s3.amazonaws.com/bucket_local",
        accessKey: "AKxxxxxxxxxxBIA",
    }
});
Dev
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    autoUpload: false,
    debug: true,
    request: {
        endpoint: "http://s3.amazonaws.com/bucket_dev",
        accessKey: "AKxxxxxxxxxxBIA",
    }
});
Test
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    autoUpload: false,
    debug: true,
    request: {
        endpoint: "http://s3.amazonaws.com/bucket_test",
        accessKey: "AKxxxxxxxxxxBIA",
    }
});
Based on my experience, Fine Uploader forces you to hard-code the S3 bucket name in the JavaScript itself, or it may be my misunderstanding!
Yes, this is definitely a misunderstanding.
Normally, you would specify your bucket as a URL via the request.endpoint option. You can specify a default value here and then override it at almost any time, for all subsequent files or for one or more specific files, via the setEndpoint API method. You can call this method from, for example, an onSubmit callback handler, and even delegate to your server for the bucket by returning a Promise in your callback handler and resolving the Promise once you have called setEndpoint.
Fine Uploader attempts to determine the bucket name from the request.endpoint URL. If this is not possible (such as if you are using a CDN as an endpoint), you will need to supply the bucket name via the objectProperties.bucket option. This too can be dynamically updated, as the option value may be a function, and you may even return a Promise from this function if you would like to delegate to an async task to determine the bucket name (such as if you need to get the bucket name from a server endpoint using an ajax call). Fine Uploader S3 will call your bucket function before it attempts to upload each file, passing the ID of the file to your function.
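Here is a minimal sketch of both options, using the core (non-jQuery) API for clarity; the /config/s3 route and its JSON shape ({ endpoint, bucket }) are hypothetical stand-ins for your server-side configuration:

// Fetch the per-environment S3 settings once, before any uploads start.
var configPromise = fetch('/config/s3').then(function (res) { return res.json(); });

var uploader = new qq.s3.FineUploader({
    element: document.getElementById('fine-uploader'),
    template: 'qq-template',
    autoUpload: false,
    request: {
        accessKey: "AKxxxxxxxxxxBIA"
    },
    objectProperties: {
        // May return a String or a Promise resolving to the bucket name; called once per file.
        bucket: function (id) {
            return configPromise.then(function (config) { return config.bucket; });
        }
    },
    callbacks: {
        onSubmit: function (id, name) {
            // Returning a Promise defers the upload until setEndpoint has been called.
            return configPromise.then(function (config) {
                uploader.setEndpoint(config.endpoint, id);
            });
        }
    }
});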
I set s3_url as a hidden value in the HTML, with its value set on the server based on the environment config.
request: {
    endpoint: $('#s3_url').val()
}