Does anyone know how to enable the EventBridge notifications via the s3 API? The documentation is not very helpful: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-notification-configuration.html.
EventBridgeConfiguration
All events get sent to the default event bus in the account, so there is nothing to configure, which is what makes the documentation and the call structure confusing. It makes more sense once you translate the API call into the underlying XML.
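On the wire, the request body is just a notification configuration containing an empty element, roughly like this (a sketch of the REST API shape, not copied from the docs):

<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <EventBridgeConfiguration></EventBridgeConfiguration>
</NotificationConfiguration>

The equivalent call with the JavaScript SDK is therefore an empty EventBridgeConfiguration object: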
await s3Client.putBucketNotificationConfiguration({
  Bucket: 'my-bucket-name',
  NotificationConfiguration: {
    EventBridgeConfiguration: {},
  }
}).promise();
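If you prefer the CLI that the linked documentation covers, the same empty structure applies there too (a sketch; the bucket name is a placeholder):

aws s3api put-bucket-notification-configuration \
    --bucket my-bucket-name \
    --notification-configuration '{ "EventBridgeConfiguration": {} }'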
My code uses the AWS JavaScript SDK to upload to S3 directly from a browser. Before the upload happens, my server sends the browser a value to use for the 'Authorization' header.
But I see no way in the AWS.S3.upload() method to add this header.
I know that underneath the .upload() method, AWS.S3.ManagedUpload is used but that likewise doesn't seem to return a Request object anywhere for me to add the header.
It works successfully in my dev environment when I hardcode my credentials in the S3() object, but I can't do that in production.
How can I get the Authorization header into the upload() call?
Client Side
This post explains how to POST from an HTML form with a pre-generated signature:
How do you upload files directly to S3 over SSL?
Server Side
When you initialise the S3 client, you can pass the access key and secret.
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  accessKeyId: '[value]',
  secretAccessKey: '[value]'
});

const params = {
  Bucket: '[bucket name]',   // required
  Key: '[object key]',       // required
  Body: '[file contents]'    // required: Buffer, string or stream
};

s3.upload(params, function (err, data) {
  console.log(err, data);
});
Reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
Alternatively, if you are running this code inside AWS services such as EC2, Lambda, ECS, etc., you can assign an IAM role to the service that you are using and attach the required permissions to that IAM role.
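For example, a minimal sketch assuming the code runs on a service that already has a role attached (bucket, key and body are placeholders):

const AWS = require('aws-sdk');

// No accessKeyId/secretAccessKey here: the SDK picks up temporary
// credentials from the attached IAM role automatically.
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

s3.upload({
  Bucket: 'my-bucket',                 // placeholder
  Key: 'uploads/photo.png',            // placeholder
  Body: Buffer.from('file contents')   // placeholder
}, function (err, data) {
  console.log(err, data);
});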
I suggest that you use presigned URLs.
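A rough sketch of that approach with the v2 JavaScript SDK: the server signs a short-lived PUT URL using its own credentials (or role) and returns it to the browser, which then uploads directly to S3. The bucket, key and expiry below are placeholders.

// Server side: generate a presigned PUT URL that is only valid for 60 seconds.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

const uploadUrl = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket',        // placeholder
  Key: 'uploads/photo.png',   // placeholder
  Expires: 60                 // seconds the URL stays valid
});

// Browser side (illustration only): PUT the file straight to S3 with that URL.
// fetch(uploadUrl, { method: 'PUT', body: file });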
I am using the MinIO client to access S3. The S3 storage I am using has two endpoints - one (say EP1) which is accessible from a private network and the other (say EP2) from the internet. My application creates a presigned URL for downloading an S3 object using EP1, since it cannot access EP2. This URL is used by another application which is not on this private network and hence has access only to EP2. This URL (obviously) does not work when used by the application outside the network, since it has EP1 in it.
I have gone through the MinIO documentation but did not find anything that helps me specify alternate endpoints.
So my questions are:
Is there anything which I have missed in MinIO that can help me?
Is there any S3 feature which allows generating a presigned URL for an object with EP2 in it?
Or is this not solvable without changing the current network layout?
You can use minio-js to manage this. Here is an example that you can use:
var Minio = require('minio')

// Create the client against the public endpoint (EP2) so the generated URL is usable from outside.
var s3Client = new Minio.Client({
  endPoint: "EP2",
  port: 9000,
  useSSL: false,
  accessKey: "minio",
  secretKey: "minio123",
  region: "us-east-1"
})

// Expiry is in seconds; the URL is delivered via the callback.
s3Client.presignedPutObject('my-bucketname', 'my-objectname', 1000, function(e, presignedUrl) {
  if (e) return console.log(e)
  console.log(presignedUrl)
})
This will not contact the server at all. The only thing is that you need to know the region the bucket belongs to. If you have not set any region in MinIO, you can use us-east-1 as the default.
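Since the goal here is a download URL, the matching call would be presignedGetObject on the same client (a sketch reusing the s3Client configured with EP2 above; expiry is in seconds and is typically capped at 7 days):

// Reuses the s3Client created above (pointed at EP2).
s3Client.presignedGetObject('my-bucketname', 'my-objectname', 24 * 60 * 60, function(e, presignedUrl) {
  if (e) return console.log(e)
  console.log(presignedUrl) // download URL containing EP2, usable from outside the private network
})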
I'm creating an ASP .Net Core 2.1 Web API. The front end (being written in Angular) will consume this API, which is used for a number of things, one of which is saving and retrieving files (pictures, PDF and Word docs, etc.)
We are storing all these files on Amazon S3. I was watching a tutorial video (https://www.youtube.com/watch?v=eRUjPrMMhCc) where the guy shows how to create a bucket, as well as upload and download a file from Amazon S3 from an ASP .Net Core 2.0 Web API, which I thought was fantastic as it's exactly what I needed.
But then I realized that, although the uploading functionality could be useful, the downloading might not be. The reason is that if the user requests a file (stored on Amazon S3) via the client web app, and this request goes to the API (as was my original intention), then the API would have to first download the file from S3 (which might take a few seconds) and then send it to the client (another few seconds). The file is therefore transmitted twice, unnecessarily slowing down the process of getting it from S3 to the client.
Is my thinking correct here? Would it be better if the Angular client retrieved the file directly from S3 instead of going via the API? In terms of speed?
The Amazon SDK has methods to handle all your scenarios. The principle here is to get a signed URL from Amazon S3 using the SDK and then pass it to your front end:
import * as AWS from "aws-sdk/global";
import S3 from "aws-sdk/clients/s3";

AWS.config.update({
  region: env.bucketRegion,
});

let clientParams: any = {
  region: env.bucketRegion,
  apiVersion: '2006-03-01',
  params: { Bucket: env.rekognitionBucket }
};
if (env.s3_endpoint) {
  clientParams.endpoint = env.s3_endpoint;
}

let s3 = new S3(clientParams);

// The signed URL can be handed to the browser, which then fetches the object directly from S3.
let url = s3.getSignedUrl('getObject', {
  Bucket: env.rekognitionBucket,
  Key: '1234.txt',
});
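The front end then only needs the URL itself, so the file bytes never pass through your Web API. A minimal client-side sketch, assuming a hypothetical endpoint on your API that returns the signed URL:

// Ask the API for a signed URL (hypothetical endpoint name), then let the
// browser download the object straight from S3.
fetch('/api/documents/1234/download-url')
  .then(response => response.text())
  .then(signedUrl => window.open(signedUrl, '_blank'));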
I am dealing with some legacy applications and want to use Amazon AWS API Gateway to mitigate some of the drawbacks.
Application A is able to call URLs with parameters but does not support HTTP basic auth, like this:
https://example.com/api?param1=xxx&param2=yyy
Application B is able to handle these calls and respond. BUT application B needs HTTP basic authentication.
The question is now, can I use Amazon AWS API Gateway to mitigate this?
The idea is to create an API of this style:
http://amazon-aws-api.example.com/api?authcode=aaaa&param1=xxx&param2=yyy
Then Amazon should check whether the authcode is correct and then call Application B's API with all remaining parameters, using some stored username + password. The result should just be passed along back to Application A.
I could also give username + password as a parameter, but I guess using a long authcode and storing the rather short password at Amazon is more secure. One could also use a changing authcode like the ones used in 2-factor authentications.
Path to a solution:
I created the following AWS Lambda function based on the HTTPS template:
'use strict';
const https = require('https');

exports.handler = (event, context, callback) => {
  // The incoming event is used directly as the request options (hostname, path, auth).
  const req = https.get(event, (res) => {
    let body = '';
    res.setEncoding('utf8');
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => callback(null, body));
  });
  req.on('error', callback);
  req.end();
};
If I use the Test function and provide it with this event it works as expected:
{
  "hostname": "example.com",
  "path": "/api?param1=xxx&param2=yyy",
  "auth": "user:password"
}
I suppose the best way from here is to use the API gateway to provide an interface like:
https://amazon-aws-api.example.com/api?user=user&pass=pass&param1=xxx&param2=yyy
Since the params of an HTTPS request are encrypted and they are not stored in Lambda, this method should be pretty secure.
The question now is how to connect the API Gateway to the Lambda.
You can achieve the scenario mentioned with AWS API Gateway. However, it won't be just a proxy integration; rather, you need a Lambda function which will forward the request after doing the transformation.
If the credentials are fixed credentials used to invoke the API, then you can store them in Lambda environment variables, encrypted with an AWS KMS key.
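A sketch of that pattern, assuming the Basic Auth credentials were encrypted with a KMS key and stored in an environment variable named BASIC_AUTH (the variable name and the decrypt-once caching are just illustrative):

const AWS = require('aws-sdk');
const kms = new AWS.KMS();

let cachedCredentials; // decrypt once per container, not on every invocation

exports.handler = async (event) => {
  if (!cachedCredentials) {
    const result = await kms.decrypt({
      CiphertextBlob: Buffer.from(process.env.BASIC_AUTH, 'base64')
    }).promise();
    cachedCredentials = result.Plaintext.toString('utf8'); // e.g. "user:password"
  }
  // ... forward the request to Application B using cachedCredentials ...
};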
However, if the credentials are sent for each user (e.g. a user logged into the application from a web browser), the drawback of this approach is that you need to store the username and password and retrieve them again later. It is not encouraged to store passwords, even encrypted. If this is the case, it is better to pass the credentials through (while also doing the transformations) rather than storing and retrieving them in between.
I have a rails app that uses aws cli to sync bunch of content and config with my s3 bucket like so:
aws s3 sync --acl 'public-read' #{some_path} s3://#{bucket_path}
Now I am looking for an easy way to mark everything that was just updated in the sync as invalidated or expired for CloudFront.
I am wondering if there is some way to use the --cache-control flag that the aws cli provides to make this happen, so that instead of invalidating CloudFront, I just mark the files as expired and CloudFront is forced to fetch fresh data from the bucket.
I am aware of the CloudFront POST API to mark files for invalidation, but that would mean I have to detect what changed in the last sync and then make the API call. I might have anywhere from one to thousands of files syncing. Not a pleasant prospect. But if I have to go this route, how would I go about detecting changes, without parsing the s3 sync console output of course?
Or any other ideas?
Thanks!
You cannot use the --cache-control option that aws cli provides to invalidate files in CloudFront. The --cache-control option maps directly to the Cache-Control header and CloudFront caches the headers along with the file, so if you change a header you must also invalidate to tell CloudFront to pull in the changed headers.
If you want to use the aws cli, then you must parse the output of the sync command and then use the aws cloudfront cli.
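For example, once you know which keys changed, the invalidation itself is a single command (the distribution ID and paths below are placeholders):

aws cloudfront create-invalidation --distribution-id E1234EXAMPLE --paths /images/logo.png /index.html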
Or, you can use s3cmd from s3tools.org. This program provides the --cf-invalidate option to invalidate the uploaded files in CloudFront and a sync command to synchronize a directory tree to S3.
s3cmd sync --cf-invalidate <local path> s3://<bucket name>
Read the s3cmd usage page for more details.
What about using the brand new AWS Lambda? Basically, it executes custom code whenever an event is triggered in AWS (in your case, a file is synchronized in S3).
Whenever you synchronize a file you get an event similar to:
{
  "Records": [
    {
      "eventVersion": "2.0",
      // ...
      "s3": {
        "s3SchemaVersion": "1.0",
        // ...
        "object": {
          "key": "hello.txt",
          "size": 4,
          "eTag": "1234"
        }
      }
    }
  ]
}
Thus, you can check the name of the file that has changed and invalidate it in CloudFront. You receive one event for every file that has changed.
I have created a script that invalidates a path in CloudFront whenever an update occurs in S3, which might be a good starting point if you decide to use this approach. It is written in JavaScript (Node.js) as it is the language used by Lambda.
var aws = require('aws-sdk'),
    s3 = new aws.S3({apiVersion: '2006-03-01'}),
    cloudfront = new aws.CloudFront();

exports.handler = function(event, context) {
  // Build the CloudFront path from the key of the S3 object that changed.
  var filePath = '/' + event.Records[0].s3.object.key,
      invalidateParams = {
        DistributionId: '1234',
        InvalidationBatch: {
          CallerReference: '1', // should be unique for each new invalidation request
          Paths: {
            Quantity: 1,
            Items: [filePath]
          }
        }
      };

  console.log('Invalidating file ' + filePath);
  cloudfront.createInvalidation(invalidateParams, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      console.log(data); // successful response
    }
    // Signal completion only after the invalidation request has finished.
    context.done(null, '');
  });
};
For more info you can check Lambda's and CloudFront's API documentation.
Note however that the service is still in preview and is subject to change.
The AWS CLI tool can output JSON. Collect the JSON results, then submit an invalidation request per the link you included in your post. To make it really simple you could use a gem like CloudFront Invalidator, which will take a list of paths to invalidate.