Container storage location with specified provisioning code not available - amazon-s3

On creating a bucket using the S3 API, I get the following error:
Container storage location with specified provisioning code not available (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint; Request ID: f377cc84-2e76-490b-8161-4407a4b8d9d7), S3 Extended Request ID: null
However, I can create a bucket using the service portal on SoftLayer. Programmatically I can retrieve the latest list of buckets and even delete a bucket, but creation throws the above error.

A recent update has introduced unintended behavior around bucket creation, and we're working to fix it. The system is expecting a Location Constraint of us-standard. Provide the following XML block in the body of the request:
<CreateBucketConfiguration>
  <LocationConstraint>us-standard</LocationConstraint>
</CreateBucketConfiguration>
If using an SDK, you'd follow the conventions of the particular library you are using. For example, using boto3 creating a new bucket might look like this:
bucket = s3.create_bucket(Bucket='my-bucket',
                          CreateBucketConfiguration={'LocationConstraint': 'us-standard'})
In Java, it would look like:
s3.createBucket(bucketName, "us-standard");
Here's a link to the docs (we're steadily improving them).
Let me know if that doesn't help, or if you are using another tool or SDK. :)

Related

How can I add a CORS rule to an existing s3 bucket by CDK v3 javascript?

I can see there is a function called addCorsRule on the Bucket type for adding a CORS rule to a newly created S3 bucket.
I am trying to achieve a similar thing for an existing S3 bucket that was not created by me and lives in another AWS account.
The return type of all the functions for looking up an existing bucket, e.g. fromBucketAttributes, is IBucket, which does not provide a method for adding a new CORS rule.
I am wondering if there is a workaround to achieve this?
Thank you very much.
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.Bucket.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.IBucket.html
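For reference, a minimal JavaScript sketch of the two cases the question contrasts, against the aws-cdk-lib v2 API from the linked docs (the construct IDs and bucket ARN are hypothetical):
const cdk = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');

class CorsStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    // A bucket defined in this stack is a Bucket, so addCorsRule is available.
    const ownBucket = new s3.Bucket(this, 'OwnBucket');
    ownBucket.addCorsRule({
      allowedMethods: [s3.HttpMethods.GET],
      allowedOrigins: ['https://example.com'],
    });

    // A looked-up bucket is only an IBucket: CDK treats it as a reference to
    // a resource it does not own, so there is no addCorsRule to call.
    const existingBucket = s3.Bucket.fromBucketAttributes(this, 'ExistingBucket', {
      bucketArn: 'arn:aws:s3:::some-existing-bucket', // hypothetical ARN
    });
    // existingBucket.addCorsRule(...) // not available on IBucket
  }
}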

What is the S3 hostedZoneId input used for?

Digging through our Pulumi code, I see that it sets the hostedZoneId on the S3 bucket it creates and I don't understand what it's used for.
The bucket holds internal content (Pulumi state files) and is not set as a static website.
The Pulumi docs in AWS S3 Bucket hostedZoneId only state:
The Route 53 Hosted Zone ID for this bucket's region.
with a link to what appears to be an irrelevant page (looks like a copy-paste error since that link is mentioned earlier on the page).
S3 API docs don't mention the field either.
Terraform's S3 bucket docs, which Pulumi builds on and which are also a good reference for the S3 API in general, expose this as an output, but not as an input attribute.
Does anyone know what this attribute is used for?
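For context, a minimal Pulumi JavaScript sketch (the bucket name is hypothetical) showing where the attribute surfaces: consistent with the Terraform docs cited above, it behaves as an output the provider derives from the bucket's region, not a value the caller supplies.
'use strict';
const aws = require('@pulumi/aws');

// Nothing here sets hostedZoneId explicitly; the provider populates it
// from the bucket's region, and it can only be read back as an output.
const stateBucket = new aws.s3.Bucket('pulumi-state'); // hypothetical name
exports.bucketHostedZoneId = stateBucket.hostedZoneId;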

Attach Existing Cloud Resource (ex: S3 Bucket) to a Pulumi Project

Firstly, I love Pulumi.
We're trying to propose Pulumi as a solution for a distributed architecture, and it is going swimmingly. The uncertainty I have right now is whether it's possible to attach an existing cloud resource to the Pulumi configuration.
There is already an existing S3 bucket with media in it; what I'm wondering is whether it is possible to define that S3 bucket in our Pulumi config, or does Pulumi have to be the creator of a cloud resource before it can manage it?
This is possible with a resource's get function. In the case of an S3 bucket named "tpsReports-4f64efc" and a Lambda function "zipTpsReports-19d51dc", it would look like this:
// Look up existing resources by logical name and physical ID;
// Pulumi reads them rather than creating new ones.
const tpsReports = aws.s3.Bucket.get("tpsReports", "tpsReports-4f64efc");
const zipFunc = aws.lambda.Function.get("zipTpsReports", "zipTpsReports-19d51dc");
When you run your Pulumi program, the status of these resources will say read instead of create or update.
If you want to go one step further and adopt an existing resource to be fully managed by Pulumi, this blog post documents the entire process.

FineUploader: Harvest original last modified date when uploading to Amazon S3

I would like to send the last modified date of the uploaded file to the server. I have the javascript snippet to get it using the File API ($(this).fineUploaderS3('getFile', id).lastModifiedDate). I would like to send this information when the uploadSuccess endpoint is called, but I cannot find the right callback for this at Events | Fine Uploader documentation, and I cannot find a way to inject the data.
These are submitted as POST parameters to my server when the upload to S3 finishes: key, uuid, name, bucket. I would like to inject the lastModified date here somehow.
Option 2:
Asking the Amazon S3 service for the last modification date does not help directly, because the uploaded file gets the current date, not the file's original date. It would be great if we could inject the information into the FineUploader->S3 communication in a way that S3 would use it to set its own last modified date for the uploaded file.
Another perspective I considered:
If I use onSubmit and setParams, the Amazon S3 server will take it as 'x-amz-meta-lastModified'. The problem is that when I upload larger files (which are uploaded in chunks, with a different dance), I get a signing error. ...<Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>....
EDIT
The other perspective I considered works. The culprit was the name of the custom metadata key I used in setParams: it cannot contain capital letters, otherwise the signing fails. I did not find any reference documentation for this; for one, I checked Object Key and Metadata - Amazon Simple Storage Service. If someone can point me to a reference, I will include it here.
The original question (when and how to send last modified date to the server component) remains.
(Server is PHP.)
EDIT2
Option 2 will not work: as far as my research went, the "Last Modified" entry cannot be manually altered in Amazon S3.
If the S3 API does not return the expected last modified date, you can check the value of lastModifiedDate on the File object associated with the upload (provided the browser supports the File API) and send that value as a parameter to the upload success endpoint. See the documentation for the setUploadSuccessParams API method for more details.
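For illustration, a minimal sketch of that approach using the jQuery wrapper already shown in the question; the 'submitted' event and the 'lastModified' parameter name are my choices, not from the original answer:
// When a file is submitted, read its lastModifiedDate via the File API
// and register it as a POST parameter for the uploadSuccess request.
$('#uploader').on('submitted', function (event, id) {
  var file = $(this).fineUploaderS3('getFile', id);
  if (file && file.lastModifiedDate) {
    $(this).fineUploaderS3('setUploadSuccessParams', {
      lastModified: file.lastModifiedDate.getTime()
    }, id);
  }
});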

How can I update files on Amazon's CDN (CloudFront)?

Is there any way to update files stored on Amazon CloudFront (Amazon's CDN service)?
It seems it won't pick up any update we make to a file (e.g. removing the file and storing a new one with the same file name as before).
Do I have to explicitly trigger an update process to remove the files from the edge servers to get the new file contents published?
Thanks for your help
Here is how I do it using the CloudFront control panel.
1. Select CloudFront from the list of services.
2. Make sure Distributions is selected at the top left.
3. Click the link for the associated distribution in the list (under ID).
4. Select the Invalidations tab.
5. Click the Create Invalidation button and enter the paths of the files you want invalidated (updated).
6. Click the Invalidate button and you should now see InProgress under Status.
It usually takes 10 to 15 minutes to complete your invalidation
request, depending on the size of your request.
Once it says completed you are good to go.
Tip:
Once you have created a few invalidations, if you come back and need to invalidate the same files, use the select box and the Copy link will become available, making it even quicker.
Amazon has added an invalidation feature; see the API Reference.
Sample Request from the API Reference:
POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml
<InvalidationBatch>
  <Path>/image1.jpg</Path>
  <Path>/image2.jpg</Path>
  <Path>/videos/movie.flv</Path>
  <CallerReference>my-batch</CallerReference>
</InvalidationBatch>
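For comparison, here is the same request issued from code: a sketch using the AWS SDK for JavaScript v3, where the distribution ID and paths are the placeholders from the sample above.
const { CloudFrontClient, CreateInvalidationCommand } = require('@aws-sdk/client-cloudfront');

const client = new CloudFrontClient({ region: 'us-east-1' });

async function invalidate() {
  // CallerReference must be unique per request; a timestamp is a common choice.
  await client.send(new CreateInvalidationCommand({
    DistributionId: 'E1234567890', // placeholder
    InvalidationBatch: {
      CallerReference: `my-batch-${Date.now()}`,
      Paths: {
        Quantity: 3,
        Items: ['/image1.jpg', '/image2.jpg', '/videos/movie.flv'],
      },
    },
  }));
}

invalidate().catch(console.error);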
Set TTL=1 hour and replace the file when it changes. See also:
http://developer.amazonwebservices.com/connect/ann.jspa?annID=655
Download the Cloudberry Explorer freeware version to do this for single files:
http://blog.cloudberrylab.com/2010/08/how-to-manage-cloudfront-object.html
Cyberduck for Mac & Windows provides a user interface for object invalidation. Refer to http://trac.cyberduck.ch/wiki/help/en/howto/cloudfront.
I seem to remember seeing this on serverfault already, but here's the answer:
By "Amazon CDN" I assume you mean "CloudFront"?
It's cached, so if you need it updated right now (as opposed to "the new version will be visible in 24 hours"), you'll have to choose a new name. Instead of "logo.png", use "logo.png--0", then update it as "logo.png--1", and change your HTML to point to that.
There is no way to "flush" Amazon CloudFront.
Edit: This was not possible when written; it is now. See the comments on this reply.
CloudFront's user interface offers this under the [i] button > "Distribution Settings", tab "Invalidations": https://console.aws.amazon.com/cloudfront/home#distribution-settings
In Ruby, using the fog gem:
AWS_ACCESS_KEY = ENV['AWS_ACCESS_KEY_ID']
AWS_SECRET_KEY = ENV['AWS_SECRET_ACCESS_KEY']
AWS_DISTRIBUTION_ID = ENV['AWS_DISTRIBUTION_ID']

conn = Fog::CDN.new(
  :provider => 'AWS',
  :aws_access_key_id => AWS_ACCESS_KEY,
  :aws_secret_access_key => AWS_SECRET_KEY
)

images = ['/path/to/image1.jpg', '/path/to/another/image2.jpg']
conn.post_invalidation AWS_DISTRIBUTION_ID, images
Even after requesting an invalidation, it still takes 5-10 minutes for it to process and refresh on all Amazon edge servers.
CrossFTP for Win, Mac, and Linux provides a user interface for CloudFront invalidation, check this for more details: http://crossftp.blogspot.com/2013/07/cloudfront-invalidation-with-crossftp.html
I am going to summarize possible solutions.
Case 1: One-time update: Use Console UI.
You can manually go through the console's UI as per #CoalaWeb's answer and initiate an "invalidation" on CloudFront that usually takes less than one minute to finish. It's a single click.
Additionally, you can manually update the path it points to in S3 there in the UI.
Case 2: Frequent update, on the Same path in S3: Use AWS CLI.
You can use the AWS CLI to do the same thing from the command line.
The command is:
aws cloudfront create-invalidation --distribution-id E1234567890 --paths "/*"
Replace the E1234567890 part with the DistributionId that you can see in the console. You can also limit this to certain files instead of /* for everything.
An example of how to put it in package.json as a target for a Node/JavaScript project can be found in this answer. (different question)
Notes:
I believe the first 1000 invalidations per month are free right now (April 2021).
The user that performs AWS CLI invalidation should have CreateInvalidation access in IAM. (Example in the case below.)
Case 3: Frequent update, the Path on S3 Changes every time: Use a Manual Script.
If you are storing different versions of your files in S3 (i.e. the path contains the version-id of the files/artifacts) and you need to change that in CloudFront every time, you need to write a script to perform that.
Unfortunately, the AWS CLI for CloudFront doesn't allow you to update the path with a single command; you need a more detailed script. I wrote one, which is available with details in this answer. (different question)
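To give a rough idea of the shape such a script takes (this is my sketch, not the linked answer's code; it assumes a single origin whose OriginPath carries the artifact version), again with the AWS SDK for JavaScript v3:
const {
  CloudFrontClient,
  GetDistributionConfigCommand,
  UpdateDistributionCommand,
  CreateInvalidationCommand,
} = require('@aws-sdk/client-cloudfront');

const client = new CloudFrontClient({ region: 'us-east-1' });

async function pointDistributionAt(distributionId, newOriginPath) {
  // Read the current config; the returned ETag must be echoed back as IfMatch.
  const { DistributionConfig, ETag } = await client.send(
    new GetDistributionConfigCommand({ Id: distributionId })
  );

  // Assumes a single origin whose OriginPath encodes the version, e.g. '/v42'.
  DistributionConfig.Origins.Items[0].OriginPath = newOriginPath;

  await client.send(new UpdateDistributionCommand({
    Id: distributionId,
    DistributionConfig,
    IfMatch: ETag,
  }));

  // Invalidate so the edge caches pick up the new path immediately.
  await client.send(new CreateInvalidationCommand({
    DistributionId: distributionId,
    InvalidationBatch: {
      CallerReference: `repoint-${Date.now()}`,
      Paths: { Quantity: 1, Items: ['/*'] },
    },
  }));
}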