Error: creating CloudFront Distribution: CNAMEAlreadyExists

│ Error: creating CloudFront Distribution: CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource.
│ status code: 409
Hello, I'm trying to create new S3 buckets and CloudFront distributions and attach them to an existing Route 53 record.
I use a module for this, and the module is set up to create a new CNAME record. However, the record already exists, and I want one record to point to multiple CloudFront distributions.
How can I attach several CloudFront distributions to an existing Route 53 record?
Thank you all!

Related

Update a Word file that has been created in ACC

I need to update a Word file that has been created in ACC. I can download the file, but when I try to upload it again, I get the error: 'Only the bucket creator is allowed to access this api.'
It seems you can only upload files to buckets the application has created. Is this correct?
Note that I don't want to create a new version of the file.
Looks like you were uploading the new file via the bucket wip.dm.prod directly, which is owned by Autodesk Cloud products, e.g. BIM 360 Docs / Autodesk Docs (ACC Docs). It's expected that you cannot do that directly, since you're not the bucket owner.
To upload a new file version to Autodesk Cloud products, you will need to do the following (a rough sketch of both calls follows the note below):
Request a storage location: https://forge.autodesk.com/en/docs/bim360/v1/tutorials/document-management/upload-document/#step-5-create-a-storage-object
Create additional versions of the file for the updated file: https://forge.autodesk.com/en/docs/bim360/v1/tutorials/document-management/upload-document/#step-5-create-a-storage-object
Note: the Forge Data Management API is forward compatible with ACC.
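For reference, a rough, hedged sketch of those two REST calls with Python requests. The project ID, folder and item URNs, file name, and token below are placeholders, the exact payloads are documented in the linked tutorial, and the actual binary upload to the returned storage location happens between the two calls:

# Rough sketch only: token, project ID, URNs and file name are placeholders.
import requests

TOKEN = "..."  # 3-legged token with data:create / data:write scopes
PROJECT_ID = "b.project-id"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/vnd.api+json"}
BASE = "https://developer.api.autodesk.com/data/v1/projects"

# 1. Ask Docs for a storage location in its own bucket (wip.dm.prod).
storage = requests.post(
    f"{BASE}/{PROJECT_ID}/storage",
    headers=HEADERS,
    json={
        "jsonapi": {"version": "1.0"},
        "data": {
            "type": "objects",
            "attributes": {"name": "report.docx"},
            "relationships": {"target": {"data": {"type": "folders", "id": "urn:adsk...folder"}}},
        },
    },
).json()
object_id = storage["data"]["id"]

# 2. Upload the updated .docx to that storage location (see the linked tutorial),
#    then register it as an additional version of the existing item.
requests.post(
    f"{BASE}/{PROJECT_ID}/versions",
    headers=HEADERS,
    json={
        "jsonapi": {"version": "1.0"},
        "data": {
            "type": "versions",
            "attributes": {
                "name": "report.docx",
                "extension": {"type": "versions:autodesk.bim360:File", "version": "1.0"},
            },
            "relationships": {
                "item": {"data": {"type": "items", "id": "urn:adsk...item"}},
                "storage": {"data": {"type": "objects", "id": object_id}},
            },
        },
    },
)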

How does versioning work on Amazon Cloudfront?

I've just set up a static website on Amazon S3. I'm also using the Cloudfront CDN service.
According to Amazon, there are 2 methods available for clearing the Cloudfront cache: invalidation and versioning. My question is regarding the latter.
Consider the following example:
I link to an image file (image.jpg) from my index.html file. I then decide to replace the image. I upload a second image with the filename: image_2.jpg and change the link in my index.html file.
Will the changes automatically take effect or is some further action required?
What triggers the necessary changes if the edited and newly uploaded files are located in the bucket and not the cache?
Versioning in CloudFront is nothing more than adding (or prefixing) a version to the name of the object or to the 'folder' where objects are stored. Either:
put all objects in a folder v1 and use a URL like
https://xxx.cloudfront.net/v1/image.png
or give every object a version in its name, like image_v1.png, and use a URL like https://xxx.cloudfront.net/image_v1.png
The second option is often a bit more work, but you don't need to re-upload files that haven't changed (cheaper in terms of storage). The first option is usually clearer and requires less work.
Using CloudFront versioning requires more S3 storage, but it is often cheaper than creating many invalidations.
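For example, a minimal sketch of the folder-prefix approach with boto3 (the bucket name and keys are placeholders):

# Minimal sketch of folder-prefix versioning; bucket name and keys are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-site-bucket"

# Upload the new asset under a fresh version prefix instead of overwriting v1/.
s3.upload_file("image.jpg", BUCKET, "v2/image.jpg", ExtraArgs={"ContentType": "image/jpeg"})

# index.html now references /v2/image.jpg, so CloudFront fetches it as a brand-new
# object; the stale /v1/image.jpg cache entry simply stops being requested.
s3.upload_file("index.html", BUCKET, "index.html", ExtraArgs={"ContentType": "text/html"})
# Note: index.html itself is still cached, so either invalidate it or give it a short TTL.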
The other way to refresh the cache is to create invalidations (which can get expensive). If you don't really need invalidations but just want quicker cache refreshes (the default is 24 hours), you can update the TTL settings at the origin level, or set the cache duration for an individual object at the object level.
Your CloudFront configuration has a cache TTL, which determines when the cached file will be refreshed, regardless of when the source changes.
If you need it updated right away, use the invalidation function on your index.html file.
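A minimal boto3 sketch of that invalidation call, assuming a placeholder distribution ID:

# Minimal sketch: invalidate index.html so CloudFront refetches it from the origin.
import time
import boto3

cf = boto3.client("cloudfront")
cf.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/index.html"]},
        "CallerReference": str(time.time()),  # any string that is unique per request
    },
)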
I'll chime in on this in case anyone else comes here looking for what I did. You can set up CloudFront with S3 versioning enabled and reference specific S3 object versions if you know which version you need. I put it behind a signed CloudFront URL and ended up with this in the Java SDK:
import com.amazonaws.services.cloudfront.CloudFrontUrlSigner;
import com.amazonaws.services.cloudfront.util.SignerUtils;
import java.io.File;
import java.net.URL;

S3Properties s3Properties... // Custom properties pulled from a config file
// Build the CloudFront URL for the object, pinning a specific S3 object version
String cloudfrontUrl = "https://" + s3Properties.getCloudfrontDomain() + "/" +
        documentS3Key + "?versionId=" + documentS3VersionId;
// Sign the URL with a canned policy using the CloudFront key pair
URL cloudfrontSignedUrl = new URL(CloudFrontUrlSigner.getSignedURLWithCannedPolicy(
        cloudfrontUrl,
        s3Properties.getCloudfrontKeypairId(),
        SignerUtils.loadPrivateKey(new File(s3Properties.getCloudfrontKeyfilePath())),
        getPresignedUrlExpiration()));

Import data from BigQuery to Cloud Storage in different project

I have two projects under the same account:
projectA with BQ and projectB with cloud storage
projectA has BQ with dataset and table - testDataset.testTable
projectB has Cloud Storage and a bucket - testBucket
I use Python and the Google Cloud REST API
account key credentials for each project, with different permissions: the projectA key has permissions only for BigQuery; the projectB key only for Cloud Storage
What I need:
import data from projectA testDataset.testTable to projectB testBucket
Problems
Of course, I'm running into a Permission denied error when I try, because apparently the projectA key does not have permissions for projectB's storage, and so on.
Another strange issue: since I have testBucket in projectB, I can't create a bucket with the same name in projectA, and I get:
This bucket name is already in use. Bucket names must be globally
unique. Try another name.
So it looks like all projects are connected; I guess that means it should be possible to import data from one project to another via the API.
What can I do in this case?
You've set this up wrong. You need to grant access to the same account on both projects so it works across them: there needs to be a single identity authorized to run the BigQuery export in projectA and to write to Cloud Storage in projectB.
Also, "Bucket names must be globally unique" means just that: the name is reserved globally (for the entire planet), not just within your project, so you can't create another bucket with that name.
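Once one account has the required roles on both projects, the export itself is a single extract job. A minimal sketch with the google-cloud-bigquery client, where the key file name and dataset location are assumptions and the project, table, and bucket names are the ones from the question:

# Rough sketch, assuming one service account granted BigQuery access on projectA
# and Storage write access on projectB. The key file path is a placeholder.
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json(
    "shared-service-account.json",  # placeholder: key of the cross-project account
    project="projectA",
)

# Export the table to the bucket that lives in projectB; bucket names are global,
# so only the bucket name is needed in the destination URI.
extract_job = client.extract_table(
    "projectA.testDataset.testTable",
    "gs://testBucket/testTable-*.csv",  # wildcard lets BigQuery shard large exports
    location="US",  # assumption: the dataset's location
)
extract_job.result()  # wait for the export to finish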

Redis Cache Share Across Regions

I've got an application using Redis for caching, and it works well so far. However, we need to spread our application across different regions (through a dynamic DNS dispatcher based on user location, so a local user hits the nearest server).
Given the network limitations and bandwidth, building a centralised Redis is not realistic, so we have to assign a separate Redis to each region. The problem is how to handle the roaming case: a user opens the app in location 1, then continues using it in location 2, without missing the cache built up in location 1.
You will have to use a tiered architecture. This is how most CDNs, like Akamai or Amazon CloudFront, work.
Simply put, this is how it works:
When an object is requested, see if it exists in the Redis cache server S1 assigned to location L1.
If it does not exist in S1, check whether it exists in the caching servers of the other locations, i.e. S2, S3, ..., SN.
If it is found in S2...SN, store the object in S1 as well, and serve the object.
If it is not found in S2...SN either, fetch the object fresh from the backend, store it in S1, and serve it.
If you are using memcached for caching, Facebook's open-source mcrouter project will help, as it handles this kind of routing across memcached pools.
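A minimal sketch of that tiered lookup with redis-py, where the host names, key handling, and TTL are placeholders:

# Rough sketch of the tiered lookup described above; hosts and TTL are placeholders.
import redis

LOCAL = redis.Redis(host="redis-local.example.com")          # S1: this region's cache
REMOTES = [
    redis.Redis(host="redis-eu.example.com"),                # S2..SN: other regions
    redis.Redis(host="redis-ap.example.com"),
]
TTL = 3600  # placeholder cache lifetime in seconds

def get_cached(key, fetch_from_backend):
    value = LOCAL.get(key)                 # 1. try the local region first
    if value is not None:
        return value
    for remote in REMOTES:                 # 2. fall back to the other regions
        try:
            value = remote.get(key)
        except redis.RedisError:
            continue                       # a distant region being down is not fatal
        if value is not None:
            LOCAL.set(key, value, ex=TTL)  # 3. warm the local cache
            return value
    value = fetch_from_backend(key)        # 4. finally, go to the backend
    LOCAL.set(key, value, ex=TTL)
    return value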

Is it possible to setup DNS for s3 using multiple buckets for a single domain?

Is there a way to use another bucket name when hosting a site (or indeed any content) than just www.example.com.s3-region.amazonaws.com? I want to use multiple buckets so that when I update the site I can roll back a version if problems arise, and so that updates are an atomic switch between site versions. I only want one bucket used for a domain at a time.
I.e. something like
Bucket Names:
www.example.com.bucket1
www.example.com.bucket2
Procedure:
www.example.com currently points to -> www.example.com.bucket1.s3-region.amazonaws.com
New site version is uploaded to www.example.com.bucket2.
Once verified DNS is changed so that www.example.com points to -> www.example.com.bucket2.s3-region.amazonaws.com
This won't work, because S3 uses the hostname of the request (www.example.com) to determine which bucket you're trying to access, so the bucket has to have the same name as the domain.
But it is possible to achieve what you want with Amazon CloudFront. There are two options:
You can create a single distribution and only update its origin (the S3 bucket); a rough sketch of this follows below.
You can create two different distributions and update the DNS settings to point to the desired distribution. You would also need to update the CNAME properties in both of the distributions (remove www.example.com from the old distribution and add it to the new one).
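A rough boto3 sketch of the first option, swapping the origin on a single distribution; the distribution ID and bucket endpoints are placeholders:

# Rough sketch of option 1: repoint one distribution at a new origin bucket.
import boto3

cf = boto3.client("cloudfront")
dist_id = "E1234567890ABC"  # placeholder distribution ID

# CloudFront updates require sending back the full config together with its ETag.
resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Swap the origin from bucket1 to bucket2 (website-endpoint-style origin assumed).
config["Origins"]["Items"][0]["DomainName"] = (
    "www.example.com.bucket2.s3-website-us-east-1.amazonaws.com"
)

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)
# The change takes effect as the distribution redeploys; DNS keeps pointing at CloudFront.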