I have an API system (api.com) running on Elastic Beanstalk (Singapore region), and I want users in Virginia to be able to access api.com with the lowest possible latency. From what I know, CloudFront can do this if I put it in front of the ELB, but I ran into an issue when deploying. For example:
- api.com is a CNAME to the Elastic Beanstalk URL (xxx.elasticbeanstalk.com)
- my CloudFront name: xxx.cloudfront.net with origin api.com (Use All Edge Locations)
My understanding is that users in Virginia accessing xxx.cloudfront.net should see faster responses than when accessing api.com directly, because CloudFront uses edge locations.
But surprisingly, when I tested from Virginia:
- api.com took 2s to return the result
- xxx.cloudfront.net took 2.3s to return the result
It's even slower than going direct.
What am I doing wrong, and how can I build a system for global users?
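For a dynamic API that CloudFront cannot cache, the gain comes mostly from edge TLS termination and persistent edge-to-origin connections, not from caching, so a single cold test can easily be slower than going direct. A minimal CloudFormation sketch of such a distribution (resource names and domains are placeholders; the origin settings shown are assumptions, not the asker's actual config):

```yaml
# Hypothetical sketch -- names and domains are placeholders.
# Forwarding all headers disables caching, so CloudFront only helps
# via edge TLS termination and keep-alive connections to the origin.
ApiDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Origins:
        - Id: api-origin
          DomainName: api.com            # the Elastic Beanstalk CNAME target
          CustomOriginConfig:
            OriginProtocolPolicy: https-only
            OriginKeepaliveTimeout: 60   # reuse edge-to-origin connections
      DefaultCacheBehavior:
        TargetOriginId: api-origin
        ViewerProtocolPolicy: redirect-to-https
        AllowedMethods: [GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE]
        ForwardedValues:
          QueryString: true
          Headers: ['*']                 # '*' turns off caching entirely
```

When benchmarking, several repeated requests give a fairer picture than a single one, since the first request pays DNS resolution and a cold TLS handshake at the edge.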
How can I configure a CloudFront distribution to use multiple S3 origins with the same hierarchy but different domain names?
Currently I have a CloudFront distribution with a distribution domain name, for example xyz.cloudfront.net.
The distribution has been configured to use an alternate domain, e.g. assets.example.com, and to serve content using that domain I've added a CNAME record in my DNS management console that maps assets.example.com to xyz.cloudfront.net.
This setup works fine when serving content from a single S3 origin, as I can call something like assets.example.com/images/my-image.png.
However, I want to configure 3 S3 origins as follows, which have identical hierarchies, i.e. they all have an images folder:
dev-bucket.s3.eu-west-2.amazonaws.com
test-bucket.s3.eu-west-2.amazonaws.com
live-bucket.s3.eu-west-2.amazonaws.com
If I've configured assets.example.com to map to the distribution xyz.cloudfront.net, how is CloudFront going to know which origin to serve from?
Basically, if I'm running the dev website I want CloudFront to serve content from the dev origin, and if I'm running the test site I want it to serve from the test origin.
The only way I can see to achieve this is to create 3 different CloudFront distributions, one per environment, and map a different domain to each distribution, e.g. assets-dev.example.com, assets-test.example.com, and assets.example.com for the live site.
Any advice appreciated.
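For what it's worth, the three-distribution approach described above can be driven from a single parameterized CloudFormation template deployed once per environment. A sketch (the bucket and domain names come from the question; everything else is an assumption, and a matching ACM certificate in us-east-1 would also be needed for the aliases, omitted here for brevity):

```yaml
# Hypothetical sketch: one distribution per environment, chosen by a Stage parameter.
Parameters:
  Stage:
    Type: String
    AllowedValues: [dev, test, live]
Mappings:
  StageMap:
    dev:  { Bucket: dev-bucket.s3.eu-west-2.amazonaws.com,  Domain: assets-dev.example.com }
    test: { Bucket: test-bucket.s3.eu-west-2.amazonaws.com, Domain: assets-test.example.com }
    live: { Bucket: live-bucket.s3.eu-west-2.amazonaws.com, Domain: assets.example.com }
Resources:
  AssetsDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Aliases:
          - !FindInMap [StageMap, !Ref Stage, Domain]  # needs an ACM cert covering it
        Origins:
          - Id: assets-origin
            DomainName: !FindInMap [StageMap, !Ref Stage, Bucket]
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: assets-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```

The alternative of a single distribution with one cache behavior per path prefix (e.g. /dev/*, /test/*) would also work, but it changes the public URLs, which the question seems to want to avoid.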
Update:
The short version of the issue: I wanted to cross-reference some values between stacks in different regions, and the documentation was confusing enough to make me think that wasn't possible. It is possible; I just had to output those values as exported values in the source stack and then use:
${cf.us-east-1:another-stack.theNeededArn} in the other stack.
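A sketch of how that looks in practice (the stack, resource, and output names below are hypothetical; only the ${cf.us-east-1:...} variable syntax comes from the experience above):

```yaml
# Hypothetical sketch. In the us-east-1 stack ("another-stack"), declare the
# value as a stack output (the Ref of an AWS::Lambda::Version resource yields
# the version-qualified ARN, which is what CloudFront associations require):
resources:
  Outputs:
    theNeededArn:
      Value:
        Ref: AuthEdgeLambdaVersion   # placeholder resource name
      Export:
        Name: theNeededArn

# --- In the consuming stack's serverless.yml (deployed in any other region),
# reference the output with the region-pinned cf variable:
custom:
  lambdaEdgeArn: ${cf.us-east-1:another-stack.theNeededArn}
```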
The long version, if you are interested:
I maintain the infrastructure code using the Serverless Framework. I have a CloudFront distribution connected to an S3 bucket hosted in Europe. A client asked me to limit access to this bucket through CloudFront to authenticated users only (custom auth). Lambda@Edge was the best solution, and I implemented it that way, but because Lambda@Edge has to be deployed to us-east-1, I ended up moving the S3 bucket and the CloudFront distribution to us-east-1 as well (CloudFront depends on the Lambda@Edge function, and S3 depends on CloudFront, so I had to keep them in the same stack, or at least the same region). But for legal reasons I don't want to move my bucket to the US; I want to keep the data in Europe. My S3 bucket also has a Lambda trigger function that listens to it and writes some data to a DynamoDB table hosted in Europe.
So, the problem:
I had S3 in Europe and I want to keep it there, but because of Lambda@Edge, and because (as I thought) CloudFormation and Serverless don't support cross-region stack references, I ended up moving this S3 bucket to the US, which is not what the requirements call for.
I think that even with CloudFront in front, having all of our customers in Europe and putting the S3 bucket in the US will increase latency.
For legal reasons, I want to keep European users' data inside Europe and not move it outside.
In this question's answer, I described my approach with a full code example, in case you are interested: How to access AWS CloudFront that connected with S3 Bucket via Bearer token of a specific user (JWT Custom Auth)
Any suggestions?
Update (steps to show the exact problem):
In my serverless.yml, I created a stack containing this Lambda@Edge function:
I didn't want to configure Lambda@Edge using the Serverless Framework; instead, I used CloudFormation to configure CloudFront and everything else.
In the CloudFormation resources file, I added the CloudFront origins and configured the private origin, through its behavior, to use the Lambda@Edge function (please check the highlights in the picture):
Please note that I am now referencing the Lambda@Edge ARN inside my CloudFront configuration, so they need to be in the same region. Because Lambda@Edge must live in us-east-1, I decided to move CloudFront to the same region, which doesn't really matter since CloudFront is an edge service by design.
Also, if you are interested, here I defined all the roles needed for the Lambda@Edge function from step 1 (this includes publishing the correct policies and also the Lambda@Edge version, since with Lambda@Edge you have to reference a specific version, not the function itself; everything is set in this step, I just include it for completeness):
Now I have the CloudFront configuration and the Lambda@Edge configuration, and since CloudFront references the Lambda@Edge ARN, they must sit in the same region. Next I will define my S3 bucket and make it private so no one can access it directly, only the CloudFront CloudFrontOriginAccessIdentity:
As you can see in the role, I gave CloudFront access to get objects, and gave the Lambda@Edge function access to get and put (though I'm not sure that part is correct). In any case, even if we only need the CloudFrontOriginAccessIdentity to be connected to my bucket, the bucket is now linked to my CloudFront distribution, which is in turn linked to the Lambda@Edge function, so I can't separate them and put the S3 bucket in Europe only?
And consequently, even if I have an S3-triggered Lambda function, I would have to put that function in the US, even though it does work against a DynamoDB table in Europe? So what is the point? Also, even though CloudFront is an edge service, the bucket itself is regional, so if I really need to process data related to it, putting it in the US increases latency. That's my full, detailed problem.
Update 2:
I wanted to post the code as screenshots so I could highlight some lines and make it easier to follow, but for anyone interested in the code itself, I already posted the full version in my answer to this question: How to access AWS CloudFront that connected with S3 Bucket via Bearer token of a specific user (JWT Custom Auth)
The solution was, instead of moving everything over to us-east-1, to maintain two stacks: the primary stack and the Lambda@Edge stack. The primary stack resides in the EU and the Lambda@Edge stack resides in us-east-1. You can reference the Lambda@Edge functions in us-east-1 by using ${cf.us-east-1:another-stack.lambdaEdgeArn}.
It is impossible to reference a Lambda@Edge function deployed to us-east-1 from other regions via Fn::ImportValue. With plain CloudFormation, a workaround is to look up the version-specific ARN dynamically and pass it as a template parameter to the CloudFront template, for example from an Ansible task file:
- name: Get Lambda Version-ARN
  shell:
    cmd: "
      aws lambda list-versions-by-function \
        --function-name '{{ lambda_func_name }}' \
        --region '{{ lambda_region }}' \
        --query \"max_by(Versions, &to_number(to_number(Version) || '0'))\" \
      | jq -r '.FunctionArn'
      "
  register: lambda_output

- set_fact:
    lambda_arn: "{{ lambda_output.stdout }}"

- name: CloudFront
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: "{{ state }}"
    region: "{{ region }}"
    template: "roles/{{ role_name }}/templates/cloudfront-template.yml"
    template_parameters:
      LambdaARN: "{{ lambda_arn }}"
...
Our AWS statement came in and we noticed we're being doubly charged for the number of requests.
First charge is for Asia Pacific (Tokyo) (ap-northeast-1) and this is straightforward because it's where our bucket is located. But there's another charge against US East (N. Virginia) (us-east-1) with a similar number of requests.
Long story short, it appears this is happening because we're using the aws s3 command and we haven't specified a region either via the --region option or any of the fallback methods.
Typing aws configure list shows region: Value=<not set> Type=None Location=None.
And yet our aws s3 commands succeed, albeit with this seemingly hidden charge. The presumption is that our requests first go to us-east-1, but since there isn't a bucket there by the name we specified, the request turns around and comes back to ap-northeast-1, where it ultimately succeeds while being billed twice.
The EC2 instance where the aws command is run is itself in ap-northeast-1, if that counts for anything.
So the question is: is the presumption above a reasonable account of what's happening? (i.e. is this expected behaviour?) And, it seems a bit insidious to me, but is there a proper rationale for it?
What you are seeing is correct. The aws s3 command needs to know the region in order to access the S3 bucket.
Since this has not been provided, the CLI makes a request to us-east-1, which is effectively the default; the AWS S3 region chart shows that us-east-1 is the one region that does not require a location constraint.
If S3 receives a request for a bucket which is not in that region, it returns a PermanentRedirect response with the correct region for the bucket. The AWS CLI handles this transparently and repeats the request against the correct endpoint, which includes the region.
The easiest way to see this in action is to run commands in debug mode:
aws s3 ls ap-northeast-1-bucket --debug
The output will include:
DEBUG - Response body:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access
must be addressed using the specified endpoint. Please send all future requests to
this endpoint.</Message>
<Endpoint>ap-northeast-1-bucket.s3.ap-northeast-1.amazonaws.com</Endpoint>
<Bucket>ap-northeast-1-bucket</Bucket>
<RequestId>3C4FED2EFFF915E9</RequestId><HostId>...</HostId></Error>
The AWS CLI does not assume the region is the same as that of the calling EC2 instance; this is a long-running point of confusion and a standing feature request.
Additional note: not all AWS services will auto-discover the region in this way; most will simply fail if the region is not set. S3 works because it uses a global namespace, which inherently requires some form of discovery service.
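One way to avoid the extra us-east-1 round trip is simply to pin the region for the CLI. A minimal sketch, assuming the bucket lives in ap-northeast-1:

```ini
# ~/.aws/config -- give the CLI a default region so it stops
# falling back to us-east-1 and following the redirect
[default]
region = ap-northeast-1
```

Equivalently, pass --region ap-northeast-1 on each command, or set the AWS_DEFAULT_REGION environment variable; any of these stops the CLI from issuing the initial request against us-east-1.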
Is it possible to do that? I need to be able to access mydomain.com by typing my-domain.com in the browser's address bar.
Now I added a DNS entry:
my-domain.com CNAME mydomain.com
But this doesn't seem to work. I get a 404 Not Found error.
You can only map a single domain to your S3 bucket. However, you could use CloudFront to do this.
See my answer to another similar question for more information.
We had the same issue, and basically I set our CI to publish to two S3 buckets on release. Not ideal, but for the short term it keeps you clear of invalidating CloudFront caches on publish.
Is it possible to point a top-level domain like http://example.com to an Amazon CloudFront distribution?
I know it's possible with CNAMEs, but as far as I know, I need to set an A record for the top-level domain in the DNS settings.
As explained by @dgeske, this can be done.
In my case, I had not purchased the domain from Route 53, and hence had to do extra configuration.
Scenario: you have the following:
- a CloudFront distribution
- a second-level domain (example.com) not purchased from Amazon Route 53 (it was Google Domains in my case, but the idea will work for other providers too)
- you want to point the second-level domain (example.com) to the CloudFront distribution (as opposed to a subdomain like www.example.com)
Your nomenclature is slightly inaccurate: example.com is not a TLD (top-level domain); it is what is called a second-level domain. See the following image.
Steps to do this:
1. Create a hosted zone in Route 53.
2. Route 53 will now give you a list of name servers that you have to set in the domain settings panel of the provider from which you purchased the domain (Google Domains in my case).
3. Go back to the Route 53 dashboard and create an A record for this hosted zone (use the 'create record set' option). Remember to select the 'Yes' radio button for Alias. Make sure you leave the subdomain part empty (since we are only interested in creating a record for the second-level domain).
Now you should be able to access your CloudFront distribution at http://example.com.
Depending on your DNS server, it may take a while for the records to update.
You may configure your system to use a public DNS server such as 8.8.8.8 to verify that you can reach the CloudFront distribution via the URL. I used Firefox's DNS over HTTPS feature for this, which makes Firefox use Cloudflare's (not CloudFront's) DNS servers. You can also use the dig command-line utility: dig @8.8.8.8 example.com (my domain was fightcoronapune.com, hence dig @8.8.8.8 fightcoronapune.com), telling dig to use the 8.8.8.8 DNS server to resolve names.
You may additionally get an Access Denied error, in which case you will have to configure the default root object for your CloudFront distribution, so that when you visit http://example.com, the file http://example.com/index.html is served (assuming you specified index.html as the default root object). This error has nothing to do with the steps above; you would get it even when using your CloudFront distribution's own URL directly (e.g. going to http://abcd.cloudfront.net instead of http://example.com).
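If you prefer the CLI over the console, the A-Alias record described above can also be created with a Route 53 change batch. A sketch (the distribution domain abcd.cloudfront.net is a placeholder; Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 documents for all CloudFront alias targets):

```json
{
  "Comment": "Hypothetical change batch: apex alias to a CloudFront distribution",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "abcd.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```

Saved as alias.json, it would be applied with: aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://alias.json (where YOUR_ZONE_ID is the hosted zone you created in Route 53, not the CloudFront constant above).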
Q. Can I point my zone apex (example.com versus www.example.com) at my Amazon CloudFront distribution?
Yes. Amazon Route 53 offers a special type of record called an ‘Alias’ record that lets you map your zone apex (example.com) DNS name to your Amazon CloudFront distribution (for example, d123.cloudfront.net). IP addresses associated with Amazon CloudFront endpoints vary based on your end user’s location (in order to direct the end user to the nearest CloudFront edge location) and can change at any time due to scaling up, scaling down, or software updates. Route 53 responds to each request for an Alias record with the IP address(es) for the distribution. Route 53 doesn't charge for queries to Alias records that are mapped to a CloudFront distribution. These queries are listed as “Intra-AWS-DNS-Queries” on the Amazon Route 53 usage report.
Source: Amazon Route 53 FAQs
My understanding is that you cannot create a plain A record for CloudFront.
Amazon provides you with a domain name like YourName.cloudfront.net. They need to manage the DNS resolution for that domain name behind the scenes in order to route each request to the nearest edge server.
You can, if you add the domain as an alternate domain name (alias) in CloudFront, then create an A or AAAA alias record (AAAA if IPv6 is enabled on the distribution).