I have my site (www.mycompany.com.ar) hosted in my own data center. We developed a static site and put it at www.mycompany.com.ar/myapp. In my country connectivity issues are an everyday thing, so we decided to host a cloud copy in AWS S3 and keep both versions working. The domain is handled by our internet provider. How can I map the S3 bucket to a subdomain?
I have a problem with a domain I bought on AWS. The domain contains a special character ('ñ'). I have a static website on S3 but I cannot route the domain to the host: I cannot create a bucket with the character 'ñ', and when I check the hosted zones I see only a different domain. Can anyone help me solve this issue? Do I have to buy another domain?
I have multiple one page apps (static sites) in buckets on google cloud storage.
Each app can access the information it needs from one API running on a google app engine.
I can serve the one-page apps by pointing the CNAME of each domain to c.storage.googleapis.com, but that only serves them over HTTP, not HTTPS.
My questions are:
1) Why does Google Cloud Storage not serve the contents of buckets via HTTPS when I use a custom domain?
2) How can I serve content on Google Cloud Storage via HTTPS?
NOTE: From my basic understanding of Google load balancers, I can serve bucket content via HTTPS if I point the domain to a load balancer, but then I would need one load balancer per app, and that is too expensive. Is it possible to have one load balancer for all apps?
You don't need a load balancer for each app. You can add multiple backends to a single load balancer, and each backend can be connected to a separate (app-specific) storage bucket. You can then add a hostname mapping per application on the load balancer, which will proxy requests to the correct backend bucket based on the Host header of the request. You can also add path mappings to these rules if necessary.
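A rough sketch of the setup described above, using the gcloud CLI. All names (buckets, certificate, hostnames) are hypothetical placeholders, and this assumes the buckets and an SSL certificate resource already exist:

```shell
# One backend bucket per app, each pointing at its own GCS bucket
gcloud compute backend-buckets create app1-backend --gcs-bucket-name=app1-bucket
gcloud compute backend-buckets create app2-backend --gcs-bucket-name=app2-bucket

# A single URL map: app1 is the default, app2 is routed by hostname
gcloud compute url-maps create apps-lb --default-backend-bucket=app1-backend
gcloud compute url-maps add-path-matcher apps-lb \
    --path-matcher-name=app2-matcher \
    --default-backend-bucket=app2-backend \
    --new-hosts=app2.example.com

# One HTTPS proxy and one global forwarding rule serve all the apps
gcloud compute target-https-proxies create apps-proxy \
    --url-map=apps-lb --ssl-certificates=apps-cert
gcloud compute forwarding-rules create apps-https \
    --global --target-https-proxy=apps-proxy --ports=443
```

Each additional app then only needs another backend bucket and another host rule on the same URL map, not a new load balancer.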
You can achieve this with a single HTTPS load balancer: create the LB and add each storage bucket as a backend bucket. Don't forget to create your buckets with their DNS names (e.g. bucket1.mycompany.com, bucket2.mycompany.com). Then add a wildcard A record in your DNS zone pointing to the external IP of the LB.
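For the DNS-named buckets and the wildcard record mentioned above, a minimal sketch (domains and IP are placeholders; note that creating domain-named GCS buckets requires verifying ownership of the domain first):

```shell
# Buckets named after the hostnames they will serve
gsutil mb gs://bucket1.mycompany.com
gsutil mb gs://bucket2.mycompany.com

# Wildcard A record in the DNS zone, pointing at the LB's external IP:
#   *.mycompany.com.  300  IN  A  203.0.113.10
```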
This maybe isn't the answer you are looking for, but I recommend Firebase Hosting (https://firebase.google.com/docs/hosting) to host single-page sites (React, Vue, etc) on GCP.
I've got small static JSON files sitting in an AWS S3 bucket that my (hybrid PhoneGap/Cordova) mobile app needs to read in (not write). I want to use Cloudflare between them. I know there are plenty of articles about static website hosting with this combination but I'm wondering if that's overkill for this? i.e. can I just connect Cloudflare to my S3 bucket without configuring all the static hosting stuff on S3, and if so how?
The JSON files are public and that's fine, I don't need to restrict access to just the app.
Thanks
You will need to configure static website hosting in S3 to achieve this, since Cloudflare requires an endpoint to forward traffic to.
For more details about the configuration, refer to the article Static Site Hosting with S3 and CloudFlare.
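Enabling the website endpoint is a one-liner with the AWS CLI. The bucket and domain names below are hypothetical; note that for the S3 website endpoint to resolve behind a CNAME, the bucket name must match the hostname being served:

```shell
# Turn on static website hosting for the bucket
aws s3 website s3://data.example.com --index-document index.html

# In Cloudflare, add a CNAME to the resulting website endpoint, e.g.
#   data.example.com -> data.example.com.s3-website-us-east-1.amazonaws.com
```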
Let's assume that I have an S3-hosted website. Aside from that, I have an EC2 instance that receives HTTP requests from that website. Is there a way to set up a security group so that the EC2 instance can only receive HTTP requests from that website? I know that if the website were hosted on another EC2 instance I could do this via the IP address or a load balancer; I'm just not sure how to go about it in the S3 website case.
When you launch a website on S3, only static front-end content is served (just like a pure HTML/CSS/JavaScript website with no web server on your local machine). That means all calls, XHRs, or embedded resources pointing to your EC2 instance are requests generated by visitors' browsers: the network source is each visitor's own IP, with S3 (or CloudFront, if you place one in front of S3) appearing only as the Origin in the HTTP headers, and the destination is your EC2 instance (where your web server listens on port 80 or 443). So there is no security group that ties the EC2 instance to the bucket. However, S3 buckets can be configured with a policy to whitelist certain IP addresses for access to the bucket content, and hence to the static web content hosted on it. You can also enforce a CORS policy and add conditions that check the Referer and Origin headers.
Putting aside bucket-level policies, IP whitelisting, CORS, and condition restrictions: if you serve your S3 web bucket from a CloudFront distribution, you can apply geo-IP restriction rules at the CloudFront level as well.
If, say, you have an API server on EC2 that is called from your CloudFront domain, you can apply access controls at both the CloudFront and the EC2 web-server level to enforce tightened CORS policies, so that other websites on the internet cannot hijack your API service or mount CSRF attacks (again, as a browser-level protection only).
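The IP-whitelisting bucket policy mentioned above can be applied like this. The bucket name and CIDR range are hypothetical placeholders:

```shell
# Restrict public reads of the bucket to a known IP range
aws s3api put-bucket-policy --bucket my-site-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowFromKnownIPs",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-site-bucket/*",
    "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
  }]
}'
```

Remember this restricts who can fetch the static site from S3; it does not restrict who can call the EC2 API, since those requests come directly from visitors' browsers.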
I want to host an HTTPS-only static website using Amazon S3 and GoDaddy. Here's what I've done so far:
a) Created an S3 static website at Amazon
b) Added the S3 static website URL to GoDaddy's CNAME records:
CNAME **NAME**: *www* and **HOST**: *mywebsite.com.s3-website.ap-south-1.amazonaws.com*
c) Then, in the **Domain** forwarding section, added http://mywebsite.com
Now I can access my website at http://mywebsite.com. However, I want the site to be available via HTTPS only, for which I bought an SSL certificate from GoDaddy and set it up there.
Now, the question is:
Is there a way to have an automatic http to https redirect with this setup?
You need to use CloudFront to serve HTTPS requests for an Amazon S3 bucket. Amazon's Key Differences Between a Website Endpoint and a REST API Endpoint documentation highlights that S3 website endpoints do not support SSL/TLS connections.
However, this Amazon tutorial will walk you through creating a CloudFront distribution to serve S3 content over HTTPS.
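As a starting point, a distribution can be created against the S3 website endpoint with the AWS CLI (the endpoint below is the asker's hypothetical one):

```shell
# Create a CloudFront distribution with the S3 website endpoint as origin
aws cloudfront create-distribution \
    --origin-domain-name mywebsite.com.s3-website.ap-south-1.amazonaws.com
```

This shorthand form uses default settings; the automatic HTTP-to-HTTPS redirect asked about above is the `ViewerProtocolPolicy: redirect-to-https` setting on the distribution's default cache behavior, which you would set in the console or via a full `--distribution-config` JSON. You would also attach your certificate (imported into ACM) and your alternate domain name (CNAME) to the distribution, then point the GoDaddy CNAME at the CloudFront domain instead of the S3 endpoint.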