I have a site on AWS with an SSL certificate. The site is an EC2 instance and runs WordPress.
I wanted to move the site off WordPress, so I have a different EC2 instance with the new site.
The domain will remain the same, and I want to minimize downtime during the switchover. Can I get a new SSL certificate for the new site before the domain's DNS points there? I know the connection won't show as secure until the SSL certificate matches the domain.
Is there another way to handle the migration?
If the domain isn't changing, then as far as SSL is concerned, neither is your site. You just need to configure the new site to use the same SSL certificate. To minimize downtime, move the AWS Elastic IP to the new EC2 instance during the migration. If done correctly you'll have no downtime at all.
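If you want to script the cutover, a minimal boto3 sketch along these lines would re-point an existing Elastic IP at the new instance (the region, allocation ID, and instance ID are placeholders, and this assumes a VPC Elastic IP identified by an allocation ID):

```python
import boto3

# Sketch only: the region, allocation ID, and instance ID below are
# placeholders for your existing Elastic IP and the new EC2 instance.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Re-associate the Elastic IP with the new instance. AllowReassociation
# lets it move even though it is currently attached to the old instance.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    AllowReassociation=True,
)
```

Because the IP itself never changes, DNS doesn't need to be touched, so there's no propagation delay to wait out.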
I have a custom domain (busymusic.ga) for the php-javani.rhcloud.com domain. Because I want an HTTPS connection and OpenShift doesn't offer this feature with a custom domain (is that right?), I used CloudFlare. I set the CloudFlare DNS addresses in my domain panel, then created a CNAME record in CloudFlare like this:
But now, when pinging busymusic.ga, about 91% of packets are lost (I tested it over a long period), while when pinging php-javani.rhcloud.com I don't have this problem.
Could you please help me to solve this problem?
You'll want to open a support ticket directly with CloudFlare so our support team can look into this further. P.S. I work at CloudFlare.
Also, to note: ping won't be an accurate measurement of network quality. See: https://support.cloudflare.com/hc/en-us/articles/200169826-Why-am-I-seeing-timeouts-pinging-my-site-on-CloudFlare-
We rate-limit ICMP traffic, but that in no way indicates an actual problem.
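To illustrate the point, something like the following sketch measures real HTTP availability of the domain from the question instead of relying on ICMP (the attempt count and delay are arbitrary):

```python
import time
import urllib.request

# Hypothetical check: measure actual HTTP availability instead of ping,
# since CloudFlare rate-limits ICMP. URL is the domain from the question.
URL = "https://busymusic.ga/"
ATTEMPTS = 20

failures = 0
for i in range(ATTEMPTS):
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            print(f"attempt {i + 1}: HTTP {resp.status}")
    except Exception as exc:  # DNS error, timeout, TLS failure, ...
        failures += 1
        print(f"attempt {i + 1}: failed ({exc})")
    time.sleep(1)

print(f"{failures}/{ATTEMPTS} requests failed")
```

If the requests succeed while ping keeps dropping, the packet loss is just the ICMP rate limiting described above, not a real outage.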
Please note this is not a complaint. I am just wondering what the cost is to Heroku for providing custom-domain SSL, if there is one, as they do not provide the SSL certificate. As I understand it, it is quite common to provide SSL support for free and charge for the certificate itself.
For reference: Custom-domain SSL
In order to use your own SSL certificate with a shared server, your site must run on its own dedicated public IP address.
(since, without SNI, the server needs to send the SSL certificate before the browser tells it which host it's connecting to)
IP addresses are a scarce commodity.
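For context, SNI (Server Name Indication) is what lets servers avoid this today: the client sends the hostname inside the TLS ClientHello, so one IP address can serve many certificates. A small Python sketch (the hostname is just an example) showing where that hostname is supplied:

```python
import socket
import ssl

# Example only: any HTTPS host will do.
hostname = "www.example.com"

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    # server_hostname populates the SNI extension in the TLS ClientHello,
    # so the server can pick the right certificate for this hostname even
    # when many sites share one IP address.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.getpeercert()["subject"])
```

When SNI isn't available, the server only knows the IP address the connection arrived on, which is why the dedicated-IP requirement, and its cost, existed.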
Someone has registered a domain and is using a CNAME redirect to direct traffic to my site. Google is seeing this as duplicate content and it's affecting my rankings.
Is there any way of blocking access for traffic that comes to my site through the domainnotundermycontrol.com redirect?
Thanks in advance.
"There is no BAD publicity."
A CNAME is solely a DNS tool. The browser still sends a request for domainnotundermycontrol.com/somepage to your Apache server once it gets your IP from the DNS lookup, so Apache will see the requested server name as domainnotundermycontrol.com.
It sounds like the domain which you CAN control has no filtering on server name, only on IP, maybe. Create a vhost for domainnotundermycontrol.com on your server to catch all requests for that server name and serve up an index file with links to the legitimate pages you want people to hit, or just some AdWords. Then it will no longer be caught by your other vhost.
Enjoy the free traffic.
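A rough sketch of what such a catch-all vhost might look like (the domain name and DocumentRoot are placeholders; adapt to your Apache layout):

```apache
# Sketch of a catch-all vhost for the squatting domain. Requests whose
# Host header matches it are served from a separate docroot instead of
# your real site.
<VirtualHost *:80>
    ServerName domainnotundermycontrol.com
    ServerAlias www.domainnotundermycontrol.com
    DocumentRoot /var/www/catchall
</VirtualHost>
```

With this in place, requests carrying the squatter's host name land in the catch-all docroot and no longer match your real site's vhost.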
I've just (about 1 hour ago) associated an Elastic IP with my instance at Amazon EC2. If I SSH into my instance and type lynx localhost, I can see that Apache is responsive because I see the "It works!" page.
However, if I browse to my instance (both via the IP itself and via the public DNS Amazon has created for me), I get "Oops! Google Chrome could not connect to..." bla bla...
Should I wait some more time (in case it's due to some DNS thing) or does this indicate something is wrong?
Thanks in advance
EDIT: When I ssh into my instance, I use the full IP address and it works... (the Elastic IP I mean).
You must configure the firewall to open the HTTP port.
To be more specific, for AWS this is done via Security Groups. You should create one with the ports you need opened. In most cases that's port 80 for TCP.
You can see how to achieve this in the documentation: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
First identify the security group of the EC2 instance.
Next click on the Security Groups link in the bottom left nav.
Select the security group under which this EC2 instance lies, and add inbound rules by specifying the port or a custom port range.
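The same rule can also be added programmatically. A minimal boto3 sketch, assuming you already know the security group ID attached to the instance (the ID and region are placeholders):

```python
import boto3

# Sketch: open TCP port 80 to the world on the instance's security group.
# The group ID and region below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```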
For those of you using CentOS (and perhaps other Linux distributions), you also need to make sure that its firewall (iptables) allows port 80 or any other port you want.
See here for how to completely disable it (for testing purposes only!).
And here for specific rules.
In the interest of hosting purely static sites from Amazon S3, is the only route to friendlier URLs and endpoints for its resources a rewrite engine such as any web server provides? And would it be best to host that on EC2?
It seems overkill but wasn't sure if there were alternatives.
I'm not sure why you need to rewrite.
You can point a DNS CNAME at an S3 bucket, and they recently started supporting a default (index) document.
So you can perfectly well host http://www.example.com/ or http://www.example.com/some/path/to/some/file.html
http://aws.typepad.com/aws/2011/02/host-your-static-website-on-amazon-s3.html
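For example, a hedged boto3 sketch of turning on website hosting with an index document (the bucket name is a placeholder; for a CNAME like www.example.com to resolve to the bucket, the bucket itself must be named www.example.com):

```python
import boto3

# Sketch: enable static website hosting with a default (index) document
# on an existing bucket. The bucket name and error document are examples.
s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
    },
)
```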
S3 offers no features to 'rewrite' URLs as keys are immutable.
If you want to use URLs that are different from the S3 key, you'll have to proxy the requests yourself.