How to run multiple websites with multiple IP addresses on a single server? - apache

Currently I have two websites running on a single Amazon EC2 instance using the Apache web server. Configuring Apache to use virtual hosts on a single IP address was simple. But I think Amazon gives you up to 5 Elastic IP addresses, and I would like to attach two IP addresses to a single EC2 instance and use one IP address for each site.
How do I configure the server so that Website A uses one of the IP addresses for incoming and outgoing data?
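In Apache terms, what I am after is IP-based virtual hosts along these lines (the addresses, names, and paths are placeholders):

    # One virtual host bound to each private address
    <VirtualHost 10.0.0.10:80>
        ServerName site-a.example.com
        DocumentRoot /var/www/site-a
    </VirtualHost>

    <VirtualHost 10.0.0.11:80>
        ServerName site-b.example.com
        DocumentRoot /var/www/site-b
    </VirtualHost>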

Unfortunately, you can only map 1 Elastic IP per instance. It's a nasty limitation, because I would love to set up multiple sites using SSL on the default port of 443, but I cannot. I usually just use other ports if I have to, but that is not best practice.
The Elastic IP associated with the one instance is free. You can set it up in the management console or through the API. Your server comes with its own internal IP address, and the Elastic IP gets translated to that.
There are things you can do with AWS load balancing that allow you to serve multiple SSL sites from one instance.

You can now do this if you run your instance in a VPC.
You can create multiple ENIs (Elastic Network Interfaces) and associate any number of them with a single instance.
The announcement for this feature is at http://aws.typepad.com/aws/2012/07/multiple-ip-addresses-for-ec2-instances-in-a-virtual-private-cloud.html
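A minimal sketch with the AWS CLI, assuming the instance is in a VPC (all IDs below are placeholders):

    # Create a second network interface in the instance's subnet
    aws ec2 create-network-interface \
        --subnet-id subnet-0123456789abcdef0 \
        --description "Second IP for the second site"

    # Attach it to the instance as device index 1
    aws ec2 attach-network-interface \
        --network-interface-id eni-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 \
        --device-index 1

    # Associate an Elastic IP with the new interface
    aws ec2 associate-address \
        --allocation-id eipalloc-0123456789abcdef0 \
        --network-interface-id eni-0123456789abcdef0

Apache can then bind one virtual host to each private address, as sketched in the question.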

Related

Restrict Lightsail machine to be accessed only from CloudFront

I have a website (https://www.cakexpo.com) hosted on Lightsail. A few days ago we faced a DDoS attack on the IP, which forced me to onboard my website to CloudFront.
I moved my website to CloudFront, yet my IP address is still publicly available, making it vulnerable to more attacks.
I am trying to understand how I can hide my IP from public access.
I found that you can get the list of CloudFront IP ranges and whitelist them in the security group, which I tried.
It worked for some time, but later on I realised that CloudFront uses lots of IPs which are not in that list and thus not whitelisted in my security group.
This makes my site intermittently unavailable.
nslookup shows a different IP, which is not in the list, and this file shows there are 190+ ranges associated with CloudFront, which a security group cannot handle, IMO: https://ip-ranges.amazonaws.com/ip-ranges.json
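For reference, the CloudFront ranges can be pulled from that file like this (assuming curl and jq are available):

    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
        | jq -r '.prefixes[] | select(.service=="CLOUDFRONT") | .ip_prefix'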
Finally, I ended up reverting the config and making my IP public again.
Is there any other way to hide a Lightsail machine from public access?
You can do this in two ways.
Easy way: Create an nginx reverse proxy instance in Lightsail, and allow access to your main Lightsail instance only from that reverse proxy instance. Then create a Lightsail distribution (which is CloudFront for Lightsail) and point its origin at the reverse proxy instance, as sketched below.
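A minimal sketch of the nginx server block on the reverse proxy instance (the main instance's private IP is a placeholder):

    server {
        listen 80;
        server_name www.cakexpo.com;

        location / {
            # Forward everything to the private IP of the main instance
            proxy_pass http://172.26.0.10;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }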
Hard way: Set up VPC peering to AWS, create a CloudFront distribution there, and allow access only from it.

Bastion host configuration with NaviServer on GCP?

How do I add a TLS/SSL certificate (Let's Encrypt or GCP-provided) to a VM instance in GCP that has an internal IP address and a static external address?
When I create one via a Let's Encrypt certificate install script, the resulting connections break because the VM doesn't have an externally facing IP address, only an internal one.
The traffic passes through a firewall (or load balancer) of sorts.
I'm used to bastion-host VM servers in the wild.
Details: NaviServer web server is running on a GCP Compute Engine with a FreeBSD 11.3 image.
(Shielded Linux OSes aren't letting me compile NaviServer and use it on any port.)
Everything works on ports 80 and 8000 on the internal IP address, and there is a static IP address that points externally but is not attached to the VM.
I can't find any proxy/firewall settings to navigate via GCP menus.
How to resolve?
Is there some special term I should use to search for docs?
Any link with instructions to follow?
Is there a way to expose a VM instance directly to an external IP address?
Any other creative way I might get SSL/TLS to work with NaviServer?
Thank you.
Links to some things I've tried:
Enable SSL on Tomcat on Google Compute Engine
How to setup Letsencrypt for Google Cloud Compute Engine load balancer? <-- this is for Kubernetes clusters
I'm currently trying to add a load balancer:
https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs
This appears to be the solution: Use a GCP HTTP/S load balancer: https://cloud.google.com/load-balancing/docs/https
and specifically:
https://cloud.google.com/load-balancing/docs/https/ext-https-lb-simple
Argh. Actually, no.
The GCP team kindly suggested this URL: https://cloud.google.com/compute/docs/instances/custom-hostname-vm#create-custom-hostname
Set the hostname to the domain name. Treat this as if there's no proxy, just a firewall.
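A sketch of that approach with the gcloud CLI (instance name, zone, and domain are placeholders); note that, per the linked doc, the custom hostname has to be set when the instance is created:

    gcloud compute instances create naviserver-vm \
        --zone=us-central1-a \
        --hostname=www.example.com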

Accessing Public Page from AWS Ubuntu server

I am new to Amazon Web Services. I created an Ubuntu 16 instance with AWS, installed Apache, and restarted the service. But I am still unable to figure out how to access the start page from a browser. Which IP address should I use, the public IP or the Elastic IP? Also, do I need to change any configuration file? Thanks.
You need to use the public IP address; depending on your use case you can also use an Elastic IP address.
However, you need to configure your security groups in order to access the web page:
Go to your security groups
Select the relevant security group
Add an inbound rule for port 80 (TCP)
Then you will be able to access the page (a CLI equivalent is sketched below). Please refer to this guide for more information.
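If you prefer the CLI, the same rule can be added like this (the security group ID is a placeholder):

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 80 \
        --cidr 0.0.0.0/0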
You can use the public IP or the public DNS name. Both of these change when the instance is stopped and started. An Elastic IP is useful when you want your IP address to be persistent, e.g. to make an entry in your domain's DNS records.
Make sure your default site is pointing to the correct directory, since you are going to access it by IP address.
If your instance is in a VPC, then it must be in a public subnet (a subnet with an Internet Gateway route attached).
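On Ubuntu 16, the default site is defined in /etc/apache2/sites-available/000-default.conf; a quick sanity check might look like this (the DocumentRoot shown is the distro default):

    # /etc/apache2/sites-available/000-default.conf
    <VirtualHost *:80>
        # Answers any Host header, including requests made by bare IP address
        DocumentRoot /var/www/html
    </VirtualHost>

Reload Apache after any change with sudo systemctl reload apache2.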

Can I get a list of public IP addresses that the bq tool may use to import into BigQuery?

I use a proxy with the bq tool to import data into BigQuery, but my proxy requires that I specify both the domain and all potential public IP addresses that it will need to allow out. I have it set to allow the googleapis.com and google.com domains, but for some reason the bq tool often seems to connect directly to an IP. Therefore I need to specify each of those IPs in the proxy configuration to be able to connect to BigQuery. Currently the list of IPs I am using is 74.125.133.95, 74.125.142.92, and 74.125.133.84. I know this can change and there may be more IPs that it connects to. Is it possible to get a range or list of IPs that I can put into my proxy configuration so I do not get interrupted uploads when the IP changes due to load balancing, etc.?
Thanks
As with all Google APIs, it is not possible to provide a list of IP addresses or IP address ranges for the BigQuery API. Google's APIs use a range of IP addresses that are dynamic and change to accommodate shifts in demand.

How can I make Apache on an Amazon EC2 Linux box use the Elastic IP instead of the private IP?

I've migrated a website to Amazon EC2 that hooks into a service we are using that is installed on another server (not on Amazon). Access to the API for that service is IP-restricted, and requests are made by sending XML data using *http_build_query* & *stream_context_create* in PHP.
If I want to connect to the service from a new server, I need to ask the vendor to add the new IP first. I did that by sending them the Elastic IP, but it doesn't work.
While trying to debug, I noticed that the output of $_SERVER['SERVER_ADDR'] is the private IP of the EC2 instance.
I assume that the server on the other side is receiving the same data, so it tries to authenticate the private IP.
I've asked the vendor to allow access from the private IP as well. It's not implemented yet, so I'm not sure if that solves the problem, but as far as I understand the way their API works, it will then try to send data back to the IP it was contacted from, which shouldn't be possible because that server is outside the Amazon cloud.
I might be missing something really obvious here. I added a command to rc.local (running CentOS on my EC2 instance) that associates the Elastic IP with the server upon startup using ec2-associate-address, and this seemed to help get a MySQL connection to another outside server working, but no luck with the above-mentioned API.
To rule out one thing: the API is accessed through HTTPS, with ports 80 and 443 (and a MySQL port) enabled in the security groups and tested. The domain and SSL are running fine.
Any hint is highly appreciated - I have searched a lot already, but couldn't find anything useful so far.
It sounds like both IPs (private and elastic) are active in your VM. Check by running ifconfig -a. If that's what's happening, then the IP used for external traffic will depend on the remote address and your VM's routing table; it could even vary from one connection to the next.
If that's what's going on, the quickest fix would be to ifconfig down the interface that has the private address. That should leave only the elastic address for all external connections. If that resolves the problem, you can script something that downs the private IP automatically after the Elastic IP has been made active; or, if the Elastic IP will be permanently assigned to this VM and you really don't need the private IP, you can permanently disassociate the private IP from this VM.
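A sketch of those checks (the interface name is a placeholder; be careful not to down the interface your own SSH session is using):

    # List all interfaces and their addresses
    ifconfig -a

    # Show the routing table to see which source address outbound traffic will use
    route -n

    # Bring down the interface holding the unwanted private address
    ifconfig eth1 down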