My settings
I am building a pretty standard stack of an Auto Scaling group of EC2 instances, receiving traffic via an ALB.
The instances expose port 80 to the entire VPC, and the ALB exposes port 443 with a certificate externally to receive traffic from the Internet.
My problem
I would like to enable port 80 access from the ALB only, not from the entire VPC.
My question
How can I define a security group in Terraform that exposes port 80 of the instances to the ALB only, but not to other parts of the VPC?
Maybe this is helpful:
resource "aws_security_group_rule" "opened_to_alb" {
type = "ingress"
from_port = 80
to_port = 80
protocol = "tcp"
source_security_group_id = "${var.alb_sg_id}"
security_group_id = "${aws_security_group.your_sg.id}"
}
var.alb_sg_id can be replaced with your actual ALB security group ID.
You can create a security group (aws_security_group) for your load balancer and give that access to port 80 using an aws_security_group_rule.
Then, in the security group of the servers, you want to allow access on port 80 only from the load balancer's security group, like so: source_security_group_id = "${aws_security_group.mylb.id}"
Does this make sense? If not, I can elaborate further.
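To tie the two answers together, here is a minimal Terraform sketch. The resource names and var.vpc_id are placeholders for your own setup:

resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = var.vpc_id

  # HTTPS from the Internet to the ALB.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Terraform strips the default egress rule, so re-add one that lets
  # the ALB reach its targets.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "instances" {
  name   = "instances-sg"
  vpc_id = var.vpc_id
}

# Port 80 on the instances is opened to the ALB's security group only,
# not to the rest of the VPC.
resource "aws_security_group_rule" "opened_to_alb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.alb.id
  security_group_id        = aws_security_group.instances.id
}

Attach aws_security_group.instances to the Auto Scaling group's launch template and aws_security_group.alb to the ALB, and nothing else in the VPC can reach port 80 on the instances.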
Related
I've decided to give Nomad a try, and I'm setting up a small environment for side projects in my company.
Although the documentation on Nomad/Consul is nice and detailed, it doesn't cover the simple task of exposing a small web service to the world.
Following this official tutorial to use Traefik as a load balancer, how can I make those exposed services reachable?
The tutorial has a footnote stating that the services could be accessed from outside the cluster on port 8080.
But in a cluster where I have 3 servers and 3 clients, where should I point my DNS to?
Should a DNS with failover pointing to the 3 clients be enough?
Do I still need a load balancer for the clients?
There are multiple ways you could handle distributing the requests across your servers. Some may be preferable to others depending on your deployment environment.
The Fabio load balancer docs have a section on deployment configurations which I'll use as a reference.
Direct with DNS failover
In this model, you could configure DNS to point to the IPs of all three servers. Clients would receive all three IPs back in response to a DNS query, and randomly connect to one of the available instances.
If an IP is unhealthy, the client should retry the request to one of the other IPs, but clients may experience slower response times if a server is unavailable for an extended period of time and the client is occasionally routing requests to that unavailable IP.
You can mitigate this issue by configuring your DNS server to perform health checking of backend instances (assuming it supports it). AWS Route 53 provides this functionality (see Configuring DNS failover). If your DNS server does not support health checking, but provides an API to update records, you can use Consul Terraform Sync to automate adding/removing server IPs as the health of the Fabio instances changes in Consul.
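To illustrate the Route 53 option, here is a hedged Terraform sketch. The zone ID, record name, and var.fabio_ips (the public IPs of the Nomad clients running Fabio) are my assumptions, as is the health-check path; Fabio's default proxy port is 9999:

resource "aws_route53_health_check" "fabio" {
  count             = length(var.fabio_ips)
  ip_address        = var.fabio_ips[count.index]
  port              = 9999
  type              = "HTTP"
  resource_path     = "/" # hypothetical; use a URL that signals Fabio is healthy
  failure_threshold = 3
  request_interval  = 30
}

# One weighted record per client. Route 53 stops returning an IP
# while its health check is failing.
resource "aws_route53_record" "app" {
  count           = length(var.fabio_ips)
  zone_id         = var.zone_id
  name            = "app.example.com"
  type            = "A"
  ttl             = 30
  set_identifier  = "fabio-${count.index}"
  health_check_id = aws_route53_health_check.fabio[count.index].id
  records         = [var.fabio_ips[count.index]]

  weighted_routing_policy {
    weight = 1
  }
}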
Fabio behind a load balancer
As you mentioned, the other option would be to place Fabio behind a load balancer. If you're deploying in the cloud, this could be the cloud provider's LB. The LB would give you better control over traffic routing to Fabio, provide TLS/SSL termination, and offer other functionality.
If you're on-premises, you could front it with any available load balancer like F5, A10, nginx, Apache Traffic Server, etc. You would need to ensure the LB is deployed in a highly available manner. Some suggestions for doing this are covered in the next section.
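For the cloud case, a rough Terraform sketch of a network load balancer in front of Fabio might look like the following. The variable names are mine, and it assumes Fabio is listening on its default proxy port 9999 on each Nomad client:

resource "aws_lb" "fabio" {
  name               = "fabio-nlb"
  load_balancer_type = "network"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "fabio" {
  name     = "fabio"
  port     = 9999
  protocol = "TCP"
  vpc_id   = var.vpc_id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.fabio.arn
  port              = 80
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.fabio.arn
  }
}

# Register each Nomad client that runs Fabio (hypothetical instance IDs).
resource "aws_lb_target_group_attachment" "fabio" {
  count            = length(var.client_instance_ids)
  target_group_arn = aws_lb_target_group.fabio.arn
  target_id        = var.client_instance_ids[count.index]
  port             = 9999
}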
Direct with IP failover
Whether you're running Fabio directly on the Internet, or behind a load balancer, you need to make sure the IP which clients are connecting to is highly available.
If you're deploying on-premises, one method for achieving this would be to assign a common loopback IP to each of the Fabio servers (e.g., 192.0.2.10), and then use an L2 redundancy protocol like Virtual Router Redundancy Protocol (VRRP) or an L3 routing protocol like BGP to ensure the network routes requests to available instances.
L2 failover
Keepalived is a VRRP daemon for Linux. You can find many tutorials online for installing and configuring it.
L3 failover w/ BGP
GoCast is a BGP daemon built on GoBGP which conditionally advertises IPs to the upstream network based on the state of health checks. The author of this tool published a blog post titled BGP based Anycast as a Service which walks through deploying GoCast on Nomad, and configuring it to use Consul for health information.
L3 failover with static IPs
If you're deploying on-premises, a simpler configuration than the two aforementioned solutions might be to configure your router to install/remove static routes based on health checks of your backend instances. Cisco routers support this through their IP SLA feature. This tutorial walks through a basic configuration: http://www.firewall.cx/cisco-technical-knowledgebase/cisco-routers/813-cisco-router-ipsla-basic.html.
As you can see, there are many ways to configure HA for Fabio or an upstream LB. It's hard to provide a good recommendation without knowing more about your environment. Hopefully one of these suggestions is useful to you.
In the following setup, the Nomad nodes are in the 192.168.8.140-250 range, and a floating IP, 192.168.8.100, is the address the firewall DNATs ports 80/443 to.
Traefik is coupled with keepalived in the same group. Keepalived will assign the floating IP to the node where Traefik is running, and only one keepalived instance will be in the MASTER state.
This is not keepalived's usual use case, but it is good at broadcasting gratuitous ARP when it comes alive.
job "traefik" {
datacenters = ["dc1"]
type = "service"
group "traefik" {
constraint {
operator = "distinct_hosts"
value = "true"
}
volume "traefik_data_le" {
type = "csi"
source = "traefik_data"
read_only = false
attachment_mode = "file-system"
access_mode = "multi-node-multi-writer"
}
network {
port "http" {
static = 80
}
port "https" {
static = 443
}
port "admin" {
static = 8080
}
}
service {
name = "traefik-http"
provider = "nomad"
port = "http"
}
service {
name = "traefik-https"
provider = "nomad"
port = "https"
}
task "keepalived" {
driver = "docker"
env {
KEEPALIVED_VIRTUAL_IPS = "192.168.8.100/24"
KEEPALIVED_UNICAST_PEERS = ""
KEEPALIVED_STATE = "MASTER"
KEEPALIVED_VIRTUAL_ROUTES = ""
}
config {
image = "visibilityspots/keepalived"
network_mode = "host"
privileged = true
cap_add = ["NET_ADMIN", "NET_BROADCAST", "NET_RAW"]
}
}
task "server" {
driver = "docker"
config {
image = "traefik:2.9.6"
network_mode = "host"
ports = ["admin", "http", "https"]
args = [
"--api.dashboard=true",
"--entrypoints.web.address=:${NOMAD_PORT_http}",
"--entrypoints.websecure.address=:${NOMAD_PORT_https}",
"--entrypoints.traefik.address=:${NOMAD_PORT_admin}",
"--certificatesresolvers.letsencryptresolver.acme.email=email#email",
"--certificatesresolvers.letsencryptresolver.acme.storage=/letsencrypt/acme.json",
"--certificatesresolvers.letsencryptresolver.acme.httpchallenge.entrypoint=web",
"--entrypoints.web.http.redirections.entryPoint.to=websecure",
"--entrypoints.web.http.redirections.entryPoint.scheme=https",
"--providers.nomad=true",
"--providers.nomad.endpoint.address=http://192.168.8.140:4646" ### IP to your nomad server
]
}
volume_mount {
volume = "traefik_data_le"
destination = "/letsencrypt/"
}
}
}
}
For keepalived to run, you need to allow privileged containers and a few capabilities in the Docker plugin configuration on the Nomad clients:
plugin "docker" {
config {
allow_privileged = true
allow_caps = [...,"NET_ADMIN","NET_BROADCAST","NET_RAW"]
}
}
I am trying to configure my website to have a secure connection (https://) via Amazon's EC2, ELB, and Route 53.
I am running a t2.micro instance (no Elastic IP or anything). My Elastic Load Balancer has the SSL certificate attached. My SecurityGroup allows for https connections through port 443. I'm not sure what I'm doing wrong here.
All of my configurations are below. Any help is appreciated because, as it stands, I can't access my website at all.
Thank you in advance!
EC2: (configuration screenshot omitted)
Load Balancer: (configuration screenshot omitted)
Route 53: (configuration screenshot omitted)
Step 1: Hit the EC2 instance directly and verify that the health check URL responds with an HTTP 200 status code. If not, then get that working first.
You aren't clear about your security group configuration. You should have a security group on your load balancer that allows HTTP and HTTPS connections. Then you should have a security group on your EC2 instance that allows HTTP (port 80) connections from the load balancer's group.
The issue is obviously the failing health check on the load balancer at this point, so no need to look at Route 53 settings right now. You need to concentrate on getting the communication working between the EC2 instance and the load balancer to get that health check to start working. Until then the load balancer won't accept any traffic because it doesn't have instances it considers healthy that it can forward traffic to.
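Expressed in Terraform for concreteness, the security-group chain described above looks roughly like this (hypothetical names; it assumes the app and the health check both answer on port 80):

resource "aws_security_group" "lb" {
  name   = "lb-sg"
  vpc_id = var.vpc_id

  # HTTP and HTTPS from anywhere to the load balancer.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The instance accepts HTTP only from the load balancer's group; the
# health check travels the same path.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.lb.id]
  }
}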
So I have a Docker application that runs on port 9000, and I'd like to have it accessed only via HTTPS rather than HTTP; however, I can't make sense of how Amazon handles ports. In short, I'd like to expose only port 443 and not port 80 (on both the load balancer layer and the instance layer), but I haven't been able to do this.
So my Dockerfile has:
EXPOSE 9000
and my Dockerrun.aws.json has:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
and I cannot seem to access anything via port 9000, only via port 80.
When I SSH into the instance where the Docker container is running and look at the ports with netstat, I see ports 80 and 22 and some other UDP ports, but no port 9000. How on earth does Amazon manage this? More importantly, how does a user get the expected behaviour?
Attempting this with SSL and HTTPS yields the same thing. The certificates are set and mapped to port 443, and I have even created a case in the .ebextensions config file to open port 443 on the instance, but still no SSL:
sslSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupName: {Ref: AWSEBSecurityGroup}
    IpProtocol: tcp
    ToPort: 443
    FromPort: 443
    CidrIp: 0.0.0.0/0
The only way I can get SSL to work is to have the load balancer use port 443 (SSL) forwarding to instance port 80 (non-HTTPS), but this is ridiculous. How on earth do I open the SSL port on the instance and set Docker to use the given port? Has anyone ever done this successfully?
I'd appreciate any help on this - I've combed through the docs and got this far with it, but it just plain puzzles me.
Have a great day
Cheers
It's a known problem; from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html:
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
So, if you need multiple ports, AWS Elastic Beanstalk is probably not the best choice - at least not its Docker option.
Regarding SSL - we solved it by using a dedicated nginx instance and proxy_pass'ing to the Elastic Beanstalk environment URL.
I have a load balancer in front of an EC2-Classic instance. I have checked that the load balancer is working properly by browsing directly to the DNS Name value listed in the Description tab for my load balancer. This gives me the main page of the website on the EC2 instance, so my load balancer is working. My load balancer and my EC2 instance are in the same availability zone.
My load balancer has an SSL certificate set up, and I have two listeners configured to forward HTTP (port 80) and HTTPS (port 443) to instance port 80 as HTTP. My EC2 instance has a security group set to accept HTTP and HTTPS with protocol TCP on ports 80 and 443 respectively, although my understanding is that only port 80 would be used, right? The certificate data are in PEM format. I have added to my instance security group a custom TCP rule on port range 0-65535 for amazon-elb/amazon-elb-sg. This did nothing.
I can access my site using HTTP just fine. If I try to access it using HTTPS, I get Error code: ERR_CONNECTION_REFUSED on Chrome and Unable to Connect on Firefox.
I have checked similar posts for this question and nothing seems to help.
Any help or ideas would be greatly appreciated. Thanks
Have you made sure that the ELB is in a security group that allows HTTPS on port 443?
I had a similar problem with both the classic and the newer load balancer. The thing that was missing for me is that the HTTPS-to-HTTP translation only works AFTER you create an A record in DNS for the domain your SSL certificate covers, aliased to the load balancer you just created. Once I did that, all was well through that new A record. Your instance doesn't need to accept port 443, and your LB definitely should not be forwarding over 443.
Hopefully it is something straightforward like this for you as well.
Wait - what SSL certificate in PEM format? I used an Amazon SSL certificate I just picked from the dropdown. Are you sure you used an SSL certificate?
From your description, it looks like you may not be following Step 6 of Amazon's guide "Elastic Load Balancing in Amazon EC2-Classic -> Create HTTPS/SSL Load Balancer Using the AWS Management Console -> Configure Listeners".
There, it says that you should configure "HTTPS (...) in the Load Balancer Protocol [and] HTTPS (Secure HTTP) (...) in the Instance Protocol box", whereas in your configuration you are forwarding the ELB's port 443 to port 80 on the instance.
For further reference, this is the guide I'm talking about (the link is now dead): http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/configure-https-listener.html
Also, check if your SSL certificate is well built according to the rules specified here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
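In Terraform terms, the HTTPS-to-HTTPS listener that Step 6 describes would look something like this on a classic ELB. This is only a sketch: the variables are mine, and the instance must actually serve TLS on port 443 for it to work:

resource "aws_elb" "https_backend" {
  name            = "https-backend-elb"
  subnets         = var.public_subnet_ids
  security_groups = [var.elb_security_group_id]

  # HTTPS on the ELB and HTTPS on the instance, per the guide's Step 6.
  listener {
    lb_port            = 443
    lb_protocol        = "https"
    instance_port      = 443
    instance_protocol  = "https"
    ssl_certificate_id = var.certificate_arn
  }
}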
I have a problem adding HTTPS to my EC2 instance, and maybe you guys have the answer to make it work.
I have a load balancer that forwards connections to my EC2 instance. I've added the SSL certificate to the load balancer and everything went fine; I've added a listener on port 443 that forwards to port 443 of my instance, and I've configured Apache to listen on both ports 443 and 80. Here is the configuration of my load balancer (screenshot omitted):
The SSL certificate is valid, and on port 80 (HTTP) everything is fine, but if I try with HTTPS the request does not go through.
Any idea?
Cheers
An Elastic Load Balancer cannot simply forward your HTTPS requests to the server. That is the point of SSL: to prevent a man-in-the-middle attack (amongst others).
The way you can get this working is the following:
- configure your ELB to accept 443 TCP connections and install an SSL certificate through IAM (just like you did)
- relay traffic on TCP 80 to your fleet of web servers
- configure your web servers to accept traffic on TCP 80 (having SSL between the load balancer and the web servers is also supported, but not required most of the time)
- configure your web servers' security group to only accept traffic from the load balancer
- (optional) make sure your web servers are running in a private subnet, i.e., with only private IP addresses and no route to the Internet Gateway
If you really need an end-to-end SSL tunnel between your client and your backend servers (for example, to perform client-side SSL authentication), then you'll have to configure your load balancer in TCP mode, not in HTTP mode (see Support for two-way TLS/HTTPS with ELB for more details).
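For reference, here is roughly how the recommended setup (SSL terminated at the ELB, plain HTTP to the fleet) looks in Terraform; the names and variables are hypothetical:

resource "aws_elb" "frontend" {
  name            = "frontend-elb"
  subnets         = var.public_subnet_ids
  security_groups = [var.elb_security_group_id]

  # Terminate SSL on the ELB and relay plain HTTP to the web servers.
  listener {
    lb_port            = 443
    lb_protocol        = "https"
    instance_port      = 80
    instance_protocol  = "http"
    ssl_certificate_id = var.certificate_arn
  }

  health_check {
    target              = "HTTP:80/"
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    interval            = 30
  }
}

For the end-to-end TLS tunnel mentioned above, switch both listener protocols to "tcp" and terminate TLS on the web servers themselves.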
More details:
SSL load balancers: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_SettingUpLoadBalancerHTTPS.html
Load balancers in VPC: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/UserScenariosForVPC.html
Do you have an HTTPS listener on your EC2 instance? If not, your instance port should be 80 for both load balancer listeners.