Google Cloud load balancer on port 80 to VM instances serving on port 9000

I'm new to GCE, and I am confused about setting up the load balancer.
If I have two instances serving on port 9000, I want to set up a balancer that accepts traffic on port 80 and routes requests to my instances on port 9000.
A diagram like this:
LB:port:80 -> VM:port:9000
I have load balancers from other providers which have a setting for pointing to the VM's port, but in GCE I can't seem to find it, or I am missing something.
I hope I am making sense here. Thank you in advance.

It isn't possible in GCE to do port rewriting. As a workaround, I use port forwarding with iptables.
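A minimal sketch of that redirect, run on each instance (the port numbers are taken from the question):

# send traffic arriving on port 80 to the local service on port 9000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 9000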
Then in GCE you can create a health check on port 9000, your target pool will contain your instances listening on port 9000, and your forwarding rule will be on port 80 pointing at that target pool.
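For that part, a rough gcloud sketch (resource names, region, and zone are placeholders; exact flags may vary slightly between gcloud versions):

gcloud compute http-health-checks create my-check --port 9000
gcloud compute target-pools create my-pool --region us-central1 --http-health-check my-check
gcloud compute target-pools add-instances my-pool --instances vm-1,vm-2 --instances-zone us-central1-a
gcloud compute forwarding-rules create my-rule --region us-central1 --port-range 80 --target-pool my-pool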
Another workaround is to run HAProxy on the instance to locally forward port 80 to port 9000.
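A minimal HAProxy sketch of that local forwarding (a fragment of /etc/haproxy/haproxy.cfg; the timeouts are placeholder values):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80                          # accept traffic on port 80
    default_backend app

backend app
    server local 127.0.0.1:9000 check  # forward to the app on port 9000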

If your app is HTTP-based (it looks like it is), then please have a look at the new HTTP load balancing announced in June. It can take incoming traffic on port 80 and forward it to a user-specified port (e.g. port 9000) on the backend. The doc link for the command is here:
https://developers.google.com/compute/docs/load-balancing/http/backend-service#creating_a_backend_service
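Roughly, the port mapping comes from a named port on your instance group plus the backend service's --port-name; a hedged gcloud sketch (resource names and zone are placeholders, and flag spelling can differ between gcloud releases). The full setup also needs a URL map, target HTTP proxy, and global forwarding rule, which the linked doc covers:

# map the name "http" to port 9000 on the instance group
gcloud compute instance-groups set-named-ports my-group --named-ports http:9000 --zone us-central1-a
# create a backend service that sends traffic to that named port
gcloud compute backend-services create my-backend --protocol HTTP --port-name http --http-health-checks my-check --global
# attach the instance group as a backend
gcloud compute backend-services add-backend my-backend --instance-group my-group --instance-group-zone us-central1-a --global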
Hope it helps.


How does a web server handle connections with the same ports & IP addresses?

I open IE and Chrome on my computer, type localhost:80, and I get the index page.
Here I think my machine's IP is the same for both connections (IE and Chrome), and the ports are too (80).
Note: the source ports will be different (since the destination is the same: the localhost IP); this is my second question.
So how does a web server (let's say Apache) handle these port 80 connections without failing? Does it do port forwarding? At the OS level, even when I tried the addr-reuse and port-reuse parameters, it was all the same: we cannot make multiple connections with the same IPs & ports.
Now you have probably come up with a solution: although the source ports and IPs are the same, the destination port is different in the packet: <protocol>, <src addr>, <src port>, <dest addr>, <dest port>.
A. I got ports 49483~50004, as you can see in the image. How does the client know which destination port (49483~50004) is bound by the web server? If it is random between 0 and 65535, the web server would always have to bind all ports, which is very resource-consuming. How do web servers avoid this?
Look at this image: command prompt -> netstat
If this question touches low-level details, that is OK; I understand embedded TCP/IP/UDP, PHY/MAC communication and packet structures.
You have this all back to front.
All the port numbers at the server are the same: 80. So the client only has to know port 80.
All the port numbers at the client are different: 49483-50004 etc. So there is no ambiguity in the connection, because the 4-tuple you mentioned is unique.
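If you want to see this on your own machine, open the two browsers against your local server and then list the connections: every row shows the same destination port (80) but a different client port, so each 4-tuple stays unique. A small sketch (Windows first, since the screenshot is from cmd; a Linux equivalent follows):

rem Windows: list all sockets touching port 80 (same information as the netstat screenshot)
netstat -an | findstr :80

# Linux equivalent: established TCP connections involving port 80
ss -tn '( dport = :80 or sport = :80 )'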
All HTTP requests by default go to the server on port 80, because servers listen on that port by default. So you only give an IP or hostname and the web browser adds the default port (80). You can give a custom port if your web server is listening on another port (Tomcat, for example, listens on 8080 by default), in which case you call it as: http://www.youamazingweb.com:8080.
A good analogy is to see the IP as the house and the port as the door through which clients enter to consume some resource hosted on the server.

DNS through a SOCKS proxy: how do I change Windows settings for domain resolution?

I am looking for a program to reroute Windows domain-name resolution through a SOCKS proxy, one that works with many internet browsers and internet proxies.
So far, in Control Panel > Local Area Connection 1 > TCP/IP Properties > "Use the following DNS server addresses", I have set the preferred DNS server to 127.0.0.1, which uses the built-in default DNS port 53.
I have read that it is possible to forward this, but I cannot find a program to forward it through SOCKS 4/5. I think this is possible because SOCKS supports UDP.
Has anyone found a UDP-to-SOCKS forwarding program suitable for SOCKS and Windows DNS?
It's really quite easy to configure.
You could write your own server that listens for incoming requests on port 53, or use this program:
http://dns2socks.sourceforge.net
Here is my sample configuration for a SOCKS server running on port 1050, with the TCP/IP DNS setting pointed at 127.0.0.1:
DNS2SOCKS.exe /la:socks.log 127.0.0.1:1050 8.8.8.8:53 127.0.0.1:53
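Reading that command by position (my interpretation of DNS2SOCKS's arguments, so treat the labels as an assumption): /la:socks.log writes a log file, 127.0.0.1:1050 is the local SOCKS server to tunnel through, 8.8.8.8:53 is the upstream DNS server reached via that proxy, and 127.0.0.1:53 is where DNS2SOCKS itself listens, which is why the adapter's preferred DNS server is set to 127.0.0.1.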
For such a program you can have a look at dnsadblock. Their free daemon/CLI app opens up a proxy server that can be configured to use a proxy/SOCKS connection to communicate with the upstream server. It works because the remote endpoint listens on HTTPS, which makes DoH (DNS over HTTPS) possible. Config options/install instructions: https://knowledgebase.dnsadblock.com/how-to-install-and-configure-our-software/

AWS Beanstalk and Docker ports = what manner of tomfoolery is this?

So I have a Docker application that runs on port 9000, and I'd like it to be accessed only via HTTPS rather than HTTP; however, I can't make sense of how Amazon handles ports. In short, I'd like to expose only port 443 and not 80 (on both the load balancer layer and the instance layer), but I haven't been able to do this.
So my Dockerfile has:
EXPOSE 9000
and my Dockerrun.aws.json has:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
and I cannot access anything via port 9000, only via port 80.
When I SSH into the instance the Docker container is running on and look for the ports with netstat, I see ports 80 and 22 and some other UDP ports, but no port 9000. How on earth does Amazon manage this? More importantly, how does a user get the expected behaviour?
Attempting this with SSL and HTTPS yields the same thing. Certificates are set and mapped to port 443; I have even added an entry in the .ebextensions config file to open port 443 on the instance, and still no SSL:
sslSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupName: {Ref: AWSEBSecurityGroup}
    IpProtocol: tcp
    ToPort: 443
    FromPort: 443
    CidrIp: 0.0.0.0/0
The only way I can get SSL to work is to have the load balancer use port 443 (SSL) forwarding to instance port 80 (non-HTTPS), but this is ridiculous. How on earth do I open the SSL port on the instance and set Docker to use the given port? Has anyone ever done this successfully?
I'd appreciate any help on this. I've combed through the docs and got this far with it, but this just plain puzzles me.
Have a great day
Cheers
It's a known problem; from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html:
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
So if you need multiple ports, AWS Elastic Beanstalk is probably not the best choice, at least not with the Docker option.
Regarding SSL: we solved it by using a dedicated nginx instance and proxy_pass'ing to the Elastic Beanstalk environment URL.
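For what it's worth, a minimal nginx sketch of that approach, assuming a hypothetical environment URL, domain, and certificate paths (SSL terminates on nginx, plain HTTP goes to the Beanstalk environment):

server {
    listen 443 ssl;
    server_name example.com;                              # hypothetical domain
    ssl_certificate     /etc/nginx/ssl/example.crt;       # hypothetical certificate paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        proxy_pass http://my-env.elasticbeanstalk.com;    # hypothetical EB environment URL
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}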

Can't get https working on Elastic Load Balancer (AWS)

I have a load balancer in front of an EC2-Classic instance. I have checked that the load balancer is working properly by browsing directly to the DNS Name value listed in the Description tab for my load balancer. This gives me the main page of the website on the EC2 instance, so my load balancer is working. My load balancer and my EC2 instance are in the same availability zone.
My load balancer has an SSL certificate set up, and I have two listeners configured to forward HTTP (port 80) and HTTPS (port 443) to instance port 80 as HTTP. My EC2 instance has a security group set to accept HTTP and HTTPS with protocol TCP on ports 80 and 443 respectively, although my understanding is that only port 80 would be useful, right? The certificate data is in PEM format. I have added to my instance's security group a custom TCP rule on port range 0-65535 for amazon-elb/amazon-elb-sg. This did nothing.
I can access my site over HTTP just fine. If I try to access it over HTTPS, I get ERR_CONNECTION_REFUSED in Chrome and "Unable to Connect" in Firefox.
I have checked similar posts for this question and nothing seems to help.
Any help or ideas would be greatly appreciated. Thanks
Have you made sure that the ELB is in a security group that allows https on port 443?
I had a similar problem with both the classic and the advanced load balancer. The thing that was missing for me is that the HTTPS-to-HTTP translation only works AFTER you make an A record in DNS for the domain your SSL certificate covers, aliased to the load balancer you just created. Once I did that, all was well through that new A record. Your instance doesn't need to accept port 443, and your LB definitely should not be forwarding over 443 to the instance.
Hopefully it is something straightforward like this for you as well.
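For reference, that alias record can also be created with the AWS CLI; a hedged sketch where the hosted zone IDs, domain, and ELB DNS name are all placeholders (the ELB's own hosted zone ID is shown in its description tab):

aws route53 change-resource-record-sets --hosted-zone-id Z_MY_ZONE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z_ELB_ZONE",
        "DNSName": "my-elb-1234.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'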
Wait, what SSL certificate in PEM format? I used an Amazon SSL certificate I just got from the dropdown. Are you sure you used an SSL certificate?
In your description I see that maybe you are not following Step 6 from Amazon's "Elastic Load Balancing in Amazon EC2-Classic -> Create HTTPS/SSL Load Balancer Using the AWS Management Console -> Configure Listeners" guide.
There it says that you should configure "HTTPS (...) in the Load Balancer Protocol [and] HTTPS (Secure HTTP) (...) in the Instance Protocol box", whereas in your configuration you are forwarding the ELB's port 443 to port 80 on the instance.
For further reference, this is the guide that I'm talking about (the link is now dead): http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/configure-https-listener.html
Also, check if your SSL certificate is well built according to the rules specified here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
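If you prefer to set up the listeners that guide describes from the CLI rather than the console, here is a rough sketch for an EC2-Classic ELB (the load balancer name and certificate ARN are placeholders):

aws elb create-load-balancer-listeners --load-balancer-name my-lb --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"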

How can I ssh into my EC2 instance from my local computer which has only ports 80 and 443 allowed?

I have recently started out with EC2. Currently I am using the Free Tier to test and learn about it. However, as I am behind a proxy that allows only connections on ports 80 and 443, I am unable to connect to the EC2 instance. Is there a way to get past this?
So far I've guessed that running sslh on the EC2 instance, as described here, might help. But I am not sure whether this setup will persist once the instance is terminated and re-started (as I am using the Free Tier). Is there a way I can achieve persistence in terms of settings and installed resources like sslh (and many others) while using the Free Tier?
Thanks in advance.
Once, when behind a firewall that only allowed outgoing communication on ports such as 80, I just ran sshd on the server on a different port. You won't be able to set this up while behind the firewall; you'll have to go somewhere else, SSH in, and reconfigure sshd.
Instead of running sshd on a non-standard port, you could also just have something redirect traffic from some other port to port 22.
If your EC2 instance isn't running a web server, you can use port 80 or 443 for sshd. If the web server isn't using HTTPS, then use 443.
You say they only allow outgoing traffic to remote ports 80 and 443, but oftentimes ports above 1024 are also unblocked.
Make sure you've also correctly configured the security groups on the EC2 instance, since it has a firewall as well. You'll have to make sure it allows incoming traffic from your IP address on the port sshd is listening on. This can be done through the AWS Management Console.
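Putting those suggestions together, a rough sketch (the security group ID, IP address, and the choice of port 443 are assumptions; keep your existing access open while testing):

# on the instance: make sshd listen on 443 in addition to 22
echo "Port 22"  | sudo tee -a /etc/ssh/sshd_config
echo "Port 443" | sudo tee -a /etc/ssh/sshd_config
sudo service sshd restart              # or: sudo systemctl restart sshd

# from a machine with AWS credentials: allow 443 in the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 203.0.113.10/32

# then connect through the proxy-friendly port
ssh -p 443 ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com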
Here there is a neat solution (I haven't tried it). The idea is to pass a script at boot so the instance starts with sshd bound to port 80.
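A hedged sketch of what such a boot script could look like, passed as EC2 user data so it runs when the instance starts (the service name varies by distribution; port 80 follows that suggestion):

#!/bin/bash
# user-data sketch: make sshd listen on port 80 in addition to 22
cat >> /etc/ssh/sshd_config <<'EOF'
Port 22
Port 80
EOF
service sshd restart || systemctl restart sshd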
Go to Instances.
At the top of the list of your running instances you should see "Instance Actions".
In that menu you should see "Connect".
Select "Connect from your browser using the Java SSH client".
Note: you need Java to be installed.