I'm doing a GET request to my EC2 instance, but I'm getting the following error:
80: Connection refused
These are the security rules of my instance:
Ports Protocol Source launch-wizard-1
80 tcp 0.0.0.0/0 ✔
22 tcp 177.32.53.207/32 ✔
What's wrong with these rules? Why can't I access port 80?
EDIT
I attached my Apache conf file (/etc/apache2/apache2.conf) at this URL, since it's too big to post all of it here.
EDIT2
When I run netstat -ntlp | grep LISTEN
I get this:
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN
What is the source of your connection request? Are you attempting to access your instance from outside of Amazon or from a different EC2 instance in the same availability zone? Are you supplying an IP address or a DNS name as the argument to your connecting code?
Keep in mind that AWS EC2 uses SDN (software defined networking) which doesn't work quite like classical TCP/IP routing that you may be expecting from using Linux, or other OSes, on bare metal systems (or even on VMs using more traditional networking).
Ultimately you will probably want to allocate an "Elastic IP" (EIP) from AWS and bind it to your web server instance. Then route your requests to that IP address. (Often you'd also create a DNS entry, perhaps through Amazon's "Route53" service, to use a name rather than the address.)
It's possible to reach your instance both from within and from outside their network. But you have to use the Amazon-generated DNS name to do so; because they use split-horizon DNS, your clients will then get the correct (internal or external) IP address.
Also you have to consider the security settings on your VPC (virtual private cloud) network(s) as well as those you've applied to your instance.
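If you go the Elastic IP route, a minimal sketch with the AWS CLI (assuming the CLI is installed and configured; the instance and allocation IDs below are placeholders) would look like this:
# Allocate a new Elastic IP in the VPC scope; the command returns an AllocationId
aws ec2 allocate-address --domain vpc
# Associate that allocation with your web server instance
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
After that, point your DNS record (for example in Route53) at the allocated address rather than at the instance's changeable public IP.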
Related
If a webserver is handling traffic on port 80, each client must establish a connection between itself and the server on that port. Assuming a client maintains the connection, how is the server able to service other clients in parallel?
Does the server immediately kill the connection with a client after a request? Or do webservers dynamically generate new ports for clients to use such that port 80 is free for new connections?
A port is one end of a communication channel.
The server initially sets up a LISTENing port (80 in the case of an HTTP server). A client creates a port (the operating system will assign a random, available port number to it) and CONNECTs to the listening port. At that point the communications channel is uniquely described by the IP address of the server, port 80 at the server, and the IP address of the client along with the port number of the client. If you look at the output of netstat you'll see lots of sockets/ports in various stages of connection:
symcbean@skynet ~ $ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 192.168.1.202:47206 stackoverflow.com:https ESTABLISHED
tcp 0 1 192.168.1.202:50894 aba1c1ff9d2ec5376.:smtp SYN_SENT
tcp 0 0 192.168.1.202:47210 stackoverflow.com:https ESTABLISHED
tcp 0 0 192.168.1.202:60806 ec2-34-213-90-136:https ESTABLISHED
tcp 0 0 192.168.1.202:51124 151.101.1.69:https ESTABLISHED
tcp 0 0 192.168.1.202:34784 i0.wp.com:https ESTABLISHED
tcp 0 0 192.168.1.202:54082 lhr25s14-in-f10.1:https ESTABLISHED
tcp 0 0 192.168.1.202:38412 172-155-250-212.s:https ESTABLISHED
Exactly how the server handles communicating concurrently on multiple channels varies. I've never come across a server which only handles a single connection at a time.
On the (prefork) Apache webserver, the process which opened the listening socket hands the connection off to a pre-existing child process to deal with. Some servers run as a single process but with multiple threads of execution. Some (such as nginx and lighttpd) run as a single thread and use an event loop, giving their attention to whichever channel has data ready first.
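To make the accept-and-hand-off idea concrete, here is a minimal sketch in plain Python (an illustration of the pattern, not how Apache is actually implemented; it listens on 8080 so it can run unprivileged, where a real web server would use 80):
import socket
import threading

def handle(conn, addr):
    # Each accepted connection is its own socket, uniquely identified by the
    # (server IP, server port, client IP, client port) 4-tuple.
    conn.recv(1024)  # read the request (ignored in this sketch)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nOK")
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(128)                    # the single LISTENing socket

while True:
    conn, addr = srv.accept()      # a new socket per client; the listening port stays free to accept more
    threading.Thread(target=handle, args=(conn, addr), daemon=True).start()
The listening socket never stops accepting; each client's traffic flows over its own accepted socket, which is why the server never needs to "free up" port 80 or generate new server-side ports for new clients.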
I have installed the TURN server and everything in the server code is working fine. There is no error in the log file, only a warning stating:
0: WARNING: I cannot support STUN CHANGE_REQUEST functionality because only one IP address is provided
but the TURN server is running on the server.
Here is what shows up when I check lsof -i :3478:
turnserve 999 root 15u IPv4 446811411 0t0 TCP domain.com:stun (LISTEN)
turnserve 999 root 23u IPv4 446811417 0t0 TCP domain:stun (LISTEN)
turnserve 999 root 24u IPv4 446810998 0t0 UDP domain.com:stun
turnserve 999 root 25u IPv4 446810999 0t0 UDP domain.com:stun
When I check STUN in Trickle ICE, it throws these errors:
The server stun:xxx.xxx.xxx.xxx:3478 returned an error with code=701:
STUN server address is incompatible.
The server stun:xxx.xxx.xxx.xxx:3478 returned an error with code=701:
STUN allocate request timed out.
What's going wrong here?
Thank you
I think that 701 error is a more generic connectivity error that Trickle ICE uses to indicate it didn't get a binding response back. Run stunclient your.stun.ip.address with the command line tools at www.stunprotocol.org to see if your STUN service is accessible from the outside world.
STUN technically requires being hosted on a device with two IP addresses and two ports. It's typically a command line parameter to specify which IP addresses the server should listen on. But most server implementations can operate on a host with a single IP address.
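For example, with coturn (the warning text quoted above looks like coturn's message, so assuming that is the server in use), the listening addresses can be given multiple times in /etc/turnserver.conf or on the command line; a sketch with placeholder IPs:
# /etc/turnserver.conf (sketch; both addresses must actually be assigned to the host)
listening-ip=203.0.113.10
listening-ip=203.0.113.11
listening-port=3478
With only one listening-ip configured, the server disables the CHANGE_REQUEST tests, which is exactly what the warning in your log is about.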
The second IP address and port on the server are used for STUN client filtering tests to detect what type of NAT is in effect. The client sends a binding request to the server's primary IP and port, but with a change-request attribute asking the server to respond from the alternate IP address or port. More often than not, this binding request with a change-request attribute fails, since NATs will not forward traffic from the other IP/port.
The filtering test is useful for logging what type of NAT the client is on, so that failed connections can be debugged and success/failure metrics can be correlated with NAT type.
Since most ICE implementations will exchange all available address candidates (local, mapped, and relay), the filtering test isn't very useful for connectivity establishment.
I'm surprised Trickle ICE is giving you an error. I didn't think WebRTC ever used the change-request attribute. I just did a Wireshark trace of a Trickle ICE session to stunserver.stunprotocol.org. I don't see the WebRTC client setting the change-request attribute in either of the two binding requests it makes.
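If you want to poke at the change-request behaviour yourself without Wireshark, here is a small sketch in plain Python (the field values come from RFC 5389/5780; the target host is the public stunprotocol.org test server mentioned above):
import os, socket, struct

MAGIC_COOKIE    = 0x2112A442   # fixed value from RFC 5389
BINDING_REQUEST = 0x0001
CHANGE_REQUEST  = 0x0003       # attribute type from RFC 5780
CHANGE_IP, CHANGE_PORT = 0x4, 0x2

def binding_request(change_ip=True, change_port=True):
    flags = (CHANGE_IP if change_ip else 0) | (CHANGE_PORT if change_port else 0)
    attr = struct.pack("!HHI", CHANGE_REQUEST, 4, flags)   # attribute: type, length, flags
    header = struct.pack("!HHI", BINDING_REQUEST, len(attr), MAGIC_COOKIE) + os.urandom(12)
    return header + attr

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(binding_request(), ("stunserver.stunprotocol.org", 3478))
try:
    data, addr = sock.recvfrom(2048)
    # with both change flags set, the reply should come from the alternate IP/port
    print("response", hex(struct.unpack("!H", data[:2])[0]), "from", addr)
except socket.timeout:
    print("timed out - common when a NAT drops the reply from the other IP/port")
A timeout here (with the change flags set) is the same failure mode described above; a plain binding request without the attribute should succeed.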
More details in RFC 5780 Section 3.2
On macOS, I just do this:
> brew install stuntman
When it's done:
> stunclient stunserver.stunprotocol.org
Binding test: success
Local address: 198.18.0.1:54898
Mapped address: 210.0.158.130:56750
To specify a port, do it like this:
> stunclient stunserver.stunprotocol.org 3478
Binding test: success
Local address: 198.18.0.1:63061
Mapped address: 210.0.158.130:37126
Have fun!
I used WAMP in the past without problems.
I needed to use Skype for once, so I did, and now the Apache service of WAMP won't start. When I test port 80 using the WAMP tools, I get this message:
***** Test which uses port 80 *****
===== Tested by command netstat filtered on port 80 =====
Test for TCP
Port 80 is not found associated with TCP protocol
Test for TCPv6
Port 80 is not found associated with TCP protocol
===== Tested by attempting to open a socket on port 80 =====
Your port 80 seems not actually used.
Unable to initiate a socket connection
Error number: 10061 -
I tried changing the port Skype uses, but this is not possible in the Windows 10 version. I installed Skype Classic and changed the port there: no result. Then I changed Apache to port 8080, also without result, so I changed it back to 80.
I fully uninstalled both Skype and Skype Classic, then I uninstalled WAMP and installed it fresh again. Even after the removal of Skype and the reinstallation of WAMP I still get the same error message.
I also tried to kill the tasks using port 80, but the only task I can actually kill is my Firefox browser; the result of netstat is below.
C:\Windows\system32>netstat -aon | findstr :80
TCP 192.168.178.27:49893 93.184.220.29:80 ESTABLISHED 13120
TCP 192.168.178.27:49917 216.58.211.99:80 TIME_WAIT 0
TCP 192.168.178.27:49918 23.208.79.207:80 TIME_WAIT 0
TCP 192.168.178.27:49919 88.221.254.211:80 TIME_WAIT 0
TCP 192.168.178.27:49926 52.85.249.5:80 TIME_WAIT 0
TCP 192.168.178.27:49931 23.208.77.171:80 TIME_WAIT 0
TCP 192.168.178.27:49939 23.208.77.171:80 TIME_WAIT 0
TCP 192.168.178.27:49953 216.58.211.99:80 TIME_WAIT 0
TCP 192.168.178.27:49960 216.58.211.99:80 TIME_WAIT 0
Any help is appreciated.
I'm using the new Azure cloud app, and I have created a new VM with Ubuntu 14.04.
I installed apache2 and some common modules (like php5).
Well, after that, I configured my app, but when I tried to access it, the browser shows "Timeout" (using Chrome). The "ping" maps the hostname to the IP address but doesn't get any response (I suppose that ping is disabled by default).
At first I thought it was my app, so I set only the default Apache settings in the "sites-enabled" folder (the one with the static HTML page that comes with Apache).
But the same thing happens, so I checked the usual things like the firewall, iptables rules, etc. I always get the same result :/
This is not my first server, but I can't think of another option, so I just want to check what you guys think the problem could be.
iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ufw status
Status: inactive
Default settings in the sites-enabled folder (I erased the comment lines):
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1654/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 34899/postgres
tcp 0 0 x2.x2.x2.x2:16001 (other ip diff from server's ip) 0.0.0.0:* LISTEN 937/python
tcp6 0 0 :::80 :::* LISTEN 48801/apache2
tcp6 0 0 :::22 :::* LISTEN 1654/sshd
telnet ip 80 (from my pc)
Connecting To x.x.x.x 80...Could not open connection to the host, on port 80
: Connect failed
telnet localhost 80
Connected to localhost.
Escape character is '^]'.
exit
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>501 Not Implemented</title>
</head><body>
<h1>Not Implemented</h1>
<p>exit to / not supported.<br />
</p>
<hr>
<address>Apache/2.4.7 (Ubuntu) Server at x3.x3.x3.x3 Port 80</address>
</body></html>
Connection closed by foreign host.
The IPs x2.x2.x2.x2 and x3.x3.x3.x3 have the same value, but they aren't equal to the server IP. (At least it isn't the same IP value I use to connect to the VM by SSH.)
Sounds like it might be an endpoint issue. By default, when you create a virtual machine in the Azure portal, endpoints for Remote Desktop, Windows PowerShell Remoting, and Secure Shell (SSH) are automatically created.
You will have to go into the Azure portal to configure additional endpoints.
Each endpoint has a public and a private port. The public port is used for outside requests/traffic coming into the VM through the load balancer. The private port is used by the VM to route incoming traffic to the proper port/app.
Here is a link to the Azure help that talks about setting up endpoints:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/
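If you'd rather script it than click through the portal, a sketch with the classic (ASM) Azure PowerShell module of that era would be something like the following (the cloud service and VM names are placeholders; the classic cross-platform azure CLI can do the same thing):
# Add an endpoint that forwards public port 80 to the VM's port 80
Get-AzureVM -ServiceName "my-cloud-service" -Name "my-ubuntu-vm" |
    Add-AzureEndpoint -Name "HTTP" -Protocol tcp -PublicPort 80 -LocalPort 80 |
    Update-AzureVM
Once the endpoint exists (and your OS-level rules allow it), the timeout on port 80 should turn into a normal Apache response.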
So I have a Docker application that runs on port 9000, and I'd like to have it accessed only via HTTPS rather than HTTP; however, I can't seem to make sense of how Amazon handles ports. In short, I'd like to expose only port 443 and not 80 (on the load balancer layer and the instance layer), but I haven't been able to do this.
So my Dockerfile has:
EXPOSE 9000
and my Dockerrun.aws.json has:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
and I cannot seem to access things via port 9000, only via port 80.
When I SSH into the instance that the Docker container is running on and look for the ports with netstat, I get ports 80 and 22 and some other UDP ports, but no port 9000. How on earth does Amazon manage this? More importantly, how does a user get the expected behaviour?
Attempting this with SSL and HTTPS also yields the same thing. Certificates are set and mapped to port 443, and I have even created a rule in the .ebextensions config file to open port 443 on the instance, and still no SSL:
sslSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupName: {Ref : AWSEBSecurityGroup}
    IpProtocol: tcp
    ToPort: 443
    FromPort: 443
    CidrIp: 0.0.0.0/0
The only way that I can get SSL to work is to have the load balancer use port 443 (SSL) forwarding to the instance's port 80 (non-HTTPS), but this is ridiculous. How on earth do I open the SSL port on the instance and set Docker to use the given port? Has anyone ever done this successfully?
I'd appreciate any help on this. I've combed through the docs and got this far with it, but this just plain puzzles me.
Have a great day
Cheers
It's a known problem; from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html:
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
So, if you need multiple ports, AWS Elastic Beanstalk is probably not the best choice. At least not the Docker option.
Regarding SSL: we solved it by using a dedicated nginx instance and proxy_pass'ing to the Elastic Beanstalk environment URL.
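For reference, a stripped-down sketch of that kind of nginx front end (certificate paths, the server name, and the environment URL are placeholders; SSL terminates at nginx and plain HTTP goes on to the Beanstalk environment):
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Plain HTTP from the proxy to the Elastic Beanstalk environment URL
        proxy_pass http://my-env.elasticbeanstalk.com;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
If you want to force HTTPS, you can add a second server block on port 80 that returns a redirect to the https:// URL instead of proxying.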