Apache 2 Failed to bind to port, port is open - apache

Alright, so I installed an Apache server and verified it in the firewall; it has access to ports 80 and 443. I am running Ubuntu 16.04 Desktop Edition.
I ran "sudo ufw app info "Apache Full"" to verify the firewall rules:
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web server.
Ports: 80,443/tcp
I also checked whether anything is listening on port 80, and there is an apache2 process there. I ran "sudo netstat -ltnp | grep ':80'":
tcp6 0 0 127.0.0.1:8002 :::* LISTEN 32641/java
tcp6 0 0 :::80 :::* LISTEN 28722/apache2
tcp6 0 0 :::8000 :::* LISTEN 10738/java
tcp6 0 0 127.0.0.1:8001 :::* LISTEN 1649/java
Then I stopped the service and checked whether the port was still in use, by running "sudo service apache2 stop" followed by "sudo netstat -ltnp | grep ':80'":
tcp6 0 0 127.0.0.1:8002 :::* LISTEN 32641/java
tcp6 0 0 :::8000 :::* LISTEN 10738/java
tcp6 0 0 127.0.0.1:8001 :::* LISTEN 1649/java
I've even gone into the physical router and opened the port; the same setup works for other services, just not for Apache.
Edit: I forgot to mention that my URL isn't working. It just says:
This site can’t be reached 24.221.202.149 took too long to respond.
My website is http://24.221.202.149/

I use Sprint (with a static IP) as my provider. They block common web hosting ports, including port 80.
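One quick way to confirm this (a sanity check rather than a definitive test, assuming you have access to a machine on a different network to probe from) is to check the port from outside:
nmap -Pn -p 80,443 24.221.202.149
nc -vz -w 5 24.221.202.149 80
If port 80 shows as filtered or the connection times out while apache2 is listening locally, the traffic is being dropped upstream, either by the router or by the ISP.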

Patching varnish 4.0.3 and port configuration

I am helping an IT department update their current Drupal website and assisting in updating their RedHat webserver. My Linux user account does not have many permissions outside of editing my home folder and the Apache docroot. I have been asked to help patch their current instance of Varnish 4.0.3 by following the instructions in this patch https://varnish-cache.org/security/VSV00001.html#vsv00001. I have to ask their sysadmin to do most things on the server since my account does not have access to most commands.
I asked the sysadmin to set the vcc_allow_inline_c parameter to true using the instructions in the patch documentation. Here is the full command they ran:
/opt/rh/rh-varnish4/root/usr/sbin/varnishd -pvcc_allow_inline_c=true -b www-test-cms:80
and now the website is not resolving correctly. Prior to touching Varnish, Drupal was talking to Varnish on port 81:
127.0.0.1:81
Here is what the current module settings look like:
Drupal Varnish module IP settings
And here is the netstat output before and after.
Before
[root@www-test-cms ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 1775/zabbix_agentd
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1786/php-fpm: maste
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 1762/memcached
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 117531/varnishd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1794/httpd
tcp 0 0 127.0.0.1:81 0.0.0.0:* LISTEN 117530/varnishd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1772/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2302/master
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1794/httpd
tcp6 0 0 :::10050 :::* LISTEN 1775/zabbix_agentd
tcp6 0 0 :::33060 :::* LISTEN 2096/mysqld
tcp6 0 0 :::3306 :::* LISTEN 2096/mysqld
tcp6 0 0 :::11211 :::* LISTEN 1762/memcached
tcp6 0 0 :::80 :::* LISTEN 117531/varnishd
tcp6 0 0 :::6556 :::* LISTEN 1763/xinetd
After
[root@www-test-cms ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1761/php-fpm: maste
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 1777/memcached
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6004/varnishd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1779/httpd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1780/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2292/master
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1779/httpd
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 1767/zabbix_agentd
tcp 0 0 127.0.0.1:35588 0.0.0.0:* LISTEN 6003/varnishd
tcp6 0 0 :::3306 :::* LISTEN 2031/mysqld
tcp6 0 0 :::11211 :::* LISTEN 1777/memcached
tcp6 0 0 :::80 :::* LISTEN 6004/varnishd
tcp6 0 0 :::6556 :::* LISTEN 1774/xinetd
tcp6 0 0 :::10050 :::* LISTEN 1767/zabbix_agentd
tcp6 0 0 :::33060 :::* LISTEN 2031/mysqld
So obviously this is a port issue. The sysadmin does not know a lot about webservers, and I do not know a lot about anything outside the web folder, so we are having a hard time connecting the two! I would love a little more explanation as to what is going on here. Thank you in advance.
Analyzing the netstat output
In your before setup, Varnish was listening on ports 80 and 81. In your after setup, Varnish still listens on port 80, but the 127.0.0.1:81 listener is gone (it has been replaced by one on 127.0.0.1:35588). In your before setup, the httpd process listens on port 443 for HTTPS and port 8080 for plain HTTP.
Looking at your varnishd runtime config
The only thing that looks different is the use of the -b option to configure the backend that Varnish connects to. Currently this is -b www-test-cms:80.
Based on the netstat output, the right port is 8080 instead of 80. However, I'm not a big fan of doing this via a runtime parameter, because the VCL file itself will probably also contain this information.
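Purely as an illustration, and assuming the httpd instance on port 8080 is indeed the intended backend, the minimal correction to the command that was run would be:
/opt/rh/rh-varnish4/root/usr/sbin/varnishd -pvcc_allow_inline_c=true -b www-test-cms:8080
Note that this still would not restore the 127.0.0.1:81 listener that the Drupal Varnish module points at, which is one more reason to prefer the explicit setup below.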
A better varnishd runtime config
For reference, here's the out-of-the-box systemd setup for a RHEL-based Varnish setup: https://www.varnish-software.com/developers/tutorials/installing-varnish-red-hat-enterprise-linux/#systemd-configuration.
As specified on https://www.varnish-software.com/developers/tutorials/installing-varnish-red-hat-enterprise-linux/#modifying-the-listening-port-and-cache-size, you need to set the -a property to configured listening addresses.
Here's an example that is tailored to the Varnish port setup from your netstat output:
varnishd \
-a :80 \
-a :81 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-pvcc_allow_inline_c=true
Make varnishd listen on ports 80 & 81 (I don't know why 81 is needed)
Link to the VCL file that contains the backend definition and caching rules using the -f option
Set the size of the cache to 2GB using the -s option (tune this to your own needs)
Enable inline C by setting -pvcc_allow_inline_c=true (avoid enabling inline C unless it's absolutely necessary)
I strongly advise against this setup
While I can come up with a solution, I strongly advise against the patching process.
While it is important to fix security issues, patching this version of Varnish yourself is not a good idea.
Varnish 4 is end-of-life, and so are Varnish 5 and certain versions of Varnish 6.
If you look at https://varnish-cache.org/security/index.html, you'll see that there are more VSVs. And even if you think your version is not affected by most of them, because Varnish 4 is EOL those security issues are no longer fixed for v4.
Upgrade to Varnish 6.0 LTS
I recommend that you upgrade to a more recent version of Varnish. Varnish Cache 6.0 LTS is the one I would recommend. See https://www.varnish-software.com/developers/tutorials/installing-varnish-red-hat-enterprise-linux for an install guide on RHEL.
What about VCL compatibility?
The compatibility of the VCL file cannot be guaranteed, of course. However, just add the vcl 4.1; version marker at the beginning of the VCL file and try to run the VCL code locally to see if it compiles when varnishd starts.
You could try copying the code from /etc/varnish/default.vcl on the server to your local system and test it in a local Docker container. See https://www.varnish-software.com/developers/tutorials/running-varnish-docker/ for more info about spinning up the official Varnish Docker image.
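A minimal sketch of such a local test, assuming Docker is available and using the 6.0 tag of the official image (adjust the VCL path to wherever you copied the file; also note that Varnish resolves backend hostnames at compile time, so the backend host named in the VCL must resolve locally or be changed temporarily):
docker run --rm --name varnish-test \
  -v "$(pwd)/default.vcl:/etc/varnish/default.vcl:ro" \
  -p 8080:80 \
  varnish:6.0
If the VCL does not compile under 6.0, varnishd prints the compiler error and exits; if the container starts cleanly, you can point curl at http://localhost:8080/.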
End result
Once you know the VCL file works on Varnish 6.0 LTS, you could go further with the upgrade of your Varnish server.
Patching an EOL version of Varnish is just a bad idea; bite the bullet and upgrade to a modern version that is supported.

How to make my Docker webserver on an AWS host available on the public internet?

I have an Apache webserver running in a CentOS Docker container, which is hosted on an AWS Red Hat instance.
I am able to curl the container's webserver from localhost on the AWS instance, but I am unable to access it publicly from my laptop's web browser.
Details of the webserver:
docker run -d --name httpd -p 8080:8080 -v /home/ec2-user/apache/web1/www:/var/www:Z docker.io/centos/httpd-24-centos7
The output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abd790b28b51 docker.io/centos/httpd-24-centos7 "container-entrypo..." 2 hours ago Up 20 minutes 0.0.0.0:8080->8080/tcp, 8443/tcp httpd
In AWS instance :
curl http://localhost:8080
Hello World!!!
But I am unable to reach this on the public internet using the AWS host's public IP from my laptop.
The output of netstat -tulpn:
(No info could be read for "-p": geteuid()=1000 but you should be root.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp6 0 0 :::8080 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:25 :::* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* -
udp6 0 0 ::1:323 :::* -
My AWS Security inbound rules:
HTTP TCP 80 157.51.138.196/32
All traffic All All 157.51.138.196/32
SSH TCP 22 157.51.138.196/32
DNS (TCP) TCP 53 157.51.138.196/32
HTTPS TCP 443 157.51.138.196/32
You are missing an inbound rule for port 8080 in the instance's security group. Add it, check again, and report back if it still doesn't work.
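If you use the AWS CLI rather than the console, the rule can be added with something like the following (the security group ID is a placeholder; the source CIDR mirrors the one already present in your rules):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 157.51.138.196/32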
Hurray it Worked!!!
But I'm not sure how it actually worked ;-) Will try to find that out...
I just stopped all Apache containers and removed them entirely. Then I executed the command below and everything worked perfectly.
When using it, make sure you are mapping the volume (-v) correctly to the index.html file on your host.
docker run -d --name httpd -p 9080:8080 -v /home/ec2-user/apache/web1/www/html:/var/www/html:Z docker.io/centos/httpd-24-centos7
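With that mapping (-p 9080:8080), the container should then be reachable from outside on the host port, assuming the security group also allows 9080 (the public IP below is a placeholder):
curl http://<instance-public-ip>:9080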

How to specify zookeeper clientPortAddress with HBase?

I installed standalone HBase using the Quick Start Guide, but I need to bind all ports to a particular interface, so I set various *bindAddress properties in conf/hbase-site.xml and ran bin/start-hbase.sh. However, the ZooKeeper clientPort is still bound to :::2181:
# netstat -antop | grep LISTEN | grep 4232/java
tcp6 0 0 :::2181 :::* LISTEN 4232/java
tcp6 0 0 10.134.6.221:41474 :::* LISTEN 4232/java
tcp6 0 0 10.134.6.221:16010 :::* LISTEN 4232/java
tcp6 0 0 127.0.0.1:34212 :::* LISTEN 4232/java
tcp6 0 0 127.0.0.1:34636 :::* LISTEN 4232/java
The bundled version of zookeeper appears to be 3.4.6
# ls lib/zookeeper-*
lib/zookeeper-3.4.6.jar
This should have the clientPortAddress option:
clientPortAddress
New in 3.3.0: the address (ipv4, ipv6 or hostname) to listen for client connections; that is, the address that clients attempt to connect to. This is optional, by default we bind in such a way that any connection to the clientPort for any address/interface/nic on the server will be accepted.
But specifying that option like so in conf/hbase-site.xml doesn't work:
<configuration>
<property><name>hbase.rootdir</name><value>file:///root/hbase</value></property>
<property><name>hbase.zookeeper.property.dataDir</name><value>/root/zookeeper</value></property>
<property><name>hbase.zookeeper.property.clientPortAddress</name><value>10.134.6.221</value></property>
<property><name>hbase.master.info.bindAddress</name><value>10.134.6.221</value></property>
<property><name>hbase.regionserver.info.bindAddress</name><value>10.134.6.221</value></property>
</configuration>
I tried to create a zoo.cfg file with clientPortAddress and put it in various directories and HBASE_CLASSPATH, but HBase didn't seem to pick it up.
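For reference, the kind of zoo.cfg attempt described above would look roughly like this (the config directory is illustrative, the values come from the hbase-site.xml above, and as noted HBase did not seem to pick the file up):
mkdir -p /root/zk-conf
cat > /root/zk-conf/zoo.cfg <<'EOF'
dataDir=/root/zookeeper
clientPort=2181
clientPortAddress=10.134.6.221
EOF
export HBASE_CLASSPATH=/root/zk-conf
bin/start-hbase.sh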

no http server will serve pages from my server

I have Ubuntu 14.04 Server Edition with Node.js, MongoDB and nginx installed. This worked fine until yesterday. My internet went down for about 8 hours because of a storm, and upon coming back up, Node.js works and Mongo works, but anyone connecting to port 80 gets ERR_CONNECTION_REFUSED. I attempted to switch the listening port just to see what would happen and got the same result. If I use the server's internal IP I still get ERR_CONNECTION_REFUSED; however, visiting port 80 via lynx on the server itself, via localhost or 127.0.0.1, the application works just fine. I have also tried this using Apache instead of nginx and it does not work either. I've disabled my ufw completely, restarted the server, and double- and triple-checked configurations. Even netstat says the server is listening on port 80, and an nmap scan shows port 80 open and listening, but trying to connect to it gives ERR_CONNECTION_REFUSED.
I do not know what to do, and based on Google and Stack Overflow search results, I'm the first person in the history of web servers to ever have to ask this question, so alas I could find nothing helpful anywhere.
Thanks in advance
UFW Status
user@io# ufw status
Status: inactive
Nginx Status
user@io# service nginx status
* nginx is running
Nginx Access Log
user@io:/var/log/nginx# cat access.log
::1 - - [26/Feb/2016:16:04:23 -0600] "GET / HTTP/1.0" 200 7746 "-" "Lynx/2.8.8pre.4 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/2.12.23"
netstat
user@io# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 977/mongod
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 23009/nginx
tcp 0 0 127.0.0.1:28017 0.0.0.0:* LISTEN 977/mongod
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 974/sshd
tcp 0 0 0.0.0.0:9561 0.0.0.0:* LISTEN 2083/node
tcp6 0 0 :::80 :::* LISTEN 23009/nginx
tcp6 0 0 :::22 :::* LISTEN 974/sshd
udp 0 0 0.0.0.0:27712 0.0.0.0:* 808/dhclient
udp 0 0 0.0.0.0:68 0.0.0.0:* 808/dhclient
udp6 0 0 :::52391 :::* 808/dhclient
Try iptables -t nat -F to clear all pre-routing rules.
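Before flushing anything, it may help to list the nat table first so you can see which rule, if any, is intercepting port 80:
sudo iptables -t nat -L -n -v --line-numbers
If a stray REDIRECT or DNAT rule shows up there, flush the table as suggested:
sudo iptables -t nat -F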

Gateway Timeout: can't connect to remote host after reboot

I'm running apache2 on a CentOS 6.7 VM. My PHP website was working fine before a reboot but afterwards I'm getting 504 Gateway Timeout.
$ telnet <MYIP> 80
Trying <MYIP>...
Connected to <MYHOSTNAME>.
Escape character is '^]'.
HTTP/1.0 504 Gateway Timeout
Gateway Timeout: can't connect to remote host
Connection closed by foreign host.
I've been googling for hours but can't find anything that works. The website works locally, i.e. if I wget http://localhost:80/.
My netstat output is as follows:
$ sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1556/rpcbind
tcp 0 0 0.0.0.0:35443 0.0.0.0:* LISTEN 1578/rpc.statd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1745/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1782/postmaster
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1866/master
tcp 0 0 ::ffff:127.0.0.1:8005 :::* LISTEN 2736/java
tcp 0 0 :::8009 :::* LISTEN 2736/java
tcp 0 0 :::111 :::* LISTEN 1556/rpcbind
tcp 0 0 :::80 :::* LISTEN 2854/httpd
tcp 0 0 :::8080 :::* LISTEN 2736/java
tcp 0 0 :::54644 :::* LISTEN 1578/rpc.statd
tcp 0 0 :::22 :::* LISTEN 1745/sshd
tcp 0 0 ::1:5432 :::* LISTEN 1782/postmaster
tcp 0 0 ::1:25 :::* LISTEN 1866/master
Any ideas what could be wrong or how to troubleshoot this?
After restarting Apache many times and setting the firewall rules again, I did both once more and it worked.
I've no clue what the issue was, so I'm still interested if anyone knows.
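For reference, on CentOS 6 the steps described above (re-adding the firewall rule and restarting Apache) would typically amount to something like:
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo service iptables save
sudo service httpd restart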