Golang webapp with Apache multiple VirtualHosts

Unfortunately I've not been able to deploy a basic Golang web app on my production server. After going through a lot of documentation and tutorials, I understood that I need to run the Golang web app as a daemon.
First things first: the production server is a single IP running Ubuntu 16.04 with multiple Apache-based VirtualHosts in /etc/apache2/sites-enabled/.
Golang environment vars
# set golang environment vars
export GOROOT=/usr/local/go
# set gopath (multiple gopaths can be separated by ":")
export GOPATH=/var/www/go_projects/gotest.domain2.com
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
# set PATH so it includes user's private bin directories
PATH="$HOME/bin:$HOME/.local/bin:$PATH"
Systemd Daemon file
[Unit]
Description=GoTest Webserver
[Service]
Type=simple
WorkingDirectory=/var/www/go_projects/gotest.domain2.com
ExecStart=/var/www/go_projects/gotest.domain2.com/main #binary file
[Install]
WantedBy=multi-user.target
VirtualHost Conf
<VirtualHost *:80>
ServerName gotest.domain2.com
DocumentRoot /var/www/go_projects/gotest.domain2.com
<Directory /var/www/go_projects/gotest.domain2.com>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
</VirtualHost>
Go file
package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello World %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Unfortunately, the program is not executed on http://gotest.domain2.com; instead, the page lists the contents of DocumentRoot.
Manual run returns
admin@xyz:/var/www/go_projects/gotest.domain2.com$ ./main
2018/02/18 15:52:58 listen tcp :8080: bind: address already in use
What am I missing or is my deployment approach principally wrong?
Cheers!
EDIT:
As suggested by Michael Ernst, I tried altering the port/proxy settings, and here is the result:
http://gotest.domain2.com leads to 503 Service Unavailable
Following is the outcome of sudo netstat -talpen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 16367 1250/sshd
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 0 473536 26340/master
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN 0 16604 1417/dovecot
tcp 0 0 0.0.0.0:2000 0.0.0.0:* LISTEN 5001 17289 1652/asterisk
tcp 0 xx.xx.xx.xx:22 xx.xx.xx.xx:6126 ESTABLISHED 0 615988 13025/sshd: admin [
tcp6 0 0 :::22 :::* LISTEN 0 16369 1250/sshd
tcp6 0 0 :::25 :::* LISTEN 0 473537 26340/master
tcp6 0 0 :::3306 :::* LISTEN 111 17564 1391/mysqld
tcp6 0 0 :::143 :::* LISTEN 0 16605 1417/dovecot
tcp6 0 0 :::80 :::* LISTEN 0 612412 12554/apache2
tcp6 0 0 xx.xx.xx.xx:80 xx.xx.xx.xx:6128 FIN_WAIT2 0 0 -
tcp6 0 0 xx.xx.xx.xx:80 xx.xx.xx.xx:6129 ESTABLISHED 33 615029 12561/apache2
Any idea where the problem lies?

As for configuring apache:
You need to start the Go application and, in the Apache configuration, reverse-proxy requests to port 8080 (the port the Go daemon you wrote listens on).
The Go application needs to be running at all times, so you might want to start it at boot. Unlike PHP, which is invoked by Apache, Go runs as a standalone binary that is always there.
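A minimal sketch of how to do that with the systemd unit from the question (assuming the unit file is saved as /etc/systemd/system/gotest.service; the name is an assumption) is to add Restart=always under [Service] and then enable the unit:
# unit name gotest.service is an assumption, adjust to your file name
sudo systemctl daemon-reload
sudo systemctl enable gotest.service
sudo systemctl start gotest.service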
As for your port issue:
Make sure that your application is not already running and that no other application is listening on port 8080 (you can use netstat -talpen to check that).
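For example, to see only what is bound to port 8080:
sudo netstat -talpen | grep 8080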
Edit:
Port 8080 is often used by HTTP proxies. Is there a proxy or another application already running on that port?
Edit:
You can configure your apache like that:
<VirtualHost *:80>
ProxyPreserveHost On
ProxyRequests Off
ServerName www.example.com
ServerAlias example.com
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
</VirtualHost>
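Note that ProxyPass/ProxyPassReverse need the Apache proxy modules; on Ubuntu they can be enabled with:
sudo a2enmod proxy proxy_http
sudo systemctl restart apache2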
You might also want to make the port of your Go application configurable so you don't need to recompile the code if the port has to change. You can also bind to the localhost interface so that, even without a firewall configured, people can only reach the Go application through Apache and cannot talk to it directly:
// define the flag (requires the "flag", "strconv", "net/http" and "log" imports)
port := flag.Int("port", 8080, "port to listen on")
// parse the flags
flag.Parse()
// here you might want to add code to make sure the port is valid.
// start the webserver, bound to localhost only
log.Fatal(http.ListenAndServe("localhost:"+strconv.Itoa(*port), nil))
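With that in place the binary can be started as ./main -port 8080, and the ExecStart line of the systemd unit above can pass the same flag, so changing the port no longer requires a rebuild (the -port flag name is just the one defined in the snippet).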

Related

Change ssl port of apache2 server. (ERR_SSL_PROTOCOL_ERROR)

I'm setting up an apache2 environment on my EC2 instance. For security reasons, I want to change the SSL port of apache2.
I had already confirmed that the default SSL port 443 was working by checking the page in the Chrome browser. But after modifying ports.conf as shown below, I get an ERR_SSL_PROTOCOL_ERROR when accessing the server at https://xxxxxxx:18443/
Are there any other settings needed for changing the SSL port?
listening ports
$ ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:*
LISTEN 0 64 *:7777 *:*
LISTEN 0 50 127.0.0.1:3306 *:*
LISTEN 0 128 :::22 :::*
LISTEN 0 128 :::18443 :::*
/etc/apache2/ports.conf
#Listen 80
<IfModule ssl_module>
Listen 18443
</IfModule>
<IfModule mod_gnutls.c>
Listen 18443
</IfModule>
environment
OS: ubuntu 14.04 server (Amazon/EC2 AMI)
apache: Apache/2.4.7 (Ubuntu)
EC2 inbound security policy
Custom TCP rule: TCP, 18443, 0.0.0.0/0
Custom UDP rule: UDP, 18443, 0.0.0.0/0
I found the answer myself. I also needed to edit default-ssl.conf, so I will summarize the whole procedure for setting up SSL and changing its port. In this example, I change the SSL port from 443 to 18443.
$ sudo apt-get install apache2
$ sudo a2enmod ssl
$ sudo a2ensite default-ssl
$ sudo service apache2 restart
$ ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 :::443 :::*
LISTEN 0 128
Then, try to change the SSL port.
$ sudo vi /etc/apache2/ports.conf
<IfModule ssl_module>
Listen 18443
</IfModule>
<IfModule mod_gnutls.c>
Listen 18443
</IfModule>
In this setup I am using default-ssl, so I also have to modify that file.
$ sudo vi /etc/apache2/sites-available/default-ssl.conf
<IfModule mod_ssl.c>
<VirtualHost _default_:18443>
...
Then restart apache2 and you can access https://xxxxxx:18443/
$ sudo service apache2 restart
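As an optional sanity check (not part of the original steps), the new port can be verified locally like this:
$ sudo ss -lnt | grep 18443
$ curl -kI https://localhost:18443/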

Apache2 not accepting connections

We just had a server migration, and things got really messed up. First, all the data was gone (thank god for backups), and now Apache just isn't responding (ERR_CONNECTION_TIMED_OUT).
Server is running Ubuntu 14.04.3 LTS
The Apache vhost setup should be fine. It's from the previous setup, which worked, and wasn't changed. Either way, here is the config:
<VirtualHost *:443>
DocumentRoot "/var/www/versions/prod/Application/UIAdmin"
ServerName adminca.decision.io
ServerAlias adminca.decision.io
SSLEngine On
SSLCertificateFile /etc/apache2/ssl.crt/decisionio.crt
SSLCertificateKeyFile /etc/apache2/ssl.key/decisionio.key
SSLCACertificateFile /etc/apache2/ssl.crt/intermediate.crt
<Directory "/var/www/versions/prod/Application/UIAdmin">
AllowOverride all
allow from all
Options -Indexes
</Directory>
</VirtualHost>
traceroute goes to the right place.
ping goes to the right place.
iptables:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I'm not really a server admin, just got thrown into the position, so I have no idea where to search next! Any help would be helpful. I can post more configs if requested.
Edit
I think it has something to do with the listening ports. Would the tcp6-only entries have any influence on this?
$ netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::80 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::443 :::* LISTEN
udp 0 0 0.0.0.0:49630 0.0.0.0:*
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp6 0 0 :::50499 :::*
Edit #2
It seems like wget is reaching the application locally, so I'm not sure where the routing problem is...
On the server:
$ wget http://127.0.0.1/ -O -
--2015-10-24 22:00:05-- http://127.0.0.1/
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: Login [following]
--2015-10-24 22:00:05-- http://127.0.0.1/Login
wget on my local also goes to the right server, but it times out.
On my local:
$ wget https://adminca.decision.io -O -
--2015-10-24 22:17:07-- https://adminca.decision.io/
Resolving adminca.decision.io... 104.36.151.37
Connecting to adminca.decision.io|104.36.151.37|:443... failed: Operation timed out.
Retrying.
wget over plain http does the same thing as well.
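One way to narrow this down would be to watch the web ports with tcpdump on the server while retrying the external wget, to see whether the requests reach the box at all (just a suggestion, not something already tried above):
# watch incoming traffic on ports 80/443 on all interfaces
sudo tcpdump -ni any 'tcp port 80 or tcp port 443'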

Ubuntu server (apache) won't respond to external requests

I'm fairly new to this so I could be missing something totally obvious, but I can't connect to my server using my external IP. Internally everything works like a dream (10.0.0.28/redmine), but when I try to connect using the external IP the requests time out. I forwarded both my SSH port and port 80 on the router, but that didn't solve the problem.
My ports now show as being open with www.portchecktool.com.
SSH works fine internally, but when I issue the command shown here it says connection closed by remote host.
ssh {my external ip} -pxxxx -i /home/millerir/.ssh/id_rsa -l imiller
Similarly, when I navigate to {my ip}/redmine, {my ip}, {my_ip}:80, or my DDNS service address, I get "connection reset while trying to connect" errors from my browser.
I did check that the server was listening on port 80 and my ssh port as shown below.
me#ubuntu-server:/etc/apache2/sites-available$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:xxxx 0.0.0.0:* LISTEN 779/sshd
tcp 0 0 127.0.0.1:57384 0.0.0.0:* LISTEN 1192/redmine
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 862/mysqld
tcp6 0 0 :::xxxx :::* LISTEN 779/sshd
tcp6 0 0 :::80 :::* LISTEN 923/apache2
If anyone could help me that would be greatly appreciated. I'm stuck and kind of clueless.
redmine.conf
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
#DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
DocumentRoot /var/www
<Directory /var/www/redmine>
RailsBaseURI /redmine
PassengerResolveSymLinksInDocumentRoot on
</Directory>
</VirtualHost>
http.conf
Listen 80
<IfModule ssl_module>
Listen 443
</IfModule>
<IfModule mod_gnutls.c>
Listen 443
</IfModule>
So after digging around, I figured out that some routers don't like you going "out and back in" (NAT loopback). I tried connecting to my server using mobile data on my phone, and it works now.
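If you still need to reach the server from inside the LAN by its public name, a common workaround (the hostname below is a placeholder; 10.0.0.28 is the internal address mentioned above) is to point the name at the LAN IP in /etc/hosts on the client machine:
# /etc/hosts on the LAN client - hostname is a placeholder
10.0.0.28   myserver.example.com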

amazon ec2 how to setup https?

I have read the Amazon EC2 guide for setting up HTTPS and finished several steps, but it is still not working.
Sign an SSL certificate; I use a self-signed cert.
Use AWS IAM to upload the SSL cert to Amazon.
In the EC2 console, add port 80 and port 443 to the current security group's inbound rules.
Create a new load balancer, add HTTP on port 80 and HTTPS on port 443 with the uploaded cert, and assign the current instance to the load balancer.
Last, I have checked the instance's security group and made sure it is right. I rebooted the instance and HTTPS still does not work. The health check passes on port 80, but it does not pass on port 443.
Am I missing any step?
I know this post is a year old, but I recently had similar issues and hope that someone might find this useful.
I see you are using a load balancer. You have to do the following:
Step 1
Make sure that port 443 is open on your EC2 instance and not being blocked by a firewall. You can run
sudo netstat -tlnp
on linux to check which ports are open. The output should look something like this:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 937/sshd
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1060/mysqld
tcp6 0 0 :::22 :::* LISTEN 937/sshd
tcp6 0 0 :::443 :::* LISTEN 2798/apache2
tcp6 0 0 :::80 :::* LISTEN 2798/apache2
Step 2
Make sure your security groups are setup as follows:
EC2 (INBOUND)
HTTP TCP 80 LOAD BALANCER
HTTPS TCP 443 LOAD BALANCER
Load Balancer (Outbound)
HTTP TCP 80 EC2 Instance
HTTPS TCP 443 EC2 Instance
Step 3
Make sure your EC2 instance is listening on port 443 (/etc/apache2/ports.conf) :
Listen 80
Listen 443
If you are using a virtual host, make sure it looks like this:
<VirtualHost *:80>
DocumentRoot /var/www/html/mysite.com
ServerName mysite.com
ServerAlias www.mysite.com
<Directory /var/www/html/mysite.com>
AllowOverride All
RewriteEngine On
Require all granted
Options -Indexes +FollowSymLinks
</Directory>
</VirtualHost>
<VirtualHost *:443>
DocumentRoot /var/www/html/mysite.com
ServerName mysite.com
ServerAlias www.mysite.com
SSLEngine on
SSLCertificateFile /usr/local/ssl/public.crt
SSLCertificateKeyFile /usr/local/ssl/private/private.key
SSLCACertificateFile /usr/local/ssl/intermediate.crt
</VirtualHost>
Step 4
Upload your certificate files in .pem format using the following commands:
aws iam upload-server-certificate --server-certificate-name my-server-cert \
--certificate-body file://my-certificate.pem --private-key file://my-private-key.pem \
--certificate-chain file://my-certificate-chain.pem
Step 5
Create a listener on the Load Balancer which has the EC2 instance attached to it. The listener is for HTTPS and port 443. The listener will ask for a certificate and it will have the one you added from the aws cli already listed. If it is not listed, log out of the AWS console and log back in.
After this, traffic via HTTPS will start flowing to your EC2 instance.
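If you prefer the CLI, a rough sketch of creating the same HTTPS listener on a classic ELB looks like this (the load balancer name and certificate ARN below are placeholders):
# load balancer name and certificate ARN are placeholders
aws elb create-load-balancer-listeners --load-balancer-name my-load-balancer \
--listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-server-cert"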
I had similar issues, and posted my question and answer here: HTTPS only works on localhost

Can't access htdocs on different machine LAN

I'm having a problem with my web server (CentOS 7). I've set up XAMPP and I want to access htdocs from a different machine. However, the connection times out.
I'm connecting on port 80 and I've checked # netstat -nlp:
tcp6 0 0 :::80 :::* LISTEN 10366/httpd
and # netstat -tul:
tcp6 0 0 :::80 :::* LISTEN 0
I've also checked httpd.conf and it contains:
Listen 80
I've set a firewall exception and I still can't access htdocs. Does anyone know what I'm missing?
EDIT: I've also checked whether the port was open using # telnet myserver.com 80, and I managed to connect to the server on port 80.
EDIT2: After resetting the firewall restrictions and allowing any LAN user to access the server, the connection finally worked.
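For reference, on CentOS 7 a firewall exception for the web server would typically be opened with firewalld along these lines (a sketch, not the exact rules used here):
# permanently allow HTTP and reload the firewall rules
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload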