Unable to connect to secure websocket in DigitalOcean - express

I'm having trouble setting up my WebSocket server on DigitalOcean.
I'm replacing my actual domain name with domain.com for the question's sake.
I basically have a Node.js WebSocket server that I'm trying to connect to from a React app hosted on Heroku. I'm getting the following error when attempting to connect:
WebSocket connection to 'wss://domain.com/' failed: Error during WebSocket handshake: Unexpected response code: 200
Here's my server entry code:
const fs = require('fs')           // added: needed for readFileSync below
const https = require('https')     // added: needed for createServer below
const express = require('express')
const WebSocket = require('ws')    // added: the ws package, which provides WebSocket.Server

const PORT = process.env.PORT || 8080
const privateKey = fs.readFileSync('/etc/letsencrypt/live/domain.com/privkey.pem', 'utf-8')
const certificate = fs.readFileSync('/etc/letsencrypt/live/domain.com/cert.pem', 'utf-8')
const credentials = { key: privateKey, cert: certificate }

const server = express()
const httpsServer = https.createServer(credentials, server)
httpsServer.listen(PORT)

// `this` suggests this line runs inside a class method in the original code
this.wss = new WebSocket.Server({ server: httpsServer })
I used certbot to secure the connection, since Heroku makes HTTPS obligatory. So here's my nginx default config file, located at /etc/nginx/sites-available/default:
server {
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name domain.com www.domain.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
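Note that nothing in this config proxies traffic to the Node process: location / just serves static files, so the wss:// handshake is answered by nginx itself with the index page (hence the 200). A minimal sketch of a WebSocket pass-through location, assuming the Node server listens locally on port 8080 (the /ws/ path is hypothetical):

location /ws/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;                  # the Upgrade mechanism requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # forward the client's Upgrade header
    proxy_set_header Connection "upgrade";   # mark the proxied connection as upgradable
    proxy_set_header Host $host;
}

The client would then connect to wss://domain.com/ws/, and the Node server could listen over plain HTTP and leave TLS termination to nginx (or keep its certificates, with proxy_pass pointed at https:// instead).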
I also changed my UFW config a bit. Here's the output of sudo ufw status:
Nginx Full ALLOW Anywhere
22/tcp ALLOW Anywhere
Nginx Full (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
Just to be clear, I'm NOT using domain.com for real. I just changed it in this question for privacy reasons. :D
Hope someone can point me in the right direction. Not really sure where I'm going wrong.

Related

How to serve VueJS 3 app from subfolders in NGINX?

I want to serve multiple Vue.js 3 apps from the same NGINX server, but from different subfolders. I've stumbled upon and tried myriad resources from Stack Overflow and the web, but things are not coming together.
I have three apps and three build types.
production: mydomain.com/app1, mydomain.com/app2, mydomain.com/app3
staging: mydomain.com/staging/app1, mydomain.com/staging/app2, mydomain.com/staging/app3
dev: mydomain.com/dev/app1, mydomain.com/dev/app2, mydomain.com/dev/app3
I've tried modifying vue.config.js, router/index.js, and the NGINX configuration, but nothing seems to click.
I'd sincerely appreciate it if someone could share a comprehensive guide for this issue.
Thank you.
Try this conf:
server {
    listen 80;
    listen [::]:80;

    # SSL configuration
    #
    #listen 443 ssl;
    #listen [::]:443 ssl;
    #ssl_certificate /etc/letsencrypt/live/your-domain/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/live/your-domain/privkey.pem;
    #
    #ssl_dhparam /etc/letsencrypt/live/dhparam/dhparam.pem;
    #ssl_protocols TLSv1.2;
    #ssl_prefer_server_ciphers on;
    #ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA';
    #add_header X-Frame-Options DENY;

    root /var/www/proyect-vue/dist;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.php;

    server_name your-domain;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    location ~ /\.ht {
        deny all;
    }
}
Now you can clone the repository into /var/www/ with git and run npm run build.
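That config serves a single app from the root, though. For the subfolder layout in the question, each app also needs its base path set at build time (with Vue CLI, publicPath: '/app1/' in vue.config.js), plus a matching nginx location. A rough sketch for one app, assuming each app's dist/ output is copied to /var/www/app1, /var/www/app2, and so on (the paths are hypothetical):

# Serve the app1 build from /var/www/app1 under the /app1/ prefix
location /app1/ {
    root /var/www;                          # /app1/... then maps to /var/www/app1/...
    try_files $uri $uri/ /app1/index.html;  # history-mode fallback for the SPA router
}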

Unable to access site using https

I'm currently serving the site using nginx with the following configuration.
server {
    server_name www.skipven.xyz skipven.xyz;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/ari-bot/aribot;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/ari-bot/ari-bot.sock;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/skipven.xyz/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/skipven.xyz/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.skipven.xyz) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = skipven.xyz) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name www.skipven.xyz skipven.xyz;
    return 404; # managed by Certbot
}
When I try to access http://skipven.xyz, it's successfully redirected to https://skipven.xyz, but https://skipven.xyz doesn't return anything. I also can't find any access log entries for the HTTPS requests, while HTTP requests are logged like a charm.
Other things to note:
Running sudo nginx -t returns the following response:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Running sudo ufw status returns the following response:
Status: active
To Action From
-- ------ ----
Nginx Full ALLOW Anywhere
22/tcp ALLOW Anywhere
Nginx Full (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
First, make sure that your web application is running; try to curl the unix socket. If it still doesn't work, which cloud service are you using? On AWS or GCE you still need to configure the provider's firewall in addition to ufw.
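For that first check, curl can talk to the socket directly (the --unix-socket option needs curl 7.40 or newer); using the socket path from the question:

# Bypass nginx and hit the application's unix socket directly
curl --unix-socket /home/ubuntu/ari-bot/ari-bot.sock http://localhost/

If that returns a response, the backend is fine and the problem sits in nginx or the network path; if not, the application itself is down.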

nginx error - nginx: [emerg] "server" directive is not allowed here in /etc/nginx/sites-available/default:21

I have just edited a server block file on my Ubuntu server, and now nginx won't start. When I try to debug the sites-available file, it gives me the error above, and I can't work out what's wrong with line 21. The file looks like this:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /srv/www/macpherson;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    #server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

server {
    listen 80;
    listen [::]:80 ipv6only=on;

    root /srv/www/macpherson;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name macphersoncpa.ca www.macphersoncpa.ca; # managed by Certbot

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/macphersoncpa.ca/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/macphersoncpa.ca/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Can anyone please help me fix this? It's driving me crazy.

Securing Nginx with SSL

I'm securing an Nginx server with SSL and I have a question. I have two virtual servers, one for HTTP listening on port 80 and one for HTTPS listening on 443, like this:
# HTTP server
server {
    listen 80;
    server_name localhost;
    ...
    # many configuration rules here for caching, etc.
}

# HTTPS server
server {
    listen 443 ssl;
    server_name localhost;
    ...
}
The question is, do I need to duplicate all the configuration rules from the HTTP version in my HTTPS version? Is there any way to avoid duplicating all these rules?
UPDATE
I'm trying to configure this with an include, following @ibueker's answer. It looks easy, but somehow it's not working. Does the include need to be inside a location? Example attached:
# HTTP server
server {
    listen 80;
    server_name localhost;
    ...
    include ./wpo
}
The wpo file is in the same path, and looks like this:
# Expire rules for static content
# RCM: WPO

# Images
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires 1w;
    add_header Cache-Control "public";
}

# CSS and Javascript
location ~* \.(?:css|js)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires 1w;
    add_header Cache-Control "public";
}

# cache.appcache, your document html and data
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires -1;
}
You can put them in another file and include them for both server blocks.
include /path/to/file;
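On the update: nginx directives need a trailing semicolon, and a relative path like ./wpo is resolved against nginx's configured prefix/conf directory, not the directory of the including file, so include ./wpo can fail on both counts. The include also belongs at server level rather than inside a location, since the wpo file itself contains location blocks. A sketch, with an assumed absolute path:

# HTTP server
server {
    listen 80;
    server_name localhost;
    include /etc/nginx/wpo;    # assumed path; note the trailing semicolon
}

# HTTPS server
server {
    listen 443 ssl;
    server_name localhost;
    include /etc/nginx/wpo;    # the same shared rules for both blocks
}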

Host main domain on Apache and some of the sub-domains on Nginx

I recently bought a Linode 1GB plan, installed Apache on it, and had my website up and running.
However, I then installed gitlab on the server and mapped it to a subdomain. Since the installation guides recommended using Nginx as the server, I installed it, and it is running on port 80. My Apache initially wouldn't run because of the port conflict, but I fixed that by editing the /etc/apache2/ports.conf file and instructing Apache to serve on a different port.
Now the main domain, which points to Apache on port 8000, doesn't show up.
gitlab is installed at http://gitlab.myserver.com and I am able to access it, but if I try to navigate to http://myserver.com, I get the content of http://gitlab.myserver.com.
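For reference, moving Apache off port 80 comes down to the Listen directive in ports.conf; a minimal sketch for Apache 2.4:

# /etc/apache2/ports.conf: bind Apache to 8000 so nginx can own port 80
Listen 8000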
My gitlab config serving under Nginx is as follows:
# GITLAB
# Maintainer: @randx
# App Version: 5.0

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen my_server_ip default_server;  # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
    server_name gitlab.myserver.com;     # e.g., server_name source.example.com;
    server_tokens off;                   # don't show the version number, a security best practice

    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from the defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 800;  # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gitlab;
    }
}
My Apache config for myserver.com is as follows:
# domain: myserver.com
# public: /home/me/public/myserver.com/

<VirtualHost *:8000>
    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin webmaster@myserver.com
    ServerName www.myserver.com
    ServerAlias myserver.com

    # Index file and Document Root (where the public files are located)
    DirectoryIndex index.html index.php
    DocumentRoot /home/me/public/myserver.com/public

    # Log file locations
    LogLevel warn
    ErrorLog /home/me/public/myserver.com/log/error.log
    CustomLog /home/me/public/myserver.com/log/access.log combined
</VirtualHost>
Where am I going wrong?
A good solution is to continue to run Nginx on port 80, while adding proxy directives to Nginx to serve as a proxy for specific domains that are running on Apache. An example Nginx configuration:
server {
    server_name www.myserver.com;
    server_name myserver.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
I do this myself, and it works great.
I solved the issue. It was this listen directive in my gitlab config; with default_server set, this block was catching requests for every hostname on the IP:
server {
    listen my_ip default_server;
    server_name gitlab.myserver.com;  # e.g., server_name source.example.com;
    server_tokens off;                # don't show the version number, a security best practice
    root /home/git/gitlab/public;
    ......
}
I changed this to the following:
server {
    listen 80;
    server_name gitlab.myserver.com;  # e.g., server_name source.example.com;
    server_tokens off;                # don't show the version number, a security best practice
    root /home/git/gitlab/public;
    ......
}
And I followed your instructions on serving sites on Apache via nginx using proxy_pass.
Thanks a lot.
Cheers!