I want to serve multiple Vue.js 3 apps from the same NGINX server, each from a different subfolder. I've stumbled upon and tried myriad resources from Stack Overflow and the web, but things are not coming together.
I have three apps and three build types.
production: mydomain.com/app1, mydomain.com/app2, mydomain.com/app3
staging: mydomain.com/staging/app1, mydomain.com/staging/app2, mydomain.com/staging/app3
dev: mydomain.com/dev/app1, mydomain.com/dev/app2, mydomain.com/dev/app3
I've tried modifying vue.config.js, router/index.js, and the NGINX configuration, but nothing seems to click.
I'd sincerely appreciate it if someone could share a comprehensive guide for this setup.
Thank you.
Try this config:
server {
    listen 80;
    listen [::]:80;

    # SSL configuration
    #
    #listen 443 ssl;
    #listen [::]:443 ssl;
    #ssl_certificate /etc/letsencrypt/live/your-domain/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/live/your-domain/privkey.pem;
    #
    #ssl_dhparam /etc/letsencrypt/live/dhparam/dhparam.pem;
    #ssl_protocols TLSv1.2;
    #ssl_prefer_server_ciphers on;
    #ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA';
    #add_header X-Frame-Options DENY;

    root /var/www/proyect-vue/dist;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.php;

    server_name your-domain;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    location ~ /\.ht {
        deny all;
    }
}
Now you can clone the repository into /var/www/ with git and run npm run build.
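The config above serves a single app from the site root. For the subfolder layout in the question, a minimal sketch could look like the following, assuming each app is built with Vue CLI's publicPath (and the router base, e.g. createWebHistory('/app1/')) set to its subpath, and the resulting dist/ contents are deployed to a matching folder under /var/www — all paths and folder names here are placeholders:

server {
    listen 80;
    server_name mydomain.com;
    root /var/www;

    # production build of app1: built with publicPath '/app1/' and its
    # dist/ contents deployed to /var/www/app1/
    location /app1/ {
        try_files $uri $uri/ /app1/index.html;
    }

    # staging build of app1: built with publicPath '/staging/app1/' and
    # deployed to /var/www/staging/app1/
    location /staging/app1/ {
        try_files $uri $uri/ /staging/app1/index.html;
    }

    # repeat the same pattern for app2, app3 and the dev builds
}

The key point is that each location's try_files fallback must point at that app's own index.html, not the root one, so deep links inside each SPA resolve to the right bundle.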
I have just edited a server block file on my Ubuntu server, and nginx won't start. When I try to debug the sites-available file, it gives me the error above. I can't work out what's wrong with line 21. The file looks like this:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /srv/www/macpherson;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    #server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

server {
    listen 80;
    listen [::]:80 ipv6only=on;

    root /srv/www/macpherson;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name macphersoncpa.ca www.macphersoncpa.ca; # managed by Certbot

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/macphersoncpa.ca/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/macphersoncpa.ca/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Can anyone please help me fix this? It's driving me crazy.
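For what it's worth, a likely culprit is the duplicated port-80 listen directives: both server blocks listen on [::]:80, and socket-level options such as ipv6only may only be given once per address/port, which typically yields nginx's "duplicate listen options for [::]:80" error (an assumption here, since the exact message isn't quoted above). A sketch of the second block with the extra port-80 listeners dropped, leaving the first block to handle plain HTTP as the default server:

server {
    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot

    server_name macphersoncpa.ca www.macphersoncpa.ca;

    root /srv/www/macpherson;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }

    ssl_certificate /etc/letsencrypt/live/macphersoncpa.ca/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/macphersoncpa.ca/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}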
I'm having trouble setting up my WebSocket server on DigitalOcean.
I'm replacing my actual domain name with domain.com for the question's sake.
I basically have a Node.js WebSocket server that I'm trying to connect to from a React app I'm hosting on Heroku. I'm getting the following error when attempting to connect:
WebSocket connection to 'wss://domain.com/' failed: Error during WebSocket handshake: Unexpected response code: 200
Here's my server entry code:
// modules used below
const fs = require('fs')
const https = require('https')
const express = require('express')
const WebSocket = require('ws')

const PORT = process.env.PORT || 8080
const privateKey = fs.readFileSync('/etc/letsencrypt/live/domain.com/privkey.pem', 'utf-8')
const certificate = fs.readFileSync('/etc/letsencrypt/live/domain.com/cert.pem', 'utf-8')
const credentials = { key: privateKey, cert: certificate }

const server = express()
const httpsServer = https.createServer(credentials, server)
httpsServer.listen(PORT)
this.wss = new WebSocket.Server({ server: httpsServer })
I used certbot to secure my connection, as it's obligatory for Heroku. So here's my nginx default config file, located at /etc/nginx/sites-available/default:
server {
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name domain.com www.domain.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
I also changed my UFW config a bit. Here's the output of sudo ufw status:
Nginx Full ALLOW Anywhere
22/tcp ALLOW Anywhere
Nginx Full (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
Just to be clear, I'm NOT using domain.com for real; I just changed it in this question for privacy reasons. :D
Hope someone can point me in the right direction. I'm not really sure where I'm going wrong.
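For reference, nothing in the config above ever proxies to the Node server, so nginx answers the wss://domain.com/ handshake itself by serving the static site with a plain 200, which matches the error shown. A minimal sketch of a WebSocket proxy location (assuming, purely for illustration, that the Node server is reachable locally on port 8080 over plain HTTP and the client connects to a /ws path; if the Node server keeps its own TLS, proxy_pass https://... would be needed instead):

location /ws {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    # forward the WebSocket handshake headers so the upgrade is not dropped
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}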
I'm using nginx as a web server and reverse proxy with SSL enabled.
The web server serves a WoltLab Suite Forum 5.0.0 (formerly known as Burning Board) and proxies some subdomains to different hosts, like a Node.js backend, a Tomcat backend, and many other services.
This has worked great so far, but now I have the problem that I can no longer use subdomains to accomplish this.
Please don't ask why, please don't.
Now that I can no longer use subdomains, I'm trying to get it working with subdirectories.
An example:
I had xyz.example.com point to my nginx server at 12.13.14.15.
nginx proxied all requests to xyz.example.com to 10.20.30.40:1234
Now I want nginx to proxy all requests to example.com/xyz/ to 10.20.30.40:1234.
I got this working with Apache Archiva as the backend service, but all other services, like my Node.js backend, refuse to work correctly with my current configuration.
It sends me to the BurningBoard, which shows me its page-not-found page.
example.com/xyz/admin/index.php becomes example.com/admin/index.php, which won't work, of course.
The directory that proxies to Archiva has the exact same configuration, just with different directory names, of course.
The Archiva URL looks like this after I open it from the web:
example.com/repo/ becomes example.com/repo/#welcome and shows me Archiva's welcome page.
This is exactly what I want for my other services too.
Here are my current configuration files for nginx (sensitive data replaced with X):
<=== sites-available/default ===>
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name XXXX.XX www.XXXX.XX;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    include snippets/ssl-XXXXX.XX.conf;
    include snippets/ssl-params.conf;

    root /var/www/html;

    include /etc/nginx/snippets/proxies.conf;

    # the last try_files argument is for SEO friendly URLs for the WSF 5.0
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$uri&$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }

    # Lets Encrypt automation
    location ~ /.well-known {
        allow all;
    }

    location ~ /\.ht {
        deny all;
    }
}
<=== snippets/proxies.conf ===>
# Apache Archiva
location /repo {
    rewrite ^/repo(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Git Solution
location /git {
    rewrite ^/git(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Filehosting
location /cloud {
    rewrite ^/cloud(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# NodeJS
location /webinterface {
    rewrite ^/webinterface(/.*)$ $1 break;
    proxy_pass https://XXXXXXXX:XXXXX;
    include /etc/nginx/snippets/websocket-magic.conf;
    return 404;
}
Any ideas on how to solve this problem?
Also, please tell me if you need more information, like the nginx version or the like.
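For what it's worth, the rewrite ... break lines strip the /xyz prefix before proxying, so any absolute link or redirect the backend emits (such as /admin/index.php) lands outside the prefix and falls through to the forum; Archiva only appears to work because it navigates via URL fragments. A sketch of one way around this, assuming the Node.js backend can be told to serve under the /webinterface base path itself (host and port are the placeholder values from the example above):

# NodeJS, with the prefix preserved instead of stripped
location /webinterface/ {
    proxy_pass https://10.20.30.40:1234;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    include /etc/nginx/snippets/websocket-magic.conf;
    # if the backend cannot be reconfigured, redirects it issues can at
    # least be mapped back under the prefix:
    # proxy_redirect / /webinterface/;
}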
On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to be able to test the app by directly talking to the EC2 server and via an ELB (the public route in).
I've set up an Elastic Load Balancer to decrypt all the SSL traffic and pass it on to my EC2 server via port 80, as well as pass port 80 directly through to my EC2 server on port 80.
Initially this caused infinite redirects in my app, but I researched and then fixed this by adding
fastcgi_param HTTPS $https;
with some custom logic that looks at $http_x_forwarded_proto to figure out when it's actually via SSL.
There remains one issue I can't solve. When a user logs into the Symfony app, if they come via the ELB, the form POST eventually returns a redirect back to
https://elb.mysite.com:80/dashboard
instead of
https://elb.mysite.com/dashboard
which gives the user an error of "SSL connection error".
I've tried setting
fastcgi_param SERVER_PORT $fastcgi_port;
to force it away from 80, and I've also added the
port_in_redirect off
directive, but neither makes a difference.
The only way I've found to fix this is to alter the ELB 443 listener to pass traffic via HTTPS. The EC2 server has a self-signed SSL certificate configured. But this means the EC2 server is wasting capacity performing this unnecessary second decryption.
Any help very much appreciated. Maybe there is a separate way within nginx of telling POST requests to not apply port numbers?
Nginx vhost config:
server {
    port_in_redirect off;
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/mysite.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.com/self-ssl.key;

    # Determine if HTTPS being used either locally or via ELB
    set $fastcgi_https off;
    set $fastcgi_port 80;
    if ( $http_x_forwarded_proto = 'https' ) {
        # ELB is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }
    if ( $https = 'on' ) {
        # Local connection is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }

    server_name *.mysite.com my-mysite-com-1234.eu-west-1.elb.amazonaws.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    location / {
        port_in_redirect off;
        root /var/www/vhosts/mysite.com/web;
        index app.php index.php index.html index.html;
        try_files $uri @rewriteapp;
    }

    location ~* \.(jpg|jpeg|gif|png)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 30d;
    }

    location ~* \.(css|js)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 2h;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev|config)\.php(/|$) {
        port_in_redirect off;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param HTTPS $fastcgi_https;
        # fastcgi_param SERVER_PORT $fastcgi_port;
        #fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/vhosts/mysite.com/web$fastcgi_script_name;
        include fastcgi_params;
    }
}
References:
FastCGI application behind NGINX is unable to detect that HTTPS secure connection is used
https://serverfault.com/questions/256191/getting-correct-server-port-to-php-fpm-through-nginx-and-varnish
http://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect
Finally got a solution via another channel.
The answer is to comment out the SERVER_PORT line with a # in the fastcgi_params file.
Many thanks to Maxim from Nginx.
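Concretely, that means disabling the line that nginx's stock fastcgi_params file sets by default, something like this (a sketch; the exact path, typically /etc/nginx/fastcgi_params, can vary by distribution):

# /etc/nginx/fastcgi_params
# fastcgi_param  SERVER_PORT  $server_port;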
I recently bought a Linode 1GB plan, installed Apache on it, and had my website up and running.
However, I then installed GitLab on the server and mapped it to a subdomain. Since the installation guides recommended using Nginx as the server, I installed it, and it is running on port 80. Currently my Apache is not running since there are port conflicts.
But I fixed it by editing the /etc/apache2/ports.conf file and instructing Apache to serve on a different port.
Now the main site, which Apache serves on port 8000, doesn't show up when I visit the main domain.
GitLab is installed at http://gitlab.myserver.com and I am able to access it, but if I try to navigate to http://myserver.com, I get the content of http://gitlab.myserver.com.
My GitLab config served by Nginx is as follows:
# GITLAB
# Maintainer: @randx
# App Version: 5.0

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen my_server_ip default_server; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
    server_name gitlab.myserver.com; # e.g., server_name source.example.com;
    server_tokens off; # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 800; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
My Apache config for myserver.com is as follows:
# domain: myserver.com
# public: /home/me/public/myserver.com/
<VirtualHost *:8000>
    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin webmaster@myserver.com
    ServerName www.myserver.com
    ServerAlias myserver.com

    # Index file and Document Root (where the public files are located)
    DirectoryIndex index.html index.php
    DocumentRoot /home/me/public/myserver.com/public

    # Log file locations
    LogLevel warn
    ErrorLog /home/me/public/myserver.com/log/error.log
    CustomLog /home/me/public/myserver.com/log/access.log combined
</VirtualHost>
Where am I going wrong?
A good solution is to continue to run Nginx on port 80, while adding proxy directives to Nginx to serve as a proxy for specific domains that are running on Apache. An example Nginx configuration:
server {
    server_name www.myserver.com;
    server_name myserver.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
I do this myself, and it works great.
I solved the issue. The problem was with the following line in my GitLab config:
server {
    listen my_ip default_server;
    server_name gitlab.myserver.com; # e.g., server_name source.example.com;
    server_tokens off; # don't show the version number, a security best practice
    root /home/git/gitlab/public;
    ......
}
I changed this to the following:
server {
    listen 80;
    server_name gitlab.myserver.com; # e.g., server_name source.example.com;
    server_tokens off; # don't show the version number, a security best practice
    root /home/git/gitlab/public;
    ......
}
And I followed your instructions on serving the Apache sites via Nginx using proxy_pass.
Thanks a lot..
Cheers....