React app not showing on port 80 (but fine on other ports) on Server 2012 R2 - create-react-app

I have a full-stack site designed to run on port 80, with the Node backend using port 5000. This site runs without fail on a Windows 10 machine.
When I copy it to a domain server running Server 2012 R2, I cannot get it to function on port 80, although it shows on port 90 with no problems.
IIS is turned off, and netstat -aon shows that Node is the PID using port 80. I then tried building the page and serving it with NGINX, with the same results, except that NGINX is now the process using port 80.
Here is the code I believe to be relevant but am uncertain of what to do with it.
My .env file for react-app is simple:
PORT=80
When switching to port 90 it functions successfully.
If I attempt to run through NGINX (with which I am unfamiliar) using the following configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # include mime.types;
    # default_type application/octet-stream;
    # sendfile on;
    # keepalive_timeout 65;
    # gzip on;

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://localhost:90;
            root       C:\intranet\New_Test\frontend\build;
            index      $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://localhost:5000;
        }
    }
}
I still get nothing.
I have also tried it without forwarding port 80 to port 90 with the same results.
Do I have an incorrect configuration somewhere? netstat also shows that SYSTEM is using port 80 for some reason, but it is also using a number of other HTTP ports.
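For reference, the location / block above mixes two strategies: when proxy_pass is present in a location, nginx proxies the request and ignores root, and index expects plain filenames, so the $uri $uri/ fallback pattern belongs in try_files. A hedged sketch of the static-only variant of that location:

location / {
    root      C:/intranet/New_Test/frontend/build;   # nginx on Windows also accepts forward slashes
    # serve the file if it exists, otherwise fall back to the SPA entry point
    try_files $uri $uri/ /index.html;
}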
** Edit **
I have since updated my nginx.conf file to this:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # include mime.types;
    # default_type application/octet-stream;
    # sendfile on;
    # keepalive_timeout 65;
    # gzip on;
    include mime.types;

    server {
        listen       90;
        server_name  localhost;
        root         html;
        index        /index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://localhost:5000;
        }
    }
}
This works fine to display the site on port 90, but for whatever reason port 80 is inaccessible to me on this machine.

Switched to a different approach; posting this answer to close the question. I went with NSSM (https://alex.domenici.net/archive/deploying-a-node-js-application-on-windows-iis-using-a-reverse-proxy - step 5), hosting the built React portion through IIS and using NSSM to run Node as a service. It works well on the local machine if I set my REACT_APP_HOST to localhost. I am now experimenting with pathing so that the server can be reached from any client, not just a page on the localhost server.
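Since the thread started with nginx, a sketch of the nginx equivalent of that final IIS + NSSM layout may be useful for comparison (the paths and ports are the ones from the question; this is not the IIS config itself):

server {
    listen       80;
    server_name  localhost;

    # the static React build, served directly
    root   C:/intranet/New_Test/frontend/build;
    index  index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # API calls are proxied to the Node service kept alive by NSSM
    location /api {
        proxy_pass http://localhost:5000;
    }
}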

Related

Why am I receiving a 404 when using proxy_pass with NginX?

I'm trying to use Nginx to expose my Web APIs on port 80 using proxy_pass. The Web APIs are written in Node using Express and they are all running on separate port numbers.
I have locations working in the nginx.conf file when pulling static files from the root and /test, but receive a 404 error when trying to redirect to the API. The API I'm testing with runs on port 8080 and I'm able to access and test it using Postman.
This is Nginx 1.16.1 hosted on Windows Server 2016.
http {
    include       mime.types;
    default_type  application/octet-stream;

    #access_log  logs/access.log  main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;

    server {
        listen       80;
        server_name  localhost crowdtrades.com;

        # Root and /test locations are working correctly
        location / {
            root  c:/CrowdTrades;
            index index.html index.htm;
        }

        location /test/ {
            root  c:/CrowdTrades/test;
            index test.html;
        }

        # Test2: this is the location I'm not able to get working
        location /test2/ {
            proxy_set_header Host $host;
            proxy_pass http://localhost:8080/api/signup/;
        }
    }
}
After trying all kinds of configuration changes, restarting Nginx each time, I gave up for the night. My cloud VM is scheduled to shut down at night, and when I picked this up in the morning it was working. I have no idea why it's working now, but restarting the server seemed to help.
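One nginx behavior worth noting in the config above, independent of the restart: when proxy_pass is given with a URI, the part of the request matching the location prefix is replaced by that URI before the request is sent upstream:

location /test2/ {
    proxy_set_header Host $host;
    # a request for /test2/foo is forwarded upstream as /api/signup/foo
    proxy_pass http://localhost:8080/api/signup/;
}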

nginx: redirect all domains to another port, but keep one domain for admin apps

I am trying to set up a web server with Docker. I will use my server's main domain, "server.domain.com", for admin use (server.domain.com/phpmyadmin, etc.), and I want to redirect all other domains to an Apache container that listens on port 81.
So I have this code on my default.conf:
server {
    listen 80;
    listen [::]:80 default_server;

    location / {
        proxy_pass http://web/;
    }
}
main.conf:
server {
    listen 80;
    listen [::]:80;
    server_name server.domain.com;

    location /phpmyadmin/ {
        proxy_pass http://phpmyadmin/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
(Updated conf)
And my nginx.conf:
user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile   on;
    tcp_nopush on;

    keepalive_timeout 65;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.0;
    gzip_min_length 0;
    gzip_types *;
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites/*;
}
A part of docker-compose.yml:
nginx:
  build: ./server/proxy
  ports:
    - "80:80"
  #volumes:
  #  - nginx_conf:/etc/nginx/
  networks:
    - web_network
  depends_on:
    - web
    - phpmyadmin
    - panel
At the moment I use "depends_on" so that I can use the container names in my config, but you only mention networks, so I think "depends_on" is not required?
But this gives me a "connection refused" error.
If I replace 127.0.0.1 with server.domain.com, the first vhost does not work and redirects to the nginx web root.
So I have no idea why...
Thank you!
As far as I understand, this nginx container is listening on port 80, and all connection requests going to your machine will be passed to it. So it's a proxy container only. I have a project with a similar implementation. Let's try to work it out.
I suggest you keep 2 conf files, for clarity.
1) main.conf - will serve your "server.domain.com"
server {
    listen 80;
    listen [::]:80;
    server_name server.domain.com;

    location /phpmyadmin {
        proxy_pass http://server.domain.com:82;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
That's basically all the configuration you need here. Later, if you need them, you'll pass headers.
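Those headers are typically the standard proxy set; a sketch of what this location often grows into (the header trio below is a common convention, not something phpmyadmin specifically requires):

location /phpmyadmin {
    # forward the original host and client address to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://server.domain.com:82;
}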
2) default.conf - will serve any other domain
server {
    listen 80;
    listen [::]:80 default_server;

    location / {
        proxy_pass http://server.domain.com:81;
    }
}
This configuration assumes that:
1) There is a container running apache and requests coming to your machine on port 81 will be passed to apache2 container's port 80 (or whatever it's listening to)
2) There is a container running phpmyadmin and requests coming to your machine on port 82 will be passed to phpmyadmin container's port 80 (or whatever...)
SOME IMPROVEMENTS YOU SHOULD CONSIDER:
1) If you start all those containers with docker-compose you'll be able to set up a virtual network for them. This would allow you to proxy pass requests straight to the container by name. In my project I do it like:
proxy_pass http://adminer;
where adminer is defined as:
adminer:
  image: phpmyadmin/phpmyadmin
  volumes:
    - ./db_interface/conf/config.inc.php/:/etc/phpmyadmin/config.inc.php
  networks:
    - demo_webnet
    - prod_webnet
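Put together, a sketch of a proxy block using that service name (assuming the nginx container joins one of those networks, so Docker's embedded DNS can resolve it):

server {
    listen 80;
    server_name server.domain.com;

    location /phpmyadmin/ {
        # "adminer" is the compose service name, resolved by Docker's DNS
        proxy_pass http://adminer/;
    }
}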
If you have questions just ask, I'll explain.
2) You could place another nginx server together with your apache2 server in its container. They work nicely in tandem: nginx is better at serving static files, while Apache2 better suits PHP in your case. I can show you how to do that as well; a sketch follows.
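A rough sketch of what such a bundle can look like inside one container, assuming apache2 is bound to an internal port such as 8080 (the port and paths here are illustrative only):

server {
    listen 80;
    root   /var/www/html;

    # nginx serves static files itself, falling back to the PHP front controller
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # PHP requests are handed to apache2 listening internally
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
    }
}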
IN CASE YOU NEED IT
It looks like you're trying to do something similar to what I did for our company's needs. If you're interested, I can give you access to my project. I've built up a whole server infrastructure with Docker, and it is now deployed on our server. In short, it works as follows:
nginx proxy container above all
a container with apache2-nginx-php5.6 for demo apps
a container with apache2-nginx-php7.0 for demo apps
a container with apache2-nginx-php5.6 for production apps
a container with apache2-nginx-php7.0 for production apps
a container with maria db for demo
a container with maria db for production
a container with phpmyadmin to access both db services
Any request comes to nginx proxy.
It matches some virtual host and gets proxied to one of the 4 containers that have apache2 and nginx inside.
Also there is a lot of cool stuff, like cron configured to auto-reload apache2 and nginx when it detects changes to files, supervised services, https support, and so on.
I'm planning to further develop it as an open source project, whoever is interested should let me know.

nginx - insecure http server is redirecting wrongly to an https server

I've got a small website that I'm running on docker, with the 'dev' version on port 9090 and the 'master' version running on port 8080.
I'm using Nginx on the host (not running on docker) to handle the proxying on port 80/443 for requests coming from the internet.
The port 80/443 proxying works perfectly. No problems there.
Problem: when I try to create a server running on port 90 (to show the dev version of the site), which is intended to be insecure, it seems to redirect back to the SSL version of the site. This is confirmed by the browser redirecting to SSL, and I get an error on the page: ERR_SSL_PROTOCOL_ERROR.
If I comment out the server running on port 80, this problem goes away, but then I lose my port 80 redirects on the live/master site.
Can anyone see what might be the problem in how I'm setting up the config? Thanks! Config below:
server {
    listen 443 ssl;
    server_name xxxxxxxxxxxxx.co.uk www.xxxxxxxxxxxxx.co.uk;

    {some ssl config here}

    add_header Strict-Transport-Security max-age=15768000;

    location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;

        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://localhost:8080;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

server {
    listen 80;
    server_name xxxxxxxxxxxxx.co.uk www.xxxxxxxxxxxxx.co.uk;
    return 301 https://$host$request_uri;
}

server {
    listen 90;
    server_name xxxxxxxxxxxxx.co.uk www.xxxxxxxxxxxxx.co.uk;

    location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;

        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://localhost:9090;
        proxy_redirect off;
    }
}
Your problem is probably caused by the Strict-Transport-Security header set on port 443. It tells the browser to force SSL for this server/domain for the time given in the header, and HSTS applies to the host as a whole, regardless of port.
Try removing the header and testing in a browser with a clean cache.
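If a browser has already stored the HSTS policy, simply removing the header leaves the old max-age in effect until it expires. One way to expire it early, sketched here, is to serve a zero max-age over HTTPS for a while before dropping the header entirely:

server {
    listen 443 ssl;
    # ... existing ssl config ...

    # max-age=0 tells browsers to delete their stored HSTS entry for this host
    add_header Strict-Transport-Security "max-age=0";
}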

Regular connection fine but SSL issue

I'm running my application on CentOS 6.4 with Nginx 1.0.15 and gunicorn 19.1.1. My application works fine if I am just using port 80 and not using SSL. However, when I attempt to add SSL to the site, Nginx redirects to https://, but all I get after the redirect is "web page not available" with no additional information.
upstream apollo2_app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/webapps/apollo2/run/gunicorn.sock fail_timeout=0;
}

#server {
#    listen 80;
#    server_name mysub.example.com;
#    rewrite ^ https://$server_name$request_uri? permanent;
#}

# This works fine like this, but when I uncomment the above
# and the below ssl information, I get "webpage not available."

server {
    listen 80;
    # listen 443;
    # ssl on;
    # ssl_certificate /etc/nginx/ssl/2b95ec8183e5d1asdfasdfsadf.crt;
    # ssl_certificate_key /etc/nginx/ssl/exmaple.com.key;
    # server_name mysub.example.com;

    client_max_body_size 4G;
    keepalive_timeout 70;

    access_log /webapps/apollo2/logs/nginx-access.log;
    error_log  /webapps/apollo2/logs/nginx-error.log;

    location /static/ {
        alias /webapps/apollo2/static/;
    }

    location /media/ {
        alias /webapps/apollo2/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://apollo2_app_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/apollo2/static/;
    }
}
I do not see anything in the error logs.
I have checked port 443 and it is open, according to http://www.yougetsignal.com/tools/open-ports/.
This is a wildcard certificate that I am using successfully on another subdomain, on a different server running Debian 7 with Nginx, with what I think is the same setup.
What should I be looking at? What am I missing?
I should have also shown my iptables rules, as someone would certainly have figured it out then. I'm no expert in this area, but there was something wrong with my firewall setup that caused the redirection to fail.
I ended up using the example from Linode, and now this works:
https://www.linode.com/docs/security/securing-your-server#creating-a-firewall
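For reference, once the firewall permits 443 through, the nginx side of the intended setup reduces to two separate server blocks rather than one block with commented toggles (a sketch reusing the names and paths from the question):

server {
    listen 80;
    server_name mysub.example.com;
    # the plain-HTTP server only redirects; no ssl directives here
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name mysub.example.com;
    ssl_certificate     /etc/nginx/ssl/2b95ec8183e5d1asdfasdfsadf.crt;
    ssl_certificate_key /etc/nginx/ssl/exmaple.com.key;
    # ... the client_max_body_size, logging, and location blocks from above ...
}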

How can I serve multiple rails apps on single VPS?

I have a VPS on DigitalOcean. I am able to run multiple Rails apps on the same VPS using nginx + Passenger. Now I want to map domain names to the apps. What should I do for this?
My nginx.conf file
server {
    listen 80;
    server_name localhost;

    location ~ ^/uvarsity(/.*|$) {
        alias /home/uvarsity/public$1;  # <-- be sure to point to 'public'!
        passenger_base_uri /uvarsity;
        passenger_app_root /home/uvarsity;
        passenger_document_root /home/uvarsity/public;
        passenger_enabled on;
        rails_env production;
    }

    location ~ ^/uvarsity-landing(/.*|$) {
        alias /home/uvarsity-lp/public$1;  # <-- be sure to point to 'public'!
        passenger_base_uri /uvarsity-landing;
        passenger_app_root /home/uvarsity-lp;
        passenger_document_root /home/uvarsity-lp/public;
        passenger_enabled on;
        rails_env production;
    }

    location / {
        root /home/amaravati/public;  # <-- be sure to point to 'public'
        passenger_enabled on;
    }
}
What you want is virtual hosting.
The trick here is to define an upstream section in nginx for each application's backend server(s), and then a server section that passes traffic to the upstream.
Here's a very simple example I used to provide a virtual host, localhost, that redirected to a virtual machine running on VirtualBox. I used localhost, but the only requirement is that your browser requests the host by a name matching the server_name setting in the server block of the nginx config.
upstream apache {
    server 192.168.70.1:1025;
}

server {
    server_name localhost;

    location / {
        proxy_pass http://apache;
    }
}
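Applied back to the original question, mapping domain names follows the same idea, except Passenger plays the role of the upstream: one server block per domain, each pointing at its app's public directory (the domain names below are hypothetical placeholders):

server {
    listen 80;
    server_name uvarsity.example.com;   # hypothetical domain
    root /home/uvarsity/public;         # Passenger serves the app rooted at its public dir
    passenger_enabled on;
    rails_env production;
}

server {
    listen 80;
    server_name amaravati.example.com;  # hypothetical domain
    root /home/amaravati/public;
    passenger_enabled on;
    rails_env production;
}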