Multiple virtual hosts on my workstation just stopped working. After an update of nginx to v1.10.2 and a change to the Passenger locations.ini pointer in nginx.conf, I'm getting 403 Forbidden errors on all of these vhosts. I have no clue what to look at.
passenger_root /usr/local/opt/passenger/libexec/src/ruby_supportlib/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/ruby;
But running "which ruby" reports:
/Users/rich/.rbenv/shims/ruby
So I changed that directive to the rbenv shim above, restarted nginx, and still got the same result. The error reported:
2017/10/23 19:51:36 [error] 10863#0: *61 directory index of "/Library/WebServer/Documents/alpha/public/" is forbidden, client: 127.0.0.1, server: alpha.local, request: "GET / HTTP/1.1", host: "alpha.local"
Permissions haven't changed at all, and they are relaxed anyway (this machine is only used by me):
drwxrwxrwx 20 rich admin 680B Jun 17 01:52 HQ
And inside HQ:
drwxr-xr-x 8 rich admin 272B Jul 12 17:32 public
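One thing worth ruling out: nginx needs execute (search) permission on every directory along the path to public/, not just the leaf directory. A diagnostic sketch (not a fix) that walks up the tree and prints each component's mode, using the path from the error message:

```shell
# Walk from the docroot up to /, printing the mode of each directory.
# Any component the nginx worker user cannot search (the x bit) is
# enough to produce a 403, no matter how open the leaf directory is.
p=/Library/WebServer/Documents/alpha/public
while [ "$p" != "/" ]; do
  ls -ld "$p" 2>/dev/null || echo "cannot stat $p"
  p=$(dirname "$p")
done
```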
nginx.conf:
user root admin;
worker_processes 8;
error_log /usr/local/var/log/error.log debug;
pid /usr/local/var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # index index.html index.erb;
    access_log /usr/local/var/log/access.log;
    passenger_root /usr/local/Cellar/passenger/5.1.11/libexec/src/ruby_supportlib/phusion_passenger/locations.ini;
    passenger_ruby /Users/rich/.rbenv/shims/ruby;
    passenger_friendly_error_pages on;
    include /usr/local/etc/nginx/servers/*; # see below
}
server {
    listen 80;
    server_name alpha.local;

    include /usr/local/etc/nginx/mime.types;
    access_log /usr/local/var/log/access_alpha.log;
    error_log /usr/local/var/log/error_alpha.log debug;
    error_page 404 /404.html;

    root /Library/WebServer/Documents/alpha/public;
    passenger_enabled on;
    passenger_base_uri /;

    location / {
        autoindex off;
        # try_files $uri $uri/ /index.html?$query_string;
        # index /;
        # allow 192.168.1.0/24;
    }

    location = /img/favicon.ico { access_log off; }
}
nginx error log:
2017/10/24 15:35:39 [error] 10868#0: *86 directory index of "/Library/WebServer/Documents/alpha/public/" is forbidden, client: 127.0.0.1, server: alpha.local, request: "GET / HTTP/1.1", host: "alpha.local"
Odd stuff. Any ideas on how to get all this serving properly again would be appreciated. It seems permissions were completely thrown off, and I'm not sure whether the nginx update caused it. Cheers
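One quick sanity check (a sketch, assuming nginx is on the PATH): "directory index ... is forbidden" is what nginx says when it serves a directory statically with no index file and autoindex off. If Passenger is not actually compiled into the running binary, every request falls through to static serving and produces exactly this 403:

```shell
# If this prints nothing, the running nginx was built without the
# Passenger module, so passenger_enabled never takes effect and nginx
# falls back to serving public/ as a plain directory.
nginx -V 2>&1 | grep -i -o passenger
```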
==============
Update 2: (alpha and HQ refer to the same vhost; I changed the name). Also, I have replicated this on a completely separate box. A Homebrew update trips over nginx's dependency on openssl, which wants to update to version 1.1. I've posted an issue on GitHub there. While I have no proof, it's the only feedback I have that shows a failed upgrade (brew is still serving nginx 1.12.0 instead of 1.12.2), so I am thinking that is the cause.
https://github.com/Homebrew/homebrew-core/issues/19810
Fixed. It was a Homebrew issue: the formula chooses which version of OpenSSL to use (openssl vs. openssl@1.1) conditionally on whether Passenger is installed.
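For anyone in the same spot, a possible recovery path (a sketch, assuming Homebrew and these formula names) is to pull the fixed formulae and rebuild both packages so they link against the same OpenSSL, then confirm the version actually moved:

```shell
# Rebuild passenger and nginx against the now-consistent OpenSSL
brew update
brew reinstall passenger nginx

# Confirm the upgrade actually went through (e.g. 1.12.2, not 1.12.0)
nginx -v
```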
Related
I have a full-stack site designed to run on port 80, with the Node backend using port 5000. This site runs without fail on a Windows 10 machine.
When I copy it to a domain server running Windows Server 2012 R2, I cannot get it to function on port 80, although port 90 works with no problems.
IIS is turned off, and netstat -aon shows that Node is the PID using port 80. I then tried building the page and serving it with NGINX and am getting the same results, except that NGINX is now the process using port 80.
Here is the code I believe to be relevant, though I am uncertain what to do with it.
My .env file for react-app is simple:
PORT=80
When switching to port 90 it functions successfully.
If I attempt to run through NGINX (with which I am unfamiliar) using the following configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # include mime.types;
    # default_type application/octet-stream;
    # sendfile on;
    # keepalive_timeout 65;
    # gzip on;

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://localhost:90;
            root C:\intranet\New_Test\frontend\build;
            index $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://localhost:5000;
        }
    }
}
I still get nothing.
I have also tried it without forwarding port 80 to port 90 with the same results.
Do I have an incorrect configuration somewhere? netstat also says that SYSTEM is using port 80 for some reason, but SYSTEM is also holding a number of other HTTP ports.
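For what it's worth, on Windows the SYSTEM process (PID 4) holding port 80 is usually the HTTP.SYS kernel driver rather than a normal application, and nothing else can bind the port while it is listening. A hedged sketch for tracking that down from an elevated command prompt (the PID and dependent services will vary per machine):

```bat
rem Find which PID owns port 80, then map that PID to a process name
netstat -aon | findstr :80
tasklist /FI "PID eq 4"

rem If PID 4 (SYSTEM) holds the port, HTTP.SYS is listening; stopping
rem the http service releases port 80 (Windows will list and ask about
rem any dependent services before stopping them)
net stop http
```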
** Edit **
I have since updated my nginx.conf file to this:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # include mime.types;
    # default_type application/octet-stream;
    # sendfile on;
    # keepalive_timeout 65;
    # gzip on;
    include mime.types;

    server {
        listen 90;
        server_name localhost;

        root html;
        index /index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://localhost:5000;
        }
    }
}
This works fine to display the site on port 90, but for whatever reason port 80 is inaccessible to me on this machine.
Switched to a different model; posting this answer to close the question. I went with nssm (https://alex.domenici.net/archive/deploying-a-node-js-application-on-windows-iis-using-a-reverse-proxy - step 5), hosting the built React portion through IIS and using NSSM to run Node as a service. It works well on the local machine if I set my REACT_APP_HOST to localhost. Now I'm experimenting with pathing so that the server can be reached from any client, not just a page on the localhost server.
I am trying to install an SSL certificate that I obtained from GoDaddy onto my NGINX server. I am positive I have all of the paths correct, and from what I understand my server configuration is correct, but I still get the following error.
Feb 20 11:06:35 my.server.com nginx[6173]: nginx: [emerg] cannot load certificate "/etc/ssl/certs/certificate.crt": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/ssl/certs/certificate.crt','r') error:2006D002:BIO routines:BIO_new_file:system lib)
Feb 20 11:00:01 my.server.com nginx[5969]: nginx: configuration file /etc/nginx/nginx.conf test failed
Below is my SSL configuration. I have placed this into a file at the path /etc/nginx/conf.d/ssl.conf.
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name my.server.com;
    root /usr/share/nginx/html;

    ssl_certificate /etc/ssl/certs/certificate.crt;
    ssl_certificate_key /etc/ssl/private/private.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://[MY_IP_ADDRESS]:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
This looks to be a permissions issue, but I have run chown to change the owner to the root user, and I have changed the file permissions to 600 via chmod. Is this not correct? Can someone please give me some guidance on how to resolve this issue?
** UPDATE **
I did check and found that the SSL certs were not owned by the root user. I've changed all SSL files to be owned by the root user and group, and set the file permissions to 600 (I've also tried 700). I get the output below when I run sudo ls -l:
-rwx------. 1 root root 7072 Feb 20 10:41 my.server.com.chained.crt
-rwx------. 1 root root 2277 Feb 20 10:36 my.server.com.crt
-rwx------. 1 root root 4795 Feb 20 10:39 intermediate.crt
I am still getting the same error though. I've also tried both the normal cert and the full chain cert. Does anyone have an idea what is going on?
I finally solved my issue. It turns out that when I moved the files with mv, they kept the SELinux security context of their original location instead of picking up the right one for their new path, which made them unreadable to nginx. I resolved the issue by running the following command on my nginx folder:
restorecon -v -R /etc/nginx
I found this solution in another post.
Thanks for all the help!
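To make the same diagnosis quickly (a sketch; paths are from the question, and semanage is only needed when restorecon's defaults don't cover your location): the telltale sign is the SELinux label on the files, which you can inspect before and after.

```shell
# Show the SELinux context; files moved out of a home directory
# typically carry user_home_t, which the nginx (httpd_t) process
# is not allowed to read
ls -Z /etc/ssl/certs/certificate.crt

# Reset the context to the filesystem default for that path
restorecon -v /etc/ssl/certs/certificate.crt

# Equivalent recursive fix when the certs live under /etc/nginx
restorecon -v -R /etc/nginx
```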
The solution restorecon -v -R /etc/nginx works for me on RHEL 8:
Relabeled /etc/nginx/ssl/vhost/server.crt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:httpd_config_t:s0
Relabeled /etc/nginx/ssl/vhost/archive.pem from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:httpd_config_t:s0
Relabeled /etc/nginx/ssl/vhost/intermediate.crt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:httpd_config_t:s0
Then just restart nginx.
I'm trying to use Nginx to expose my Web APIs on port 80 using proxy_pass. The Web APIs are written in Node using Express and they are all running on separate port numbers.
I have locations working in the nginx.conf file when pulling static files from the root and /test, but receive a 404 error when trying to redirect to the API. The API I'm testing with runs on port 8080 and I'm able to access and test it using Postman.
This is Nginx 1.16.1 hosted on a Windows Server 2016 machine.
http {
    include mime.types;
    default_type application/octet-stream;
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost crowdtrades.com;

        # Root and /test locations are working correctly
        location / {
            root c:/CrowdTrades;
            index index.html index.htm;
        }

        location /test/ {
            root c:/CrowdTrades/test;
            index test.html;
        }

        # Test2: this is the location I'm not able to get working
        location /test2/ {
            proxy_set_header Host $host;
            proxy_pass http://localhost:8080/api/signup/;
        }
    }
}
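As an aside, one subtlety in the /test2/ block above (my reading of nginx's proxy_pass semantics; it may or may not be the cause of the 404): because the proxy_pass URL ends with a URI part (/api/signup/), nginx replaces the matched location prefix rather than forwarding the original path unchanged.

```nginx
location /test2/ {
    # /test2/     -> http://localhost:8080/api/signup/
    # /test2/foo  -> http://localhost:8080/api/signup/foo
    proxy_set_header Host $host;
    proxy_pass http://localhost:8080/api/signup/;
}

# With no URI part, the full original path is passed through instead:
location /test3/ {
    # /test3/foo -> http://localhost:8080/test3/foo
    proxy_pass http://localhost:8080;
}
```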
So after trying all kinds of configuration changes and restarting Nginx each time, I gave up for the night. My cloud VM is scheduled to shut down at night, and when I picked this up in the morning it was working. I have no idea why it works now, but restarting the server seemed to help.
I have a website that throws a "Corrupted Content Error" when logging into a user account. The error occurs in Firefox, and only when you log in with a clean cache. After you log in once you get the error, but you can skip past it. If you then log out and log back in, the problem doesn't occur again until you clear the history. After clearing the history and all the cache, going back to the site, and logging in again, the same error occurs. I have seen a different error message in Microsoft Edge in the same scenario, and in Chrome, but I don't remember what they said (I think Edge said the site could not be reached, and Chrome said something else). I have been trying to figure out what on earth is going on and looking for solutions, but I'm lost.
I saw one suggestion online that said this can be solved by clearing cookies, but I don't keep any cookies on this site. And in my situation it is a fresh history (or clearing the history) that leads to the problem, so that kind of solution seems to be the opposite of my case.
I don't know if it is my SSL certificate, which I bought from Namecheap. I don't know if it is something in my nginx config; in there I force all HTTP to redirect to HTTPS. I don't know if it is something in my code itself. My server code is written in CakePHP 2.
Any ideas? I really need to fix this. I am trying to launch this site soon.
Here is my nginx code:
server {
    listen 80;
    listen [::]:80;
    server_name my_server_name.com;
    return 301 https://$server_name/web/$request_uri/;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm index.php;

    ssl on;
    ssl_certificate /path_to_ssl/cert_chain.crt;
    ssl_certificate_key /path_to_ssl/my_server_name.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /path_to_ssl_certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;

    add_header Strict-Transport-Security max-age=15768000;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

    location /web {
        alias /usr/share/nginx/html/web/app/webroot;
        try_files $uri $uri/ /web/app/webroot/index.php;
    }

    server_name my_server_name.com;

    location = / {
        return 301 https://$server_name/web/$request_uri/;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
        #include fastcgi_params;
        #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~ /.well-known {
        allow all;
    }

    location ~ /.sandbox {
    }

    location ~* \.(?:manifest|appcache|html?|xml|json)$ {
        expires -1;
    }
}
Update:
I have found another clue to the problem, but have not been able to fix it. In my debug.log I found that I was getting this error every time the "Corrupted Content Error" occurred, and at no other times. It is a notice from the CakePHP AuthComponent:
2017-12-31 09:07:28 Notice: Notice (8): Undefined index: element in [/usr/share/nginx/html/web/app/Controller/Component/AuthComponent.php, line 827]
Trace:
ErrorHandler::handleError() - CORE/Cake/Error/ErrorHandler.php, line 230
AuthComponent::flash() - APP/Controller/Component/AuthComponent.php, line 827
AuthComponent::_unauthenticated() - APP/Controller/Component/AuthComponent.php, line 362
AuthComponent::startup() - APP/Controller/Component/AuthComponent.php, line 304
ObjectCollection::trigger() - CORE/Cake/Utility/ObjectCollection.php, line 128
CakeEventManager::dispatch() - CORE/Cake/Event/CakeEventManager.php, line 243
Controller::startupProcess() - CORE/Cake/Controller/Controller.php, line 678
Dispatcher::_invoke() - CORE/Cake/Routing/Dispatcher.php, line 189
Dispatcher::dispatch() - CORE/Cake/Routing/Dispatcher.php, line 167
require - APP/webroot/index.php, line 110
[main] - ROOT/index.php, line 43
This says that $this->flash has no 'element' index on this line:
$this->Session->setFlash($message, $this->flash['element'], $this->flash['params'], $this->flash['key']);
I commented that line out and replaced it with this:
$this->Session->setFlash(
    $message,
    array_key_exists('element', $this->flash) ? $this->flash['element'] : 'default',
    array_key_exists('params', $this->flash) ? $this->flash['params'] : array(),
    array_key_exists('key', $this->flash) ? $this->flash['key'] : 'flash'
);
to see if that would fix it.
I can still get the corrupted content error if I log out, clear all the cache, reload everything, and log back in, but now no new error is logged in debug.log after that change. So that's my clue, but I still haven't managed to fix it.
Update: My Request & Response Headers
Here are the Request & Response Headers that firefox shows me when I get this error.
Request Header:
Accept: text/html,application/xhtml+xm…plication/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Content-Length: 89
Content-Type: application/x-www-form-urlencoded
Cookie: __cfduid=dc953b88930da52f0ae3f…9-3f87-477c-b65e-380b2034aa54
Host: my_website_url
Referer: https://my_website_url.com/web/users/login
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linu…) Gecko/20100101 Firefox/57.0
And Response:
cf-ray: 3d76d2695c6655c4-ORD
content-type: text/html; charset=UTF-8
date: Wed, 03 Jan 2018 14:59:26 GMT
location: ///
server: cloudflare
set-cookie: CAKEPHP=le4cq2kpkvjqvt5lvcqel8…400; path=/; secure; HttpOnly
strict-transport-security: max-age=15768000
X-Firefox-Spdy: h2
Although you believe you are not using any cookies, the authentication itself probably stores one. If clearing the cache makes this problem happen, that means accessing one resource is problematic, and the problem is later masked by another call.
Some tips for debugging your config:
Since your SSL stapling configuration is incomplete, try removing both the ssl_stapling and ssl_stapling_verify directives.
Remove the http2 parameter from your listen directives.
The corrupted content can come from an infinite loop, and there are two 301 redirects in your config. Try accessing the https site directly, both via server_name and via my_server_name.com, since you have these two names. If we still don't find the cause, remove the redirect in the :80 listener and browse your site without any SSL at all.
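One concrete detail ties the config to the location: /// response header shown above: $request_uri already begins with a slash (and includes any query string), so return 301 https://$server_name/web/$request_uri/; produces URLs like /web//users/login/ and can put a stray slash after the query string. A cleaner sketch of the same redirect:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name my_server_name.com;
    # $request_uri carries its own leading "/", so don't add extra
    # slashes around it (and don't append one after the query string)
    return 301 https://$server_name/web$request_uri;
}
```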
I am trying to password-protect the default server in my Nginx config. However, no username/password dialog is shown when I visit the site; Nginx returns the content as usual. Here is the complete configuration:
worker_processes 1;

events
{
    multi_accept on;
}

http
{
    include mime.types;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 30;
    tcp_nodelay on;
    gzip on;

    # Set path for Maxmind GeoLite database
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    # Get the header set by the load balancer
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;
    real_ip_recursive on;

    server {
        listen 80;
        server_name sub.domain.com;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd/sub.domain.com.htpasswd;

        expires -1;
        access_log /var/log/nginx/sub.domain.com.access default;
        error_log /var/log/nginx/sub.domain.com.error debug;

        location / {
            return 200 '{hello}';
        }
    }
}
Interestingly, when I tried using an invalid file path as the value of auth_basic_user_file, the configtest still passed. This should not be the case.
Here's the Nginx and system info:
[root@ip nginx]# nginx -v
nginx version: nginx/1.8.0
[root@ip nginx]# uname -a
Linux 4.1.7-15.23.amzn1.x86_64 #1 SMP Mon Sep 14 23:20:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
We are using the Nginx RPM available through yum.
You need to add auth_basic and auth_basic_user_file inside your location block instead of the server block:
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd/sub.domain.com.htpasswd;
    return 200 '{hello}';
}
Did you try reloading or stopping-and-starting your nginx after basic auth was added to the config? To make the new settings take effect, it is necessary to reload nginx with something like:
sudo -i service nginx reload
Also I would double check the URLs that are under your tests.
(Once I tried to test Nginx basic auth in an Nginx proxy configuration by accessing the actual URL of the resource behind the Nginx proxy, rather than the URL of Nginx itself.)
P.S.
Using an invalid file path as the value of auth_basic_user_file still doesn't cause the configtest to fail, as of 2018.
Here's my version of Nginx:
nginx version: nginx/1.10.2
An invalid file path does, however, cause the basic auth check itself to fail, resulting in a 403 Forbidden HTTP response after credentials are provided.
In my case, adding the directives to /etc/nginx/sites-available/default worked, whereas adding the directives to /etc/nginx/nginx.conf did not.
Of course this only happens if you have this in your nginx.conf file:
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
The config is simple (put it under location for specific part of your website, or under server for your whole website):
server {
    location /foo/ {
        auth_basic "This part of website is restricted";
        auth_basic_user_file /etc/apache2/.htpasswd;
    }
}
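As a final note, the file referenced by auth_basic_user_file is easy to generate even without Apache's htpasswd tool; openssl can produce a compatible entry. A small sketch (the user alice, password s3cret, and the /tmp path are made-up placeholders):

```shell
# Generate an htpasswd line using the Apache MD5 (apr1) scheme,
# which nginx's auth_basic understands, and write it to a file
printf 'alice:%s\n' "$(openssl passwd -apr1 s3cret)" > /tmp/htpasswd
cat /tmp/htpasswd
```

Point auth_basic_user_file at that file (and make sure the nginx worker user can read it).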