We have a REST API module deployed on AWS. We have set up a load balancer as the first point of contact, which in turn forwards requests to an nginx server that, depending on the request, routes it to either Apache or Tomcat.
But after some time the communication between nginx and Tomcat hangs, with the following info message in the nginx log:
2015/07/09 07:07:43 [info] 29889#0: *40458 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while connecting to upstream
After this, any REST API call to Tomcat fails with a 504 in the browser, but the communication between nginx and Apache continues to work fine.
We have made all of the following changes suggested in various Stack Overflow threads, such as:
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
or
fastcgi_ignore_client_abort on;
Can you please suggest what could be wrong on our side?
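For reference, directives like these only take effect in the right context; a minimal sketch of where they would sit (the upstream name below is a placeholder, not our actual config):

```nginx
# Sketch only: proxy timeouts applied to the location that fronts Tomcat.
# "tomcat_backend" is a hypothetical upstream name.
location /api/ {
    proxy_pass http://tomcat_backend;
    proxy_connect_timeout 600s;
    proxy_send_timeout    600s;
    proxy_read_timeout    600s;
}
```

Note that the fastcgi_* directives only apply to locations using fastcgi_pass, so they would have no effect on a proxy_pass route to Tomcat.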
I have an Express.js server that authenticates login requests from a front-end app built in Svelte.
The front-end app runs on frontenddomain.com and the Express.js server runs on backenddomain.com.
Here is my login POST route that authenticates and sets the cookies:
app.post('/login', (req, res) => {
    // check the DB, find the user, create a JWT,
    // and put it in a cookie to send to the browser
    res.cookie("accesstoken", accessToken)
    res.cookie("refreshtoken", refreshToken)
    res.send(...)
})
This server code is deployed to an Ubuntu server with nginx running as a reverse proxy. Here is my nginx server block configuration:
server {
    ...
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_cookie_domain localhost .frontenddomain.com;
        proxy_cookie_domain ~^(.+)(Domain=frontenddomain.com)(.+)$ "$1 Domain=.frontenddomain.com $3";
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name backenddomain.com www.backenddomain.com;
    root /var/www/backenddomain.com;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
When I run the server and the Svelte front-end app on my local machine, everything works: the customer provides credentials, the cookie is sent to the browser, and on inspecting Google dev tools I can confirm that the cookies have been set correctly in the client's browser.
When I deploy my Express.js server to Ubuntu (20.04) and use pm2 to run it, it starts and I can view all my console.log output. My front-end app runs; I go to my login page, enter credentials, and click submit. The app logs me in (because the credentials are correct and user is set to true in the front-end app's localStorage), but NO COOKIES are set in the browser.
I have read the nginx docs and material from various sites on how to set the reverse-proxy cookie domain, but I have been unable to fix the problem: the server issues the cookies, but my proxy server is not passing them on to the browser.
These questions about reverse proxies and cookies come up, the posters come back and post vague answers to their own questions, and there are no other answers. It seems there are not enough technical people out there with knowledge of this issue.
My location block already has proxy_cookie_domain localhost .frontenddomain.com;.
How do you set up an nginx reverse proxy so that cookies set by the upstream server are passed on to the browser?
So it wasn't related to the nginx block configuration after all; it was the cookie settings. For cross-site cookies to work, they have to be set with sameSite: "none" and the secure flag. Make sure that both your back end and front end are using domain names (IP addresses are not allowed in the latest draft as of this writing).
You also need both the front and back domains to be secured with SSL (https).
The ufw firewall on your nginx host needs to allow HTTPS.
Cookie settings:
res.cookie("name", "value", { sameSite: "none", secure : true })
Restart nginx after updating the server and your nginx conf in sites-available, and it should work.
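To make the fix concrete, this is roughly the Set-Cookie header such a res.cookie(...) call translates to on the wire (a plain-Node sketch; serializeCookie is a hypothetical helper written for illustration, not part of Express):

```javascript
// Hypothetical helper mimicking how a cookie with the fixed flags is serialized.
function serializeCookie(name, value, opts = {}) {
    const parts = [`${name}=${encodeURIComponent(value)}`];
    if (opts.domain) parts.push(`Domain=${opts.domain}`);    // e.g. ".frontenddomain.com"
    parts.push(`Path=${opts.path || "/"}`);
    if (opts.secure) parts.push("Secure");                   // required alongside SameSite=None
    if (opts.sameSite) parts.push(`SameSite=${opts.sameSite}`);
    return parts.join("; ");
}

console.log(serializeCookie("accesstoken", "abc123", { sameSite: "None", secure: true }));
// → accesstoken=abc123; Path=/; Secure; SameSite=None
```

Without both the Secure and SameSite=None attributes in that header, the browser silently drops the cookie in the cross-site case, which matches the "no cookies set" symptom above.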
I have a backend server at IP 192.168.1.10, port 8090. nginx can reach it via, for example, 10.0.0.10, which is then NATed to 192.168.1.10:
nginx ----> 10.0.0.10 ----> 192.168.1.10
This works well in general, but we are facing a specific problem that seems related to redirects.
The client goes to https://nginx/gateway and the connection is established. The backend app then requires authentication, so the backend server sends a 302 redirect back to the client. What I notice in the client's web browser is that the address changes and the client tries to connect to http://192.168.1.10:8080 (the auth service is located on a different port). The page fails to load because the client has no route to this destination.
My expectation was that nginx would take care of this redirect internally and simply pass the content to the client under its own https://nginx address.
The config is simple:
location / {
    proxy_pass "http://10.0.0.10:8090";
    proxy_set_header Host $host:$server_port;
}
Any suggestions? Many thanks!
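One thing worth checking (an assumption on my part, not something from the config above): nginx's proxy_redirect directive rewrites the Location header of upstream 3xx responses, so a mapping along these lines could keep the client on the public address:

```nginx
location / {
    proxy_pass "http://10.0.0.10:8090";
    proxy_set_header Host $host:$server_port;
    # Rewrite redirects that point at the internal auth address
    # (the source URL here is a guess based on the symptom described):
    proxy_redirect http://192.168.1.10:8080/ https://$host/;
}
```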
How and why do I get this error? I have a Boost LG phone and I had never used nginx or Ubuntu, or even heard of them, until recently. Is someone accessing or running these on my phone? Please help, and thanks a million.
You will see a 502 Bad Gateway error when:
Nginx is running as a proxy for an Apache web server.
Nginx is running with the PHP-FPM daemon.
Nginx is running as a gateway for other services.
The buffering/timeout configuration is bad.
Quick solutions for the 502 Bad Gateway error:
1) Increase buffers and timeouts inside the http block:
http {
    ...
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    ...
}
2) Ensure your php-fpm service is listening on whatever you've configured in nginx; it can be either of these two options:
Edit the www.conf file (on CentOS it is located at /etc/php-fpm.d/www.conf) and try one of these two options:
listen = /var/run/php5-fpm.sock
or
listen = 127.0.0.1:9000
After that, just restart the php-fpm service.
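For reference, the nginx side has to point fastcgi_pass at the same address; a sketch (the location and param lines are typical boilerplate, and the socket path assumes the first listen option above):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Must match the php-fpm "listen" value chosen above, either:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    # or: fastcgi_pass 127.0.0.1:9000;
}
```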
3) Disable the APC cache if it is used, and try Xcache instead; APC can cause this kind of issue in particular environments by causing segmentation faults.
4) I recently found another cause of 502 Bad Gateway error, check it out here: php5-fpm.sock failed (13: Permission denied) error
Below is my nginx configuration file for Jenkins. Most of it is exactly as described in the documentation.
Config file:
upstream app_server {
    server 127.0.0.1:8080 fail_timeout=0;
}

server {
    listen 80;
    listen [::]:80 default ipv6only=on;
    server_name sub.mydomain.net;

    location ^~ /jenkins/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server;
            break;
        }

        auth_basic "[....] Please confirm identity...";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
When navigating to http://sub.mydomain.net/jenkins I get prompted for basic auth with Server says: [....] Please confirm identity....
This is correct, but as soon as I enter the proper credentials I get PROMPTED AGAIN for basic auth, this time with: Server says: Jenkins.
Where is this second hidden basic_auth coming from? It's not making any sense to me.
Hitting CANCEL on the first prompt, I correctly receive a 401 Authorization Required error.
Hitting CANCEL on the second basic auth prompt ("Server says: Jenkins") I get:
HTTP ERROR 401
Problem accessing /jenkins/. Reason:
Invalid password/token for user: _____
Powered by Jetty://
Does anyone know what's possibly going on?
Found the solution to my issue by searching for Nginx used as a reverse proxy for any other application with basic_auth.
Solution was the answer found here:
https://serverfault.com/questions/511846/basic-auth-for-a-tomcat-app-jira-with-nginx-as-reverse-proxy
The line I was missing from my nginx configuration was:
# Don't forward auth to Tomcat
proxy_set_header Authorization "";
By default, it appears that after basic auth nginx forwards the Authorization header to Jenkins, and this is what was causing my issue: Jenkins receives the forwarded auth header and then thinks it needs to authenticate it itself.
If we configure our reverse proxy not to forward the Authorization header, as shown above, then everything works as it should: nginx prompts for basic_auth and, after successful auth, the header is explicitly cleared before the request is forwarded upstream.
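Put together, the relevant part of the location block would look something like this (a sketch combining my original config with the fix; not verbatim from the linked answer):

```nginx
location ^~ /jenkins/ {
    auth_basic "[....] Please confirm identity...";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Don't forward nginx's own basic-auth credentials to Jenkins:
    proxy_set_header Authorization "";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://app_server;
}
```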
I had this issue as well; in my case it was caused by having security enabled in Jenkins itself, and disabling security resolved the issue.
According to their docs:
If you do access control in Apache, do not enable security in Jenkins, as those two things will interfere with each other.
https://wiki.jenkins-ci.org/display/JENKINS/Apache+frontend+for+security
What seems to be happening is that nginx forwards the auth_basic credentials to Jenkins, which then attempts to perform basic auth itself in response. I have not yet found a satisfying solution to the issue.
I would like to host multiple Rails applications using nginx + Unicorn; they are currently served using Apache + Passenger with RailsBaseURI. The only reason for the switch is that Apache needs to be reloaded after every new application is deployed. I would like to know whether adding a new application is possible with Unicorn + nginx without reloading the server.
I want to deploy applications in subfolders like host-name/nginx-app1 and host-name/nginx-app2, while host-name points to a basic HTML page.
I read somewhere about using sockets to handle individual applications and would appreciate some help implementing this. In my case each application is deployed only once, with no further iterations. When I deploy a new application, there should be no downtime for the currently running applications.
EDIT
The config/unicorn.rb file inside the application:
working_directory "/home/ubuntu/application_1"
pid "/home/ubuntu/application_1/pids/unicorn.pid"
stderr_path "/home/ubuntu/application_1/log/unicorn.log"
stdout_path "/home/ubuntu/application_1/log/unicorn.log"
listen "/tmp/unicorn.todo.sock"
worker_processes 2
timeout 30
One way to go about it is to host the Rails applications on Unix domain sockets (UDS) and have nginx read from each socket via multiple upstream blocks. I'm writing the logic ad hoc, so pardon any syntax errors.
e.g. Have a look at this.
http://projects.puppetlabs.com/projects/1/wiki/using_unicorn
You can host app1 using an app1.conf for Unicorn, which will have the line:
listen '/var/run/app1.sock', :backlog => 512
and have multiple nginx upstreams like
upstream app1 {
    server unix:/var/run/app1.sock fail_timeout=0;
}

upstream app2 {
    server unix:/var/run/app2.sock fail_timeout=0;
}

....
and route requests (proxy_pass) from a server block based on the location or Host header:
server {
    listen 80;

    location /app1 {
        proxy_pass http://app1;
        proxy_redirect off;
    }

    location /app2 {
        proxy_pass http://app2;
        proxy_redirect off;
    }
}