I am using the nginx 1.17.10.1 Unicorn build from http://nginx-win.ecsds.eu/ and the Apache/2.4.43 build from Apache Lounge on Windows Server 2012 R2.
Nginx serves static files and proxies requests for PHP scripts to Apache. Everything was fine until recently.
About twice a day, for no apparent reason, the websites stop responding. Memory/CPU/network usage is fine. Apache starts logging lines like
XX.XX.XX.XX 0ms [01/Jul/2020:05:05:20 -0700] "-" 408 - "-" "-"
for each request.
Nginx log shows
2020/07/01 06:04:54 [error] 11800#12192: *5002230 WSARecv() failed (10053: An established connection was aborted by the software in your host machine) while reading response header from upstream, client: YY.YY.YY.YY, server: example.com, request: "GET /the/url/here HTTP/2.0", upstream: "http://XX.XX.XX.XX:8181/the/url/here", host: "example.com"
A server reboot doesn't help. I can connect to the backend directly and it serves the response without any problem.
The only way I could resolve the problem was to switch HTTP/2 off in the nginx configuration.
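For reference, the change looked roughly like this — a minimal sketch of the listen directives only, with placeholder server name and certificate paths:

server {
    # the workaround: drop the "http2" parameter from the listen directive
    # listen 443 ssl http2;   # HTTP/2 enabled (problematic in this setup)
    listen 443 ssl;           # HTTP/1.1 only
    server_name example.com;
    # certificate paths are placeholders
    ssl_certificate     conf/example.com.crt;
    ssl_certificate_key conf/example.com.key;
}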
So what can cause this behavior?
Related
I have a web app running on Heroku and a domain managed by 1und1 (the German arm of domain registrar 1and1). To make the app available via "example.com" I did the following:
Created www.example.com subdomain in 1und1.
Attached it to www.example.com.herokudns.com as described in Heroku's guides (CNAME www.example.com.herokudns.com).
Ordered SSL certs from 1und1 and used them to set up HTTPS on the Heroku side.
Set up an HTTP redirect from example.com to https://www.example.com so that the top-level domain points to Heroku.
This all worked fine until I tried to reach the app via https://example.com - Chrome shows a "This site can’t provide a secure connection" page with ERR_SSL_PROTOCOL_ERROR.
cURL output:
#1.
curl https://example.com
curl: (35) Server aborted the SSL handshake
#2.
curl -vs example.de
Rebuilt URL to: example.de/
Trying <example.de 1und1 IP address here>...
TCP_NODELAY set
Connected to example.de (<example.de 1und1 IP address here>) port 80 (#0)
GET / HTTP/1.1
Host: example.de
User-Agent: curl/7.51.0
Accept: */*
< HTTP/1.1 302 Found
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 203
< Connection: keep-alive
< Keep-Alive: timeout=15
< Date: Tue, 11 Jul 2017 14:19:30 GMT
< Server: Apache
< Location: http://www.example.de/
...
#3.
curl -vs https://example.de
Rebuilt URL to: https://example.de/
Trying <example.de 1und1 IP address here>...
TCP_NODELAY set
Connected to example.de (<example.de 1und1 IP address here>) port 443 (#0)
Unknown SSL protocol error in connection to example.de:-9838
Curl_http_done: called premature == 1
Closing connection 0
So, the question is: how can I set up an HTTPS redirect with 1und1 and Heroku?
Answering my own question.
After spending some time googling the issue, I found this article: https://ubermotif.com/1and1-nightmare-bad-registrar-can-ruin-day. They faced the same issue. I decided to call 1und1 support (they only offer phone calls, no chat or email tickets). They told me it is an issue on their side: the GUI screwed up the settings, and they will enter the DNS settings into their database by hand.
The issue is not solved yet; I'm waiting for the DNS changes to be applied and propagated.
This type of error can be caused by the server or by your local setup. Try the following tips to fix it:
Disable QUIC Protocol
Remove or modify the hosts file, deleting entries added by bad programs or for the website you are trying to reach
Clear the SSL state by following these steps:
Start Menu > Control Panel > Network and Internet > Network and Sharing Center
Click Internet Options in the panel on the left. When the Internet Properties dialog opens, go to the Content tab and click 'Clear SSL state'.
Check that the system time matches the current time
Check the firewall to see whether the website's IP address has been blocked, and remove it if it has
Very new to haproxy and loving it, apart from a 504 issue that we're getting. The relevant log output is:
Jun 21 13:52:06 localhost haproxy[1431]: 192.168.0.2:51435 [21/Jun/2017:13:50:26.740] www-https~ beFootprints/foorprints 0/0/2/-1/100003 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 13:54:26 localhost haproxy[1431]: 192.168.0.2:51447 [21/Jun/2017:13:52:46.577] www-https~ beFootprints/foorprints 0/0/3/-1/100005 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 14:15:57 localhost haproxy[1431]: 192.168.0.1:50225 [21/Jun/2017:14:14:17.771] www-https~ beFootprints/foorprints 0/0/2/-1/100004 504 195 - - sH-- 3/3/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 14:22:26 localhost haproxy[1431]: 192.168.0.1:50258 [21/Jun/2017:14:20:46.608] www-https~ beFootprints/foorprints 0/0/2/-1/100003 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
We are using the following timeout values in haproxy.cfg:
defaults
log global
mode http
option forwardfor
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 100000
Running on Ubuntu 16.04.2 LTS
Any help and comment very much appreciated!
The problem appears to be with the web server. Check the logs there, and you should find long-running requests.
Here's how I conclude that.
Note sH-- in your logs. This is the session state at disconnection. It's extremely valuable for troubleshooting. The values are positional and case-sensitive.
s: the server-side timeout expired while waiting for the server to send or receive data.
...so, timeout server fired, while...
H: the proxy was waiting for complete, valid response HEADERS from the server (HTTP only).
The server had not finished (perhaps not even started) returning the response headers to the proxy, but the connection was established and the request had been sent.
HAProxy returns 504 Gateway Timeout, indicating that the backend did not respond in a timely fashion.
If your backend genuinely needs longer than 100 seconds (?!), then you need to increase timeout server. Otherwise, your Apache server has a problem: it is simply too slow to respond.
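If you do decide to raise the limit, the change would look roughly like this — a sketch of your defaults section with only the server timeout changed, and the value is just an example:

defaults
    log global
    mode http
    option forwardfor
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client  50000
    # raise only if the backend legitimately needs this long to answer
    timeout server  300000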
I had a similar issue and found the problem was with how I had configured my backend server section.
backend no_match_backend
mode http
balance roundrobin
option forwardfor
option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
server nginx-example 192.168.0.10 check port 80
My problem was that I did not specify the port for the backend connection.
When connecting via plain HTTP it would work, but SSL is terminated on my HAProxy, so requests arriving on 443 were forwarded to the backends on 443 as well.
Since the backends cannot correctly negotiate an SSL session with HAProxy, the gateway times out.
I needed to force unencrypted communication to the backends.
backend no_match_backend
mode http
balance roundrobin
option forwardfor
option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
server nginx-example 192.168.0.10:80 check port 80
The change might be hard to spot: server nginx-example 192.168.0.10 check port 80 now reads server nginx-example 192.168.0.10:80 check port 80, with :80 added after the IP.
This problem was made more complicated by my backend servers having an SSL redirect configured, so all requests arriving as HTTP were redirected to HTTPS, which made it difficult to identify where the problem was.
It looked as though HTTPS requests were being passed correctly to the backend servers. I needed to disable this redirect on the backend servers and move it into the HAProxy config.
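For that last step, a minimal sketch of what moving the redirect into HAProxy can look like — frontend names and the certificate path are placeholders, not my exact config:

frontend www-http
    bind *:80
    # redirect plain HTTP to HTTPS at the proxy instead of on the backends
    redirect scheme https code 301 if !{ ssl_fc }

frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    default_backend no_match_backend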
I have an ASP.NET Core app running on a server behind a nginx reverse proxy.
The reverse proxy forwards xxx.mydomain.com to https://localhost:5000. If I use Azure AD for authentication, I get a 502 Bad Gateway after the sign-in procedure. The callback path (/signin-oidc) seems correct, and I added the full address to the Azure portal.
EDIT:
I was able to get the nginx log from the server and I get the following error:
2017/03/05 22:13:20 [error] 20059#20059: *635 upstream sent too big header while reading response header from upstream, client: xx.xx.xxx.xxx, server: xxx.mydomain.com, request: "POST /signin-oidc HTTP/1.1", upstream: "https://192.168.3.20:5566/signin-oidc", host: "xxx.mydomain.com", referrer: "https://login.microsoftonline.com/5712e004-887f-4c52-8fa1-fcc61882e0f9/oauth2/authorize?client_id=37b8827d-c501-4b03-b86a-7eb69ddf9a8d&redirect_uri=https%3A%2F%2...ch%2Fsignin-oidc&response_type=code%20id_token&scope=openid%20profile&response_mode=form_post&nonce=636243452000653500.NzRjYmY2ZTMtOTcyZS00N2FlLTg5NGQtMTYzMDJi..."
As suggested in many other posts, I tried updating the buffer sizes etc., but none of that worked.
I am out of ideas where to look. Any ideas?
To answer my own question: it was the buffer size setting in the nginx reverse proxy.
The problem was that I was running this on my Synology, and after every reboot the nginx settings are reset. So what I ended up doing was write a small bash script that runs after reboot, copies back my edited settings, and restarts the reverse proxy.
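Roughly, the script is something along these lines — a sketch only, with a placeholder source path for wherever you keep the edited copy of the settings:

#!/bin/sh
# Sketch: restore customized nginx settings after a Synology reboot.
# The source path below is a placeholder, not the exact one I used.
cp /volume1/backup/nginx/custom.conf /usr/local/etc/nginx/sites-enabled/custom.conf
# Restart the reverse proxy so the restored settings take effect.
synoservicectl --restart nginx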
I had the same issue with a Synology NAS as the reverse proxy for the application using Azure AD.
What I did:
Created a file under /usr/local/etc/nginx/sites-enabled named custom.conf with the following content:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
Values should be adjusted if needed. These worked fine for me with Azure AD.
All files under that directory are loaded by nginx.
Simply restart the nginx service using:
synoservicectl --restart nginx
The following errors are being logged in our Apache proxy logs while processing requests to the Tomcat server:
(103)Software caused connection abort: proxy: pass request body failed
proxy: pass request body failed
We have an Apache reverse proxy which serves requests for clients from our Tomcat server. Sometimes a request through the proxy returns 502 with the above error. There are no errors in the Tomcat server logs correlated with the above errors in the proxy. Also, the requests didn't time out, since some of the failing requests have a response time of 1 second and our default timeout is 120 seconds.
We've added ProxyBadHeader Ignore to our httpd configuration [Ref: 502 Proxy Error / Uploading from Apache (mod_proxy) to Tomcat 7] and still didn't see any errors in our Tomcat logs.
Has anyone seen this issue before?
We recently had this issue after upgrading one of our machines from Tomcat 6 to 7. Someone forgot to replace the default apache-tomcat/conf/tomcat-users.xml file with our standard one, so the server was checking against the wrong password. Interestingly, this results in the 502 error you saw above. It can be tracked down with some decent logging to determine that it is actually an auth problem.
I have a VPS with CentOS and an Apache server.
But I want to run my Node.js applications too. I am using Sails.js.
The Sails application is trying to listen on port 80 of the specified host.
Here is the error (after running sails lift):
debug: Starting server in /var/www/user/data/nodeprojects/projectname...
info - socket.io started
debug: Restricting access to host: projectname.com
warn - error raised: Error: listen EADDRINUSE
warn:
warn: Server doesn't seem to be starting.
warn: Perhaps something else is already running on port 80 with hostname projectname.com?
What is the problem? Can I run both Apache and Node.js servers on one machine on the same port (80)?
No, you cannot.
When a server process opens a TCP port to answer requests, it has exclusive use of that port. So, you cannot run both SailsJS and Apache servers on the same port.
Having said that, you can do lots of interesting things with Apache, such as proxying specific requests to other servers running on different ports.
A typical setup would have Apache on port 80 and SailsJS on port 8000 (or some other available port) where Apache would forward requests to certain URLs to SailsJS and then forward the reply from SailsJS back to the browser.
See either configuring Apache on Mountain Lion proxying to Node.js or http://thatextramile.be/blog/2012/01/hosting-a-node-js-site-through-apache for example implementations of this approach.
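A minimal sketch of the Apache side of such a setup, assuming Sails listens on 127.0.0.1:8000 and mod_proxy / mod_proxy_http are enabled (the domain and port are placeholders):

<VirtualHost *:80>
    ServerName projectname.com

    # keep the original Host header so the app's host restriction still matches
    ProxyPreserveHost On

    # forward everything to the Sails app and map its responses back
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>

On the Sails side you would then configure the app to listen on 8000 (for example via the port setting in its config) instead of 80.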
You cannot use the same port for different applications. Node.js can use any open port. What you need to do is set up port forwarding (a reverse proxy) for your app. :)