I am using Laravel Valet in my development environment, and I came across this error today when sending a POST request with Livewire.
~/.config/valet/Log/nginx-error.log:
2020/05/17 10:44:27 [error] 3611#0: *1 upstream sent too big header while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /livewire/message/autocomplete.users HTTP/1.1", upstream: "fastcgi://unix:/Users/macuser/.config/valet/valet.sock:", host: "blog.test", referrer: "http://blog.test/dashboard"
To fix it, change the following config files in your environment:
Create the file ~/.config/valet/Nginx/all.conf with the following contents:
proxy_buffer_size 4096k;
proxy_buffers 128 4096k;
proxy_busy_buffers_size 4096k;
Append the following to /usr/local/etc/nginx/fastcgi_params:
fastcgi_buffer_size 4096k;
fastcgi_buffers 128 4096k;
fastcgi_busy_buffers_size 4096k;
After that, run valet restart.
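To confirm that nginx actually picked up the new buffer settings before retrying the request, something like this should work (a sketch assuming Valet's stock Homebrew nginx; paths may differ on your machine):

valet restart
sudo nginx -t                                  # validate the generated nginx configuration
tail -f ~/.config/valet/Log/nginx-error.log    # watch for the error while retrying the POST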
The solution was originally posted by colbyalbo.
Related
I have set up API Umbrella on my Ubuntu 20.04 cloud VM.
When I try to access it, I get a 502 Bad Gateway.
Clearly the routing is failing for some reason.
The output of /var/log/api-umbrella/nginx/current is the following:
2022-09-01T06:08:19.57992 starting nginx...
2022-09-01T06:08:27.48168 2022/09/01 06:08:27 [error] 319#0: *13 [lua] elasticsearch_setup.lua:106: create_aliases(): failed to create elasticsearch index: Unsuccessful response: {"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already exists","index":"api-umbrella-logs-v1-2022-09"}],"type":"index_already_exists_exception","reason":"already exists","index":"api-umbrella-logs-v1-2022-09"},"status":400}, context: ngx.timer
2022-09-01T06:21:45.17055 2022/09/01 06:21:45 [warn] 318#0: *39756 using uninitialized "x_api_umbrella_request_id" variable while logging request, client: 192.241.213.X, server: mydomain.city, request: "GET / HTTP/1.1", host: "150.230.240.y:443"
2022-09-01T06:32:42.70713 2022/09/01 06:32:42 [error] 318#0: *72162 connect() failed (111: Connection refused) while connecting to upstream, client: 185.14.196.Z, server: mydomain.city, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:14009/", host: "mydomain.city"
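The last log line is the one behind the 502: nginx got connection refused (111) from its upstream on 127.0.0.1:14009, which suggests that whatever API Umbrella routes to on that port is not running. A quick way to check, sketched with standard tools (the port number comes straight from the log line above):

sudo ss -lntp | grep 14009         # is anything listening on the upstream port?
curl -v http://127.0.0.1:14009/    # does the backend answer locally, bypassing the router?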
I'm hosting a Ghost blog on DigitalOcean. My droplet is the Ghost 0.8.0 on Ubuntu 14.04 image.
Yesterday I successfully installed a TLS/SSL certificate from Let's Encrypt in order to enable HTTPS. The site was working fine then and again this morning.
Today I uploaded a new Ghost theme and restarted Ghost in order to use it. I now get a 502 Bad Gateway response when I try to access the site.
Each request to the site adds an instance of the following errors to my nginx error log.
Would someone walk me through what these two error messages mean? I'd really appreciate it.
Please note that I've substituted my actual domain name with example.com.
2016/06/16 17:28:45 [error] 8125#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 98.247.253.8, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:2368/favicon.ico", host: "example.com", referrer: "https://example.com/"
2016/06/16 17:30:14 [error] 8125#0: *18 connect() failed (111: Connection refused) while connecting to upstream, client: 98.247.253.8, server: example.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:2368/", host: "example.com"
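Both entries say the same thing: nginx tried to proxy the request to Ghost at 127.0.0.1:2368 and the connection was refused (111), which almost always means the Ghost process is no longer listening there, e.g. it crashed while restarting after the theme upload. A quick check, assuming the standard one-click layout (service name and paths may differ on your droplet):

sudo netstat -plnt | grep 2368    # is anything listening on Ghost's port?
sudo service ghost restart        # restart Ghost, then re-check the port
npm start --production            # run from the Ghost directory to see the startup error directly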
I'm totally new to Apache httpd.
I set up my host ServerHost1 as a file server with httpd:
# httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built: Dec 2 2014 08:09:42
I have put the file TestFile.txt under /var/www/html/TestDir/TestFile.txt
I modified part of httpd.conf as follows:
<Directory />
Order deny,allow
Allow from all
</Directory>
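As an aside, Order deny,allow / Allow from all is Apache 2.2 syntax; on the 2.4 build shown above it only keeps working through mod_access_compat. The 2.4-native equivalent, scoped to the document root rather than the filesystem root, would be something like:

<Directory "/var/www/html">
    Require all granted
</Directory>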
On a test host TestHost1 with full Internet access, I can download my file with wget:
TestHost1]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:39:12-- http://ServerHost1/TestDir/TestFile.txt
Resolving ServerHost1 (ServerHost1)... <IP address>
Connecting to ServerHost1 (ServerHost1)|<IP address>|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2859976598 (2.7G) [application/octet-stream]
Saving to: ‘TestFile.txt’
2% [> ] 60,645,376 24.0MB/s
On TestHost2, a host sitting on a semi-isolated network, I have to use a proxy for wget to work. It works fine with Google:
TestHost2]# wget google.ca
--2016-03-17 13:53:26-- http://google.ca/
Resolving proxy.com (proxy.com)... <ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.ca/ [following]
--2016-03-17 13:53:26-- http://www.google.ca/
Reusing existing connection to proxy.com:3128.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
[ <=> ] 19,928 --.-K/s in 0.1s
2016-03-17 13:53:27 (159 KB/s) - ‘index.html’ saved [19928]
However, when I try to get my file from ServerHost1, I get ERROR 503: Service Unavailable:
TestHost2]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:57:13-- http://ServerHost1/TestDir/TestFile.txt
Resolving proxy.com (proxy.com)...<ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 503 Service Unavailable
2016-03-17 13:57:13 ERROR 503: Service Unavailable.
So my questions are:
(1) Why am I seeing 503 Service Unavailable when the file is clearly available (since I can download it from TestHost1)?
(2) How do I configure httpd.conf so that TestHost2 can wget the file from ServerHost1?
Maybe try ProxyRequests, as described in the Apache docs: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
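If the idea is to make an Apache instance act as a forward proxy itself, a minimal sketch might look like this (mod_proxy and mod_proxy_http must be loaded; the address range is a placeholder for your own network):

ProxyRequests On
<Proxy "*">
    Require ip 10.0.0.0/8
</Proxy>

Note, though, that the 503 above is returned by proxy.com, not by ServerHost1's Apache, so it is also worth checking whether that proxy can resolve and reach ServerHost1 at all, or whether TestHost2 can bypass it for internal hosts (e.g. export no_proxy=ServerHost1 before running wget, assuming a direct route exists).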
I get this error when I try to use nginx as a reverse proxy server for Apache. I suspect something is wrong on the nginx side of the configuration, because when I access the host directly on the port Apache uses, everything works fine.
So, here is my nginx configuration file for host:
server {
    listen 80;
    server_name my-site;
    root /usr/share/nginx/www;
    index index.html index.htm;

    location /static {
        alias /var/www/my-site/myapp/static;
    }

    location /media {
        alias /var/www/my-site/myapp/media;
    }

    location / {
        proxy_pass http://my-site:8000;
    }
}
Here's my nginx error log; it wasn't of much use to me, though:
2014/02/22 13:58:37 [error] 1398#0: *10 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: my-site, request: "GET / HTTP/1.1", upstream: "http://XXX.XXX.XXX.XXX:8000/", host: "my-site"
2014/02/22 14:01:40 [error] 1398#0: *14 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: my-site, request: "GET / HTTP/1.1", upstream: "http://XXX.XXX.XXX.XXX:8000/", host: "my-site"
2014/02/22 14:10:29 [error] 1609#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: my-site, request: "GET / HTTP/1.1", upstream: "http://XXX.XXX.XXX.XXX:8000/", host: "my-site"
2014/02/22 19:07:59 [error] 647#0: *176 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: my-site, request: "GET / HTTP/1.1", upstream: "http://XXX.XXX.XXX.XXX:8000/", host: "my-site"
2014/02/23 13:40:35 [error] 647#0: *650 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: my-site, request: "GET / HTTP/1.1", upstream: "http://XXX.XXX.XXX.XXX:8000/", host: "my-site"
I've read somewhere that this may happen when the load is too high, but in my case that's definitely not it.
Is your Apache bound to port 8000?
Have you made sure that you can access port 8000 on the external IP address associated with your domain? You can check by retrieving a page from the http://my-site:8000 URL on the server itself.
You could try using
proxy_pass http://127.0.0.1:8000;
instead of proxy_pass http://my-site:8000; (note that nginx requires the http:// scheme in proxy_pass).
This way you don't need the ability to connect to your site via your external IP.
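To verify both points quickly from the server itself, a sketch with standard tools (assuming they are installed):

sudo ss -lntp | grep ':8000'     # confirm Apache is actually listening on port 8000
curl -I http://127.0.0.1:8000/   # confirm it answers locally, bypassing nginx

If the local curl works but proxy_pass http://my-site:8000; still times out, my-site is probably resolving to the external IP, and traffic to that IP on port 8000 is being dropped by a firewall, which fits the 110 connection-timeout errors in the log.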
It appears that only one Passenger instance serves requests; the other just returns 502 errors. This causes an intermittent error pattern, because only the requests that are directed to the second instance fail.
~$ rvmsudo passenger-status
----------- General information -----------
max = 4
count = 2
active = 0
inactive = 2
Waiting on global queue: 0
----------- Application groups -----------
/u/apps/pixie.strd6.com/current:
App root: /u/apps/pixie.strd6.com/current
* PID: 3179 Sessions: 0 Processed: 121 Uptime: 3m 57s
* PID: 3762 Sessions: 0 Processed: 0 Uptime: 2s
This happened after updating to Rails 3.1.0.rc5. The nginx error log shows:
2011/07/27 21:37:37 [error] 3125#0: *608 upstream prematurely closed connection while reading response header from upstream, client: 68.226.71.148, server: pixieengine.com, request: "GET /chats/recent HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "pixieengine.com", referrer: "http://pixieengine.com/projects/426/ide"
2011/07/27 21:38:31 [error] 3125#0: *596 upstream prematurely closed connection while reading response header from upstream, client: 76.102.14.57, server: pixieengine.com, request: "GET /chats/recent HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "pixieengine.com", referrer: "http://pixieengine.com/pixel-editor"
2011/07/27 21:39:12 [error] 3125#0: *576 upstream prematurely closed connection while reading response header from upstream, client: 68.8.173.234, server: pixieengine.com, request: "GET /chats/recent HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "pixieengine.com", referrer: "http://pixieengine.com/community/forums/1"
2011/07/27 21:39:12 [error] 3125#0: *687 upstream prematurely closed connection while reading response header from upstream, client: 201.231.103.247, server: pixieengine.com, request: "GET /chats/active_users HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "pixieengine.com", referrer: "http://pixieengine.com/projects/demo/ide"
2011/07/27 21:39:12 [error] 3125#0: *686 upstream prematurely closed connection while reading response header from upstream, client: 201.231.103.247, server: pixieengine.com, request: "GET /chats/recent HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "pixieengine.com", referrer: "http://pixieengine.com/projects/demo/ide"
I solved this by switching to Unicorn.
I wasn't actually able to figure out how to fix Passenger, but I was able to verify that it broke because of the transition from Rails 3.0.9 to 3.1.0.rc5.
A temporary fix is to set PassengerSpawnMethod conservative in Passenger, which disables forking from running processes.
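For reference, a sketch of where that setting goes; the directive name depends on which web server Passenger is integrated with (both shown, under the assumption of a Passenger 3-era setup like the one above):

# Apache: in the global config or the relevant VirtualHost
PassengerSpawnMethod conservative

# nginx with Passenger compiled in: in the http or server block
passenger_spawn_method conservative;

With the conservative spawn method, each application process boots the app on its own instead of being forked from a preloaded copy, which avoids fork-related breakage like this at the cost of slower spawns and higher memory use.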