I am working on a project that involves two embedded devices; let's call them A and B. Device A is the controller and B is being controlled. My goal is to build an emulator for device B, i.e., something that acts like B so that A thinks it is controlling B while it is really controlling my emulator. I cannot control or change A.
Control happens via the controller issuing GET requests that invoke various CGI scripts, so the plan is to install Apache on "my" device, set up CGI, and replicate the various scripts. I am running Apache 2.4.18 on Ubuntu 16.04.5 and have configured Apache so it successfully runs the various scripts depending on the URL. As an example, one of the scripts is called 'man_session' and a typical URL issued by device A looks like this: http://192.168.0.14/cgi-bin/man_session?command=get&page=122
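For context, the CGI side is essentially the stock mod_cgi setup, something along these lines (paths as in the error-log entry further down; the exact directives on your system may differ slightly):

    ScriptAlias "/cgi-bin/" "/home/pi/cgi-bin/"
    <Directory "/home/pi/cgi-bin">
        Options +ExecCGI
        Require all granted
    </Directory>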
I have built a C/C++ program named 'man_session' and have successfully configured Apache to invoke it when this URL is requested. I can see this in the Apache access log:
192.168.0.2 - - [24/Jan/2019:14:38:38 +0000] "GET /cgi-bin/man_session?command=get&page=122 HTTP/1.1" 200 206 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
Also, my script writes to stderr and I can see the output in the log file:
[Thu Jan 24 14:46:10.850123 2019] [cgi:error] [pid 23346:tid 4071617584] [client 192.168.0.2:62339] AH01215: Received man_session command 'command=get&page=122': /home/pi/cgi-bin/man_session
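For illustration, a stripped-down stand-in for the handler looks something like the sketch below. This is not the real script, which does a lot more; it just reads the query string, logs it to stderr (producing entries like the one above), and returns a minimal response:

    // man_session.cpp -- simplified sketch, not the real handler
    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Apache passes the part after '?' in the QUERY_STRING variable,
        // e.g. "command=get&page=122"
        const char *query = std::getenv("QUERY_STRING");
        if (query == nullptr) query = "";

        // Anything written to stderr ends up in the Apache error log
        std::fprintf(stderr, "Received man_session command '%s'\n", query);

        // Minimal CGI response: header block, blank line, then the body
        std::printf("Content-Type: text/plain\r\n\r\n");
        std::printf("OK %s\n", query);
        return 0;
    }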
So far so good. The problem is that the script is only invoked when I make the request from a browser (both Chrome and Internet Explorer work) or with curl, not when device A makes it. The browsers run on my Windows PC and curl runs on the embedded device B itself.
When I turn on device A, I can see its requests in the log, but the script is not invoked. Below is a log entry for such a request, which does not invoke the 'man_session' script. It shows a status of 400, which according to the HTTP specification is an error "due to malformed syntax". Other differences are the missing referrer and user-agent information and HTTP/1.0 versus HTTP/1.1, but I don't see why these would matter.
192.168.0.9 - - [24/Jan/2019:14:38:12 +0000] "GET /cgi-bin/man_session?command=get&page=7 HTTP/1.0" 400 0 "-" "-"
Note that device A is 192.168.0.9 and my PC is 192.168.0.2. What am I missing here? Why doesn't this request invoke the script the way the browser's request does? Is there anywhere I can get more information about why the 400 occurs in this case?
After a lot of back and forth, I finally figured out the issue. Steps taken:
Increased the log level to debug (instead of the default 'warn') in apache2.conf.
This caused the following error message to show up in the log:
[Sat Jan 26 02:47:56.974353 2019] [core:debug] [pid 15603:tid 4109366320] vhost.c(794): [client 192.168.0.9:61001] AH02415: [strict] Invalid host name '192.168.000.014'
After a bit of research, I added the following line to apache2.conf:
HttpProtocolOptions Unsafe
This fixed it and the scripts are now called as expected.
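In short, the device sends an HTTP/1.0 request with a zero-padded Host header ('192.168.000.014'), which Apache's strict protocol checking rejects with a 400 before the CGI script is ever reached. If you want to reproduce the pre-fix behaviour from any machine, a curl request that mimics the device should do it (IP addresses as in my setup):

    curl -v --http1.0 -H "Host: 192.168.000.014" "http://192.168.0.14/cgi-bin/man_session?command=get&page=122"

With HttpProtocolOptions left at its default (Strict) this returns 400; with Unsafe it runs the script.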
I have a docker-compose file which creates Node.js and PHP containers and one MySQL DB. Everything works fine and the containers are up, but when I check the website status it returns a 503 error for a while; the website only comes up after about 5 minutes. I do not see any errors in the docker logs. The docker stats output is attached and memory allocation looks fine.
Here are the app's docker logs; you can see the 4-minute gap there:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.*.*.2. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.*.*.2. Set the 'ServerName' directive globally to suppress this message
[Wed Aug 19 13:22:09.190559 2020] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.3.18 configured -- resuming normal operations
[Wed Aug 19 13:22:09.190694 2020] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
172.*.*.* - - [19/Aug/2020:13:26:53 -0400] "GET / HTTP/1.1" 200 2327 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
172.*.*.* - - [19/Aug/2020:13:26:54 -0400] "GET /static/js/main.0c1fa848.chunk.js HTTP/1.1" 200 9534 "https://example.ai/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.39.149 Safari/537.36"
Docker logs do not guarantee that the website is running fine; they show that the services started, but the website itself may take some time to come up. Try to measure how long it takes after you start all the services manually.
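If you want something more precise than watching the logs, one option is to add a healthcheck to the web container so docker ps reports it as "starting" until it actually answers HTTP. A minimal sketch, assuming the image has curl installed (the service name below is a placeholder):

    services:
      web:                      # placeholder name for your PHP/Apache service
        # ... your existing image/build settings ...
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost/"]
          interval: 10s
          timeout: 5s
          retries: 30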
We have two clients, one JavaScript and one C++; the C++ one uses libwebsockets. Both of them try to connect via secure websockets (wss) to our websocket server on port 7000, which sits behind an NGINX server. When the JavaScript client connects, the connection succeeds and the NGINX log for port 7000 shows:
[04/May/2018:12:25:30 +0000] "GET / HTTP/1.1" 101 0 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36"
However, when trying to connect with the C++ client, the connection fails and the logs show
[04/May/2018:10:59:40 +0000] "GET / HTTP/1.1" 400 5 "-" "-"
Why is it throwing a 400 instead of a 101 in the second case? We are not sure how to debug this. This is what we use in the websocket client
ws->init("wss://echo.websocket.org:7000", nullptr, "your CA root file path");
The client just says Connection Failed. We tried inspecting the data with Wireshark and enabling more logging in the NGINX server, but we can't figure it out. What could it be? Could it be due to mismatched SSL/TLS versions?
It turns out the client's library was not actually including the port number in the request, even when it is included in the connection URL. Either configuring NGINX to handle this specific case or making the library include the port in the request solves the problem.
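For reference, with the plain libwebsockets client API the second workaround amounts to putting the port into the host/origin strings yourself. This is only a sketch against the generic lws_client_connect_info API (our wrapper's init() call hides these fields, and field/flag names can vary between library versions):

    // Sketch only: generic libwebsockets client connect, with the port
    // spelled out in the Host header the server will see.
    struct lws_client_connect_info ci;
    memset(&ci, 0, sizeof(ci));
    ci.context        = context;                     // lws_context created beforehand
    ci.address        = "echo.websocket.org";
    ci.port           = 7000;
    ci.path           = "/";
    ci.host           = "echo.websocket.org:7000";   // include the port explicitly
    ci.origin         = "echo.websocket.org:7000";
    ci.ssl_connection = LCCSCF_USE_SSL;               // wss
    struct lws *wsi = lws_client_connect_via_info(&ci);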
I am using Red5 version 1.0.0 (final release) with Java 6 on Windows XP SP3, installed from the installer version downloaded from https://code.google.com/p/red5/. I have a project in which I run live webcam chats between users, using the RTMPT protocol (RTMP tunneled over HTTP), so I have set up my Red5 server behind the Apache web server. The problem is that everything goes well for 45-50 seconds, and then the RTMPT connection suddenly gets closed. I am not using a dedicated RTMPT server, i.e. I have not uncommented the rtmpt bean in the conf files; instead I have added servlet mappings (for idle, fcs, open, etc.) in the web.xml of my application. RTMPT is listening on port 5080. I have tested this with previous versions of Red5 as well, but the problem is the same: the RTMPT connection closes after some time (within a minute). I have gone through the logs but found nothing about this, and there was no connection closure due to the inactivity period. Does it have something to do with Apache? I am not sure whether the server is closing the connection (though I can't find any log entries about a connection being closed) or the client closes it. I tried 0.9.0 and 0.9.1 too, to no avail. I have heard there were issues using RTMPT with Red5 on Mac, but I am on Windows. Any pointers to this problem? Any help is appreciated. Here are the error logs I get on my Apache web server:
[error] (OS 10048)Only one usage of each socket address (protocol/network address/port) is normally permitted. : proxy: HTTP: attempt to connect to red5serverip:5080 (*) failed.
The same log entry is repeated four times.
Here are some access logs from Apache too:
"POST /send/IDTK7NOG2PXGB/803 HTTP/1.1" 200 1
"POST /send/IDTK7NOG2PXGB/804 HTTP/1.1" 503 323
"POST /send/YXF4WTFMN8TCM/1391 HTTP/1.1" 200 8285
"POST /send/YXF4WTFMN8TCM/1392 HTTP/1.1" 200 1
"POST /send/YXF4WTFMN8TCM/1393 HTTP/1.1" 200 54
"POST /send/YXF4WTFMN8TCM/1394 HTTP/1.1" 200 1
"POST /send/YXF4WTFMN8TCM/1395 HTTP/1.1" 503 323
"POST /close/IDTK7NOG2PXGB/805 HTTP/1.1" 503 323
"POST /close/YXF4WTFMN8TCM/1396 HTTP/1.1" 503 323
Thanks!
You are probably running out of TCP ports. A TCP connection remains in the TIME_WAIT state for 4 minutes by default, even after it has been closed. If your RTMPT stream opens 5 connections per second, your system will need at least 5*60*4 = 1200 ports for each connected user.
Often a firewall further limits the number of ports available. You can also decrease how long a closed TCP socket lingers. If you search for your Apache error message you will find enough information to sort this out.
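On Windows those knobs live in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters; the values below are illustrative, and a reboot is needed for them to take effect:

    TcpTimedWaitDelay  (DWORD) = 30      ; seconds a socket stays in TIME_WAIT, down from the 240-second default
    MaxUserPort        (DWORD) = 65534   ; highest ephemeral port, up from the default of 5000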
Your Red5 server may have crashed. This can happen when it runs out of RAM, in which case you need to start Red5 again manually. If that solves your problem, you should add more RAM; I moved to about 8 GB after running into this several times. Because Red5 is written in Java, it is memory-hungry. FFmpeg copes better with little memory, but I don't know exactly how you would provide chat with FFmpeg.
The 503 means that the service did not respond; if you are forwarding to Red5 via Apache, then the problem is there. I would suggest not using the stand-alone RTMPT bean; instead use only the servlet, and remove Apache from the mix to debug the issue.
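If your Apache side looks anything like the usual RTMPT forwarding setup, it will be a set of mod_proxy mappings roughly like the ones below (illustrative only, using the 'red5serverip' host from your error log). While debugging, pointing the client straight at port 5080 and bypassing these rules tells you which side is at fault:

    ProxyPass        /open/  http://red5serverip:5080/open/
    ProxyPassReverse /open/  http://red5serverip:5080/open/
    ProxyPass        /idle/  http://red5serverip:5080/idle/
    ProxyPassReverse /idle/  http://red5serverip:5080/idle/
    ProxyPass        /send/  http://red5serverip:5080/send/
    ProxyPassReverse /send/  http://red5serverip:5080/send/
    ProxyPass        /close/ http://red5serverip:5080/close/
    ProxyPassReverse /close/ http://red5serverip:5080/close/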
Currently we have an Apache 2.2.3 server with mod_ssl 2.2.3 running Django, with users authenticating via an x509 certificate.
So far the system has been running perfectly, except for a single user who, when trying to upload a file, receives a 400 Bad Request error. The ssl_error_log entry for this operation is:
[<date>] [error] [client <client ip>] request failed: error reading the headers, referer: <referrer url>
The contents of the ssl_access_log are:
<client ip> - - [<date>] "POST <target page> HTTP/1.1" 400 321
Also, the user's browser is Firefox as far as I know.
I am completely unable to reproduce this bug, and so far none of the other users have experienced it. Could you point out some reasons why this might happen?
I've experienced connectivity that cuts off the upstream after a certain number of bytes has been sent. The threshold was pretty low: enough to request some simple pages, but not to handle Ajax requests, much less upload files. As far as I recall, this problem occurred only when tethering (from one specific Android phone; I didn't test other phones).
So if the upstream gets interrupted and the upload stalls, it makes sense that Apache would return this error, according to this post: "Apache waits a time equal to the Timeout directive (defaults to 5 minutes if not defined) for a response from the client. It is likely Apache is waiting for the CRLF that indicates the end of the headers, yet it is never received."
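If you want more detail from Apache while that user retries the upload, raising the log level is a low-risk first step, and you can also make Apache give up on stalled clients sooner than the 300-second default. A sketch for httpd.conf (Apache 2.2 syntax):

    # More verbose logging while reproducing the issue
    LogLevel debug
    # Optionally give up on stalled clients sooner than the default 300 seconds
    Timeout 60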
We have a Tomcat server where we're trying to log the HTTP version that the response is sent with. A few times it seems to have been HTTP/0.9, which kills the content (not supported, I guess?). We would like to get some stats on this using the access log in Apache. However, since the status line isn't prefixed by a header name, we cannot use the %{xxx}o logging.
Is there a way to get this?
An example:
Response is:
HTTP/1.1 503 This application is not currently available
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Length: 1090
Date: Wed, 12 May 2010 12:53:16 GMT
Connection: close
And we'd like to catch HTTP/1.1 (or alternatively the whole status line, HTTP/1.1 503 This application is not currently available).
Is this possible? We do not have access to the application being served, so we need to do this either as a Java filter or in the Tomcat access log, preferably the latter.
Enabling the <Valve className="org.apache.catalina.valves.RequestDumperValve"/> in server.xml writes out the request and response headers for each request.
Example:
19-May-2010 12:26:18 org.apache.catalina.valves.RequestDumperValve invoke
INFO: protocol=HTTP/1.1
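For completeness, the valve just needs to be dropped into server.xml, typically inside the <Host> element. If what you really want is the protocol in an access-log line, the AccessLogValve pattern code %H (request protocol) is another route, with the caveat that it records the request side rather than the literal response status line. A sketch using the standard Tomcat valves (placement and attribute values may need adjusting for your layout):

    <Host name="localhost" appBase="webapps">
      <!-- Dumps request and response headers (including the protocol) to the log -->
      <Valve className="org.apache.catalina.valves.RequestDumperValve"/>

      <!-- Alternative: record the protocol (%H) directly in the access log -->
      <Valve className="org.apache.catalina.valves.AccessLogValve"
             directory="logs" prefix="access." suffix=".log"
             pattern="%h %l %u %t &quot;%r&quot; %s %b %H"/>
    </Host>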