I just upgraded the Apache web server from 2.2 to 2.4. After the upgrade, I am getting a lot of stuck threads in W mode. I am not using PHP. The stuck threads occur even on static HTML pages, and also while load testing via JMeter.
Because of these stuck threads, the server reaches the MaxRequestWorkers/MaxClients limit and becomes non-responsive. Memory is not an issue, since even during the server crash I had about 10 GB of free memory.
Just to verify that it is Apache, I switched back to 2.2 (the old server) and there are no more hung threads!
ServerTokens OS
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
TimeOut 295
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
StartServers 20
MinSpareServers 15
MaxSpareServers 40
ServerLimit 1024
MaxClients 2048
MaxRequestWorkers 2048
MaxRequestsPerChild 5000
MaxConnectionsPerChild 5000
Following is the pstack output for one of the threads:
pstack 30078
#0 0x00007f0c6536df4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f0c65369d02 in _L_lock_791 () from /lib64/libpthread.so.0
#2 0x00007f0c65369c08 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007f0c58e53f4b in yodysMtxLock () from /etc/httpd/modules/libdms2.so
#4 0x00007f0c58e518e6 in yodSlotLock () from /etc/httpd/modules/libdms2.so
#5 0x00007f0c58e50266 in yodStateIncrementSb8 () from /etc/httpd/modules/libdms2.so
#6 0x00007f0c597ca764 in wl_increment_state_metric () from /etc/httpd/modules/mod_wl_24.so
#7 0x00007f0c597c5af9 in request_handler () from /etc/httpd/modules/mod_wl_24.so
#8 0x00007f0c668dc290 in ap_run_handler ()
#9 0x00007f0c668dc7d9 in ap_invoke_handler ()
#10 0x00007f0c668f0bca in ap_process_async_request ()
#11 0x00007f0c668f0ea4 in ap_process_request ()
#12 0x00007f0c668ed7f2 in ap_process_http_connection ()
#13 0x00007f0c668e5890 in ap_run_process_connection ()
#14 0x00007f0c58c2280f in child_main () from /etc/httpd/modules/mod_mpm_prefork.so
#15 0x00007f0c58c22a55 in make_child () from /etc/httpd/modules/mod_mpm_prefork.so
#16 0x00007f0c58c22ab6 in startup_children () from /etc/httpd/modules/mod_mpm_prefork.so
#17 0x00007f0c58c237c0 in prefork_run () from /etc/httpd/modules/mod_mpm_prefork.so
#18 0x00007f0c668c25be in ap_run_mpm ()
#19 0x00007f0c668bbb46 in main ()
Any guidance will be helpful.
The thread is stuck in the proprietary WebLogic plugin (mod_wl_24.so), waiting on a lock. It's surprising that it manages to trigger even on static requests, but this is something you'll need to take up with the vendor; nobody else can debug a closed-source module.
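One thing you can do while waiting on the vendor is confirm that only WebLogic-bound paths go through mod_wl_24 at all, so static content never enters its locking code. A minimal sketch of a scoped configuration, with hypothetical paths and backend host/port:
LoadModule weblogic_module modules/mod_wl_24.so
# Route only the application context through the WebLogic plugin; /app is a placeholder.
<Location /app>
    SetHandler weblogic-handler
    WebLogicHost wls-backend.example.com
    WebLogicPort 7001
</Location>
With the handler scoped like this, static HTML is served by Apache directly, which should at least isolate the problem to the plugin.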
My app makes a lot of HTTP requests to a server. After installing an SSL certificate on it, the app broke.
My server runs Ubuntu with Nginx hosting PHP code (using the certificate and working) and proxying the app-server code written in NodeJS. It was working until I changed to HTTPS.
My typical POST request usage:
var jsonRequest = //Some JSON.
Map<String, String> headers = {'Content-type': 'application/json'};
var response = await http.post(urls['auth'], body: jsonRequest, headers: headers);
The error I get:
E/flutter (25875): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: HandshakeException: Handshake error in client (OS Error:
E/flutter (25875): WRONG_VERSION_NUMBER(tls_record.cc:242))
E/flutter (25875): #0 IOClient.send (package:http/src/io_client.dart:33:23)
E/flutter (25875): <asynchronous suspension>
E/flutter (25875): #1 BaseClient._sendUnstreamed (package:http/src/base_client.dart:169:38)
E/flutter (25875): <asynchronous suspension>
E/flutter (25875): #2 BaseClient.post (package:http/src/base_client.dart:54:7)
E/flutter (25875): #3 post.<anonymous closure> (package:http/http.dart:70:16)
...
As @Patrick mentioned in the comments, this is the result of a TLS mismatch.
General rule of thumb: if the server (API) is served over https (TLS), then the client should connect using https. If the server uses http (non-TLS), then the client should use http to connect to it.
In your case, it seems the API you are trying to hit is plain http, hence from your Flutter app you should use:
Uri.http(baseUrl, endPointUrl)
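For example, a minimal sketch (the host, path and payload are placeholders) showing the scheme matching what the server actually speaks:
import 'package:http/http.dart' as http;

const baseUrl = 'myserver.example.com'; // placeholder host
const endPointUrl = '/auth';            // placeholder endpoint

Future<void> sendAuth(String jsonRequest) async {
  // Plain HTTP, because the backend does not terminate TLS on this port:
  final uri = Uri.http(baseUrl, endPointUrl);
  // If the server really does speak TLS there, switch to:
  // final uri = Uri.https(baseUrl, endPointUrl);

  final response = await http.post(uri,
      headers: {'Content-type': 'application/json'}, body: jsonRequest);
  print(response.statusCode);
}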
I think you are using https:// but haven't installed an SSL certificate, so try using http:// instead of https://.
This worked in my case; hope it also works on your side.
Add this to your android/app/src/main/AndroidManifest.xml
android:usesCleartextTraffic="true"
The result is this:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.contact_wsp">
<application
android:label="contact_wsp"
android:usesCleartextTraffic="true"
android:icon="#mipmap/ic_launcher">
<activity
android:name=".MainActivity"
We recently upgraded one of our webservers from PHP 5.3 (Debian Squeeze package, using libmysqlclient and APC) to PHP 5.4 (Debian Wheezy, Dotdeb package, using mysqlnd, Opcache and APCu). After working fine for almost one day, we experienced "mysql server has gone away" errors for every request. All other servers with the same load which still run PHP 5.3 with libmysqlclient using the same MySQL server had no problem at all. On all servers we use:
max_execution_time = 60
default_socket_timeout = 60
On our PHP 5.3 servers we did not change any mysql/my.cnf timeouts. We know about problems with read_timeout (mysql), wait_timeout (mysql), default_socket_timeout (php) and max_execution_time (php), but only in context of batch scripts with long running queries. Our webservers usually respond in about 300ms, so those timeouts should not be an issue here.
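(For reference, a quick sketch of how we double-check the values involved; the connection credentials are placeholders:)
<?php
// Print the PHP-side and MySQL-side timeouts discussed above.
echo 'max_execution_time: ', ini_get('max_execution_time'), "\n";
echo 'default_socket_timeout: ', ini_get('default_socket_timeout'), "\n";

$mysqli = new mysqli('db-host', 'user', 'password', 'database'); // placeholder credentials
$result = $mysqli->query(
    "SHOW SESSION VARIABLES WHERE Variable_name IN ('wait_timeout', 'net_read_timeout', 'net_write_timeout')"
);
while ($row = $result->fetch_assoc()) {
    echo $row['Variable_name'], ': ', $row['Value'], "\n";
}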
It became really strange when we removed the server from load balancing, so there was no load anymore, but we still had 180 busy Apache processes. Even an apache2ctl graceful did not change anything; even hours later apache2ctl status said:
Apache Server Status for localhost
Server Version: Apache/2.2.22 (Debian)
Server Built: Jun 16 2014 03:51:14
__________________________________________________________________
Current Time: Tuesday, 22-Jul-2014 10:17:44 CEST
Restart Time: Monday, 21-Jul-2014 18:43:37 CEST
Parent Server Generation: 26
Server uptime: 15 hours 34 minutes 6 seconds
Total accesses: 596973 - Total Traffic: 1.6 GB
CPU Usage: u6288.72 s463.96 cu.01 cs0 - 12% CPU load
10.7 requests/sec - 30.8 kB/second - 2962 B/request
176 requests currently being processed, 99 idle workers
GGGGGG_GGGGGGGGG_GG_GGGGGGGGGGGGGGGGGGGG_GGGGGG_GGGGGGGG_GGGGGGG
GGGGGGGGGG_G_GGGGGGGG_G_GG__GGGGGG_GGGGG_GGG___GG_GGGGGGGG_G_GGG
GGGGGGGGGGGG_G_GG__GG_GGG_GGGGGGGGG__GGG_GGG_G_G_GG_G_GGGGGGGGGG
GGG_GGG_GG_GGG_GG_G_GGG_______________.___._W___________________
____.___________.______.........................................
[remaining scoreboard rows are all open slots "."]
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process
Only an apache2ctl restart solved the issue, and everything worked fine again. The MySQL error is the only "useful" error message we have found so far.
Could it be an issue with mysqlnd, Opcache or APCu and PHP 5.4.30? Are there any known problems which could result in the behavior we have experienced?
Or do you have an idea how to debug the "mysql server has gone away" issue?
We probably found out why the "mysql server has gone away" error occurs: on the MySQL server we configured a wait_timeout of 30 seconds, which is less than the 60-second max_execution_time. So under certain conditions something seems to take more than 30 seconds while we are reading a result set from MySQL, and the server closes the connection while we are still trying to fetch data from it. That leads us to the next questions:
What function consumes so much time while we are in a loop reading a result set from MySQL?
Why does apache2ctl graceful not restart the Apache processes, even though max_execution_time should abort the scripts after 60 seconds?
I think the answer to both questions is a bug in APCu, because if I look at the hanging Apache children, I get FUTEX_WAIT from strace:
[pid 28354] futex(0x7f3a8c3d2094, FUTEX_WAIT, 69, NULL <unfinished ...>
If I look at such a process using gdb, it seems to hang at pthread_rwlock_wrlock(); what I get is:
0x00007f3adcd18abd in pthread_rwlock_wrlock () from /lib/x86_64-linux-gnu/libpthread.so.0
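(For reference, roughly the commands used to gather this; the PID is the one from the strace output above:)
# Attach strace to a hanging Apache child to see the futex wait:
strace -f -p 28354
# Attach gdb to the same process and print the backtrace:
gdb -p 28354
(gdb) bt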
OK, pthread_rwlock is used for locking in APCu, and a problem with the locking mechanism is a really good explanation for what we see here: we definitely have code which reads from and writes to APCu inside loops over MySQL result sets, and if there is a problem with locking (which has already been a problem for us in the past with APC as well), it could take more than 30 but less than 60 seconds, so the MySQL error is exactly what we would see. After that, something in APCu goes really wrong, so the PHP script can neither be aborted by max_execution_time nor restarted by apache2ctl graceful anymore.
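To make that concrete, a simplified sketch of the kind of pattern we have in our code (the query, key names, TTL and helper are made up):
<?php
$mysqli = new mysqli('db-host', 'user', 'password', 'database'); // placeholder connection
$result = $mysqli->query('SELECT id, payload FROM some_table');  // hypothetical query
while ($row = $result->fetch_assoc()) {
    $key = 'row_' . $row['id'];
    $cached = apcu_fetch($key, $success);    // takes an APCu lock on every iteration
    if (!$success) {
        $cached = expensive_transform($row); // hypothetical helper
        apcu_store($key, $cached, 300);      // takes the APCu write lock as well
    }
    // ... use $cached ...
}
// If an APCu lock operation stalls inside this loop, the idle MySQL connection
// can hit wait_timeout before we fetch the next row.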
In the APCu issue tracker I could find very similar issues:
https://github.com/krakjoe/apcu/issues/19
But we found another hint. The crash always happens when there are about 70k keys in APCu, and it does not depend on apc.shm_size. We also found out that our APCu monitoring script produces "PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 78 bytes)" errors when calling apcu_cache_info() at line 47, at the same time that we see the crash. So we have to look into why the script consumes so much memory; as far as I remember, it reads all the entry data to calculate memory fragmentation, so perhaps we should remove that part...
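If the per-entry list is what blows the memory limit, the limited mode of the info functions might already be enough for the monitoring numbers; a sketch (not what our script currently does):
<?php
// Gather APCu stats without pulling the full per-entry cache list,
// which is what seems to exhaust memory at ~70k keys.
$info = apcu_cache_info(true); // limited mode: omits the per-entry list
$sma  = apcu_sma_info(true);   // limited mode: omits the per-block detail
print_r($info);
print_r($sma);
// The trade-off: without the full lists we lose the fragmentation calculation,
// which is exactly the part we may have to drop anyway.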
We had a lot of problems with APC in the past; we switched to APCu/Opcache only because we got segfaults with the latest APC and PHP 5.4.30, and the issue mentioned above has been open for a year now. We are happy to see recent activity on yac; perhaps its lockless design is a more stable option. If we can't fix this by removing the problematic part from our monitoring script, we will switch to local memcached instances. It will be slower, but we know it's very stable.
I tried to install a fresh instance of PeopleTools (PT) on a VM running Linux, with the following versions:
OS: Linux CentOS 6.5 64bit
Database: Oracle 11.2.0.1 64bit
App Serv: Tuxedo 11gR1 11.1.1.3 64bit RP015
Web Serv: WebLogic 10.3.6
JDK: Jrockit for linux jdk 1.6.0_45 R28.2.7-4.1.0 64bit
PT: 8.51.25
I could:
- start the DB and connect with SQL*Plus, App Designer, and Data Mover
- start the App Server (listening on ports 7000 and 9000)
- start the Web Server
- access the web page (the Web Server is running fine)
Once I load the PIA login page, I get a TPESVCERR error (default settings error).
I get the same error if I try to log in via PIA.
Web Serv Log (same error can be seen repeatedly)
SEVERE psft.pt8.net.NetReqRepSvc sendRequest TPESVCERR - server error while handling request
SEVERE psft.pt8.net.NetReqRepSvc sendRequest An error occurred on the application server within Jolt while running the service. Cancel the current operation and retry. If the problem persists contact your system administrator. Error Code:10
SEVERE psft.pt8.net.NetReqRepSvc sendRequest Application Server last connected //192.168.236.129_9000
SEVERE psft.pt8.auth.WebProfile loadProfile ERROR: WebProfile loading internal default settings because of an Exception while communicating with "192.168.236.129:9000"
SEVERE psft.pt8.auth.WebProfile loadProfile TPESVCERR - server error while handling request
SEVERE psft.pt8.net.NetReqRepSvc sendRequest TPESVCERR - server error while handling request
SEVERE psft.pt8.net.NetReqRepSvc sendRequest An error occurred on the application server within Jolt while running the service. Cancel the current operation and retry. If the problem persists contact your system administrator. Error Code:10
SEVERE psft.pt8.net.NetReqRepSvc sendRequest Application Server last connected //192.168.236.129_9000
SEVERE psft.pt8.util.PIAPerfUtil initializePSPerf PerfMon: Unable to retrieve performance monitor MonitorURL, Reason:Error connecting to AppServer, Ppm JoltSession to 192.168.236.129:9000 not created, reason: bea.jolt.ServiceException: TPESVCERR - server error while handling request
App Serv Log (same error can be seen repeatedly)
PSAPPSRV.3767 (1) [06/25/14 09:29:08 GetCertificate](0) Process aborted.
PSPAL: Abort: Unexpected signal received
PSPAL: Abort: Location: /vob/peopletools/src/pspal/exception_sigaction.cpp:494: RecoverableSignalHandler
PSPAL: Abort: Generating process state report to /db/pt851cfg/appserv/psdb1/LOGS/PSAPPSRV.3983/process_state.txt
PSAPPSRV.4171 (0) [06/25/14 09:29:15](0) PeopleTools Release 8.51.25 (Linux) starting. Tuxedo server is APPSRV(99)/1
PSAPPSRV.4171 (0) [06/25/14 09:29:15](0) Cache Directory being used: /db/pt851cfg/appserv/psdb1/CACHE/PSAPPSRV_1/
PSAPPSRV.4171 (0) [06/25/14 09:29:15](2) App server host time skew is DB+00:00:00 (ORACLE PSDB)
PSAPPSRV.4171 (0) [06/25/14 09:29:15](2) (PreloadMemoryCache) No project name set in the configuration file Cache Settings parameter, PreloadMemoryCache. Nothing to preload into memory cache.
PSAPPSRV.4171 (0) [06/25/14 09:29:15](2) Use FTP Library has value : Y
PSAPPSRV.4171 (0) [06/25/14 09:29:15](0) Server started
PSAPPSRV.3983 (1) [06/25/14 09:29:17 GetCertificate](0) Process aborted.
Tuxedo Log (same error can be seen repeatedly)
092925.PSDB!PSAPPSRV.4357.735041312.0: 06-25-2014: Tuxedo Version 11.1.1.3.0, 64-bit
092925.PSDB!PSAPPSRV.4357.735041312.0: LIBTUX_CAT:262: INFO: Standard main starting
092925.PSDB!PSAPPSRV.4357.735041312.0: LIBTUX_CAT:476: WARN: Server 99/2: client process 3487: lost message
092925.PSDB!PSAPPSRV.4357.735041312.0: LIBTUX_CAT:477: WARN: SERVICE=GetCertificate MSG_ID=0 REASON=server died
I don't think there is any problem with the connection between the Web Server and the App Server.
(When I shut down the App Server, the Web Server complains that the App Server is not available.)
From the logs, it appears that PSAPPSRV crashed while working on the GetCertificate request.
The debug info says "unable to determine location of exception", which is troubling.
Extract from stack trace:
#0 0x0000003b80ae15e3 in select () from /lib64/libc.so.6
#1 0x00007f3ccafc58f7 in PSPAL::DumpProcessState::CallDebugger(int, char const*, PSPAL::ExceptionContext const*, bool) () from /db/pt851/bin/libpspal64.so
#2 0x00007f3ccafc5bd4 in PSPAL::DumpProcessState::GenerateAbortDiagnostics(char const*, PSPAL::ExceptionContext*) () from /db/pt851/bin/libpspal64.so
#3 0x00007f3ccafba1db in PSPAL::Abort(char const*, char const*, int, char const*, PSPAL::ExceptionContext*) () from /db/pt851/bin/libpspal64.so
#4 0x00007f3ccafbfad1 in PSPAL::SigactionSignalHandler::RecoverableSignalHandler(int, siginfo*, void*) () from /db/pt851/bin/libpspal64.so
#5 0x00007f3ccafc03a5 in PSPAL::SigactionSignalHandler::SignalHandler(int, siginfo*, void*) () from /db/pt851/bin/libpspal64.so
#6 <signal handler called>
#7 0x00007f3cca099f0e in CReadSerialObj::Init(void*) () from /db/pt851/bin/libpscmnutils.so
#8 0x00007f3cca09a275 in CReadSerialObj::CReadSerialObj(EOBJECT_TYPE, void*) () from /db/pt851/bin/libpscmnutils.so
#9 0x00007f3cca09a449 in CReadFrame::CReadFrame(EOBJECT_TYPE, void*) () from /db/pt851/bin/libpscmnutils.so
#10 0x00007f3cc8b33b59 in CNetRecvMsg::CNetRecvMsg(void*) () from /db/pt851/bin/libpsnetapi.so
#11 0x00007f3cc8b3e557 in CNetReqRepSvc::CNetReqRepSvc(CNetServer*, tpsvcinfo*, wchar_t const*) () from /db/pt851/bin/libpsnetapi.so
#12 0x00007f3cc082ce5c in CCertificateService::CCertificateService(CNetServer*, IPSSignonPeopleCode*, tpsvcinfo*) () from /db/pt851/bin/libpssecurity.so
#13 0x00007f3cc082f2a7 in CertificateServiceFactory::Create(CNetServer*, IPSSignonPeopleCode*, tpsvcinfo*) const () from /db/pt851/bin/libpssecurity.so
#14 0x0000000000419235 in CAppServer::GetCertificate(tpsvcinfo*) ()
#15 0x000000000041549d in GetCertificate ()
#16 0x00007f3cccdaca4b in _tmsvcdsp () from /db/tuxedo/tuxedo11gR1/lib/libtux.so
#17 0x00007f3cccdd4146 in _tmrunserver () from /db/tuxedo/tuxedo11gR1/lib/libtux.so
#18 0x00007f3cccdaa42c in _tmstartserver () from /db/tuxedo/tuxedo11gR1/lib/libtux.so
#19 0x000000000040a747 in main ()
I have tried many different configuration settings, with no joy. Could it be a library compatibility issue?
Any help would be appreciated.
PeopleTools 8.51 is only certified for Oracle Tuxedo version 10.3.0.0.0, and you are using version 11g; this may be why the app server crashes.
Hope this helps :)
Normally, my Apache mod_status shows this:
CPU Usage: u118.45 s9.79 cu0 cs0 - 14% CPU load
7.96 requests/sec - 18.1 kB/second - 2331 B/request
1 requests currently being processed, 29 idle workers
._.._._..._.___...._____...._.____.._.._.._....___._._____W.....
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
....................................................
But sometimes (with a period of about 1-3 minutes) my site "lags", and the Apache status looks like this:
CPU Usage: u222.29 s18.89 cu0 cs0 - 9.58% CPU load
7.77 requests/sec - 23.3 kB/second - 3064 B/request
20 requests currently being processed, 10 idle workers
WW.WW_W..WWC.._W.WCWW._._.W....WW_W..W__.W__...._...W...........
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
....................................................
During these moments the requests are the usual ones, the same as at any other time, but there are a lot of them. I don't have any cron jobs that run this frequently.
Increasing the prefork spare-server settings would probably help, but I want to know why these waves are happening.
My current config is:
Timeout 60
KeepAlive On
MaxKeepAliveRequests 200
KeepAliveTimeout 20
<IfModule prefork.c>
StartServers 100
MinSpareServers 10
MaxSpareServers 30
ServerLimit 500
MaxClients 500
MaxRequestsPerChild 10000
</IfModule>
The hardware is strong enough.
Sorry for my bad English.
Any advice would be helpful.
I've started using ASIHTTPRequest in my iOS project to execute REST server method calls, and so far I have been very successful with it. I just have one strange intermittent problem. Very occasionally I get the following response from using [ASIHTTPRequest startAsynchronous]:
HTTP/0.9 200 OK
When this occurs my server method doesn't get called. Normally every method call returns a response starting 'HTTP/1.1'. I'm using HTTPS with a GeoTrust/RapidSSL certificate to secure the connection. Interestingly, I've found that I get the same 'HTTP/0.9 200 OK' response if I connect to the SSL port (443) but specify 'http' as the protocol.
Just to add more info: the problem mostly occurs after the app has been left idle for a period of time. E.g. a request completes successfully, the app is left idle for a while, then on the next request the problem occurs, after which the app continues to work fine.
Can anybody shed some light on what might be occurring?
Many thanks,
Jonathan
UPDATE: I've pasted below some debug information output by ASIHTTPRequest when the problem occurred:
2012-07-12 09:35:49.376 mytestapp[3038:18f07] [CONNECTION] Closing connection #13 because it has expired
2012-07-12 09:35:49.377 mytestapp[3038:18f07] [CONNECTION] Closing connection #14 because it has expired
2012-07-12 09:35:49.378 mytestapp[3038:18f07] [CONNECTION] Closing connection #15 because it has expired
2012-07-12 09:35:49.380 mytestapp[3038:18f07] [CONNECTION] Request #39 will use connection #16
2012-07-12 09:35:49.381 mytestapp[3038:18f07] [CONNECTION] Request #40 will use connection #17
2012-07-12 09:35:49.382 mytestapp[3038:18f07] [CONNECTION] Request #41 will use connection #18
2012-07-12 09:35:49.529 mytestapp[3038:18f07] [STATUS] Request <ASIHTTPRequest: 0x88a1e00> finished downloading data (0 bytes)
2012-07-12 09:35:49.529 mytestapp[3038:18f07] [STATUS] Request <ASIHTTPRequest: 0x88a1e00> received response headers
2012-07-12 09:35:49.530 mytestapp[3038:18f07] [AUTH] Request <ASIHTTPRequest: 0x88a1e00> has passed Basic authentication
2012-07-12 09:35:49.530 mytestapp[3038:18f07] [CONNECTION] Got no keep-alive header, will keep this connection open for 60.000000 seconds
2012-07-12 09:35:49.530 mytestapp[3038:18f07] [CONNECTION] Request #41 finished using connection #18
2012-07-12 09:35:49.531 mytestapp[3038:18f07] [STATUS] Request finished: <ASIHTTPRequest: 0x88a1e00>
2012-07-12 09:35:49.531 mytestapp[3038:15803] responseHeaders={
}
2012-07-12 09:35:49.531 mytestapp[3038:18f07] [STATUS] Request cancelled: <ASIHTTPRequest: 0x88a1e00>
2012-07-12 09:35:49.532 mytestapp[3038:18f07] [STATUS] Request cancelled: <ASIHTTPRequest: 0x88a0200>
2012-07-12 09:35:49.532 mytestapp[3038:18f07] [STATUS] Request <ASIHTTPRequest: 0x88a0200>: Cancelled
2012-07-12 09:35:49.532 mytestapp[3038:18f07] [CONNECTION] Request #39 failed and will invalidate connection #16
2012-07-12 09:35:49.533 mytestapp[3038:18f07] [STATUS] Request cancelled: <ASIHTTPRequest: 0x88a0a00>
2012-07-12 09:35:49.533 mytestapp[3038:18f07] [STATUS] Request <ASIHTTPRequest: 0x88a0a00>: Cancelled
2012-07-12 09:35:49.533 mytestapp[3038:18f07] [CONNECTION] Request #40 failed and will invalidate connection #17
I'm unsure about the iOS specifics in this case, but HTTP/0.9 is completely abandoned. There is a clear reason for this: it does not support the "Host:" header, which means a single IP cannot serve virtual hosts at all. Such things became obsolete at the end of the 90s.
Such a response should never happen in real life. If it still happens, some client made a request like "GET / HTTP/0.9", but those clients disappeared about 15 years ago.
SSL is something HTTP is not much aware of, so I believe it is not related; the SSL tunnel is set up, and after that plain HTTP runs over it.
In conclusion, I would say something on your side possibly triggered the obsolete protocol version, and iOS may simply have no idea what to do with it. Maybe the iOS API only issues requests that include a host name, and therefore this path is not normally triggered. In any case, you should not worry if the client is really speaking 0.9, because it would not get proper answers from most sites anyway. If the client speaks 1.1 and the server answers with 0.9, then the request was possibly misunderstood somehow and a fallback to the lowest possible HTTP version occurred. Maybe you forgot to set the host name for the request, or made a syntax error in it?
There are some mentions of this problem on the web; basically it is related to persistent connections, where something is wrong with the Content-Length header versus the actual content returned by some buggy servers. That can confuse browsers and the iOS framework, so they don't actually pick up the headers.
Here is one of the possible explanations.
Try disabling persistent connections; it should help. This advice came from the developers of ASIHTTPRequest (quite a similar situation).
[httpRequest setShouldAttemptPersistentConnection:NO];
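For context, a minimal sketch of where that flag fits into a typical request (the URL is a placeholder):
#import "ASIHTTPRequest.h"

NSURL *url = [NSURL URLWithString:@"https://example.com/api/method"]; // placeholder endpoint
ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
[request setShouldAttemptPersistentConnection:NO]; // don't reuse keep-alive connections
[request setDelegate:self];                        // or use setCompletionBlock:/setFailedBlock:
[request startAsynchronous];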
HTTP/0.9 200 OK is a non-existent status line. HTTP/0.9 is defined as the simple request GET <Request-URI> <CRLF>, the response to which is just the [Entity-Body], without any status line or headers (<urn:ietf:rfc:1945>).
There's an error in your software somewhere. I guess the request either failed or has not yet been received.
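For illustration (host and path are placeholders), an HTTP/0.9 simple request is just:
GET /index.html
with no version token and no headers, and the reply is the raw entity body. A modern exchange, by contrast, carries the version and headers on both sides:
GET /index.html HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html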