I am using the formpost method of the Swift server to upload files. Sometimes the file gets uploaded, but sometimes it does not.
In the Swift server log I am getting a 499 HTTP code. Please help me solve this problem.
499 Client Closed Request (Nginx)
Used in Nginx logs to indicate that the connection was closed by the client while the server was still processing its request, leaving the server unable to send a status code back.
This is a client-side issue: the client has a time limit for the request, and if that limit is exceeded before the upload completes, it terminates the connection and the upload with it. It has nothing to do with Swift.
Hope it helps.
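To illustrate, here is a minimal Java sketch of an upload client with the timeouts raised explicitly. The endpoint and file name are placeholders, and a real formpost request would also need multipart/form-data encoding with the signed form fields; the point is only that the client-side connect/read timeouts decide when the client gives up, which is what Swift then logs as a 499.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class UploadWithTimeouts {
    public static void main(String[] args) throws Exception {
        // Hypothetical formpost endpoint; replace with your own.
        URL url = new URL("https://swift.example.com/v1/AUTH_test/container");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Raise the client-side limits so a slow upload is not aborted
        // before the server responds (an abort is what shows up as 499).
        conn.setConnectTimeout(10_000);  // 10 s to establish the connection
        conn.setReadTimeout(300_000);    // 5 min to wait for the response

        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        // NOTE: a real formpost body must be multipart/form-data with the
        // signed fields; this sketch only demonstrates the timeout handling.
        try (OutputStream out = conn.getOutputStream()) {
            out.write(Files.readAllBytes(Paths.get("upload.bin")));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}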
I have a file on AWS S3 that is public:
https://s3-eu-west-1.amazonaws.com/voxist-greetings/33631222504/33651291239_95113eed-386b-4264-a4cf-46182faae125COUCOU1.wav
Now when RVD tries to play it I get:
INFO [org.mobicents.servlet.restcomm.interpreter.VoiceInterpreter] (RestComm-akka.actor.default-dispatcher-8586) MediaGroupResponse, succeeded: false jain.protocol.ip.mgcp.JainIPMgcpException: The IVR request failed with the following error code 312
I don't know why... The same file used to work with another name.
Thanks for any hint on how to debug this.
The problem seems to happen on the Media Server side. More specifically, it seems the file cannot be opened for some reason.
The relevant code line can be found here.
Can you please take a tcpdump and share it, so we can see the MGCP Play request?
Hope this helps.
UPDATE:
Here is an example:
The 200 OK simply indicates that the MGCP transaction completed successfully. Now we need to dissect the notification (NTFY) sent from Media Server to RestComm, mainly the ObservedEvents parameter.
If you look at the picture, you will see the event triggered is an OperationFailed (of) with ReturnCode (rc) equal to 312, which is an error.
A relevant link to the specs can be found here.
To summarise, Media Server receives the request to play the file (in this case a cached version of it), but it fails to open the URL for some reason.
Is the URL reachable from Media Server side?
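If it helps, one quick way to check is to request the file's headers from the Media Server host itself. A minimal sketch using plain java.net (any HTTP client would do); the URL is the one from the question:

import java.net.HttpURLConnection;
import java.net.URL;

public class CheckUrl {
    public static void main(String[] args) throws Exception {
        // The S3 URL from the question; run this on the Media Server host.
        URL url = new URL("https://s3-eu-west-1.amazonaws.com/voxist-greetings/"
                + "33631222504/33651291239_95113eed-386b-4264-a4cf-46182faae125COUCOU1.wav");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");   // fetch headers only, no body
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        System.out.println("HTTP status: " + conn.getResponseCode());
        System.out.println("Content-Type: " + conn.getContentType());
    }
}

A 200 here but a 312 from Media Server would point at the server's environment (DNS, proxy, outbound firewall) rather than at the file itself.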
I created a Mule application and was able to run/deploy it on my local machine successfully. When I changed the port to Private and deployed to CloudHub, the RAML console does not finish loading.
The same question is also posted at the link below.
MULE ESB Server: RAML loading for prolonged time
Could someone please help me out?
How big is your RAML? There was a bug found where uploads larger than a certain size were timing out and erroring in the backend.
If you watched the network traffic you would see a 504 error being returned to the request.
That particular bug was fixed on 27th January, so the issue might be solved now.
I’m trying to get a mod_perl2 application ported to AWS. As part of the port I thought I’d move from Debian Squeeze to Wheezy with the latest stable mod_perl & Apache2 combination.
The application works right up to the point where I try to write JSON responses to the client. At that point, each request is canceled on the client, and on the server I get the error
Apache2::RequestIO::print: (103) Software caused connection abort
whenever I write to the client, i.e.:
$self->req->print($output);
I’ve tried tcpdumping the response to the client, and I can see it being written out, but no response is received on the client end and the request simply fails. I can’t find any information on how to get around this.
I found quite a few people asking about this question on the net without many answers. The solution to my problem was very specific but I thought I’d post what I did anyway, it may help someone.
The client was canceling the request before the response was fully written, which was causing Apache2::RequestIO to fail (for reasons I still don’t know).
I couldn’t work out why I was seeing this behavior.
By using tcpdump I could see that data was being written out to the client – and it looked fine.
By inspecting the page in Chrome and looking at the network stack, I could see that my request for data was being canceled after no response was received (which was odd, because the code worked fine on other servers and I could see the response being written). Debugging was made harder because, with Apache aborting with an error in the print IO, I couldn’t check whether the bytes written equaled the bytes of data. I wasn’t sure if something was getting stuck on the server side.
So, I changed the Content-Type of the response from application/json to text/html, so that I could query the page and just look at the actual response as text. Once I did that, I could see that the response was fine.
I started to look for other causes, and I found that in the migration to the new server, I’d missed altering some URLs in the DB to point to the new server, which meant my application was trying to get some data from the old DB.
This in turn was causing a load of timing issues, which caused my problems. Once I fixed the config, the problems went away.
I am using the Apache Commons Net FTP API to download from and upload to an FTP server. It works fine in normal scenarios.
The issue starts when there is some latency or the connection is closed by the server for some reason.
That is where the time-out comes in. I found the 'SO_TIMEOUT' parameter, which applies when reading from a socket, so I set it with the ftpClient.setSoTimeout(timeInMillis) method; it is honored while downloading a file, and it worked fine.
What I cannot figure out is how to set a time-out for uploading a file to the FTP server.
Thanks in advance.
Check the following to make sure everything is running fine, and then try again:
Check the firewall settings, if any, which might be blocking incoming connections and causing the connection to time out.
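On the time-out question itself: if I read the Commons Net API correctly, FTPClient.setDataTimeout() sets SO_TIMEOUT on the data-connection socket, and the data connection is what storeFile() uses for an upload, so it should cover uploads the same way setSoTimeout() covers the control connection. A minimal sketch (host, credentials and file names are placeholders):

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUploadWithTimeouts {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();

        // Timeout for establishing the control connection (set before connect()).
        ftp.setConnectTimeout(10_000);
        // SO_TIMEOUT for data connections -- used for downloads AND uploads;
        // must be set before the transfer starts.
        ftp.setDataTimeout(30_000);

        ftp.connect("ftp.example.com");           // hypothetical host
        // SO_TIMEOUT on the control connection; only valid after connect().
        ftp.setSoTimeout(30_000);
        ftp.login("user", "password");            // hypothetical credentials
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);

        try (InputStream in = new FileInputStream("local.bin")) {
            boolean ok = ftp.storeFile("remote.bin", in);
            System.out.println(ok ? "Upload succeeded"
                                  : "Upload failed: " + ftp.getReplyString());
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}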
Below is some information about my problem.
Our Apache2.2 is on windows 2008 server.
Basically, the problem is that users fail to upload files bigger than 100k to our server.
The error in Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
A few times (not always) I could upload larger files (100k-800k; 20m always failed) in Chrome. In FF4 uploading a file over 100k always fails, and IE8 behaves similarly to FF4.
It seems the server fails to receive the request from the client, so I reset TimeOut in the Apache settings to the default value (300), which did not help at all.
I do not have the LimitRequestBody directive set and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next. Any advice would be appreciated!
Edit:
I just tried using remote desktop to upload files on the server itself, and it worked fine. My first thought was the firewall, which however is off all the time; an HTTP proxy is in use, though.