Can uploading a file using org.apache.commons.net.ftp wait infinitely?

I am using the Apache Commons Net FTP API (org.apache.commons.net.ftp) to download from and upload to an FTP server. It works fine in normal scenarios.
But problems start when there is latency, or the connection is closed by the server for some reason.
This is where time-outs come in. I found the SO_TIMEOUT parameter, which applies when reading from a socket, so I called ftpClient.setSoTimeout(timeInMillis) to set it, and it worked fine while downloading a file.
What I cannot figure out is how to set a time-out while uploading a file to the FTP server.
Thanks in advance.

Check the following to make sure everything is running fine, then try again:
Check your firewall settings, if any; a firewall might be blocking incoming connections and causing the connection to time out.
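If the firewall is not the culprit, note that (as far as I know) Commons Net has no single "upload timeout" setter: SO_TIMEOUT only bounds blocking reads, so for an upload the timeouts that matter are the connect timeout, the control-socket timeout (which bounds the wait for the final "226 Transfer complete" reply once the data has been sent), and setDataTimeout() for reads on the data connection. A minimal sketch, assuming Commons Net 3.x; the host, credentials, and file names are placeholders:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUploadWithTimeouts {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.setConnectTimeout(10 * 1000);   // max wait for the initial connect
        ftp.setDefaultTimeout(30 * 1000);   // SO_TIMEOUT applied to the control socket at connect time
        ftp.setDataTimeout(30 * 1000);      // SO_TIMEOUT for reads on data-connection sockets
        ftp.connect("ftp.example.com");
        ftp.login("user", "password");
        ftp.setControlKeepAliveTimeout(60); // seconds; sends NOOPs on the control channel during long transfers
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);
        try (InputStream in = new FileInputStream("local.bin")) {
            if (!ftp.storeFile("remote.bin", in)) {
                System.err.println("Upload failed: " + ftp.getReplyString());
            }
        }
        ftp.logout();
        ftp.disconnect();
    }
}

A write that wedges deep inside the TCP send buffer can still block until the OS gives up on the connection, but in practice the control-socket timeout unblocks most stalled uploads, because the client times out waiting for the server's completion reply.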

Related

OOM Exception with MTOM client

I am working on transferring large files and finally ended up with an MTOM implementation. We created an MTOM-enabled web service and client, and tested the client as a plain Java program: we were able to send a 1 GB file successfully. The main point is that the heap on the client side never grew beyond 70 MB.
But when I initiate the same call from the WebLogic container (i.e. as a web client), we end up with the OOM exception below:
at weblogic.utils.io.UnsyncByteArrayOutputStream.resizeBuffer(UnsyncByteArrayOutputStream.java:59)
at weblogic.utils.io.UnsyncByteArrayOutputStream.write(UnsyncByteArrayOutputStream.java:89)
at javax.activation.DataHandler.writeTo(DataHandler.java:293)
at com.sun.xml.ws.encoding.MtomCodec$ByteArrayBuffer.write(MtomCodec.java:196)
at com.sun.xml.ws.encoding.MtomCodec.encode(MtomCodec.java:163)
at com.sun.xml.ws.encoding.SOAPBindingCodec.encode(SOAPBindingCodec.java:258)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:142)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:86)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:598)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:557)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:542)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:439)
at com.sun.xml.ws.client.Stub.process(Stub.java:248)
at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:135)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:109)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89)
at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:118)
at $Proxy101.uploadFile(Unknown Source)
Does anyone have any idea?
UPDATE: It seems the MTOM settings are not effective when we run the program in the WebLogic container! But I am still not able to find the solution.
UPDATE 2: It seems WebLogic does not support streaming! I will upgrade the WebLogic version and update the ticket; until then, wish me luck.
Add this additional Java/JVM option in setDomainEnv.sh:
EXTRA_JAVA_PROPERTIES="-DUseSunHttpHandler=true ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES
This switches from the WebLogic-specific handler (weblogic.net.http.HttpURLConnection) to Sun's HTTP handler.
This solved my issue.
Refer:
Changing HttpURLConnection in running jvm
http://atgtipsandtweaks.blogspot.com/2011/11/weblogicjava-httphandler-issues.html
Thanks!
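As a follow-up, here is a rough sketch of the client-side configuration that goes with this fix: MTOM together with the JAX-WS RI's HTTP client streaming, so the attachment is sent in chunks instead of being buffered in memory. The service and port classes are hypothetical wsimport output; MTOMFeature and the chunk-size property are the actual JAX-WS RI APIs:

import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.soap.MTOMFeature;
import com.sun.xml.ws.developer.JAXWSProperties;

public class MtomStreamingClient {
    public static void main(String[] args) {
        // Hypothetical wsimport-generated service and port for the uploadFile operation
        FileTransferService service = new FileTransferService();
        FileTransferPort port = service.getFileTransferPort(new MTOMFeature());

        // Ask the JAX-WS RI to stream the request body in 8 KB chunks instead of
        // buffering the whole message; this needs an HTTP handler that supports
        // chunked transfer encoding (hence -DUseSunHttpHandler=true).
        ((BindingProvider) port).getRequestContext()
                .put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192);

        port.uploadFile(new DataHandler(new FileDataSource("/path/to/large.bin")));
    }
}

The intent is that the client never buffers the whole attachment, matching the low heap usage seen in the plain-Java test.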

PUT Stream 0 Bytes

I am using Windows Explorer to test the WebDAV implementation I am adapting to our system. The implementation is using IIS Express and is launched by Visual Studio 2013. I turned off Windows Explorer's requirement for SSL with WebDAV so I can test basic authentication (which works).
The problem I am having is with the Write method of the DavFile implementation. I connect to the web folder, navigate to a sub folder, then attempt to copy a JPG file from a folder on my computer's hard drive into the WebDAV sub folder (using Windows Explorer).
The attempt to copy up the file (854 KB) fails. When I set a breakpoint, I notice that the "segment" stream (one of the input parameters of the "write" method) has a length of 0 (zero) bytes.
Any tips on how to debug this problem? What is the most likely cause of the 0 bytes in the stream?
Here are some ideas for tracking down what is going wrong:
Examine the server log for exceptions. By default it is called WebDAVLog.txt and is located in the \App_Data\WebDAV\Logs\ folder. Are there any exceptions in it? Check your server log and make sure all requests were successful.
Examine WebDAV requests with the Fiddler tool or any other debugging proxy. While every request that reaches the WebDAV server engine is logged, a request that fails before hitting the engine will not show up in the log; usually this happens when the request fails during the authentication stage.
Note that to capture requests with Fiddler on localhost you must use 'localhost.fiddler' instead of 'localhost' when connecting to the server, for example: http://localhost.fiddler:1234.
Exclude client-side issues. Finally, there could be issues with the client software you are using, including the Microsoft mini-redirector. Try to access the server from another machine. To see whether the problem is on the client or the server side, also try to reproduce the issue on ajaxbrowser.com.
You can post part of WebDAVLog.txt or the Fiddler log here or send it to IT Hit; it may give an idea of what is wrong.
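To take Windows Explorer (the mini-redirector) out of the picture entirely, you could also issue the PUT yourself and check whether the body arrives intact. A minimal hedged sketch using java.net.HttpURLConnection; the URL, credentials, and file path are placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class WebDavPutTest {
    public static void main(String[] args) throws Exception {
        byte[] body = Files.readAllBytes(Paths.get("C:/temp/test.jpg"));
        URL url = new URL("http://localhost:1234/webdav/subfolder/test.jpg");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("PUT");
        con.setDoOutput(true);
        // Send an explicit Content-Length rather than chunked encoding, since a
        // missing or zero length on the server side is exactly what we are probing.
        con.setFixedLengthStreamingMode(body.length);
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes("UTF-8"));
        con.setRequestProperty("Authorization", "Basic " + auth);
        try (OutputStream out = con.getOutputStream()) {
            out.write(body);
        }
        System.out.println("HTTP " + con.getResponseCode()); // expect 201 Created or 204 No Content
    }
}

If this PUT reaches the Write method with the full length while the Explorer copy still arrives as 0 bytes, the problem is on the client side; if both arrive empty, capture the request in Fiddler as suggested above.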

ORA-29273: HTTP request failed intermittent error using the utl_http package

I'm using the utl_http package to make HTTP GET requests to an IIS site on the same server (local) as Oracle. Sometimes it works and I get the response, but more often than not it hangs for about 15 seconds and then I get this error:
ORA-29273: HTTP request failed ORA-06512: at "SYS.UTL_HTTP", line 1722 ORA-29263: HTTP protocol error
As a test, I've got a small static text file in the IIS site, so this is how I'm testing it:
select utl_http.request('http://domain.com/test.txt') from dual
I get the same problem if I run it in Oracle APEX instead of directly on the database.
The other thing I've tried is creating a package of my own that makes the HTTP request using the long-form utl_http.begin_request() flow instead of the utl_http.request() shortcut. This gives exactly the same problem (works sometimes but mostly errors, with the same error).
The pattern I'm seeing is if I wait a while and then try, it works for the first 2-10 times, and then begins erroring. When it does work, I get the response instantly and when it errors, there is always the delay before the error.
If I request the text file URL (or any other resource in the site) using a remote web browser then I get the correct response every time.
I have tried setting a timeout as below, but it doesn't have any effect. For example, instead of timing out after 3 seconds it continues for 10 or 15 seconds before the error is shown.
UTL_HTTP.set_transfer_timeout(3);
I think I can rule out ACL because it works sometimes.
Does anyone know what might cause this behaviour?
Possible reasons:
-> You may have a problem with your TNS listener.
From a command prompt window, try running TNSPING service_name several times in quick succession and check whether some of the attempts fail.
I once had a similar problem; reconfiguring the TNS listener fixed it.
There should also be an option to specify an IP address instead of a hostname in the TNS listener definition, which sometimes solves this kind of problem.
-> An IIS problem.
Read about the SET_PERSISTENT_CONN_SUPPORT procedure:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_http.htm#i1027673
Using: utl_http.set_persistent_conn_support(true, 30);
Could you be exceeding the limit of concurrent HTTP connections? I vaguely remember running into a similar problem when I forgot to close the HTTP connection.
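To narrow down whether the failures come from IIS (or the local network stack) rather than from utl_http itself, you could replay the same rapid-fire GETs from outside Oracle on the same machine. A minimal hedged sketch; the URL is the test file from the question:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RapidGetTest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://domain.com/test.txt");
        for (int i = 0; i < 50; i++) {
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setConnectTimeout(3000);
            con.setReadTimeout(3000);
            int status = con.getResponseCode();
            try (InputStream in = con.getInputStream()) {
                while (in.read() != -1) {
                    // drain the body so the keep-alive socket can be reused
                }
            }
            System.out.println("request " + i + ": HTTP " + status);
        }
    }
}

If this loop also starts failing after the first handful of requests, the limit is on the IIS side (for example its keep-alive or connection settings); if it never fails, suspicion moves back to how connections are opened and closed inside utl_http.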

RUN@Cloud consistently throws me out during a heavy operation

I'm using a large app instance to run a basic Java web application (GWT + Spring). There's an expensive operation within my application (a report) which takes a long time to execute.
I've tried running it with the CloudBees SDK on my local machine with settings similar to those on the cloud, and it seems to work just fine. It runs in about 3-4 minutes.
On the cloud, it seems to take longer. The problem isn't that it takes long; what happens is that CloudBees terminates the connection after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report which doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't the problem either.
What could possibly be going wrong? Is it something to do with CloudBees?
This may be due to proxy buffering of your request in the routing layer (reverse proxy), so it most likely isn't a session timeout but the HTTP connection getting cut.
You can set proxyBuffering=false via the bees CLI command (e.g. when you deploy the app); this will let longer-running connections work.
Ideally, however, you could change the app slightly to return a token to the browser which it can poll for completion status, since even if the connection can be held open that long, over the internet it may give a worse experience than it does locally.
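A hedged sketch of that token-and-poll idea in plain Java; the report call and class names are placeholders, and the servlet/controller wiring is left out:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReportJobs {
    private static final ExecutorService pool = Executors.newFixedThreadPool(2);
    private static final Map<String, Future<byte[]>> jobs = new ConcurrentHashMap<>();

    // "Start report" endpoint: returns a token immediately instead of
    // holding the HTTP connection open for the whole run.
    public static String start() {
        String token = UUID.randomUUID().toString();
        jobs.put(token, pool.submit(() -> runExpensiveReport())); // hypothetical report call
        return token;
    }

    // "Poll" endpoint: null means still running; the client retries in a few seconds.
    public static byte[] poll(String token) throws Exception {
        Future<byte[]> f = jobs.get(token);
        if (f == null || !f.isDone()) {
            return null;
        }
        jobs.remove(token);
        return f.get();
    }

    private static byte[] runExpensiveReport() {
        return new byte[0]; // placeholder for the 3-4 minute report generation
    }
}

The browser fires the report with one quick request, gets the token back immediately, and polls every few seconds until the result is ready, so no single HTTP connection has to survive the full 3-4 minutes.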

Apache upload fails when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 runs on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100k to our server.
The error in the Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
A few times (not always) I could upload larger files (100k-800k; 20m always failed) in Chrome. In FF4 uploading a file over 100k always fails, and IE8 behaves much like FF4.
It seems the server fails to read the request from the client, so I reset TimeOut in the Apache configuration to the default value (300), which did not help at all.
I do not have the LimitRequestBody directive set and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next; any advice would be appreciated!
Edit:
I just tried using Remote Desktop to upload files directly on the server and it worked fine. My first thought was the firewall, but it is off all the time; an HTTP proxy is applied, though.
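Since the upload works locally on the server but fails from clients, and an HTTP proxy sits in between, one quick check is to run the same upload from a client machine both through and around the proxy. A minimal hedged sketch; the upload URL, proxy address, and file are placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProxyUploadCheck {
    public static void main(String[] args) throws Exception {
        byte[] body = Files.readAllBytes(Paths.get("sample-200k.bin")); // anything over 100k
        Proxy viaProxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxyhost", 8080));
        System.out.println("through proxy: HTTP " + upload(body, viaProxy));
        System.out.println("direct:        HTTP " + upload(body, Proxy.NO_PROXY));
    }

    static int upload(byte[] body, Proxy proxy) throws Exception {
        URL url = new URL("http://ourserver/upload");
        HttpURLConnection con = (HttpURLConnection) url.openConnection(proxy);
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setFixedLengthStreamingMode(body.length); // explicit Content-Length, no chunking
        con.setRequestProperty("Content-Type", "application/octet-stream");
        try (OutputStream out = con.getOutputStream()) {
            out.write(body);
        }
        return con.getResponseCode();
    }
}

If the direct upload succeeds while the proxied one stalls at roughly the same size, the proxy (or something between it and Apache) is dropping the request body, which would match the "Error reading request entity data" in the log.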