WebLogic 12.2.1 managed server access.log not updating

I have developed some JAX-RS web services and deployed the WAR file to a managed server on WebLogic 12.2.1. When I call a web service, either through a client program or via a web browser, nothing gets written to E:\MLM\MyDomain\servers\MyAppSrv01\logs\access.log. The file stays empty all the time. When the next day comes (at 12:00 am), the file rolls over to access.logNNNNN (e.g. access.log00004), and only then do some of the previous day's GET and POST calls appear in access.logNNNNN. The strange thing is that only some of the web service calls appear there, even though I make many calls throughout the testing. What could be the problem?
Thanks in advance.

You are not seeing access log entries at run time because of the buffer size defined for the log. To reduce I/O, WebLogic writes log entries to a buffer first and flushes them to the access.log file only when the buffer limit is reached.
Log Buffer Size
The maximum size (in kilobytes) of the buffer that stores HTTP requests. When the buffer reaches this size, the server writes the data to the HTTP log file. Use the LogFileFlushSecs property to determine the frequency with which the server checks the size of the buffer.
You can set this value to 0 to have entries written to the log at run time.
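
If you want to verify what the buffer is currently set to, the value is exposed through the WebServerLog MBean over JMX. Below is a minimal read-only sketch, assuming standard WebLogic JMX connectivity; the host, port, credentials, and MyAppSrv01 names are placeholders, and the exact ObjectName pattern may differ in your domain (to change the value itself, use the console or an edit session):

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class ShowAccessLogBuffer {
    public static void main(String[] args) throws Exception {
        // The domain runtime MBean server gives read-only access to config MBeans.
        JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                "/jndi/weblogic.management.mbeanservers.domainruntime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");   // placeholder
        env.put(Context.SECURITY_CREDENTIALS, "password"); // placeholder
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        try (JMXConnector jmx = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection conn = jmx.getMBeanServerConnection();
            // Assumed ObjectName pattern; browse the com.bea tree if it differs.
            ObjectName log = new ObjectName("com.bea:Name=MyAppSrv01,"
                    + "Type=WebServerLog,Server=MyAppSrv01,WebServer=MyAppSrv01");
            System.out.println("BufferSizeKB = "
                    + conn.getAttribute(log, "BufferSizeKB"));
        }
    }
}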

Related

Issue when playing DashCast live stream

I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, I get only a black screen, without any error message during capture. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here given the actual difference!) between the UTC time indicated in the availabilityStartTime attribute of the MPD and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live content, because for on-demand content all segments are assumed to be available at all times.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may differ from the system time on the server, for instance if they use different NTP servers, so system time is not reliable. MP4Client therefore tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set. See for example this code. If this header is not set, it looks at the HTTP "Date" header, even though it is not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and rely on its own system time. Since you are running the client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that does not work, open an issue on GPAC's GitHub, providing as much information as possible, in particular the output of MP4Box -version.

OOM Exception with MTOM client

I am working on transferring large files and finally ended up with an MTOM implementation. We created an MTOM-enabled web service and client, and tested the client as a plain Java program; we were able to send a 1 GB file successfully. The main point here is that the heap on the client side never grew beyond about 70 MB.
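For reference, a streaming MTOM client along those lines is typically wired up like the following. This is a minimal sketch: FileUploadService/FileUploadPort stand in for your generated stubs, the chunk size is arbitrary, and JAXWSProperties is specific to the JAX-WS reference implementation, so it may behave differently under WebLogic's own runtime:

import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.soap.MTOMFeature;
import com.sun.xml.ws.developer.JAXWSProperties;

public class UploadClient {
    public static void main(String[] args) {
        // FileUploadService/FileUploadPort are placeholders for generated stubs.
        FileUploadService service = new FileUploadService();
        FileUploadPort port = service.getFileUploadPort(new MTOMFeature());
        // Chunked transfer keeps the runtime from buffering the whole
        // message in memory before sending it.
        ((BindingProvider) port).getRequestContext()
                .put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192);
        port.uploadFile(new DataHandler(new FileDataSource("E:/files/big.bin")));
    }
}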
But when I initiate the same call from within the WebLogic container (i.e., from a web client), we end up with the OutOfMemoryError below.
at weblogic.utils.io.UnsyncByteArrayOutputStream.resizeBuffer(UnsyncByteArrayOutputStream.java:59)
at weblogic.utils.io.UnsyncByteArrayOutputStream.write(UnsyncByteArrayOutputStream.java:89)
at javax.activation.DataHandler.writeTo(DataHandler.java:293)
at com.sun.xml.ws.encoding.MtomCodec$ByteArrayBuffer.write(MtomCodec.java:196)
at com.sun.xml.ws.encoding.MtomCodec.encode(MtomCodec.java:163)
at com.sun.xml.ws.encoding.SOAPBindingCodec.encode(SOAPBindingCodec.java:258)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:142)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:86)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:598)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:557)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:542)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:439)
at com.sun.xml.ws.client.Stub.process(Stub.java:248)
at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:135)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:109)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89)
at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:118)
at $Proxy101.uploadFile(Unknown Source)
Does anyone have any idea?
UPDATE: it seems the MTOM settings are not effective when we run the program in the WebLogic container! But I am still not able to find the solution.
UPDATE 2: it seems WebLogic does not support streaming! I will upgrade the WebLogic version and update the ticket; until then, wish me luck.
Add this additional Java/JVM option in setDomainEnv.sh:
EXTRA_JAVA_PROPERTIES="-DUseSunHttpHandler=true ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES
This switches from the WebLogic-specific HTTP handler (weblogic.net.http.HttpURLConnection) to Sun's HTTP handler.
This solved my issue.
Refer:
Changing HttpURLConnection in running jvm
http://atgtipsandtweaks.blogspot.com/2011/11/weblogicjava-httphandler-issues.html
Thanks!
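
A per-connection variant that is sometimes cited: if you only need the JDK handler for URLs you open yourself (it will not affect URLs created inside the JAX-WS runtime), you can pass the handler explicitly when constructing the URL. This relies on a JDK-internal class, so treat it as a sketch rather than a supported API:

import java.net.HttpURLConnection;
import java.net.URL;

public class SunHandlerPerUrl {
    public static void main(String[] args) throws Exception {
        // Passing the JDK handler explicitly bypasses WebLogic's
        // weblogic.net.http.Handler for this one URL only.
        URL url = new URL(null, "http://example.com/service", // placeholder URL
                new sun.net.www.protocol.http.Handler());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}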

PUT Stream 0 Bytes

I am using Windows Explorer to test the WebDAV implementation I am adapting to our system. The implementation is using IIS Express and is launched by Visual Studio 2013. I turned off Windows Explorer's requirement for SSL with WebDAV so I can test basic authentication (which works).
The problem I am having is with the Write method of the DavFile implementation. I connect to the web folder, navigate to a sub folder, then attempt to copy a JPG file from a folder on my computer's hard drive, into the WebDAV sub folder (using Windows Explorer).
The attempt to copy up a file (854 KB) fails. When I set a breakpoint, I notice that the "segment" stream (one of the input parameters of the "write" method) shows a length of 0 (zero) bytes.
Any tips on how to debug this problem? What is the most likely cause of 0 byte in the stream?
Here are some ideas about how to understand what is going wrong:
Examine the server log for exceptions. By default it is called WebDAVLog.txt and is located in the \App_Data\WebDAV\Logs\ folder. Are there any exceptions in it? Make sure all requests were successful.
Examine the WebDAV requests with Fiddler or any other debugging proxy. While all requests that reach the WebDAV server engine are logged, a request that fails before hitting the engine will not show up in the log; usually this happens when the request fails during the authentication stage.
Note that to capture requests with Fiddler on 'localhost' you must use 'localhost.fiddler' instead of 'localhost' when connecting to the server, for example: http://localhost.fiddler:1234.
Exclude any client-side issues. Finally, there could be issues with the client software you are using, including the Microsoft mini-redirector. Try to access the server from another machine. To get an idea of whether the problem is on the client or the server side, also try to reproduce the issue on ajaxbrowser.com.
You can post part of WebDAVLog.txt or the Fiddler log here, or send it to IT Hit; it may give an idea of what is wrong.
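
If you want to take Windows Explorer out of the picture entirely, a raw PUT from a small standalone client is a quick way to see whether the empty stream comes from the mini-redirector or from the server. A minimal sketch, assuming basic authentication; the URL, credentials, and file path are placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class WebDavPutTest {
    public static void main(String[] args) throws Exception {
        byte[] body = Files.readAllBytes(Paths.get("C:/temp/test.jpg"));
        HttpURLConnection c = (HttpURLConnection)
                new URL("http://localhost:1234/webdav/sub/test.jpg").openConnection();
        c.setRequestMethod("PUT");
        c.setDoOutput(true);
        // Send a known, non-zero Content-Length so the server sees a real body.
        c.setFixedLengthStreamingMode(body.length);
        c.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("user:password".getBytes("UTF-8")));
        try (OutputStream out = c.getOutputStream()) {
            out.write(body);
        }
        System.out.println("HTTP " + c.getResponseCode() + " " + c.getResponseMessage());
    }
}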

RUN@Cloud consistently throws me out during a heavy operation

I'm using a large app instance to run a basic java web application (GWT + Spring). There's an expensive operation within my application (report) which takes a long time to execute.
I've tried running it with the cloudbees SDK on my local machine with similar settings as it would be on the cloud and it seems to function just fine. It runs in about 3-4 minutes.
On the cloud, it seems to be taking longer. The problem isn't the fact that it takes long. What happens is that CloudBees terminates the session after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report which doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't the problem either.
What could possibly be going wrong? Is it something to do with cloudbees?
This may be due to proxy buffering of your request in the routing layer (reverse proxy), so it most likely isn't a session timeout but the HTTP connection being cut.
You can set proxyBuffering=false via the bees CLI (e.g. when you deploy the app); this will ensure longer-running connections can work.
Ideally, however, you would change the app slightly to return a token to the browser right away and have the browser poll for completion status, because even when such a long-lived connection works, holding it open over the internet can give a worse experience than it does locally.
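
The poll-for-completion pattern can be quite small. A minimal sketch, assuming the report can run on a background thread inside the same app (the servlet/GWT-RPC wiring that would call start() and poll() is omitted):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReportJobs {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);
    private static final Map<String, Future<byte[]>> JOBS = new ConcurrentHashMap<>();

    // Kick off the long-running report and hand the caller a token immediately.
    public static String start(Callable<byte[]> reportTask) {
        String token = UUID.randomUUID().toString();
        JOBS.put(token, POOL.submit(reportTask));
        return token;
    }

    // Poll from the browser: null while still running, the result once done.
    public static byte[] poll(String token) throws Exception {
        Future<byte[]> f = JOBS.get(token);
        return (f != null && f.isDone()) ? f.get() : null;
    }
}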

WCF receive timeout

When attempting to connect/communicate with my service, I have to wait almost exactly 20 seconds each time before the exception is fired. Since this is all going to be running on a local network, I would like to decrease that timeout period to 5 seconds. I tried decreasing the receiveTimeout on my client, but it didn't work. I looked all over my code for a 20-second timeout being set, but couldn't find any. What should I be changing?
There are different timeout settings (see http://msdn.microsoft.com/en-us/library/ms731078.aspx). They can be set, for example, in a config file (web.config or app.config); see http://msdn.microsoft.com/en-us/library/ms731343.aspx for an example. Under http://msdn.microsoft.com/en-us/library/ms731399.aspx you can choose the binding you use and set the corresponding settings.
UPDATED: The timeout is probably set at the TCP level. Try reducing the TcpMaxConnectRetransmissions (default value 2) or TcpInitialRTT (default value 3; on NT 4.0 the parameter is named InitialRTT) registry parameters, reboot your computer, and run your experiments one more time. You can read about the effect of the 21-second delay at http://support.microsoft.com/kb/223450, http://support.microsoft.com/kb/175523, http://support.microsoft.com/kb/170359 or http://www.boyce.us/windows/tipcontent.asp?ID=189. A description of the TCP/IP default configuration values can be found at http://support.microsoft.com/kb/314053 (for Windows XP) and http://technet.microsoft.com/en-us/library/cc739819(WS.10).aspx (for Windows Server 2003 with SP2).
What you may actually be seeing is the cold start of your web app. A Service Not Found exception would fire back pretty quickly unless you had hit the service hard enough to queue requests beyond what WCF was configured to handle.
However, if your website was unloaded (AppDomain and worker process), it could take 20 seconds just to reach the code that builds the channel to your service. So the real cause may be masked.
If your website and service are in different application pools, this is magnified because IIS has to cold-start the website and then cold-start the service, in succession rather than simultaneously.
To somewhat alleviate this you can use a keepalive/ping service: something that periodically hits the URL to keep the AppDomain in memory and the worker process alive (if not shared). By default IIS 6 shuts the worker process down after 20 minutes of inactivity, so when the first request comes in, http.sys starts a new worker process, which loads the framework, which loads your app, which starts the pipeline, which executes your code, which delivers to your user. :)
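
A keepalive pinger can be as simple as the sketch below; the URL is a placeholder, and the 10-minute interval is chosen to stay under the default 20-minute idle timeout:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class KeepAlive {
    public static void main(String[] args) {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                HttpURLConnection c = (HttpURLConnection)
                        new URL("http://yourserver/yourservice/ping").openConnection();
                c.setConnectTimeout(5000);
                c.getResponseCode(); // any response keeps the worker process warm
                c.disconnect();
            } catch (Exception ignored) {
                // A failed ping is fine; the next run will try again.
            }
        }, 0, 10, TimeUnit.MINUTES);
    }
}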