How to prevent stdout.out in WebLogic from growing heavily in size (Windows) - weblogic

I have deployed a system integrated with WebLogic, but I am facing a problem: WebLogic keeps growing the stdout.out file heavily (by gigabytes per week), which causes the system to load more and more slowly.
Is there any way to prevent it from growing so heavily, or to redirect it into a .log file?
Thanks a lot.

As David Herget says above, using the WebLogic Scripting Tool (WLST) to redirect StdOut and StdErr did not actually work for me either; I also had to do so through the web console (even though the settings appear to be set on the console) and restart the relevant JVMs.
I can't reply to David's comment above due to being a newbie. [Edited since for clarity]

I'm not totally sure I fully understand your question.
Are you talking about the {server_name}.out file located in the {Domain_Path}/servers/{server_name}/logs ?
If so, I've never found any way to rotate those logs automatically, so I run a script each day to rotate them (basically copying the file to another name, zipping it and echoing a NULL into the original file... erasing the older archives after).
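For what it's worth, here is a minimal sketch of such a daily rotation script in Python; the path and retention count are assumptions to adjust for your own domain:

import os
import shutil
import time
import zipfile

# Assumed location of the server .out file -- adjust for your domain.
LOG = r"C:\domains\mydomain\servers\myserver\logs\myserver.out"
KEEP = 7  # number of zipped archives to keep

def rotate():
    log_dir = os.path.dirname(LOG)
    base = os.path.basename(LOG)
    rotated = os.path.join(log_dir, "%s.%s" % (base, time.strftime("%Y%m%d")))

    # Copy the live file aside, then truncate the original in place
    # (the "echo a NULL into it" step) so the server keeps writing
    # to the same open file handle.
    shutil.copy2(LOG, rotated)
    open(LOG, "w").close()

    # Zip the copy and drop the uncompressed rotation.
    with zipfile.ZipFile(rotated + ".zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(rotated, os.path.basename(rotated))
    os.remove(rotated)

    # Erase the oldest archives beyond the retention limit.
    archives = sorted(f for f in os.listdir(log_dir)
                      if f.startswith(base) and f.endswith(".zip"))
    for old in archives[:-KEEP]:
        os.remove(os.path.join(log_dir, old))

if __name__ == "__main__":
    rotate()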
If you are talking about redirecting StdOut to the logs though, that can be done within the console for each server in the logging tab by checking "Redirect stdout logging enabled". Configuration to rotate those logs can also be done within that tab.
On that note, StdErr can also be redirected, but not from the console (in WL9). You have to set "RedirectStderrToServerLogEnabled" to true in the MBean tree via WLST (it's located at /Servers/{server_name}/Log/{server_name}).
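For example, a minimal WLST sketch of that change (the connect URL, credentials and server name are placeholders); a restart of the server may still be needed, as noted above:

java weblogic.WLST
connect('username','password','t3://localhost:7001')
edit()
startEdit()
cd('/Servers/myserver/Log/myserver')
set('RedirectStderrToServerLogEnabled','true')
save()
activate()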
I know the question was asked a long time ago, but I hope it helps nonetheless.

WebLogic provides log file rotation based on size or time interval.
You can try rotating the log files based on size. You would need to configure the log rotation policy from the admin console; please refer to the link below for further details.
http://docs.oracle.com/cd/E12840_01/wls/docs103/ConsoleHelp/taskhelp/logging/RotateLogFiles.html
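If you would rather script the policy than click through the console, here is a minimal WLST sketch of a size-based rotation (the server name and values are examples):

connect('username','password','t3://localhost:7001')
edit()
startEdit()
cd('/Servers/myserver/Log/myserver')
set('RotationType','bySize')
# FileMinSize is in kilobytes, so 5000 rotates at roughly 5 MB
set('FileMinSize',5000)
set('NumberOfFilesLimited','true')
set('FileCount',10)
save()
activate()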
If you want to rotate the log files on demand, you can use the WLST script below.
C:\>java weblogic.WLST
#connect WLST to an Administration Server
wls:/offline> connect('username','password')
#navigate to the ServerRuntime MBean hierarchy
wls:/mydomain/serverConfig> serverRuntime()
wls:/mydomain/serverRuntime> ls()
#navigate to the server LogRuntimeMBean
wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
-r-- Name myserver
-r-- Type LogRuntime
-r-x forceLogRotation java.lang.Void :
#force the immediate rotation of the server log file
wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
wls:/mydomain/serverRuntime/LogRuntime/myserver>
http://docs.oracle.com/cd/E12840_01/wls/docs103/logging/config_logs.html#wp1001654

Related

Weblogic 10.3.6 generates empty heapdump on OutOfMemoryError

I'm trying to generate a full heap dump from WebLogic 10.3.6 after an OutOfMemoryError generated by a web application deployed on the server.
I've set the following options in the start script:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/heapdump
When the OutOfMemoryError occurs, WebLogic generates an empty hprof file (0 bytes) in the /path/to/heapdump folder, and nothing else happens: the server remains in RUNNING mode, even though it is no longer reachable.
The Java process is still alive, but at 0% CPU.
Even the server.out log seems completely frozen, without any trace of the OutOfMemoryError.
What's wrong with the configuration?
Probably you can use Java Flight Recorder to save events and check which objects are generating the OOM (any profiler should work as well).
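If the server runs on JRockit (common with WebLogic 10.3.6), a flight recording can be started from the command line with jrcmd; the PID, duration and file name below are placeholders:

%JROCKIT_HOME%\bin\jrcmd <pid> start_flightrecording duration=600s filename=C:\temp\oom.jfr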
Been there :( . I remember that at the time we found it somewhat logical: since there was not enough memory for normal operation, the JVM could not automagically find enough memory to create a heap dump either. If memory serves me well, we did two things at the time to debug the memory leak. First, we were "lucky" enough that the problem happened fairly regularly, so close manual monitoring was possible (watching the gc.log for repeated Full GCs and watching the performance tab in the console). Knowing when the onset of the problem was, we did some kill -3 to get the dump manually. We also used jstack {PID} (JDK 1.6 on Linux) with some luck. With those, the devs were able to identify the memory leak. Hope that helps.
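As a rough illustration of that manual monitoring, here is a sketch in Python that tails a gc.log for repeated Full GCs and fires the kill -3 (SIGQUIT) automatically; the log path, PID and threshold are all assumptions:

import os
import signal
import time

GC_LOG = "/path/to/gc.log"   # placeholder gc.log path
PID = 12345                  # placeholder WebLogic server PID
THRESHOLD = 5                # consecutive Full GCs before reacting

def watch():
    seen = 0
    with open(GC_LOG) as f:
        f.seek(0, os.SEEK_END)        # start tailing from the end
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            if "Full GC" in line:
                seen += 1
                if seen >= THRESHOLD:
                    # Same effect as kill -3: the JVM prints a thread
                    # dump to stdout without terminating.
                    os.kill(PID, signal.SIGQUIT)
                    seen = 0
            else:
                seen = 0

if __name__ == "__main__":
    watch()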
Okay, your configuration looks alright. You might want to check whether the user the WebLogic process runs as has the rights to write the heap dump file.
You can take a heap dump with the Java tools:
JAVA_HOME/bin/jmap -dump:format=b,file=path_of_the_file <pid>
OR
%JROCKIT_HOME%\bin\jrcmd <pid> hprofdump filename=path_of_the_file

How to track down long running calls to IIS?

Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?
Here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it's more useful for finding out why a request fails before it hits your application than for finding out why it's running slowly. Nine times out of ten it's simply going to show you that the slowdown is somewhere in your code.
Log Parser
Yes, you can download and analyze the IIS logs. I use Log Parser Lizard to do the analysis - it's a great GUI over Log Parser. Here's a sample of how you might query for requests slower than 1000 ms:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken desc
New Relic
My recommendation - go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up. In 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of info on what's running slow, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, log in and look at traces for that time frame.
When your pro trial ends, you can still get valuable data on the free tier, but it will only keep the last 24 hours. We purchased licenses - expensive, yes, but worth every cent. Why? The time taken to identify root causes dropped by an order of magnitude, we can get proactive by looking at what is number 2, 3 and 4 on the slow requests list and working those before they become big problems, and the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog uses MVC ActionFilters to do the logging. You could also use an HttpModule similar to this post. The nice thing about that approach is that you can compile and implement the module separately from your application, then just drop in the DLL and update web.config to wire up the module. I would be wary of these approaches on a very busy site. Also, getting the right level of detail to fully identify the root cause is challenging.
View Requests
As touched on by Appleman1234, IIS has a little-known feature for looking at the requests that are currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe or the IIS GUI to do it. You will need to install the 'Request Monitor' IIS feature for this to work. This approach is OK for rudimentary narrowing of the problem, but it does not show you what's running slowly in your controller.
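For example, once the Request Monitor feature is installed, something like this lists requests that have been executing for more than 30 seconds (the elapsed value is in milliseconds):

%windir%\system32\inetsrv\appcmd list requests /elapsed:30000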
There are various ways you can do this:
Failed Request Tracing (FRT) – formerly known as Failed Request Event Buffering (FREB) – with a custom failure condition of taking over a certain time to load/run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT, the official documentation is available here, and information on how to capture dumps for a long-running process is available here.
For logging request information in IIS, information about log file analysis is located here.
For information on configuring Fiddler to capture IIS requests, see here.
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules to define the failure rules for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go in the directory you specify and are viewable in a web browser.
For IIS logging
Logging is enabled by default on IIS.
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane click Enable to enable logging if it isn't already.
From IIS Manager for a given site, under IIS, click Logging, then configure as desired and click Apply.
Install Log Parser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add your logs to it; you can then use SQL queries to pull information from the log files.
For Fiddler
You need to change the user that IIS runs as to a user that can launch applications like Fiddler (instead of Network Service), and then launch Fiddler as that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.

Read-only web console access in ActiveMQ

I'm using ActiveMQ 5.10 and would like to create a user that has read-only access through the web console.
Red Hat published this article, mentioning that it's not really read-only due to a bug in ActiveMQ.
According to the bug report AMQ-4567, the bug is fixed as of ActiveMQ 5.9. However, I'm not seeing it work appropriately.
I have tried a number of different configurations, with the most recent being two separate JAAS implementations, one for Jetty and one for ActiveMQ. The relevant property files are excerpted below.
I can mostly log in to the web console using the "system" user. But the guest user doesn't work at all. The application user (appuser) doesn't need access to the web console at all.
My authN/authZ needs are pretty trivial: one admin user, one application account, and one read-only monitoring account.
Is there any good way to get this working with a recent version of ActiveMQ (>= 5.9.0)?
groups.properties
admins=system
users=appuser,admin
guests=guest
users.properties
system={password redacted}
appuser=appuser
guest=guest
jetty-realm.properties
system: MD5:46cf1b5451345f5176cd70713e0c9e07,user,admin
guest: guest,guest
As an aside, I used the Jetty tutorial and the Rundeck instructions to figure out the jetty-realm.properties file and chapter 6 of ActiveMQ in Action to work out the ActiveMQ JAAS.
I was finally able to get what I wanted by deploying the web console to an external Tomcat instance. I assume that when it runs out of process, it can't bypass security and so has to use whatever credentials you provide. In this case, I gave the Tomcat instance the read-only JMX user credentials.
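For anyone trying the same approach: the external console is pointed at the broker through system properties along these lines in the Tomcat startup options (the URLs and credentials below are placeholders):

-Dwebconsole.type=properties
-Dwebconsole.jms.url=tcp://localhost:61616
-Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
-Dwebconsole.jmx.user=monitorRole
-Dwebconsole.jmx.password=secret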
It's not great, as the UI is not security-trimmed: you can still attempt to create new destinations, delete destinations, etc. When you try with a read-only user, you get an error. That gets a "D" for UX, but a "B" for security.

Removing JVM properties from WAS 7 from the filesystem

I was recently modifying some of my server properties in Rational Application Developer to try to increase the memory of my JVM on startup. I forgot to take a backup before doing this, and by adding an incorrect JVM variable, it seems I have broken my server into a non-working state. Whenever I try to start the server to make any configuration changes, the JVM refuses to start because invalid parameters are being passed in.
Is there a way to reset any JVM changes for WebSphere Application Server v7.0 through the filesystem, or a way to do it without the server already running? I have been looking around in the wasProfile directory hoping to stumble onto the file where my settings ultimately live, but have had no luck.
It should be possible to write a wsadmin script to view/adjust the JVM options, but if you're on a non-z/OS platform, the fastest way to get back to a working state is probably to edit PROFILE_HOME/config/cells/CELL/nodes/NODE/servers/SERVER/server.xml; the JVM settings are typically written at the very end.
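For reference, the fragment to look for resembles the following (attribute values here are illustrative, not your actual settings); fix or remove the offending entry in genericJvmArguments and start the server again:

<jvmEntries xmi:id="JavaVirtualMachine_1"
    verboseModeGarbageCollection="false"
    initialHeapSize="256" maximumHeapSize="1024"
    genericJvmArguments="-Dthe.broken.option"/>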

XPages JVM Error: err.PersistenceServiceResourceProvider.Errorwritingtopersistedcontenttor

We are running Domino 8.5.3 and the server log is constantly issuing these errors:
HTTP JVM:
!err.PersistenceServiceResourceProvider.Errorwritingtopersistedcontenttor!
We have not been able to isolate it to a particular page. Eventually, the HTTP task will crash and we need to reboot the server and recompile all the databases on it. We are using the CKEditor to generate the HTML content. Your help would be most appreciated.
We used to get this exact error a lot; it appeared to be caused by inline images uploaded via the CKEditor, as Paul mentioned.
I don't know why, but we fixed it by changing the directory Domino uses for uploads via the xsp property xsp.persistence.dir.xspupload (formerly xsp.upload.directory).
Changing it to something like c:\temp rather than the Windows default made the problem go away. It could have been a coincidence, but something in Windows may have been interfering.
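That is, a single line in xsp.properties along these lines:

xsp.persistence.dir.xspupload=c:\temp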
I haven't seen that error before, but it reads like it might be something to do with asking the server to keep a lot of data available to the XPages between calls to the server - effectively, session and application scope data.