How does Mule decide whether an event should be logged in the app logs or in the runtime log?
The runtime log contains everything, even the entries that also appear in the app logs.
Everything that goes to standard output is printed in the Mule runtime log (i.e. mule_ee.log). If an application is configured to use a stdout/console appender, its log entries will therefore end up in mule_ee.log. This is bad practice: applications should use their own log files. The default log4j configuration for applications created in Studio is one log file per app.
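For example, the per-application configuration generated by Studio is a log4j2.xml along the lines of the sketch below (simplified; my-app is a placeholder application name, and the paths and levels are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<Configuration>
    <Appenders>
        <!-- Per-application rolling log file instead of a console/stdout appender -->
        <RollingFile name="file"
                     fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app.log"
                     filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app-%i.log">
            <PatternLayout pattern="%-5p %d [%t] %c: %m%n"/>
            <SizeBasedTriggeringPolicy size="10 MB"/>
            <DefaultRolloverStrategy max="10"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- Route everything to the per-app file; nothing goes to stdout,
             so nothing ends up in mule_ee.log -->
        <Root level="INFO">
            <AppenderRef ref="file"/>
        </Root>
    </Loggers>
</Configuration>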
I have multiple instances of MFP running, but all the instances are writing logs to a single log file. How can I write logs to different locations for the different instances?
Assuming that by "multiple instances of MFP" you are referring to multiple MFP runtimes deployed on the same JVM, it is normal to see all logging appear in the same log (SystemOut.log for WAS, messages.log for WebSphere Liberty, etc.).
This is because MFP is a layer deployed on top of the application server, and all logging from MFP is directed to the standard logging of that JVM. So if you deploy multiple runtime WARs on the same JVM, it is expected that all logging from all the runtimes appears in the same log. This is no different from multiple EARs/WARs deployed on the same application server logging to the same log file.
If you wish to have different logs for different runtimes, deploy them in different JVM instances.
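As a concrete illustration (a sketch, assuming WebSphere Liberty): each Liberty server is its own JVM with its own server.xml, so each instance can write to its own log directory. The path below is a placeholder:

<server description="mfp-instance-1">
    <!-- Each Liberty server (one JVM) keeps its own messages.log;
         pointing logDirectory at a per-instance path keeps the logs separate. -->
    <logging logDirectory="/var/log/mfp-instance-1"
             messageFileName="messages.log"/>
</server>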
I am new to Mule and followed this blog to create a "WebSphere MQ connector" through the generic JMS connector. I am using the Community Edition.
In order to connect to the WebSphere MQ server, the application must run under a specific Windows username. Running the Mule application in Mule Design under that username, I am able to connect and receive messages. However, I am unable to connect to the WebSphere MQ server from the standalone application running on a Windows server. I changed the user on the service that runs Mule to the specific user, but I still cannot get authorization to the WebSphere MQ server.
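For context, the generic JMS connector wiring for WebSphere MQ looks roughly like the sketch below (simplified; the host, port, channel, queue manager, and credentials are placeholders, and the IBM MQ client JARs must be on the application's classpath):

<!-- Sketch of a Mule 3 generic JMS connector backed by the IBM MQ JMS client -->
<spring:beans>
    <spring:bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
        <spring:property name="hostName" value="mq.example.com"/>
        <spring:property name="port" value="1414"/>
        <spring:property name="queueManager" value="QM1"/>
        <spring:property name="channel" value="DEV.APP.SVRCONN"/>
        <!-- transportType 1 = client connection -->
        <spring:property name="transportType" value="1"/>
    </spring:bean>
</spring:beans>

<jms:connector name="WMQ"
               connectionFactory-ref="mqConnectionFactory"
               specification="1.1"
               username="approvedUser"
               password="secret"/>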
Any additional insight would be much appreciated.
I would suggest reviewing the "Getting going without turning off security" article for an introduction to MQ security. This might help get the MQ system correctly configured.
The standalone application runs the Tanuki Software wrapper as the user assigned to the %USERNAME% environment variable in Windows. Even though I updated the Mule service to run as the approved user, the wrapper still picks up the environment variable.
To solve the problem, I updated the wrapper.conf file to include the following:
set.USERNAME=<approvedUsername>
The %USERNAME% environment variable is now set to the approved username, which lets the Mule JMS connector authenticate with the correct username.
I configured Operational Analytics for MobileFirst 7.0.
I configured the JNDI properties as per the IBM documentation and created a client-side log profile in the Operations Console, but it always shows zero data: no client logs or server logs are loaded.
A log receiver adapter has been built and deployed through the Operations Console. The client pushes its logs to the server via WL.Logger.send(). In the client log console and in logcat I can see that the logs are pushed to the server, and in the server log I also see a successful invocation of the logReceiverAdapter call.
In the Operational Analytics JNDI configuration, the queue count and queue size have been set to 1.
This was identified as a defect in the product and will be resolved as part of APAR PI42509: "When using SSL on WebSphere, the analytics data is not received by the Analytics server because of the SSL configuration used."
There is no local workaround.
Continue to follow up on the issue in the PMR (support ticket) you have opened.
I have deployed a multi-node application to Cloud Foundry, with everything connected via a shared RabbitMQ service. The application consists of:
A Grails app.
3 standalone spring-integration-amqp Java apps.
All communicate with RabbitMQ via spring-integration-amqp, using cloud:rabbit-connection-factory.
All of the applications have the same RabbitMQ service bound.
All of the applications start correctly and seem to connect to RabbitMQ fine.
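The messaging wiring in each app looks roughly like the sketch below (simplified; the queue name is a placeholder and schema versions may differ):

<!-- Sketch: connection factory created from the bound Cloud Foundry RabbitMQ
     service, plus a spring-integration-amqp consumer reading from a queue. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cloud="http://schema.cloudfoundry.org/spring"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-amqp="http://www.springframework.org/schema/integration/amqp"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://schema.cloudfoundry.org/spring http://schema.cloudfoundry.org/spring/cloudfoundry-spring-0.8.xsd
           http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/integration/amqp http://www.springframework.org/schema/integration/amqp/spring-integration-amqp.xsd">

    <!-- Connection factory built from the bound RabbitMQ service credentials -->
    <cloud:rabbit-connection-factory id="rabbitConnectionFactory"/>

    <!-- Consumer side: pull messages from a (placeholder) queue onto a channel -->
    <int:channel id="requests"/>
    <int-amqp:inbound-channel-adapter channel="requests"
                                      queue-names="example.requests"
                                      connection-factory="rabbitConnectionFactory"/>
</beans>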
The behaviour I am seeing is that the Grails app times out while waiting for a response from one of the standalone apps. This matches what happens locally when I start only the Grails app and not the message consumers.
What I am struggling with is how to debug where the problem is.
I can't see any errors in the logs.
It doesn't seem possible to tunnel to the RabbitMQ service in order to query the state of the queues, etc.
Any ideas?
Are you pushing to cloudfoundry.com or to Micro Cloud Foundry?
To answer your questions:
Have you tried using "vmc file"? For Java web applications, Cloud Foundry uses Tomcat as the app server, and you can use that command to navigate to tomcat/logs and have a look. Maybe some stdout was redirected there.
Do you have Caldecott installed? It lets you tunnel to bound services such as RabbitMQ. If you have not read the doc, here it is: http://docs.cloudfoundry.com/tools/vmc/caldecott.html
I have a WebLogic cluster on which I've deployed numerous topics and the applications that use them. My applications uniformly show a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on Server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to Server #1 and has a cluster identified. Messages sent to either server flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below:
<start-mdbs-with-application>false</start-mdbs-with-application>
In weblogic-application.xml, setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
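For context, a minimal weblogic-application.xml (packaged in the EAR's META-INF) carrying this setting might look like the sketch below; verify the element nesting against the schema for your WebLogic release:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: defer MDB startup until the server's listen port is open -->
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
    <ejb>
        <start-mdbs-with-application>false</start-mdbs-with-application>
    </ejb>
</weblogic-application>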
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
Although that document is for WebLogic 8.1, this is also applicable to later versions, including 12c.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are genuinely in a Warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.