I'm new to Servlet containers and have created a web application using Tomcat 6.0.26. I have 'TODO: log' scattered throughout my code. I see there exists:
myServlet.getServletContext().log()
which appears to write to a file prefixed with 'localhost' in the Tomcat '/logs' directory. I don't need any advanced logging capability, but I'd like a date, time, message and stack trace at minimum. In addition, I've created some classes used by my various servlets that need logging capabilities as well. Do I need to inject a ServletContext into these classes so they can log?
It appears that log4j from Apache is a popular logging package, but I'm not sure if it's worth the trouble of setting it up.
What would be the recommended way of logging for my needs?
You don't want to couple all your business and data access code to the ServletContext (I assume, of course, that your business and DB code is not tightly coupled inside a servlet class, but lives in its own layer of classes without any javax.servlet references). So I wouldn't recommend using ServletContext#log(); it's also very seldom used in real-world applications.
You're right that log4j is popular, even though it has since been succeeded by logback. Setting up log4j doesn't need to be that troublesome. I suggest starting with a properties file, which is easier to understand than an XML file. You can always upgrade to an XML file once you understand what's going on in the log4j configuration.
Create a file named log4j.properties and put it somewhere in the root of the classpath, e.g. /WEB-INF/classes (or, if you're using an IDE, the root of the src folder; it will eventually land in the right place). You can also keep it outside the webapp and add it to the server's runtime classpath by specifying its path in the shared.loader property of Tomcat/conf/catalina.properties. Finally, fill it as follows:
# Set root logger level and appender name.
log4j.rootLogger = TRACE, console
# Specify appenders.
log4j.appender.console = org.apache.log4j.ConsoleAppender
log4j.appender.file = org.apache.log4j.DailyRollingFileAppender
# Configure console appender.
log4j.appender.console.layout = org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern = %d{ABSOLUTE} [%t] %-5p %m%n
# Configure file appender.
log4j.appender.file.File = /webapp/logs/web.log
log4j.appender.file.DatePattern = '.'yyyy-MM-dd
log4j.appender.file.layout = org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern = %d{ABSOLUTE} [%t] %-5p %m%n
This kickoff example by default logs at a level of TRACE. You can change it to DEBUG or INFO like so:
# Set root logger level and appender name.
log4j.rootLogger = INFO, console
This example also uses the console appender by default. It will log to standard output as configured by Tomcat, which by default ends up in the /logs/localhost.yyyy-MM-dd.log file.
You can however change it to use the file appender like so:
# Set root logger level and appender name.
log4j.rootLogger = INFO, file
The ConversionPattern options are documented in detail in the PatternLayout javadoc.
Hope this helps to get you started.
In addition to everything BalusC has mentioned above, in your Java code (servlets/beans/whatever) just import and initialize the Logger:
package my.app;

import org.apache.log4j.Logger;

public class MyServlet {
    // any string works as the logger name, but the owning class is conventional
    private static final Logger logger = Logger.getLogger(MyServlet.class);
}
Then at any logging point you can log at various levels, for example:
if (logger.isDebugEnabled()) {
logger.debug("Some log at this point");
}
...
logger.info("Some info message here");
These will appear in the logs depending on whether you set DEBUG or INFO in log4j.properties.
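Since you mentioned wanting stack traces: log4j prints the full stack trace when you pass the exception as the second argument. A minimal sketch (doSomething() is a hypothetical method):

try {
    doSomething(); // hypothetical call that may throw
} catch (Exception e) {
    // logs date, time, message and the full stack trace of e
    logger.error("Something went wrong while handling the request", e);
}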
Take a look at the examples on http://www.java2s.com/Code/Java/Language-Basics/Examplelog4jConfigurationFile.htm and all the Related examples in the same article
My custom logger plugin is written using osquery-go. When the osquery daemon is auto-loaded with this extension, the logs are received by the custom logger plugin.
But if osqueryd is running as a daemon and the custom logger plugin is run independently, the plugin does not receive the logs from osqueryd.
I implemented the custom logger plugin following https://github.com/osquery/osquery-go#creating-logger-and-config-plugins.
After receiving a log, it simply prints the event.
I built the logger as a .ext binary, changed its owner to root, and gave it the appropriate permissions.
I configured osqueryd to capture file events.
I started the osquery daemon.
I ran the extension: .ext --socket /var/osquery/osquery.em --timeout 3
In /var/log/osquery/osqueryd.INFO I can see that the extension registered with the osquery daemon.
When there is any file activity, I can see the FILE_EVENTS in /var/log/osquery/osqueryd.results.log,
but the same events are not seen in the custom logger plugin, which is also registered with the osquery daemon.
If the osquery daemon is run with the extension auto-loaded, the extension receives the FILE_EVENTS log.
When osqueryd and the extension run as separate processes, why isn't osqueryd redirecting the logs to the extension?
Environment: macOS Monterey. I have added both osquery and the custom logger extension in Security Preferences -> Full Disk Access.
Generally, I would expect this pattern to work... However, I see a couple of things you did not discuss.
Running the extension registers it with osquery. As you point out, it's in the logs. You should be able to confirm this inside osquery with select * from osquery_registry where registry = 'logger';. (Note that you need to use osqueryi --connect to attach to the running osqueryd's socket and see what's registered with it.)
However, just being registered with osquery does not configure osquery to send logs there. You will also need to configure the logger appropriately. Take a look at the CLI flags --logger_plugin and --extensions_require. The former sets the logger to use, and the latter tells osquery to wait for an extension. Otherwise, osquery will try to configure the logger before your extension is in place.
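For example, the daemon's flags file could include something like the following (a sketch; my_extension and my_custom_logger stand for whatever names your extension and its logger plugin actually register under):

# osquery.flags (illustrative)
--extensions_socket=/var/osquery/osquery.em
--extensions_require=my_extension
--logger_plugin=my_custom_logger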
It looks like the CorDapp outputs its console log in the log4j layout. I need it in the Logstash JSON layout. I have already implemented a library that outputs the Logstash JSON layout, and this library works well with Spring Boot or a regular application. However, when it is used with the CorDapp, the Corda log4j layout always takes precedence.
Details on how I am trying to implement this:
I have extended log4j's ConfigurationFactory to create a CustomConfigurationFactory. The main purpose of the CustomConfigurationFactory is to implement the Logstash layout and custom rolling of the log file. A few more pieces of metadata are included with every log statement. It uses org.slf4j.Logger in the background, along with the custom configuration, to log statements in the custom format and implement our custom rolling. This is packaged as an independent library so it can be used across all our applications.
I am using this custom logging library for all logging. It works for the accompanying Spring Boot application that interacts with the Corda nodes; however, the logging on the Corda node itself is still in the default log4j format.
Any suggestions?
Corda uses log4j2 for logging; you could provide your custom format in the log4j configuration files. You can find more details on logging in the Corda documentation: https://docs.corda.net/docs/corda-os/4.6/node-administration.html#logging
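For example, assuming you have a log4j2 configuration file that uses your Logstash JSON layout (the file name below is illustrative), you could point the node's JVM at it via the standard log4j2 system property when starting the node:

java -Dlog4j.configurationFile=logstash-json-log4j2.xml -jar corda.jar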
I have a web service whose domain and port can change, so I want to read the port and domain from a file or database. When this information changes, I update it in the database or file.
Adapter XML:
<domain>${adp.hostname}</domain>
<port>${adp.port}</port>
worklight.properties:
adp.hostname=localhost
adp.port=10080
This works fine, but I'd like to read adp.hostname and adp.port from a file or database.
Something to remember about adapters is that you cannot change any of the properties set in the adapter XML in real time once it is deployed.
Once the adapter is deployed, it is transformed into an object and stored in memory. From that point on, you can no longer interact with its "setup".
The only thing you can do is decide what the values of these properties will be before you deploy the adapter. For example, a different set of properties for QA/TEST/UAT/PROD environments...
To set up external properties in Worklight 6.0 and above, see this documentation topic: Configuring an IBM Worklight project in production by using JNDI environment entries
Specifically for Tomcat in its server.xml:
<Context docBase="app_context_path" path="/app_context_path">
<Environment name="publicWorkLightPort" override="false"
type="java.lang.String" value="9080"/>
</Context>
You change app_context_path to your project's context (project name).
You add an Environment child element for each property you need.
Important to remember: these properties must also exist in worklight.properties; those act as the defaults, and in the example above they will be overridden by the environment entries, which will be used instead.
In the example above you can see that it will replace the default property publicWorkLightPort.
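Applied to the properties in your question, the server.xml entry might look like this (the context path and values here are illustrative):

<Context docBase="myproject" path="/myproject">
    <Environment name="adp.hostname" override="false"
        type="java.lang.String" value="myhost.example.com"/>
    <Environment name="adp.port" override="false"
        type="java.lang.String" value="10443"/>
</Context>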
I have an application that was migrated from GlassFish to WebLogic, and it uses java.util.logging as its logging framework.
The only way I have found to make the logs work is to edit the JVM's logging.properties file and restart the server. This solution is awkward and causes problems, because the log is written to a different file than the standard WebLogic ones, so in a clustered environment we have to look through too many files to find a log entry. Besides, for some reason this does not work on some Windows systems.
Is there a way to keep using standard Java logging to write messages to WebLogic's standard log files? I tried the instructions on this page, but that doesn't work either.
WebLogic Server ships with a JDK logging handler that picks up log messages emitted via the JDK logging framework and directs them into the WebLogic Server logging system.
In logging.properties, add the ServerLoggingHandler to the handlers and set the default logging level for new ServerLoggingHandler instances:
handlers = weblogic.logging.ServerLoggingHandler
weblogic.logging.ServerLoggingHandler.level = ALL
http://docs.oracle.com/cd/E14571_01/web.1111/e13739/logging_services.htm#CHDBBEIJ
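With that in place, plain java.util.logging calls in application code, with no WebLogic-specific imports, should end up in the server log. A minimal sketch (the class name is illustrative):

import java.util.logging.Level;
import java.util.logging.Logger;

public class PaymentService {
    private static final Logger LOG = Logger.getLogger(PaymentService.class.getName());

    public void process() {
        LOG.info("Routed into the WebLogic Server log by ServerLoggingHandler");
        // the Throwable overload includes the stack trace in the log entry
        LOG.log(Level.WARNING, "Something recoverable went wrong", new IllegalStateException("example"));
    }
}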
To direct the JDK logging framework to use the logging.properties file, the standard system property java.util.logging.config.file is used. With WebLogic Server, this is easily accomplished by setting the JAVA_OPTIONS environment variable with the corresponding value.
$ export JAVA_OPTIONS="-Djava.util.logging.config.file=/Users/xxx/Projects/Domains/wls1035/logging.properties"
Some more hints here: http://buttso.blogspot.de/2011/06/using-slf4j-with-weblogic-server.html
I'm troubleshooting a Mapper problem and I'm running into an issue trying to use a Mapper class inside of the Scala/Lift console. Our MetaMappers have their datasource configured through a ConnectionIdentifier that points to a JDBC datasource configured in JNDI. This works great when bootstrapping through Jetty.
When loading the console and running (new bootstrap.liftweb.Boot).boot to initialize, Schemifier.schemify fails because the JNDI configuration is not available.
scala> (new bootstrap.liftweb.Boot).boot
java.lang.NullPointerException: Looking for Connection Identifier ConnectionIdentifier(jdbc/svcHub) but failed to find either a JNDI data source with the name jdbc/svcHub or a lift connection manager with the correct name
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$.newConnection(DB.scala:134)
at net.liftweb.mapper.DB$.getConnection(DB.scala:230)
at net.liftweb.mapper.DB$.use(DB.scala:581)
at net.liftweb.mapper.Schemifier$.schemify(Sche...
Essentially, I'd like to have full MetaMapper functionality from within the console. My question is: What's the best way to bootstrap a Lift app from the console such that the JNDI-based dependencies can also be fulfilled outside of a JNDI-capable web container?
Under an application server, the server will likely provide a JNDI context for you. In a standalone application you must provide a JNDI context yourself. For that you can use a javax.naming.InitialContext.
There is a nice example using Apache's DBCP here: http://commons.apache.org/dbcp/guide/jndi-howto.html. Of course, you will have to adapt the DataSource objects to the implementation you are using.
This will be enough (not very elegant, though) for simple JNDI usage.
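Following the pattern from that DBCP guide, the standalone setup might look roughly like this before you call boot (a sketch, assuming Sun's filesystem JNDI provider (the fscontext jar) and commons-dbcp are on the classpath; the driver class and JDBC URL are placeholders):

import javax.naming.Context;
import javax.naming.InitialContext;
import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.PerUserPoolDataSource;

public class ConsoleJndiSetup {
    public static void main(String[] args) throws Exception {
        // filesystem-backed JNDI provider, as used in the DBCP JNDI howto
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.fscontext.RefFSContextFactory");
        System.setProperty(Context.PROVIDER_URL, "file:///tmp");

        // pooled DataSource wrapping a plain JDBC driver (placeholders below)
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("org.h2.Driver");   // placeholder driver class
        cpds.setUrl("jdbc:h2:mem:test");   // placeholder JDBC URL

        PerUserPoolDataSource ds = new PerUserPoolDataSource();
        ds.setConnectionPoolDataSource(cpds);

        // bind under the name the Lift app looks up
        new InitialContext().rebind("jdbc/svcHub", ds);
    }
}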