I have a GemFire function that is meant to be deployed in a GemFire cluster. What is the way to write logs from the function so that they go to the server log file?
My GemFire version is 8.2.0.
You should use either the LogService.getLogger(String) or LogService.getLogger() method to get a Logger instance. The latter is a convenience method and sets the name of the returned Logger to the name of the calling class. The Logger returned by these methods is a log4j Logger.
I actually figured it out. As of GemFire 8.1.0 the logging library has changed; it now uses Apache Log4j 2. Logs written through this logger go to the server log file.
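For reference, a minimal sketch of a function logging this way. Assumptions: in GemFire 8.x the LogService class lives in the internal package com.gemstone.gemfire.internal.logging (adjust to your distribution), and AuditFunction is just a placeholder name:

import com.gemstone.gemfire.cache.execute.Function;
import com.gemstone.gemfire.cache.execute.FunctionContext;
import com.gemstone.gemfire.internal.logging.LogService;
import org.apache.logging.log4j.Logger;

public class AuditFunction implements Function {
    // The no-arg overload names the Logger after the calling class
    private static final Logger logger = LogService.getLogger();

    public void execute(FunctionContext context) {
        // This ends up in the log file of the server member running the function
        logger.info("Function invoked with args: {}", context.getArguments());
        context.getResultSender().lastResult(Boolean.TRUE);
    }

    public String getId() { return "AuditFunction"; }
    public boolean hasResult() { return true; }
    public boolean optimizeForWrite() { return false; }
    public boolean isHA() { return false; }
}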
A custom logger plugin is written using osquery-go. When the osquery daemon auto-loads this extension, the logs are received by the custom logger plugin.
But if osqueryd is running as a daemon and the custom logger plugin is run independently, it does not receive the logs from osqueryd.
Implemented a custom logger plugin using osquery-go (https://github.com/osquery/osquery-go#creating-logger-and-config-plugins). After receiving a log, it just prints the event.
1. Built this logger as a .ext binary, changed the owner to root, and gave it appropriate permissions.
2. Configured osqueryd to capture file events.
3. Started the osquery daemon.
4. Ran the .ext with --socket /var/osquery/osquery.em --timeout 3.
5. In /var/log/osquery/osqueryd.INFO I can see that it registered with the osquery daemon.
6. When any file activity occurs, I can see the FILE_EVENTS in /var/log/osquery/osqueryd.results.log, but the same result is not seen in the custom logger plugin, which is also registered with the osquery daemon.
If the osquery daemon is run with the extension auto-loaded, the extension receives the FILE_EVENTS log.
When osqueryd and the extension run as separate processes, why isn't osqueryd redirecting the logs to the extension?
Environment: macOS Monterey. I have added both osquery and the custom logger extension under Security Preferences -> Full Disk Access.
Generally, I would expect this pattern to work... However, I see a couple of things you did not discuss.
Running the extension registers it with osquery. As you point out, it's in the logs. You should be able to confirm this inside osquery, with select * from osquery_registry where registry = 'logger';. (Note that you need to use osqueryi --connect to connect to the socket of the osqueryd to see what's registered with it)
However, just being registered with osquery does not configure osquery to send logs there. You will also need to configure the logger appropriately. Take a look at the CLI flags --logger_plugin and --extensions_require. The former sets the logger to use, and the latter tells osquery to wait for an extension. Otherwise, osquery will try to configure the logger before your extension is in place.
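As an illustration, the daemon invocation might look like the following, where custom_logger is a placeholder for whatever name your extension registers under:

$ osqueryd --extensions_socket=/var/osquery/osquery.em \
    --extensions_require=custom_logger \
    --logger_plugin=custom_logger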
I am curious how I should force JBoss 7.1.1 to use Apache's Log4j 2 instead of org.jboss.as.logging, because I would like to do a performance comparison of Log4j 2 and jboss.as.logging (I have given up on Log4j because it seems to have similar performance to jboss.as.logging).
Log4j2 Official website: http://logging.apache.org/log4j/2.x/
I suppose I need to create a new module for the Log4j 2 library in JBoss Modules.
Then what? Do I need any changes in standalone.xml? Any changes in jboss-deployment-structure.xml?
How can I tell JBoss where to search for the Log4j 2 library?
Thanks for any suggestions. I am a bit stuck here.
JBoss Logging is just a logging facade, similar to SLF4J. JBoss AS 7 uses JBoss Log Manager as its log manager.
Without changing some code and removing the logging subsystem, you cannot use another log manager like Log4j 2 as the server-wide log manager. You'd have to make some changes there and remove the STDIO stuff. It's probably not worth the effort TBH.
JBoss Log Manager is fairly fast. You could try using an async-handler to see if that helps performance at all. It probably won't make a significant difference though if you're just using a standard console-handler and file-handler of some sort.
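If you want to try that, here is a rough sketch of the standalone.xml change: wrap the existing FILE handler in an async-handler and point the root logger at it. Element names follow the AS 7.1 logging schema; double-check them against your version:

<async-handler name="ASYNC">
    <level name="INFO"/>
    <queue-length value="1024"/>
    <overflow-action value="BLOCK"/>
    <subhandlers>
        <handler name="FILE"/>
    </subhandlers>
</async-handler>
<root-logger>
    <level name="INFO"/>
    <handlers>
        <handler name="ASYNC"/>
    </handlers>
</root-logger>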
Some rough measurements comparing JBoss default logging and Log4j 2 (configured natively, therefore skipping the JBoss Log Manager), logging with 10 threads at the same time:
JBoss default logging, async rolling file handler -> 200,000 logs/minute
Log4j 2 async file appender -> 5,000,000 logs/minute
These are really only rough results; the second case uses a different logger and does not go through the Log Manager, so the two setups ought to be measured independently ... maybe I will do that too. Nevertheless, the bottom line is that the default logging is damn slow.
Is it possible to enable logging in Apache ACE? If yes, how?
In the source code, I can see that the LogService is used to write messages to the log. But I am not able to locate the logs when I start the ACE dev server.
The LogService is a standard compendium service, and you can use any implementation to actually record the log statements. We use the one from Apache Felix, and there are shell commands to actually retrieve log statements (hint, the command is called "log"). This implementation does not write them to disk, though. Based on the specification, it would be easy to do that yourself: a LogReaderService exists to read from, and you can register yourself as a LogListener.
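For illustration, a small sketch of that: a bundle activator that subscribes to the LogReaderService and prints each entry (swapping the println for a file writer would persist entries to disk):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.log.LogEntry;
import org.osgi.service.log.LogListener;
import org.osgi.service.log.LogReaderService;

public class LogToDiskActivator implements BundleActivator, LogListener {
    public void start(BundleContext context) {
        ServiceReference ref = context.getServiceReference(LogReaderService.class.getName());
        if (ref != null) {
            LogReaderService reader = (LogReaderService) context.getService(ref);
            // From now on we are called back for every new LogEntry
            reader.addLogListener(this);
        }
    }

    public void logged(LogEntry entry) {
        // Replace with a FileWriter (or similar) to write entries to disk
        System.out.println(entry.getTime() + " [" + entry.getLevel() + "] " + entry.getMessage());
    }

    public void stop(BundleContext context) {
    }
}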
I have an application that was migrated from GlassFish to WebLogic, and it uses java.util.logging as its logging framework.
The only way I have found to make the logs work is to edit the JVM's logging.properties file and restart the server. This solution is awkward and causes problems because the log is written to a different file than the standard WebLogic ones, so we have to look at too many files for a log in a clustered environment. Besides, for some reason this does not work on some Windows systems.
Is there a way to keep using standard Java logging to write messages to WebLogic's standard log files? I tried the instructions on this page but they don't work either.
WebLogic Server ships with a JDK logging handler which will pick up log messages emitted from the JDK logging framework and direct them into the WebLogic Server logging system.
Set the default logging level for new ServerLoggingHandler instances in logging.properties, and add the ServerLoggingHandler to the handlers list:
handlers = weblogic.logging.ServerLoggingHandler
weblogic.logging.ServerLoggingHandler.level = ALL
http://docs.oracle.com/cd/E14571_01/web.1111/e13739/logging_services.htm#CHDBBEIJ
To direct the JDK logging framework to use the logging.properties file, the standard system property java.util.logging.config.file is used. With WebLogic Server, this can easily be accomplished by setting the JAVA_OPTIONS environment variable with the corresponding value:
$ export JAVA_OPTIONS="-Djava.util.logging.config.file=/Users/xxx/Projects/Domains/wls1035/logging.properties"
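After that, any code logging through the JDK framework should show up in the server log files. A trivial sanity check, assuming a class deployed to the server (names here are placeholders):

import java.util.logging.Logger;

public class LoggingCheck {
    private static final Logger LOG = Logger.getLogger(LoggingCheck.class.getName());

    public static void ping() {
        // Routed into the WebLogic server log via ServerLoggingHandler
        LOG.info("java.util.logging is reaching the WebLogic log files");
    }
}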
Some more hints here: http://buttso.blogspot.de/2011/06/using-slf4j-with-weblogic-server.html
I'm troubleshooting a Mapper problem and I'm running into an issue trying to use a Mapper class inside of the Scala/Lift console. Our MetaMappers have their datasource configured through a ConnectionIdentifier that points to a JDBC datasource configured in JNDI. This works great when bootstrapping through Jetty.
When loading the console and running (new bootstrap.liftweb.Boot).boot to initialize, Schemifier.schemify fails because the JNDI configuration is not available.
scala> (new bootstrap.liftweb.Boot).boot
java.lang.NullPointerException: Looking for Connection Identifier ConnectionIdentifier(jdbc/svcHub) but failed to find either a JNDI data source with the name jdbc/svcHub or a lift connection manager with the correct name
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$.newConnection(DB.scala:134)
at net.liftweb.mapper.DB$.getConnection(DB.scala:230)
at net.liftweb.mapper.DB$.use(DB.scala:581)
at net.liftweb.mapper.Schemifier$.schemify(Sche...
Essentially, I'd like to have full MetaMapper functionality from within the console. My question is: What's the best way to bootstrap a Lift app from the console such that the JNDI-based dependencies can also be fulfilled outside of a JNDI-capable web container?
Under an application server, it's likely that the server will provide a JNDI context for you. In a standalone application you must provide a JNDI context yourself. For that you can use a javax.naming.InitialContext.
There is a nice example using Apache's DBCP here: http://commons.apache.org/dbcp/guide/jndi-howto.html. Of course, you will have to adjust the DataSource objects to the implementation you are using.
This will be enough (not very elegant, though) for simple JNDI usage.
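A minimal sketch along the lines of that how-to, assuming commons-dbcp and a standalone JNDI provider (here Sun's filesystem provider from fscontext.jar) are on the console classpath; the driver class, JDBC URL, and directory are placeholders, while jdbc/svcHub matches the name from the error above:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

public class ConsoleJndiBootstrap {
    public static void main(String[] args) throws Exception {
        // Stand up a JNDI context ourselves; no web container is around to provide one
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        env.put(Context.PROVIDER_URL, "file:///tmp/jndi");
        Context ctx = new InitialContext(env);

        // Pooled DataSource, as in the DBCP JNDI how-to
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("org.h2.Driver");      // placeholder driver
        cpds.setUrl("jdbc:h2:mem:svcHub");    // placeholder URL
        SharedPoolDataSource ds = new SharedPoolDataSource();
        ds.setConnectionPoolDataSource(cpds);

        // Bind under the name the Lift app looks up
        ctx.rebind("jdbc/svcHub", ds);
    }
}

Run the equivalent of this in the console before calling (new bootstrap.liftweb.Boot).boot, and the jdbc/svcHub lookup that Schemifier performs should then succeed.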