Change the log format layout for Cordapp - kotlin

It looks like the CorDapp writes its console log in the default log4j layout. I need it in the Logstash JSON layout. I have already implemented a library that outputs the Logstash JSON layout, and it works well with Spring Boot or regular applications. However, when I use it with the CorDapp, the CorDapp's log4j layout always takes precedence.
Details on how I am trying to implement this:
I have extended log4j's ConfigurationFactory to create a CustomConfigurationFactory. The main purpose of the CustomConfigurationFactory is to implement the Logstash layout and custom rolling of the log file. There is some additional metadata that is included with every log statement. It uses org.slf4j.Logger in the background along with the custom configuration to log statements in the custom format and to implement our custom rolling. This is packaged as an independent library so it can be used across all our applications.
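For reference, a rough sketch of the kind of factory described above, following log4j2's custom ConfigurationFactory pattern; the class name is illustrative and log4j2's built-in JsonLayout stands in for the actual Logstash layout:

import java.net.URI;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.ConfigurationFactory;
import org.apache.logging.log4j.core.config.ConfigurationSource;
import org.apache.logging.log4j.core.config.Order;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
import org.apache.logging.log4j.core.config.plugins.Plugin;

@Plugin(name = "CustomConfigurationFactory", category = ConfigurationFactory.CATEGORY)
@Order(50)
public class CustomConfigurationFactory extends ConfigurationFactory {

    @Override
    protected String[] getSupportedTypes() {
        // "*" means this factory is consulted for every configuration file type
        return new String[] { "*" };
    }

    @Override
    public Configuration getConfiguration(LoggerContext context, ConfigurationSource source) {
        return getConfiguration(context, source.toString(), null);
    }

    @Override
    public Configuration getConfiguration(LoggerContext context, String name, URI configLocation) {
        ConfigurationBuilder<BuiltConfiguration> builder = newConfigurationBuilder();
        builder.setConfigurationName(name);
        // console appender with a JSON layout; a rolling file appender and extra
        // metadata fields would be added here in the real library
        builder.add(builder.newAppender("Console", "CONSOLE")
                .add(builder.newLayout("JsonLayout")));
        builder.add(builder.newRootLogger(Level.INFO)
                .add(builder.newAppenderRef("Console")));
        return builder.build();
    }
}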
I am using this custom logging library for logging purposes. It works for the accompanying Spring Boot application that interacts with the Corda nodes; however, the logging on the Corda node itself is still in the default log4j format.
Any suggestions?

Corda uses log4j for logging; you can provide your custom format in the log4j configuration files. You can find more details on logging in the Corda documentation: https://docs.corda.net/docs/corda-os/4.6/node-administration.html#logging
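As a rough sketch (assuming the JSON layout classes are on the node's classpath; log4j2's built-in JsonLayout is used here as a stand-in for the Logstash layout), a custom log4j2 configuration file could look like this:

<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <JsonLayout compact="true" eventEol="true"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

The node can then be pointed at it with the log4j system property when it is started, e.g. java -Dlog4j.configurationFile=custom-log4j2.xml -jar corda.jar, so the custom configuration takes precedence over the one bundled with the node.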

Related

Missing fields in the transformed JSON output

I am performing XML to JSON transformations using the Transform Message connector. I have created a Mule plugin for the transformation code and added it as a dependency to my application. When I deploy the application in Anypoint Studio (4.3.0), it works as expected, i.e. I get the full payload transformed to JSON. But when I deploy the same application to ONPREM, some fields of the input (XML) are missing in the output (JSON). In the ONPREM application I send the message (XML payload) via JMS (1.7.1) Publish by publishing it to a JMS queue where my application is listening with JMS On New Message, use the transformations Mule plugin (added as a dependency) to transform the XML to JSON, and then publish via JMS Publish to a queue where another API is listening.
I observed that when I split parts of the DWL into modules, import them in a main DWL, and deploy to ONPREM, the fields are missing. But when I keep all the modules' DWL code in the same DWL file, I get all the fields.
Please help me with this.
Issue resolved: there was a difference between the Studio runtime and the ONPREM runtime. After I patched ONPREM with the latest update, the issue was resolved.

How can I configure JAX-RS endpoints programmatically?

I'm trying to get rid of XML in my project.
I already tried to add this:
// requires: import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean();
sf.setResourceClasses(CustomerService.class);   // CustomerService is a @Path-annotated resource
sf.setAddress("http://localhost:9000/");        // base address the embedded endpoint listens on
sf.create();                                    // creates and starts the JAX-RS server
to my Activator class, but my bundle won't start with this.
So, how do people usually configure endpoints?
(Sorry, no code, just some high level insights from my experience/projects)
I use Jersey and its integration into the OSGi environment, i.e. org.glassfish.jersey.servlet.ServletContainer, to which I register all JAX-RS resources. This way, I can use whatever HTTP server implementation is available (for example, Jetty) and configure it via the OSGi system environment properties.
For simplicity, I re-register annotated OSGi (declarative) services as singleton resources/endpoints into that ServletContainer.
CXF may well have a similar approach.
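A minimal sketch of that wiring, assuming the OSGi HttpService and Jersey are available; CustomerResource and the /api alias are illustrative:

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.servlet.ServletContainer;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

public class Activator implements BundleActivator {

    @Override
    public void start(BundleContext context) throws Exception {
        // register the JAX-RS resources (CustomerResource is an illustrative @Path class)
        ResourceConfig config = new ResourceConfig();
        config.register(new CustomerResource());

        // use whatever HttpService implementation the runtime provides (Jetty, etc.)
        ServiceReference<HttpService> ref = context.getServiceReference(HttpService.class);
        HttpService http = context.getService(ref);

        // expose everything under /api through Jersey's ServletContainer
        http.registerServlet("/api", new ServletContainer(config), null, null);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // the /api alias should be unregistered here when the bundle stops
    }
}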

How to deploy a Camunda modeled diagram into Camunda Tomcat

I am trying to set up a BPMN workflow with Camunda. For this, I already made a diagram using the Camunda Modeler. Now I want to open this BPMN diagram in Camunda. Camunda's Tomcat is installed and running, but I can't manage to upload/find the diagram in Camunda's Tomcat. I am currently trying this on my local machine.
Anyone who knows how to get a BPMN diagram into Camunda's Tomcat?
In addition to the ways to deploy described by #MuffinMICHI, you can also deploy your diagram via the REST API. You just make a POST request to /engine-rest/deployment/create.
You set Content-Type to:
multipart/form-data
You set these parameters:
deployment-name: <SOME NAME>
deployment-source: <SOME NAME>
data: <UPLOAD THE DIAGRAM HERE>
diagram (optional): <UPLOAD IMAGE FOR DIAGRAM>
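For illustration, a rough sketch of such a request using Apache HttpClient (with the httpmime module); the endpoint, deployment name, and file name are placeholders:

import java.io.File;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class DeployDiagram {
    public static void main(String[] args) throws Exception {
        HttpPost post = new HttpPost("http://localhost:8080/engine-rest/deployment/create");

        // MultipartEntityBuilder sets the multipart/form-data Content-Type (incl. boundary) for us
        post.setEntity(MultipartEntityBuilder.create()
                .addTextBody("deployment-name", "my-deployment")      // illustrative name
                .addTextBody("deployment-source", "local-script")     // illustrative source
                .addBinaryBody("data", new File("diagram.bpmn"),      // the modeled BPMN file
                        ContentType.APPLICATION_OCTET_STREAM, "diagram.bpmn")
                .build());

        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpResponse response = client.execute(post);
            System.out.println(response.getStatusLine());             // 200 OK on success
        }
    }
}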
There are two ways to upload your diagram to your BPMN engine.
In the Camunda Modeler, there is a little upwards-pointing arrow in the menu bar. There you can specify where your engine is running and
upload the diagram directly from the modeler.
https://docs.camunda.org/get-started/quick-start/service-task/
If you also have some JavaDelegate classes you want to deploy with
your diagram, you can pack all these things into a WAR file and put it
in the webapps folder of your Tomcat, which will then
deploy your file automatically (a minimal sketch follows below).
https://docs.camunda.org/get-started/java-process-app/service-task/
The provided links guide you to the official Camunda documentation where all these things are explained in detail.
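For the WAR approach, a minimal sketch of the process application class that Camunda discovers on startup (class and application names are illustrative); the BPMN file, the JavaDelegate classes, and a META-INF/processes.xml descriptor are packaged alongside it:

import org.camunda.bpm.application.ProcessApplication;
import org.camunda.bpm.application.impl.ServletProcessApplication;

// Packaged in the WAR together with the BPMN diagram and any JavaDelegate classes;
// the Camunda engine on Tomcat picks it up and deploys the processes on startup.
@ProcessApplication("Invoice Process Application")
public class InvoiceProcessApplication extends ServletProcessApplication {
}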
a) You can deploy directly from the modeler to the server.
https://docs.camunda.org/get-started/quick-start/deploy/
In the latest release the feature has improved further:
https://blog.camunda.com/post/2019/10/camunda-modeler-3.4.0-released/
On a local setup, use the REST endpoint http://localhost:8080/engine-rest if using one of the prepackaged distributions, or http://localhost:8080/rest if using Spring Boot.
b) Process and decision models (BPMN, DMN) can be auto-deployed. For instance, placing the files in the src/main/resources folder (on a default Spring Boot setup) will auto-deploy them during startup.
c) There are other auto-deploy configuration options: https://docs.camunda.org/manual/latest/user-guide/spring-framework-integration/deployment/
d) You can use the REST API, for instance with Postman, to deploy.
https://docs.camunda.org/manual/latest/reference/rest/deployment/post-deployment/
Examples:
https://github.com/rob2universe/camunda-rest-postman
https://forum.camunda.org/t/process-deployment-to-rest-api-through-postman/10630
Deploy a Camunda process:
https://docs.camunda.org/get-started/quick-start/deploy/
You can also use the play button to deploy if you are deploying the process for the first time.
camunda-spring-boot-starter is configured to use the SpringProcessEngineConfiguration auto deployment feature by default.
https://docs.camunda.org/manual/7.9/user-guide/spring-boot-integration/process-applications/
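As a minimal sketch of that auto-deployment (class name illustrative), assuming camunda-bpm-spring-boot-starter is on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// With the Camunda Spring Boot starter on the classpath, .bpmn/.dmn files placed
// under src/main/resources are auto-deployed when the engine starts.
@SpringBootApplication
public class WorkflowApplication {
    public static void main(String[] args) {
        SpringApplication.run(WorkflowApplication.class, args);
    }
}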

Integration of log4j v2 into JBoss 7.1.1

I am curious how I should force JBoss 7.1.1 to use Apache's Log4j 2 instead of org.jboss.as.logging, because I would like to do a performance comparison of Log4j 2 and jboss.as.logging (I have given up on Log4j 1 because it seems to have similar performance to jboss.as.logging).
Log4j2 Official website: http://logging.apache.org/log4j/2.x/
I suppose I need to create a new module for the log4j2 library in jboss modules.
Then what? Do I need any changes in standalone.xml? Any changes for jboss-deployment-structure.xml?
How can I tell jboss where to search for the log4j2 library?
Thanks for any suggestions. I am a bit stuck here.
JBoss Logging is just a logging facade, similar to slf4j. JBoss AS 7 uses JBoss Log Manager as its log manager.
Without changing some code and removing the logging subsystem, you cannot use another log manager like Log4j 2 as the server-wide log manager. You'd have to make some changes there and remove the STDIO stuff. It's probably not worth the effort, TBH.
JBoss Log Manager is fairly fast. You could try using an async-handler to see if that helps performance at all. It probably won't make a significant difference though if you're just using a standard console-handler and file-handler of some sort.
Some rough measurements comparing JBoss default logging and Log4j 2 (configured natively, therefore skipping the Log Manager), logging with 10 threads at the same time:
JBoss default logging, async rolling file appender -> 200,000 logs/minute
Log4j 2 async file appender -> 5,000,000 logs/minute
These are really only rough results; the second case uses a different logger and does not use the Log Manager, so these things must be measured independently ... maybe I will do that too. Nevertheless, the bottom line is that the default logging is damn slow.

How to use java.util.logging in Weblogic?

I have an application that was migrated from GlassFish to WebLogic, and it uses java.util.logging as its logging framework.
The only way I have found to make the logs work is by editing the JVM's logging.properties file and restarting the server. This solution is awkward and causes problems because the log is written to a different file than the standard WebLogic ones, so we have to look in too many files for a log entry in a clustered environment. Besides, for some reason this does not work on some Windows systems.
Is there a way to keep using standard Java logging to write messages to WebLogic's standard log files? I tried the instructions on this page, but it doesn't work either.
WebLogic Server ships with a JDK logging handler which will pick up log messages emitted from the JDK logging framework and direct them into the WebLogic Server logging system.
In logging.properties, add the ServerLoggingHandler to the handlers and set the default logging level for new ServerLoggingHandler instances:
handlers = weblogic.logging.ServerLoggingHandler
weblogic.logging.ServerLoggingHandler.level = ALL
http://docs.oracle.com/cd/E14571_01/web.1111/e13739/logging_services.htm#CHDBBEIJ
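With that handler in place, application code keeps using plain JDK logging; a minimal illustration (class name is made up):

import java.util.logging.Logger;

public class OrderService {
    // standard JDK logger; the ServerLoggingHandler configured above routes
    // these records into the WebLogic Server logs
    private static final Logger LOGGER = Logger.getLogger(OrderService.class.getName());

    public void placeOrder(String id) {
        LOGGER.info("Placing order " + id);
    }
}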
To direct the JDK logging framework to use the logging.properties file, the standard System property java.util.logging.config.file is used. With WebLogic Server, this can be easily accomplished by setting the JAVA_OPTIONS System property with the corresponding value.
$ export JAVA_OPTIONS="-Djava.util.logging.config.file=/Users/xxx/Projects/Domains/wls1035/logging.properties"
Some more hints here: http://buttso.blogspot.de/2011/06/using-slf4j-with-weblogic-server.html