In my application I am using structlog as the logging system. My application also uses Python RQ. How can I make Python RQ use the logging system I am already using, so that all my application logs follow the same pattern?
RQ uses the standard library logging module for its log output.
Therefore, you can achieve this by using one of the approaches listed in https://www.structlog.org/en/stable/standard-library.html
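One such approach, as a minimal sketch, is to attach structlog's ProcessorFormatter to the standard library's root logger, so that RQ's log records are rendered by the same processors as the rest of your structlog output. The processors chosen below are only examples; adapt them to your existing configuration:
import logging
import structlog

# Processors applied to log records that originate from the standard
# library (e.g. RQ's workers) before they are rendered.
pre_chain = [
    structlog.stdlib.add_log_level,
    structlog.processors.TimeStamper(fmt="iso"),
]

handler = logging.StreamHandler()
handler.setFormatter(
    structlog.stdlib.ProcessorFormatter(
        processor=structlog.dev.ConsoleRenderer(),  # or JSONRenderer(), etc.
        foreign_pre_chain=pre_chain,
    )
)

# RQ logs through the standard library, so wiring up the root logger is
# enough to make its output follow the same format as the application.
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.INFO)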
In Forms 12.2.1.4, using web.show_document I see different behavior when running the form with the Java Plugin (JPI) versus Java Web Start (JWS).
The same form, when run using JPI, tries to open: http://server:port/forms/ + (the URI you pass to web.show_document('uri')).
The same form, when run using JWS, tries to open: http://server:port/forms/java/ + (the URI you pass to web.show_document('uri')).
So:
1. JWS uses http://server:port/forms/java as the base URL, while JPI uses http://server:port/forms/.
Do you know the reason? I have a test case and can reproduce it internally. I see no differences in configuration between JPI and JWS.
2. Another option to solve this could be to use a different web.show_document call depending on whether the form is being run using JPI or JWS. Is there a way to check at runtime whether the form is running under JWS or JPI?
It does not seem possible using get_application_property().
Thanks in advance.
Using the code below solved my problem:
WEB.SHOW_DOCUMENT('/'||:block3.item4);
I have a Spark Streaming program running on a YARN cluster in "yarn-cluster" mode (--master yarn-cluster).
I want to fetch Spark job statistics in JSON format using REST APIs.
I am able to fetch basic statistics with the REST URL
http://yarn-cluster:8088/proxy/application_1446697245218_0091/metrics/json, but this gives only very basic statistics.
However, I want per-executor or per-RDD statistics.
How can I do that using REST calls, and where can I find the exact REST URLs to get these statistics?
Though the $SPARK_HOME/conf/metrics.properties file sheds some light on the URLs, i.e.:
5. MetricsServlet is added by default as a sink in master, worker and client driver, you can send http request "/metrics/json" to get a snapshot of all the registered metrics in json format. For master, requests "/metrics/master/json" and "/metrics/applications/json" can be sent separately to get metrics snapshot of instance master and applications. MetricsServlet may not be configured by self.
those requests return HTML pages, not JSON; only "/metrics/json" returns stats in JSON format.
On top of that, finding the application_id programmatically is a challenge in itself when running in yarn-cluster mode.
I checked the REST API section of the Spark Monitoring page, but that didn't work when the Spark job runs in yarn-cluster mode. Any pointers/answers are welcome.
You should be able to access the Spark REST API using:
http://yarn-cluster:8088/proxy/application_1446697245218_0091/api/v1/applications/
From here you can select the app-id from the list and then use the following endpoint to get information about executors, for example:
http://yarn-cluster:8088/proxy/application_1446697245218_0091/api/v1/applications/{app-id}/executors
I verified this with my Spark Streaming application running in yarn-cluster mode.
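For example, here is a small Python sketch (using the third-party requests library) that walks those endpoints through the YARN proxy; the host, port and application id are the placeholders from the question, so substitute your own values:
import requests

# Placeholder proxy address and application id taken from the question.
proxy_base = "http://yarn-cluster:8088/proxy/application_1446697245218_0091"

# List the applications known to this Spark UI and take the first app id.
apps = requests.get(proxy_base + "/api/v1/applications").json()
app_id = apps[0]["id"]

# Fetch per-executor statistics for that application.
executors = requests.get(
    proxy_base + "/api/v1/applications/" + app_id + "/executors"
).json()

for executor in executors:
    print(executor["id"], executor.get("memoryUsed"), executor.get("totalTasks"))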
I'll explain how I arrived at the JSON response using a web browser. (This is for a Spark 1.5.2 streaming application in yarn-cluster mode).
First, use the Hadoop URL to view the RUNNING applications: http://{yarn-cluster}:8088/cluster/apps/RUNNING.
Next, select a running application, say http://{yarn-cluster}:8088/cluster/app/application_1450927949656_0021.
Next, click on the TrackingUrl link. This goes through a proxy, and the port is different in my case: http://{yarn-proxy}l:20888/proxy/application_1450927949656_0021/. This shows the Spark UI. Now, append api/v1/applications to this URL: http://{yarn-proxy}l:20888/proxy/application_1450927949656_0021/api/v1/applications.
You should see a JSON response with the application name supplied to SparkConf and the start time of the application.
I was able to reconstruct the metrics in the columns seen in the Spark Streaming web UI (batch start time, processing delay, scheduling delay) using the /jobs/ endpoint.
The script I used is available here. I wrote a short post describing it and tying its functionality back to the Spark codebase. It does not need any web scraping.
It works for Spark 2.0.0 and YARN 2.7.2, but may work for other version combinations too.
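As a rough sketch of the same idea, the /jobs/ endpoint can be polled and the per-job timestamps printed like this (field names follow the Spark REST API, but verify them against your Spark version; the proxy URL and application id are placeholders):
import requests

# Placeholder proxy URL and application id; replace with your own values.
base = "http://yarn-cluster:8088/proxy/application_1446697245218_0091/api/v1/applications"

app_id = requests.get(base).json()[0]["id"]
jobs = requests.get(base + "/" + app_id + "/jobs").json()

for job in jobs:
    # submissionTime and completionTime can be used to derive per-batch
    # timing similar to what the Streaming tab of the web UI shows.
    print(job["jobId"], job["status"],
          job.get("submissionTime"), job.get("completionTime"))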
You'll need to scrape the HTML page to get the relevant metrics; there isn't a Spark REST endpoint for capturing this info.
Is it possible to enable logging in Apache ACE? If yes, how?
In the source code, I can see that the LogService is used to write messages to the log, but I am not able to locate the logs when I start the ACE dev server.
The LogService is a standard compendium service, and you can use any implementation to actually record the log statements. We use the one from Apache Felix, and there are shell commands to retrieve log statements (hint: the command is called "log"). This implementation does not write them to disk, though. Based on the specification, it would be easy to do that yourself: a LogReaderService exists to read from, and you can register yourself as a LogListener.
I have an application that was migrated from GlassFish to WebLogic, and it uses java.util.logging as its logging framework.
The only way I have found to make the logs work is by editing the JVM's logging.properties file and restarting the server. This solution is awkward and causes problems because the log is written to a different file than the standard WebLogic ones, so we have to look at too many files when chasing a log entry in a clustered environment. Besides, for some reason this does not work on some Windows systems.
Is there a way to keep using standard Java logging and have messages written to WebLogic's standard log files? I tried the instructions on this page, but that doesn't work either.
WebLogic Server ships with a JDK logging handler which will pick up log messages emitted by the JDK logging framework and direct them into the WebLogic Server logging system.
Set the default logging level for new ServerLoggingHandler instances in logging.properties, and add the ServerLoggingHandler to the handlers:
handlers = weblogic.logging.ServerLoggingHandler
weblogic.logging.ServerLoggingHandler.level = ALL
http://docs.oracle.com/cd/E14571_01/web.1111/e13739/logging_services.htm#CHDBBEIJ
To direct the JDK logging framework to use the logging.properties file, the standard system property java.util.logging.config.file is used. With WebLogic Server, this can easily be accomplished by setting the JAVA_OPTIONS environment variable with the corresponding value:
$ export JAVA_OPTIONS="-Djava.util.logging.config.file=/Users/xxx/Projects/Domains/wls1035/logging.properties"
Some more hints here: http://buttso.blogspot.de/2011/06/using-slf4j-with-weblogic-server.html
I'm just starting to learn Flex and I'm trying to understand how Flex does remoting. From what I have read, it looks like Flex provides a LiveCycle Data Services WAR which sits on your server and intercepts your remote calls; is this close?
I'm concerned that if I use this option: 1. I'll have to add an extra WAR to my server (the LiveCycle Data Services WAR), and 2. I will have to pay for a license for each instance I use on each CPU.
Is there an easier (free) option out there which I can use to call my remote Java objects from within my Flex MXML?
BlazeDS is based on the same APIs and code base as LiveCycle Data Services and is completely free and open source:
http://opensource.adobe.com/wiki/display/blazeds/BlazeDS/
There are a number of other options available including:
Granite DS
WebOrb
FluorineFX (for .NET)
There are also solutions for PHP, Python and Ruby, although I can't recall their names right now.
The easiest option is to send XML from the server and consume it in Flex.
However, if you want to use Java objects you can go for BlazeDS. It only requires adding some extra JAR files (and no licensing). There are also other options available, such as WebORB for Java or Merapi.