I've created a POC for the Watson TTS service in Eclipse using the Java SDK 3.3.0. The app server is Tomcat v8.0 running locally through Eclipse on a Win10 PC. Everything works fine, i.e., it is able to retrieve an audio stream, but when I stop Tomcat I'm seeing warnings about memory leaks. Here are two of the messages:
The web application [testapp] appears to have started a thread named [OkHttp ConnectionPool] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:461)
okhttp3.ConnectionPool$1.run(ConnectionPool.java:66)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
There is a similar message for [Okio Watchdog].
I've looked through the SDK and can't find anything about exiting the TextToSpeech connection gracefully. Is this cause for concern? If I add this service to the production website, it will be running in a Sun Solaris 10 environment, also with Tomcat 8.
After some investigation, I realized that those warnings are generated because the IBM Watson Java SDK uses OkHttp, which creates asynchronous threads to handle the connection pool and the different requests.
There are good reasons why it works this way, and the OkHttp developers also suggest reusing the OkHttpClient to create fewer threads. I'm working on that as part of #686.
If you want to know more, take a look at this issue in the OkHttp repository.
I'm about to release a new version of the Java SDK (v3.8.1) which reuses the OkHttpClient instance and therefore creates fewer threads.
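In the meantime, you can shut the pool down yourself when the web application stops (for example from a ServletContextListener's contextDestroyed). Below is a minimal JDK-only sketch: a plain ExecutorService stands in for the pool that OkHttp's Dispatcher owns, so the snippet compiles without the okhttp3 jar; the equivalent real OkHttp calls are shown in the comments.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ClientShutdownSketch {

    // Stand-in for the thread pool that OkHttp's Dispatcher owns.
    static final ExecutorService pool = Executors.newCachedThreadPool();

    /** Call this when the web app is stopped, e.g. from contextDestroyed(). */
    public static boolean shutdown() throws InterruptedException {
        // With a real OkHttpClient the equivalent cleanup is:
        //   client.dispatcher().executorService().shutdown();
        //   client.connectionPool().evictAll();
        pool.shutdown();
        return pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Once the pool threads are stopped before the web app is undeployed, Tomcat should no longer report them as leaked.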
Related
I have created a WAR application, and it was tested successfully on both WebLogic 12c and JBoss EAP 6.2.
I then changed application servers, moving to JBoss EAP 6.3, and the application no longer worked properly. Suddenly JBoss stopped serving any requests, and the existing requests waited forever.
I started JBoss in debug mode from NetBeans and ran my application in debug mode.
I noticed that every time, the server was stopping at a System.out.println() call.
After the server crashed/got stuck, I interrupted the last thread recorded in the log file, and upon the interruption I saw a notice in the NetBeans debugging console: "stopped at AppenderSkeleton.java:231". The previous call in the code is the line that calls System.out.println.
When I removed all the System.out.println calls from my code and left only my log4j logging, the application did not get stuck again. I am still testing because I don't know for sure whether this is the problem.
Has anyone else had the same problem? When System.out.println was called one at a time there seemed to be no problem, but when the method was called from multiple places it seemed to get stuck.
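For reference, the substitution I made was essentially the following (sketched here with java.util.logging so the snippet has no external dependencies; the log4j Logger API is analogous):

```java
import java.util.logging.Logger;

public class LoggingExample {

    private static final Logger LOG = Logger.getLogger(LoggingExample.class.getName());

    public static String describe(int requestId) {
        String msg = "processing request " + requestId;
        // Before: System.out.println(msg);  -- goes through the shared System.out lock
        // After: route everything through the logging framework instead
        LOG.info(msg);
        return msg;
    }
}
```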
You probably use a custom log4j configuration in your deployment. It requires special care, as explained by the JBoss logging developer James Perkins in this JBoss forum comment.
Your problem could be related to changes between EAP 6.2 and EAP 6.3 introduced by following bugfix:
Bugzilla: System.out.println() doesn't work when using per-deployment logging
Other users experience a similar issue, as described in:
Bugzilla: ConsoleAppenders can deadlock if included in application log4j configs
If you have some additional info, feel free to comment on existing bugzillas or create a new one where you describe your application (mainly logging) configuration.
According to the documentation below, we can call Java code from a JavaScript adapter.
Calling Java code from a JavaScript adapter
http://www-01.ibm.com/support/knowledgecenter/?lang=en#!/SSZH4A_6.2.0/com.ibm.worklight.dev.doc/devref/t_calling_java_code_from_a_javas.html
We plan to install worklight server on WAS full profile. WAS full profile supports two-phase commit.
Transaction support in WebSphere Application Server
http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/cjta_trans.html?cp=SSAW57_8.5.5%2F3-2-7-3&lang=en
To call Java code from an adapter, we need to deploy it on the Worklight Server. Can we use two-phase commit in Java code? Are there any limitations when using Java code on the Worklight Server?
Thanks in advance!
The only limitation I am aware of is that the WAS security context is not propagated to Worklight adapter's thread. But generally speaking, the same capabilities exist and the same servlet API is available.
You can read more about Java vs. JavaScript in adapters in this question: Worklight Adapters - Java vs JavaScript
That said, two-phase commit was never tested in practice, so it may or may not work, for the same reason as the security-context limitation mentioned above: a transaction is usually associated with a thread, and that thread is not available to Worklight adapters, which use their own thread pool.
The limitation mentioned above may be removed in a future release of Worklight, which in turn may make it possible to use the two-phase commit feature.
I am using Glassfish v3.0.1 for my project. However, Glassfish seems to go down frequently. Therefore, I want to develop a mechanism that notifies me whenever Glassfish is down. Is there any option in Glassfish? If not, how can I achieve this? Further, how can I understand why Glassfish goes down? I cannot find proper explanations in the logs.
I'm not aware of any options in Glassfish itself and I doubt there are any (it's usually hard for a process to know when it's dead :-). Write a script that tries to connect to the service (for example, using wget or curl) or use a system monitoring tool that watches processes.
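If you'd rather stay in Java than use wget/curl, a minimal probe of that kind just tries to open a TCP connection to the HTTP listener. This is a sketch; the host and port are assumptions to adjust for your setup:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class GlassfishProbe {

    /** Returns true if something is accepting TCP connections on host:port. */
    public static boolean isUp(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;  // connection refused or timed out -> treat as down
        }
    }

    public static void main(String[] args) {
        // Assumed default Glassfish HTTP listener; change as needed.
        String host = args.length > 0 ? args[0] : "localhost";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8080;
        System.out.println(isUp(host, port, 2000) ? "UP" : "DOWN");
    }
}
```

Run it periodically from cron or the Windows Task Scheduler and fire a notification (mail, pager, etc.) whenever it prints DOWN.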
To find out why Glassfish terminates, you must debug the problem. Here are some tips:
Add/enable more logging
Search the source code for System.exit(). This can terminate a Java app without any trace of why it happened. (This might help, too.)
Check the standard output of the process
Look for crash dumps; see the documentation of the Java VM which you're using.
Is it possible (...knowing full well that this is crazy and seriously ill-advised...) to have a J2EE application running in a Java app server (using weblogic presently), and have a native executable process started, used, and stopped as part of this Java application's lifecycle? (Note: this is not JNI, it's actually a separate native process. It's unix/linux, but should also run on windows.) I haven't found any docs on the subject -- and for good reason, probably.
Background: The native process is actually some monolithic 3rd party software package that is un-hackable and there's no API other than stdin/stdout. The Java app requires the native app to perform certain services. I can easily wrap the native process via ProcessBuilder and start/stop and communicate with it (using stdin/stdout). For testing purposes I have a simple exe (C++) that communicates via stdin/stdout and can receive "start", "shutdown" and performs a simple "echo" service. (The "start" is a no-op, but simply returns "ok" if the native process started successfully.)
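The ProcessBuilder wrapper described above can be sketched roughly as follows. Since the real third-party exe isn't available here, the Unix `cat` command stands in for the echo service in the usage example (it writes back whatever it reads from stdin); the class itself is generic:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class NativeServiceWrapper implements AutoCloseable {

    private final Process process;
    private final BufferedWriter stdin;
    private final BufferedReader stdout;

    public NativeServiceWrapper(String... command) throws IOException {
        process = new ProcessBuilder(command)
                .redirectErrorStream(true)   // fold stderr into stdout
                .start();
        stdin = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
        stdout = new BufferedReader(new InputStreamReader(process.getInputStream()));
    }

    /** Sends one request line and reads one response line back. */
    public String call(String request) throws IOException {
        stdin.write(request);
        stdin.newLine();
        stdin.flush();
        return stdout.readLine();
    }

    @Override
    public void close() throws IOException {
        stdin.close();      // EOF tells a well-behaved child to exit
        process.destroy();  // and make sure it really goes away
    }
}
```

Usage with the stand-in: `try (NativeServiceWrapper w = new NativeServiceWrapper("cat")) { w.call("start"); }`.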
So, ideally, when the app server is started/shutdown, and/or the deployed Java app is started/shutdown, the associated native process can also be started/shutdown. And ideally, this can happen cleanly & reliably (no lingering processes after shutdown, all startup failures logged, the lifecycle timing issues synchronized).
If this actually worked, then "part 2" of the question would be if this could actually work in a cluster/failover environment. The native process could be tied to a platform and software-specific monitoring & management service, but I'd like to have everything bundled and managed with the Java app, if possible.
If Glassfish or any other OSGi type environment would make this simpler, please feel free to let me know (it could be an option... I'd prefer Glassfish, but WLS is the blanket mandate.)
I'm trying to put together a proof-of-concept, but any clear answer "yes, I've done it" or "no, it won't work" would be much appreciated & a huge time-saver (with supporting doc links, if you have them).
Edit: just to clarify (the subject may be misleading): there is a considerable Java application running as well (which I've written & can freely modify as necessary); the 3rd party native process just performs a service that the Java application requires. I'm not merely trying to manage a native process via an app server.
The answer to part 1 is yes, it is absolutely possible to have a Java application server manage a native system process. It sounds like you've pretty much figured this out for yourself, if you're thinking about using a ProcessBuilder to spawn the external program and interact with it. That's pretty much the way to do it.
I have used exactly that kind of setup in the past to implement a media transcoding service on top of a Java server (the Java server spawned transcoding jobs via ffmpeg processes, monitoring their status and reporting back to the rest of the application on success/failure/etc.). How cleanly it can all be done depends upon how you implement it and upon the behavior of your external app (i.e. is it guaranteed to respond gracefully and quickly to a shutdown request?), but it will be very difficult (if not impossible) to get it completely perfect. At a minimum, if someone does a kill -9 on your Java server process, there is no way for you to gracefully shut down the native process, at least not until the server is restarted and you see that the native process is already running.
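One way to make the shutdown side as clean as you can get it is a small helper that asks nicely first and then escalates (requires Java 8+ for destroyForcibly and the timed waitFor; the timeout value is an assumption):

```java
import java.util.concurrent.TimeUnit;

public class ProcessStopper {

    /**
     * Asks the process to terminate, waits briefly, then force-kills it.
     * Returns true once the process is no longer alive.
     */
    public static boolean stop(Process process, long timeoutMs) throws InterruptedException {
        process.destroy();  // polite request (SIGTERM on unix)
        if (!process.waitFor(timeoutMs, TimeUnit.MILLISECONDS)) {
            process.destroyForcibly();  // SIGKILL as a last resort
            process.waitFor(timeoutMs, TimeUnit.MILLISECONDS);
        }
        return !process.isAlive();
    }
}
```

Calling this from a JVM shutdown hook (Runtime.getRuntime().addShutdownHook) covers normal server shutdown, though as noted above nothing covers a kill -9 of the server itself.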
The second part depends upon exactly what you mean by "work in a cluster/failover environment". In terms of managing the native process, if you can start it and interact with it in Java then you can also manage it in Java. But if you mean you want perfect failover behavior such that if the node with the native process on it goes down then a new node automatically resumes the process in the exact same state as it was before, then that may be very difficult or even impossible. But, if you abstract out interactions with the external process so that it just appears as a service that your Java code interacts with (for instance, perhaps by sending requests to some facade class that understands how to interact with and manage the external process) then you should be able to get some fairly good results.
The transcoding service that I implemented ran in a clustered environment (using JBoss/Tomcat), and the way it worked was that when a transcoding job was requested a message would be dispatched. This message would be received by a coordinating class that would manage the queue of transcode requests, spawning jobs as worker processes became available. The state of the queue was replicated across the cluster, so if the node running the ffmpeg processes went down the currently scheduled jobs would be remembered, and then resumed as soon as a suitable node was available again (the transcoding service was configurable so that it could be enabled/disabled per node). In practice the system proved to be quite robust.
I want to get a heap dump (suspected memory leak) of a certain Java process. However, when I start the jvisualvm tool, I cannot see any of the running Java processes.
I have Googled around about this and have already found a couple of articles saying that you have to run the Java processes using the same JDK that you start the jvisualvm tool with in order for it to be able to see them. However, as far as I can see, this is already the case. I'm doing everything locally (I have remote access to the machine).
A couple of things to consider:
The processes are running on a firewalled Windows 2008 server
The processes are running using renamed versions of the JDK java.exe executable
As far as I can see the processes are running using the 1.6.0_18 JDK
One of the running processes starts an RMI registry
I'm waiting on a virtualized copy of the server so I can mess around with it (this is a production server). But in the meanwhile; any ideas as to why I cannot see any of the processes in jvisualvm (or jconsole for that matter)?
Well after I did a little research, it would appear that Peter's comment was correct. Because the JVM processes were launched by another user (the NETWORK SERVICE account because they were being started by a Windows service) they didn't show up in jvisualvm.
Workaround
Since I have access to the application configuration, I have found the following workaround, which involves explicitly enabling unsecured JMX for the target JVM:
Add the following JVM parameters:
-Dcom.sun.management.jmxremote.port=3333 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
Add the remote process to jvisualvm using JMX by clicking File -> Add JMX Connection. You can connect to the process using port 3333. Obviously you can change the port if you want.
Link to article explaining this in a little more detail: http://download.oracle.com/javase/6/docs/technotes/guides/visualvm/jmx_connections.html
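With those flags in place you can also reach the JVM programmatically, the same way jvisualvm does over JMX. A sketch, where the host and port are assumptions matching the flags above:

```java
import java.io.IOException;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxProbe {

    /** Returns the MBean count of the remote JVM, or -1 if it is unreachable. */
    public static int mbeanCount(String host, int port) {
        String url = "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            return server.getMBeanCount();
        } catch (IOException e) {
            return -1;  // nothing listening, wrong port, firewalled, etc.
        }
    }
}
```

A -1 result from the server's own console is also a quick way to confirm the firewall (mentioned above) is what's blocking the connection from your workstation.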
Notes
It's probably not a good idea to keep the JVM settings permanently, as they would allow anyone to connect to the JVM via JMX.
You can also add authentication to the JMX JVM parameters if you want to.
The simplest way is to execute jvisualvm as administrator (Windows: "Run as administrator"). This is not ideal, but it works; all Java processes are visible then.