Is it possible (...knowing full well that this is crazy and seriously ill-advised...) to have a J2EE application running in a Java app server (using WebLogic presently), and have a native executable process started, used, and stopped as part of this Java application's lifecycle? (Note: this is not JNI; it's actually a separate native process. It's Unix/Linux, but should also run on Windows.) I haven't found any docs on the subject -- and for good reason, probably.
Background: The native process is actually a monolithic 3rd-party software package that is un-hackable, with no API other than stdin/stdout. The Java app requires the native app to perform certain services. I can easily wrap the native process via ProcessBuilder and start/stop and communicate with it (using stdin/stdout). For testing purposes I have a simple exe (C++) that communicates via stdin/stdout, accepts "start" and "shutdown" commands, and performs a simple "echo" service. (The "start" is a no-op, but simply returns "ok" if the native process started successfully.)
So, ideally, when the app server is started/shutdown, and/or the deployed Java app is started/shutdown, the associated native process can also be started/shutdown. And ideally, this happens cleanly and reliably: no lingering processes after shutdown, all startup failures logged, and lifecycle timing kept in sync.
If this actually worked, then "part 2" of the question would be if this could actually work in a cluster/failover environment. The native process could be tied to a platform and software-specific monitoring & management service, but I'd like to have everything bundled and managed with the Java app, if possible.
If Glassfish or any other OSGi-type environment would make this simpler, please feel free to let me know (it could be an option... I'd prefer Glassfish, but WLS is the blanket mandate).
I'm trying to put together a proof-of-concept, but any clear answer "yes, I've done it" or "no, it won't work" would be much appreciated & a huge time-saver (with supporting doc links, if you have them).
Edit: just to clarify (the subject may be misleading): there is a considerable Java application running as well (which I've written & can freely modify as necessary); the 3rd party native process just performs a service that the Java application requires. I'm not merely trying to manage a native process via an app server.
The answer to part 1 is yes, it is absolutely possible to have a Java application server manage a native system process. It sounds like you've pretty much figured this out for yourself, if you're thinking about using a ProcessBuilder to spawn the external program and interact with it. That's pretty much the way to do it.
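For what it's worth, here is a minimal sketch of such a wrapper, assuming the "start"/"shutdown"/"echo" protocol from the question and Java 8+ for the timed waitFor; the class and path names are hypothetical:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.TimeUnit;

    // Hypothetical wrapper around the 3rd-party executable, speaking the
    // start/shutdown/echo protocol from the question over stdin/stdout.
    public class NativeServiceWrapper {
        private Process process;
        private BufferedWriter toProcess;
        private BufferedReader fromProcess;

        public synchronized void start(String executablePath) throws IOException {
            ProcessBuilder pb = new ProcessBuilder(executablePath);
            pb.redirectErrorStream(true);  // fold stderr into stdout so nothing blocks
            process = pb.start();
            toProcess = new BufferedWriter(new OutputStreamWriter(
                    process.getOutputStream(), StandardCharsets.UTF_8));
            fromProcess = new BufferedReader(new InputStreamReader(
                    process.getInputStream(), StandardCharsets.UTF_8));
            String reply = send("start");  // the question's no-op handshake
            if (!"ok".equals(reply)) {
                throw new IOException("Native process failed to start: " + reply);
            }
        }

        public synchronized String send(String command) throws IOException {
            toProcess.write(command);
            toProcess.newLine();
            toProcess.flush();
            return fromProcess.readLine();  // assumes one-line replies
        }

        public synchronized void stop() throws InterruptedException {
            try {
                send("shutdown");           // ask politely first
            } catch (IOException ignored) {
                // pipe may already be closed; fall through to destroy()
            }
            if (!process.waitFor(10, TimeUnit.SECONDS)) {
                process.destroy();          // last resort: avoid a lingering process
            }
        }
    }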
I have used exactly that kind of setup in the past to implement a media transcoding service on top of a Java server (the Java server spawned transcoding jobs via ffmpeg processes, monitoring their status and reporting back to the rest of the application on success/failure/etc.). How cleanly it can all be done depends upon how you implement it and upon the behavior of your external app (i.e. is it guaranteed to respond gracefully and quickly to a shutdown request?), but it will be very difficult (if not impossible) to get it completely perfect. At a minimum, if someone does a kill -9 on your Java server process, there is no way for you to gracefully shut down the native process, at least not until the server is restarted and you see that the native process is already running.
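To tie such a wrapper to the deployed app's lifecycle, one option (a sketch only, workable in WebLogic or any servlet container) is a ServletContextListener; the listener class and executable path below are hypothetical:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Hypothetical listener binding the native process to webapp deploy/undeploy;
    // register it in web.xml (or via @WebListener on Servlet 3.0+).
    public class NativeProcessLifecycle implements ServletContextListener {
        private final NativeServiceWrapper wrapper = new NativeServiceWrapper();

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            try {
                wrapper.start("/opt/thirdparty/service");  // hypothetical path
                sce.getServletContext().setAttribute("nativeService", wrapper);
            } catch (Exception e) {
                // fail the deployment loudly so the startup failure is logged
                throw new IllegalStateException("Native process failed to start", e);
            }
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            try {
                wrapper.stop();  // best effort; a kill -9 of the JVM still leaks the child
            } catch (Exception e) {
                sce.getServletContext().log("Native process shutdown failed", e);
            }
        }
    }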
The second part depends upon exactly what you mean by "work in a cluster/failover environment". In terms of managing the native process, if you can start it and interact with it in Java then you can also manage it in Java. But if you mean you want perfect failover behavior such that if the node with the native process on it goes down then a new node automatically resumes the process in the exact same state as it was before, then that may be very difficult or even impossible. But, if you abstract out interactions with the external process so that it just appears as a service that your Java code interacts with (for instance, perhaps by sending requests to some facade class that understands how to interact with and manage the external process) then you should be able to get some fairly good results.
The transcoding service that I implemented ran in a clustered environment (using JBoss/Tomcat), and the way it worked was that when a transcoding job was requested a message would be dispatched. This message would be received by a coordinating class that would manage the queue of transcode requests, spawning jobs as worker processes became available. The state of the queue was replicated across the cluster, so if the node running the ffmpeg processes went down the currently scheduled jobs would be remembered, and then resumed as soon as a suitable node was available again (the transcoding service was configurable so that it could be enabled/disabled per node). In practice the system proved to be quite robust.
I am using Glassfish v3.0.1 for my project. However, Glassfish goes down frequently, so I want to develop a mechanism that notifies me whenever it is down. Is there any option in Glassfish for this? If not, how can I achieve it? Further, how can I find out why Glassfish goes down? I cannot find a proper explanation in the logs.
I'm not aware of any options in Glassfish itself and I doubt there are any (it's usually hard for a process to know when it's dead :-). Write a script that tries to connect to the service (for example, using wget or curl) or use a system monitoring tool that watches processes.
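If you'd rather keep the probe in Java than shell out to wget or curl (a swap of technique, not what the answer literally suggests), a minimal sketch might look like this; the URL is a placeholder, and alerting is left to whatever invokes it, e.g. cron:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal liveness probe: exit code 0 if Glassfish answers HTTP, 1 otherwise.
    // Run it from cron or a monitoring tool and alert on a non-zero exit.
    // The URL is a placeholder for a page your instance actually serves.
    public class GlassfishPing {
        public static void main(String[] args) {
            try {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://localhost:8080/").openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                conn.getResponseCode();   // throws if the server is unreachable
                System.exit(0);
            } catch (Exception e) {
                System.exit(1);           // connection refused or timed out: it's down
            }
        }
    }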
To find out why Glassfish terminates, you must debug the problem. Here are some tips:
Add/enable more logging
Search the source code for System.exit(). This can terminate a Java app without leaving any trace of why it happened. (This might help, too.) A sketch for trapping such exit calls with a SecurityManager follows this list.
Check the standard output of the process
Look for crash dumps; see the documentation of the Java VM which you're using.
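For the System.exit() case above, one hacky diagnostic (fine on the Java 6 JVMs that Glassfish v3 runs on; note SecurityManager is deprecated from Java 17 onward) is to install a SecurityManager that prints a stack trace for every exit attempt. The class name is hypothetical:

    import java.security.Permission;

    public class ExitTracer {
        // Call once, early in server startup (e.g. from a startup class or listener).
        public static void install() {
            System.setSecurityManager(new SecurityManager() {
                @Override
                public void checkExit(int status) {
                    // log who is exiting; throw SecurityException here to veto instead
                    new Throwable("System.exit(" + status + ") called here:")
                            .printStackTrace();
                }

                @Override
                public void checkPermission(Permission perm) {
                    // permit everything else
                }
            });
        }
    }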
It could be a stupid question, since almost everyone prefers the embedded-container technique to test EJBs, but I have to clarify this because of my lack of experience.
Also, some may argue that embedded containers may not reproduce the real-life situation of deploying to a real app server.
So, when testing EJB3, why is it recommended to use embedded containers instead of a standalone container?
Thanks in advance.
Time.
Testing EJBs in a full-blown application server usually wastes a lot of time, because the app server has to "spin up" whenever changes are made. Embedded containers such as OpenEJB can save you much of that time. Embedded Glassfish is also an option these days, although I haven't personally tried it.
Zero turnaround is a kind of holy grail in Java EE.
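For reference, a test against the EJB 3.1 embeddable API (which OpenEJB and embedded Glassfish both implement) looks roughly like this. Assume a trivial hypothetical @Stateless public class GreeterBean whose greet() returns "hello" on the test classpath; the JNDI name depends on the module name, so treat it as an assumption too:

    import javax.ejb.embeddable.EJBContainer;
    import javax.naming.Context;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GreeterBeanTest {
        private static EJBContainer container;

        @BeforeClass
        public static void startContainer() {
            // boots whichever embeddable container is on the classpath -- no server install
            container = EJBContainer.createEJBContainer();
        }

        @AfterClass
        public static void stopContainer() {
            container.close();
        }

        @Test
        public void greets() throws Exception {
            Context ctx = container.getContext();
            // "classes" is the module name for an exploded classes directory;
            // adjust to match your build layout
            GreeterBean greeter = (GreeterBean) ctx.lookup("java:global/classes/GreeterBean");
            assertEquals("hello", greeter.greet());
        }
    }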
Here are the most relevant arguments that I've found. Please comment on them, or add your own reasons, regarding testing with embeddable containers vs. a real application server container. Thank you.
Using an embedded container for testing gives you flexibility (you just need to add the new libs to the classpath). As far as I understand, if we want to be able to run the testing project against several application servers, the tests must not be bound to any particular server's container. Some app servers use specific annotations or deployment descriptors; if those are used, you are bound to that app server.
Embedded containers are lighter, which means reduced time for running the tests. Real app servers have difficulty starting and stopping automatically, or can hang, so building a fully automated testing process on a real app server can be too difficult.
Another problem is the stateless nature of most Java EE applications. After a method invocation crosses a transaction boundary (for example, a stateless session bean), all JPA entities become detached and the client loses its state. This forces you to transport the entire context back and forth between client and server, which is a heavy load: every change to the client's state has to be merged back on the server (see the sketch after this list).
With an embedded container you have one process that runs everything (tests and EJBs); with a real app server you must coordinate two processes (the app server and the tests).
For full testing you do, of course, also need tests on a real app server, since different servers have their own particularities (class loading, for example). Embedded containers, however, help with testing the logic (unit testing and integration of units), so for daily automated testing they can be sufficient and easier.
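Here is the small sketch of the detach/merge round-trip mentioned in the list; the Customer entity and facade are hypothetical:

    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    @Entity
    class Customer {          // hypothetical entity, trimmed for brevity
        @Id Long id;
        String name;
    }

    // The facade is the transaction boundary: entities it returns are detached,
    // so client-side edits must travel back and be merged, per the list item above.
    @Stateless
    public class CustomerFacade {
        @PersistenceContext
        private EntityManager em;

        public Customer load(long id) {
            return em.find(Customer.class, id);  // detached once this call returns
        }

        public void update(Customer edited) {
            em.merge(edited);                    // re-attach the modified copy
        }
    }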
An embedded container is much faster to execute (start/stop) than a full container, which certainly matters to the developer. Setup/configuration is easier to automate, especially with continuous integration. On the other hand, since some core features are disabled in an embedded container, you can't test everything.
You may want to investigate http://www.jboss.org/arquillian to have both options. From the site:
"Arquillian enables you to test your business logic in a remote or embedded container. Alternatively, it can deploy an archive to the container so the test can interact as a remote client."
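A hedged sketch of what such a test looks like with Arquillian's JUnit runner and ShrinkWrap, reusing the hypothetical GreeterBean from the earlier example:

    import javax.ejb.EJB;
    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertEquals;

    @RunWith(Arquillian.class)
    public class GreeterBeanArquillianTest {
        @Deployment
        public static JavaArchive deployment() {
            // ShrinkWrap builds the archive in memory; the same test can then run
            // against an embedded or a remote container, depending on configuration
            return ShrinkWrap.create(JavaArchive.class).addClass(GreeterBean.class);
        }

        @EJB
        private GreeterBean greeter;  // injected by the container Arquillian manages

        @Test
        public void greets() {
            assertEquals("hello", greeter.greet());
        }
    }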
In the end, it depends on the kind of EJBs you want to test. Certain complex scenarios will not work in an embedded container without mocks for some external services. In my projects we test EJBs with a custom mock container we created (ultra fast and easy to use) and, if all proceeds well, we test in the real thing, a full JBoss, using a remote-control API pretty much like Arquillian.
Hope it helps.
I would like to start a new instance of a WCF service host from another (UI) application. I need the service to be out of process because I want to make use of the entire 1.4 GB memory limit for a 32-bit .NET process.
The obvious method is to use System.Diagnostics.Process.Start(processStartInfo) but I would like to find out whether it is a good way or not. I am planning on bundling the service host exe with the UI application. When I start the process, I will pass in key parameters for the WCF service (like ports and addresses etc). The UI application (or other applications) will then connect to this new process to interact with the service. Once the service has no activity for a while, it will shut itself down or the UI can explicitly make a call to shut the service down.
You can definitely do this:
create a console app which hosts your ServiceHost
make that console app aware of a bunch of command line parameters (or configure them in the console app's app.config)
launch the console app using Process.Start() from your UI app
That should be fairly easy to do, I'd say.
Perhaps I'm completely off base here, but I don't think there is a hard 1.4 GB memory limit for .NET processes. On a 32-bit operating system, each process gets its own 4 GB virtual address space, of which 2 GB is available to user code by default; the ~1.4 GB figure often quoted for .NET is a practical ceiling caused by address-space fragmentation, not a fixed cap. So while it may appear that there is only 1.4 GB available, it's not technically true.
The only reason I bring that up is to say that the other way to approach this would be to load your WCF service inside a separate AppDomain within your UI application. The System.AppDomain class can be thought of as a lightweight process within a process. AppDomains can also be unloaded when you are finished with them. And since WCF can cross AppDomain boundaries as well as process boundaries, it's simply another consideration.
If you are not familiar with AppDomains, the approach that @marc_s recommended is the most straightforward. However, if you are looking for an excuse to learn about AppDomains, this would be a great opportunity to do so.
I have several different C# worker applications that run various continuous tasks: sending emails from a queue, importing new orders from the website database to the orders database, making database backups and restores, running data processing for OLTP -> OLAP, and other related tasks. Previously I released these as Windows services, but currently I release them as regular console applications. They are all based on a common task-runner framework I created, and I am happy with that; however, I am not sure of the best way to deploy these types of applications. I like the console version because it is quick and easy, and it is possible to quickly see program activity and output. The downside is that the worker computer has several console screens running and it gets messy. On the other hand, the service method seems to take too long to deploy, and I have to go through event logs to see messages. What are some experiences/comments on this?
I like the console app approach. I typically have things set up so I can pass a switch like -unattended that suppresses the console screen.
A Windows service would be a good choice; it runs in the background even if you close the current session, and you can configure it to start automatically after a Windows restart, e.g. when applying a patch update on the server. You can log important messages to the Event Viewer or to a database table.
For a thing like this, the standard way of doing it is with Windows services. You want the service to run under a network service account so it won't require a logged-in user.
I worked on something a few years ago that had similar issues. Logically I needed a service, but sometimes I needed to see what was going on, and generally I wanted a history. So I developed a service which did the work, and any time it wanted to log, it called out to its subscribers (implemented as an observer pattern).
The service registered its own data logger (writing to a database), and at run time the user could run a GUI which connected to the service using remoting to become a live listener!
I'm going to vote for Windows Services. It's going to get to be a real pain managing those console applications.
Windows Service deployment is easy: after the initial install, you just turn them off and do an XCOPY. No need to run any complicated installers. It's only semi-complicated the first time, and even then it's just
installutil MyApp.exe
Configure the services to run under a domain account for the best security and easiest interop with other machines.
Use a combination of event logs (with Error, Warning, and Information) for important notifications, and just dump verbose logging to a text file.
Why not get the best of all worlds and use something like:
http://topshelf-project.com/
It will allow you to run your program as a command-line app or as a Windows service.
I'm not sure if this applies to your applications or not, but when I have console applications that don't depend on user input, or that just do their job and quit, I run them on a virtual server. This way I don't see a screen popping up while I'm working, and virtual servers are easy to create and restart.
We regularly use Windows services as our background processes. I don't like command-line apps because you need to be logged into the server for them to run. Services run in the background all the time (assuming they're set to auto-start). They're also trivial to install with the sc.exe command-line tool that's built into Windows. I like it better than the bloatware that is installutil.exe. Of course installutil does more, but I don't need what it does; I just want to register my service.
We've also created an infrastructure where we have a generic service .exe that loads DLLs based on an interface definition, so adding a new "service" is as simple as dropping in a new DLL and restarting the service host.
However, we have started to move away from services. The problem we have with them is that they lock the DLLs (for obvious reasons), so upgrading is a pain: we need to stop, upgrade, and then restart. Not hard, but extra steps. Instead we're moving to special "pages" in our ASP.NET apps that run the actual background jobs we need done. There's still a service, but all it does is invoke the ASP.NET pages, so it doesn't lock any of our DLLs. We can then replace the DLLs in the ASP.NET bin directory, and the normal ASP.NET rules for app-domain restart kick in.
I'm exploring the possibility of writing an application in Erlang, but it would need to have a portion written in Cocoa (presumably Objective-C). I'd like the front-end and back-end to be able to communicate easily. How can this best be done?
I can think of using C ports and connected processes, but I think I'd like a reverse situation (the front-end starting and connecting to the back-end). There are named pipes (FIFOs), or I could use network communications over a TCP port or a named BSD socket. Does anyone have experience in this area?
One way would be to have the Erlang core of the application be a daemon that the Cocoa front-end communicates with over a Unix-domain socket using some simple protocol you devise.
The use of a Unix-domain socket means that the Erlang daemon could be launched on-demand by launchd and the Cocoa front-end could find the path to the socket to use via an environment variable. That makes the rendezvous between the app and the daemon trivial, and it also makes it straightforward to develop multiple front-ends (or possibly a framework that wraps communication with the daemon).
The Mac OS X launchd system is really cool this way. If you specify that a job should be launched on-demand via a secure Unix-domain socket, launchd will actually create the socket itself with appropriate permissions, and advertise its location via the environment variable named in the job's property list. The job, when started, will actually be passed a file descriptor to the socket by launchd when it does a simple check-in.
Ultimately this means that the entire process of the front-end opening the socket to communicate with the daemon, launchd launching the daemon, and the daemon responding to the communication can be secure, even if the front-end and the daemon run at different privilege levels.
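To make the launchd mechanism concrete, a sketch of the job's property list: the label, daemon path, socket name, and environment variable name are all hypothetical, while SecureSocketWithKey is the launchd.plist key that produces the behavior described above.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- hypothetical job label and daemon path -->
        <key>Label</key>
        <string>com.example.erlangd</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/local/libexec/erlangd</string>
        </array>
        <key>Sockets</key>
        <dict>
            <key>Listener</key>
            <dict>
                <!-- launchd creates a secure Unix-domain socket and publishes
                     its path in the ERLANGD_SOCKET environment variable -->
                <key>SecureSocketWithKey</key>
                <string>ERLANGD_SOCKET</string>
            </dict>
        </dict>
    </dict>
    </plist>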
One way is Theo's way with NSTask, NSPipe and NSFileHandle. You can start by looking at the code to CouchDBX http://couchprojects.googlecode.com/svn/trunk/unofficial-binary-releases/CouchDBX/
Ports are possible but not nice at all.
Is there some reason why this communication can't simply be handled with mochiweb and JSON?
Usually when creating Cocoa applications that front UNIX commands or other headless programs you use an NSTask:
Using the NSTask class, your program can run another program as a subprocess and can monitor that program’s execution. An NSTask object creates a separate executable entity; it differs from NSThread in that it does not share memory space with the process that creates it.
A task operates within an environment defined by the current values for several items: the current directory, standard input, standard output, standard error, and the values of any environment variables. By default, an NSTask object inherits its environment from the process that launches it. If there are any values that should be different for the task, for example, if the current directory should change, you must change the value before you launch the task. A task’s environment cannot be changed while it is running.
You can communicate with the backend process by way of stdin/stdout/stderr. Basically, NSTask is a high-level wrapper around exec (or fork or system; I always forget the difference).
As I understand it, you don't want the Erlang program to be a background daemon that runs continuously, but if you do, go with @Chris's suggestion.
The NSTask and Unix domain socket approaches are both great suggestions. Something to keep an eye on is an Erlang FFI implementation that's in the works:
http://muvara.org/crs4/erlang/ffi
erl_call should be usable from an NSTask. I use it from a Textmate command and it is very fast. Combining erl_call with an OTP gen_server would let you keep a persistent backend state with relative ease. See my post on erl_call at my blog for more details.
When using NSTask, you may also consider PseudoTTY.app (which allows interactive communication)!
Another sample worth looking at could be BigSQL, a PostgreSQL client that enables the user to send SQL to a server and display the result.
open -a Safari http://web.archive.org/web/20080324145441/http://www.bignerdranch.com/applications.shtml