Ordering of applications deployed in Mule ESB

I am deploying four applications in Mule.
Mule picks them up and deploys them in no particular order.
I want the applications to be deployed in ascending order of their names, because the first application sets some environment variables that the later applications use.
How can I achieve that?
I am using Mule Enterprise ESB 3.5.2 standalone, and I am trying this on a Linux machine. On Windows 7, the same applications are picked up and deployed in ascending order without any extra configuration.
Thanks in advance

Mule will (or at least used to; I haven't tried recently) respect the alphanumeric order of the application names on startup. However, this is not documented and could change; it is not an intentional feature.
Ideally, you would architect your applications so that they are decoupled enough, using asynchronous channels, to have no direct startup dependencies.

Setting environment variables is typically something you should do elsewhere, statically, in your environment. If you have to compute runtime data that should be available to all applications, there are other ways to do that.
Using plain hot deployment or MMC deployment, you cannot know the start order and should design the applications to cope with that. That will make them more reliable and portable as well. Have the applications share information using standard communication methods (HTTP, a database, Hazelcast, or whatnot), as in the sketch below.
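For example, here is a minimal sketch of the Hazelcast option, assuming Hazelcast is on the classpath; the map name, key, and value are made up for illustration:

    // Publish computed settings to a shared Hazelcast map so that other
    // applications can read them regardless of deployment order.
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class SharedConfigPublisher {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            Map<String, String> config = hz.getMap("shared-config"); // arbitrary name
            config.put("endpoint.url", "http://internal-service:8080/api");
            // Consumers call hz.getMap("shared-config").get("endpoint.url") and
            // must tolerate the key being absent until the publisher has run.
        }
    }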
If you really need the startup order to be respected, start Mule using mule -app app1:app2:app3:app4. Note that you will lose the ability to add new applications on the fly.
You can, however, still update an application and it will reload without reloading the other applications. That is something to keep in mind.

Mule generally deploys applications in alphabetical order; for example, an application whose name starts with A is deployed first, then one starting with B, and so on.
So there is not much that can be done here, and the only thing that comes to mind for the time being is to name the apps in alphabetical order so that the dependent apps start after the main or parent apps have started,
but again, I don't think that is a practical or recommended approach.

Related

How to avoid deploying apps and adapters twice on MobileFirst?

We have developed multiple apps and adapters as part of our project,
and we have written little ant scripts to deploy the apps and adapters.
Before executing the ant tasks, we want to know whether the apps and adapters have already been deployed.
Can I use the tables 'PROJECT_ADAPTERS' and 'PROJECT_APPLICATIONS' to avoid duplicate deployment? Or: what will happen if we try to deploy the same apps and adapters twice by mistake?
You should treat the Worklight server as a black box. Although in purely technical terms you can investigate the database and deduce information, in doing so you are not using a formal API, and hence anything you do is unsupported and may become invalid in future product releases.
However, there are published ant tasks for retrieving the list of deployed applications and adapters, so in principle you can use those. That said, I question the wisdom of doing so. My primary concern is that I don't see how knowing that some version of your artefact is deployed is significant: suppose you have changed the source, don't you want to deploy anyway?
The ant tasks are documented in the InfoCenter; search for the topic Administering MobileFirst Applications through Ant.
As Idan has indicated, there is some degree of cleverness in the build tools to avoid redundant deployment. I suggest you just use the tools as they stand rather than trying to circumvent them via back-door approaches.
Nothing bad will happen by re-deploying an adapter or application.
In fact, if the checksum is identical between the already-deployed .wlapp/.adapter and the to-be-deployed .wlapp/.adapter, it may not get deployed at all. And if they do get deployed twice, and it's exactly the same apps and adapters (no code changes), then they'll simply be deployed again; they will not be duplicated.
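As a language-agnostic illustration of the checksum idea, you could record your artifact's digest at build time and skip the deploy target when it hasn't changed. The algorithm and file path below are illustrative, not what the server actually uses:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class ArtifactChecksum {
        // Hex-encoded SHA-256 of a build artifact such as a .wlapp or .adapter file.
        public static String sha256Hex(String path) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(Paths.get(path)));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // Compare against the checksum recorded after the previous deployment
            // to decide whether the ant deploy target needs to run at all.
            System.out.println(sha256Hex("bin/MyProject-common.wlapp"));
        }
    }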

Why test EJB3 in an embedded container?

This could be a stupid question, since almost everyone seems to prefer the embedded-container technique for testing EJBs, but I have to ask because of my lack of experience.
Also, some may argue that embedded containers may not reproduce the real-life situation of deploying to a real app server.
So, when testing EJB3, why is it recommended to use embedded containers instead of a standalone container?
Thanks in advance.
Time.
Testing EJBs in full-blown application servers usually takes a lot of time, because the app server has to "spin up" whenever changes are made, so a lot of time is wasted. Because of that, embedded containers such as OpenEJB can save you a lot of time. Embedded GlassFish is also an option these days, although I haven't personally tried it.
Zero turnaround is a kind of holy grail in Java EE.
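As a minimal sketch of what this looks like with the EJB 3.1 embeddable API (the bean, test, and JNDI module name below are illustrative):

    // --- GreetingBean.java: a hypothetical stateless bean under test ---
    import javax.ejb.Stateless;

    @Stateless
    public class GreetingBean {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // --- GreetingBeanTest.java: boots an embedded container in-process ---
    import javax.ejb.embeddable.EJBContainer;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GreetingBeanTest {
        @Test
        public void greets() throws Exception {
            EJBContainer container = EJBContainer.createEJBContainer();
            try {
                // "classes" is a common module name for an exploded classpath
                // deployment; it can differ per embedded container implementation.
                GreetingBean bean = (GreetingBean)
                        container.getContext().lookup("java:global/classes/GreetingBean");
                assertEquals("Hello, EJB", bean.greet("EJB"));
            } finally {
                container.close();
            }
        }
    }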
Here are the most relevant arguments that I've found. Please comment on these, or add your own reasons about testing with embeddable containers vs. a real application server container. Thank you.
Using an embedded-container testing technique ensures flexibility (you just need to add the new libs to the classpath). As far as I understand, if we want the testing project to be deliverable for several application servers, the test implementation must not be bound to a particular application server container. Some app servers use specific annotations or deployment descriptors; if those are used, you are bound to that app server.
Embedded containers are lighter, which means reduced time for running the tests. Real app servers have difficulties starting and stopping automatically, or can hang, so building a fully automated testing process on a real app server can be too difficult.
Another problem is the stateless nature of most Java EE applications. After a method invocation on a transaction boundary (for example, a stateless session bean), all JPA entities become detached and the client loses its state. This forces you to transport the entire context back and forth between the client and the server (a heavy load); every change of the client's state has to be merged with the server.
With an embedded container you have one process that runs everything (tests and EJBs); with a real app server you have to coordinate two processes (app server and tests).
For full testing, of course, you also need tests on a real app server: different servers have their own particularities, for example class loading. Embedded containers, however, help with testing the logic (unit testing and integration of units), so for daily automated testing this can be enough and easier.
An embedded container is much faster to start and stop than a full container, which certainly affects the developer. Setup and configuration are easier to automate, especially with continuous integration. On the other hand, as some core features are disabled in an embedded container, you can't test everything.
You may want to investigate http://www.jboss.org/arquillian to have both options. From the site:
Arquillian enables you to test your business logic in a remote or embedded container. Alternatively, it can deploy an archive to the container so the test can interact as a remote client.
In the end, it depends on the kind of EJBs you want to test. Certain complex scenarios will not work in an embedded container without mocks for some external services. In my projects we test EJBs with a custom mock container we created (ultra fast and easy to use) and, if all proceeds well, we test in the real thing, a full JBoss, using a remote-control API pretty much like Arquillian.
Hope it helps.
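For reference, a minimal sketch of an Arquillian test, reusing the hypothetical GreetingBean from the earlier sketch and assuming the Arquillian JUnit integration plus a container adapter are on the classpath:

    import javax.ejb.EJB;
    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertEquals;

    @RunWith(Arquillian.class)
    public class GreetingBeanArquillianTest {

        // Arquillian deploys this archive to whichever container is configured
        // (embedded or remote), so the same test runs against both.
        @Deployment
        public static JavaArchive createDeployment() {
            return ShrinkWrap.create(JavaArchive.class)
                             .addClass(GreetingBean.class);
        }

        @EJB
        private GreetingBean bean; // injected inside the container

        @Test
        public void greets() {
            assertEquals("Hello, EJB", bean.greet("EJB"));
        }
    }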

Why use GlassFish instead of Apache? What are its strengths and weaknesses?

Sorry for my ignorance here, but when I hear the word webserver, I immediately imagine Apache, although I know people use Microsoft's IIS too. However, since I've been hanging out here at Stack Overflow I've noticed lots of people use GlassFish.
That made me wonder why I would want to use GlassFish (in the sense that I'm interested, but I don't really understand why it might make my life easier). From what I read, it's Sun's open-source derivative of Apache's Tomcat, so I imagine it's a good (or great) quality product. But since I don't know its strengths and weaknesses, I don't know when it would be wise to choose GlassFish over another server. Could anyone elaborate?
GlassFish is an application server which can also be used as a web server (HTTP server).
A web server handles HTTP requests (usually from browsers).
A servlet container (e.g. Tomcat) can additionally handle servlets & JSP.
An application server (e.g. GlassFish) can additionally manage Java EE applications (usually both servlet/JSP and EJBs).
You should use GlassFish for Java EE enterprise applications.
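To make the distinction concrete, here is a minimal sketch of a servlet, which a servlet container or application server can run but a plain web server cannot (the class name and URL pattern are illustrative):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The container maps requests for /hello to this class and manages
    // its lifecycle, threading, and request/response objects.
    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("Hello from a servlet container");
        }
    }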
A separate web server is mostly needed in a production environment; an application server will normally suffice for most of your development needs. A web server can hold a larger number of active sessions and connections, thus providing the necessary balance without performance costs.
Stick to a simple web server if you are only working with servlets/JSPs. It is also worth noting that in a NetBeans environment, GlassFish has better support than other app servers. In the Eclipse context, though, WSAD and JBoss seem to be the preferred options.
GlassFish will soon release its modular kernel.
This means that the containers you need start up and shut down on demand; i.e. if no EAR is deployed, the EJB container won't start up. This seems to make it very good for development, as it can start and stop very quickly. It takes it a lot closer to development environments like Rails (where redeployment is a massive part of your development).
I have used the GlassFish server for developing web services.
It provides a very interactive Admin Console where an admin can test the web services.
I really find it helpful while developing web services.
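For instance, a minimal sketch of the kind of JAX-WS endpoint GlassFish hosts; once deployed, the Admin Console lists it and offers a tester page (the class and method names are illustrative):

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // GlassFish detects the annotation at deployment time, generates the WSDL,
    // and exposes the endpoint for testing from the Admin Console.
    @WebService
    public class QuoteService {
        @WebMethod
        public String quoteOfTheDay() {
            return "Simplicity is prerequisite for reliability.";
        }
    }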

Best methodology for developing C# long-running processor apps

I have several different C# worker applications that run various continuous tasks: sending emails from a queue, importing new orders from the website database to the orders database, making database backups and restores, running data processing for OLTP -> OLAP, and other related tasks. Before, I released these as Windows services, but currently I release them as regular console applications. They are all based on a common task-runner framework I created, and I am happy with that; however, I am not sure what the best way to deploy these types of applications is.
I like the console version because it is quick and easy, and it is possible to quickly see program activity and output. The downside is that the worker computer ends up with several console windows running and it gets messy. On the other hand, the service method seems to take too long to deploy, and I have to go through event logs to see messages. What are some experiences/comments on this?
I like the console app approach. I typically have things set up so I can pass a switch like -unattended that suppresses the console screen.
A Windows service would be a good choice: it runs in the background even if you close the current session, and you can configure it to start automatically after a Windows restart when performing patch updates on the server. You can log important messages to the event viewer or a database table.
For a thing like this, the standard way of doing it is with Windows services. You want the service to run on the network account so it won't require a logged in user.
I worked on something a few years ago that had similar issues. Logically I needed a service, but sometimes I needed to see what was going on, and generally I wanted a history. So I developed a service which did the work; any time it wanted to log, it called out to its subscribers (implemented as an observer pattern).
The service registered its own data logger (writing to a database), and at run time the user could run a GUI which connected to the service using Remoting to become a live listener!
I'm going to vote for Windows Services. It's going to get to be a real pain managing those console applications.
Windows Service deployment is easy: after the initial install, you just turn them off and do an XCOPY. No need to run any complicated installers. It's only semi-complicated the first time, and even then it's just
installutil MyApp.exe
Configure the services to run under a domain account for the best security and easiest interop with other machines.
Use a combination of event logs (with Error, Warning, and Information) for important notifications, and just dump verbose logging to a text file.
Why not get the best of all worlds and use something like:
http://topshelf-project.com/
It will allow you to run your program as command line or a windows service.
I'm not sure whether this applies to your applications or not, but when I have console applications that are not dependent on user input, or that just do their job and quit, I run them on a virtual server. This way I don't see a screen popping up while I'm working, and virtual servers are easy to create and restart.
We regularly use windows services as the background processes. I don't like command-line apps as you need to be logged into the server for them to run. Services run in the background all the time (assuming they're auto-start). They're also trivial to install w/the sc.exe command-line tool that's in windows. I like it better than the bloat-ware that is installutil.exe. Of course installutil does more, but I don't need what it does. I just want to register my service.
We've also created an infrastructure where we have a generic service .exe that loads DLLs based on an interface definition, so adding a new "service" is as simple as dropping in a new DLL and restarting the service host. A language-agnostic sketch of the pattern follows.
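The answer describes a .NET host, but the pattern is language-agnostic. As an illustrative sketch only, here is the same idea in Java terms using java.util.ServiceLoader (the interface and class names are made up):

    import java.util.ServiceLoader;

    // The contract every pluggable "service" module implements.
    interface BackgroundTask {
        String name();
        void run();
    }

    // Generic host: discovers implementations listed under
    // META-INF/services/BackgroundTask on the classpath and runs them,
    // so adding a task means dropping in a new JAR and restarting the host.
    public class TaskHost {
        public static void main(String[] args) {
            for (BackgroundTask task : ServiceLoader.load(BackgroundTask.class)) {
                System.out.println("Starting task: " + task.name());
                task.run();
            }
        }
    }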
However, we started to move away from services. The problem we have with them is that they lock the DLLs (for obvious reasons), so it's a pain to upgrade them: we need to stop, upgrade, and then restart. Not hard, but extra steps. Instead, we're moving to special "pages" in our asp.net apps that run the actual background jobs we need done. There's still a service, but all it does is invoke the asp.net pages, so it doesn't lock any of our DLLs. Then we can replace the DLLs in the asp.net bin directory, and the normal asp.net rules for app-domain restart kick in.

Monitoring a Custom Service

I've created a service for one of my apps. How do I create a system-tray component in VB.NET that can be used to monitor the progress of the service? Is there a way to have this installed over TCP/IP on multiple client machines, such as those used by our employees?
We do exactly that here: the server runs a really basic HTTP server on a configurable port, on a separate thread, that returns status in an XML format (nothing else, just that). The client just uses a web request to get the XML, then parses it and displays it appropriately.
This approach also allows for future extensibility (detailed status, sending service control commands, adding an association to an XSLT file elsewhere for use with a normal web browser, etc.)
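The original answer is about a .NET service, but the pattern is language-agnostic. Here is a minimal sketch of the idea using the JDK's built-in HTTP server; the port and XML shape are made up:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class StatusEndpoint {
        public static void main(String[] args) throws Exception {
            // Tiny HTTP server on a configurable port, serving only /status.
            HttpServer server = HttpServer.create(new InetSocketAddress(8500), 0);
            server.createContext("/status", StatusEndpoint::handle);
            server.start(); // serves requests in the background
        }

        private static void handle(HttpExchange exchange) throws IOException {
            byte[] body = "<status><state>running</state><queued>42</queued></status>"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body); // the tray client fetches and parses this XML
            }
        }
    }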
You could use WCF for this. Using WCF, your service would open an endpoint which would expose status information to callers. You could then build a tray-icon application that can be deployed to the employees' workstations. The tray-icon application could periodically poll the WCF service that your Windows service is hosting and get status information. I know @Johan mentioned Remoting already, and this is a similar approach, but I'd recommend WCF: the programming API is simpler, IMHO, and WCF will give you more flexibility with regard to network transports, etc.
I guess your question is not about how to actually do the tray-bar part, but how to communicate with the service to get the information you want to show in the monitoring/tray-bar program?
It can be done in many ways. The Windows API is one: SendMessage/PostMessage/GetMessage lets two running programs communicate with each other without having to store anything in files or databases first.
DDE is another way. If it needs to work over the network, there is something called NetDDE, but since I haven't done anything with NetDDE, I can't help there.
But about the API and DDE, feel free to ask more questions if you want some clarification.
I'll take the second question: Is there a way to remotely install software on client machines?
Yes. However it is very dependent on your environment. For example, if you have an Active Directory domain, you can use group policy to force installation of software on the client boxes.
If you don't like that or if you aren't on active directory, you can buy something like Altiris to push installs down.
Another option would be to use login scripts which would run a custom program to detect if your program is installed and take appropriate action. But then you are probably better off buying Altiris.
For the communication part, I have used Remoting before, and it works very well. With a little bit of configuration, you can even get it working across machines.