I have an EAR that I deploy to production at the context root "/".
I'd like to deploy a test version of the application to the same GlassFish instance.
Is it possible to deploy the application under a different context and port in the same instance?
If so, besides changing the context root in application.xml, do I need to change anything else?
Usually you can deploy a test version of the application by altering the context root and deploying it as a whole new application.
However, you must take the application's design into consideration. If the application uses a database, more often than not you'll need a test database instance. All JNDI names (including datasources and EJBs, if any) used by the test and production applications must not conflict. It is ill-advised to run multiple instances of the same application that all reference the same JNDI names.
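For example, the test copy of the EAR could carry its own context root in its application.xml; the module names below are made up, and you would also give the test deployment a distinct application name (for example with asadmin deploy --name) so it does not collide with the production one:

<?xml version="1.0" encoding="UTF-8"?>
<!-- application.xml of the hypothetical test EAR -->
<application xmlns="http://java.sun.com/xml/ns/javaee" version="5">
  <display-name>myapp-test</display-name>
  <module>
    <web>
      <web-uri>myapp-web.war</web-uri>
      <!-- production runs at "/", the test copy gets its own root -->
      <context-root>/test</context-root>
    </web>
  </module>
  <module>
    <ejb>myapp-ejb.jar</ejb>
  </module>
</application>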
Finally, it is standard accepted practice to keep your test and production environments separate, and even to run them on separate machines in the case of mission-critical applications. This is usually done to prevent one environment from accidentally overwriting the other (usually the production one).
I noticed that both options are available while running JBoss, and they both recompile the project (I noticed 'make' running with both). I did see this question, and the accepted answer made sense, but I wasn't sure what hot-swapping means. What is a possible example of a change that could be registered without needing to restart the server?
Your question needs more details to answer completely, but here are some basic concepts:
Hot-swapping is simply copying your project's files into the deployment folder of the application server (unpackaged, i.e. not the .war/.ear but all the separate files). It is usually faster because the changes are immediately visible in the web application. But it is not always possible or supported by application servers, and if you hot-swap .jar files the application server often doesn't pick them up or ends up confused.
Restarting JBoss will stop all existing services (EJBs, pooling, queues, messaging...) and restart them. It is almost the cleanest way to run your application (the cleanest would be undeploy, restart and deploy).
Redeploy means your application and its services are first removed from JBoss, but other services set up at server level (messaging, pools, JMX... depending on your actual settings) stay deployed. Then the application is deployed again (copied from your dev folder or .WAR/.EAR to the JBoss deployment folder).
Typically, you can safely hot-swap (possibly manually) .(x)html, .jsp, .jsf, image, .js and .css files, as JBoss doesn't need to process them (see the example below).
Changes to Java classes deployed as .class files under WEB-INF/classes can often be hot-swapped.
Changes to Java code deployed inside a .jar will almost always need at least a redeployment. Some properly configured OSGi-enabled application servers are more flexible about hot-swapping a complete application (I know GlassFish can, but I don't know which specific setting is needed).
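For example, assuming an exploded (unpackaged) deployment, an edited Facelets page can simply be copied over the deployed copy and is served on the next request without any restart. The paths below are hypothetical and depend on your JBoss version and deployment layout:

# Copy an edited page straight into the exploded deployment;
# JBoss serves the new version on the next request, no restart needed.
cp src/main/webapp/orders/list.xhtml \
   $JBOSS_HOME/server/default/deploy/myapp.war/orders/list.xhtml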
Finally, in development, repeated redeployments sometimes lead to memory leaks or an unstable application server (you'll often see an OutOfMemoryError in the logs); then you need to clean up (undeploy, stop, start, then deploy).
I want to create an application that is not aware of the environment it runs in.
I want to leave the environment-specific configuration up to the GlassFish configuration.
So, for example, I have a persistence.xml which 'points' to a JTA data source:
<jta-data-source>jdbc/DB_PRODUCTSUPPLIER</jta-data-source>
In GlassFish this data source is configured to 'point' to a connection pool.
This connection pool is configured to connect to a database.
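For concreteness, that chain can be set up roughly like this for one environment (the pool name, data source class and connection properties are placeholders; only the JNDI name is fixed in the application):

# The pool points at a concrete database; its name and properties are environment-specific.
asadmin create-jdbc-connection-pool \
  --datasourceclassname org.postgresql.ds.PGSimpleDataSource \
  --restype javax.sql.DataSource \
  --property user=app:password=secret:databaseName=productsupplier \
  productsupplier-pool

# The JNDI name the application sees stays the same in every environment.
asadmin create-jdbc-resource \
  --connectionpoolid productsupplier-pool \
  jdbc/DB_PRODUCTSUPPLIER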
I would like a mechanism that lets me define these resources for both a production and an accept environment without having to change the JNDI name, because changing it would make my application environment-aware.
Do I need to create two domains for this? Or do I need two completely separate glassfish installations?
One way to do this is to use the clustering features (the GF 2.1 default install is often in developer mode, so you'll have to enable clustering; in GF 3.1, clustering seems to be available by default).
As part of clustering, you can create standalone instances that do not participate in a cluster. Each instance can have its own config. These instances share everything under the Resources section, but each instance can have separate values for its system properties, most importantly separate port numbers.
So a usage scenario would be that your accept/beta environment runs on its own instance with different ports (the defaults being 38080, 38181, etc., assuming you're doing an HTTP app). When running this way, your new instance will be running in a separate JVM. With GF 2.1, you need to learn how to manage the node agent. With GF 3.1, you won't have to worry about that.
When you deploy an application, you must choose the destination, called a Target, so you can have an accept/beta version on one instance, and a production version on the other instance.
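A rough sketch of that setup on GF 3.1 (the instance and application names are made up, and the exact commands can differ between versions):

# Create and start a stand-alone instance for the accept/beta environment.
asadmin create-local-instance accept-instance
asadmin start-local-instance accept-instance

# Deploy the same EAR to each target under a different application name.
asadmin deploy --target server --name myapp myapp.ear
asadmin deploy --target accept-instance --name myapp-beta myapp.ear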
This is how I run beta deployments with our current GF 2.1 non-clustered setup and it works pretty well.
What is the best design for having many environments for one web app? Is it better to have multiple Tomcat instances or multiple web app instances deployed on one Tomcat server?
If one server can handle the load, I would say it's better to have just one Tomcat instance and deploy the web app multiple times if necessary.
This way:
You'll have only one server to take care of (secure, administer, backup).
You share hardware resources among applications (RAM, disk, CPU).
The idea of deploying the same web-app several times in order to reduce administration burden is good.
But in my opinion, this isn't an acceptable solution: suppose you deploy a web app twice, once for a TEST environment and a second time for a PRODUCTION environment. The web app may encounter exceptions or errors (typically memory-related issues) that can bring the whole Tomcat server down. In such a situation, problems encountered in one environment would make the other one unavailable.
Therefore, I would rather install as many Tomcat instances as you have environments.
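With separate instances, each environment typically gets its own CATALINA_BASE directory and its own conf/server.xml, differing mainly in port numbers. A minimal sketch for a second (test) instance; the port numbers are just examples:

<!-- conf/server.xml of the test instance; the production instance keeps the default ports -->
<Server port="8105" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8180" protocol="HTTP/1.1" connectionTimeout="20000" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />
    </Engine>
  </Service>
</Server>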
Ideally, you should keep all production code in a completely separate environment as much as possible, just to avoid mistakes and for security reasons.
Depending on your resources and team size, say, for example, you have an enclave for production: web server, database, mail server. This enclave should have rules that disallow any development resources from accessing production resources and vice versa. If your dev resources are compromised, or a script runs against the wrong resource, that layer of protection is there.
Yes, this is all inconvenient, but it could save you from having big headaches in the long run.
When I start a WebLogic instance with a deployed application, the deployment is sometimes left in the prepared state rather than the active state. I have to go to the WebLogic Console and start the deployment manually, which is slow and annoyingly repetitive. Since this is done on a development machine, sometimes 50 times a day, there are no security implications; the server is only visible on the local network. Is there some way to have it always start the deployment in the active state?
Note that I'm not redeploying the application, I instead have it "constantly deployed" and stop/start the Weblogic instance using the scripts in bin directory.
If you are running WebLogic in development mode, you can use the autodeploy folder for your app. See details here: http://download.oracle.com/docs/cd/E11035_01/wls100/deployment/autodeploy.html#wp1021620
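A minimal sketch, assuming a development-mode domain (the archive name and domain path are placeholders): drop the archive into the domain's autodeploy directory and WebLogic deploys and activates it automatically, with no manual start in the console.

# Development-mode domains watch this directory and activate new archives automatically.
cp target/myapp.ear $DOMAIN_HOME/autodeploy/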
I think this should solve your problem.
We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or our system, as I should rather say, comes in two or three parts.
Part 1: An EAR deployed to one cluster that contains 3rd-party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces.
Part 2: An EAR deployed to the same cluster as the first. This EAR contains EJB 3s that call into the EJB 2s supplied by the vendor and the custom code. These EJB 3s are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0s and web services that are deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services.
That said, some of us are wondering about performance and about optimizing communication for speed in the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering whether WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call?
Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members, as sent in their email:
The question is: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that describes a tuning option you can set to enable call by reference for EJB communication when both are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object (non-primitive) method parameters can actually be changed by the called enterprise bean.
We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs is if they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required if the request needed to go across the wire. There will still be some necessary ORB overhead including object serialization (unless you turn on noLocalCopies, with all the caveats it brings).
Alternatively, if you know that the UI controller is colocated, your method calls do not rely on parameter or return value copying, and your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface that will be much faster than remote access through the ORB.
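A minimal sketch of that local subinterface idea with EJB 3 annotations (all names are hypothetical; in practice each type goes in its own source file). The colocated JSF/UI controller injects the local view and avoids ORB marshalling, while remote clients in the other cluster keep using the remote view:

import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Plain base interface shared by both views.
public interface PriceService {
    double quote(String productId);
}

@Remote
public interface PriceServiceRemote extends PriceService { }

@Local
public interface PriceServiceLocal extends PriceService { }

// The bean exposes both views; a colocated client does
//   @EJB PriceServiceLocal prices;
// and gets pass-by-reference local calls with no serialization.
@Stateless
public class PriceServiceBean implements PriceServiceLocal, PriceServiceRemote {
    public double quote(String productId) {
        return 42.0; // placeholder business logic
    }
}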