IBM WebSphere Server JVM properties

There are certain JVM properties (such as the time zone) that can be changed by applications deployed on a WebSphere server. Such a change affects every application on that server. Is there a way to prevent applications from changing JVM properties at runtime? I am wondering if we can set some property on the WebSphere server that then prevents applications from changing anything on the JVM at runtime. In other words, the JVM properties would be controlled by the IBM WebSphere server rather than by the applications deployed on it.

You can prevent applications from setting JVM system properties at runtime by enabling Java 2 security on the application server and then ensuring that none of the deployed applications is configured with a Java 2 security policy file granting java.util.PropertyPermission with the "write" action for any property.
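As a rough sketch of what that looks like from the application's point of view (the property key below is just an example): with Java 2 security enabled and no "write" PropertyPermission granted in the application's policy, a property write fails at runtime.

import java.security.AccessControlException;

public class PropertyWriteDemo {
    public static void main(String[] args) {
        try {
            // Hypothetical write; any property key behaves the same way.
            System.setProperty("user.timezone", "America/Chicago");
            System.out.println("Property write was allowed.");
        } catch (AccessControlException e) {
            // Thrown under Java 2 security when the effective policy omits:
            //   permission java.util.PropertyPermission "user.timezone", "write";
            System.out.println("Property write denied: " + e.getMessage());
        }
    }
}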

Related

How does a GlassFish cluster find active IIOP endpoints?

Out of curiosity, I have been searching for this without any result. In the GlassFish documentation it is written:
If the GlassFish Server instance on which the application client is
deployed participates in a cluster, the GlassFish Server finds all
currently active IIOP endpoints in the cluster automatically. However,
a client should have at least two endpoints specified for
bootstrapping purposes, in case one of the endpoints has failed.
But I am asking myself how this list is created.
I've done some tests with a stand-alone client that runs in its own JVM and makes RMI calls to an application deployed in a GlassFish cluster. From the logs I can see that the IIOP endpoint list is completed automatically and set as the com.sun.appserv.iiop.endpoints system property. But if I stop a server instance or start another one while the client is running, the list remains the one that was created when the JVM started.
GlassFish clustering is managed by the GMS (Group Management Service), which usually uses UDP multicast but can use TCP where multicast is not available.
See section 4, "Administering GlassFish Server Clusters", in the HA Administration Guide (PDF):
The Group Management Service (GMS) enables instances to participate in a cluster by
detecting changes in cluster membership and notifying instances of the changes. To
ensure that GMS can detect changes in cluster membership, a cluster's GMS settings
must be configured correctly.
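On the bootstrapping point in the quoted documentation: a stand-alone client can seed that list itself through the same system property the question mentions. A minimal sketch, with invented host names and JNDI name:

import javax.naming.InitialContext;

public class ClusterClient {
    public static void main(String[] args) throws Exception {
        // Set at least two IIOP endpoints before creating the first
        // InitialContext, so bootstrapping survives one failed instance.
        System.setProperty("com.sun.appserv.iiop.endpoints",
                "node1.example.com:3700,node2.example.com:3700");
        InitialContext ctx = new InitialContext();
        Object stub = ctx.lookup("ejb/SomeRemoteBean"); // hypothetical name
        System.out.println("Looked up: " + stub);
    }
}

As the question observes, GlassFish expands this list from the cluster membership when the client JVM starts; it does not appear to refresh it afterwards.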

How to handle configuration for acceptance and production environments in GlassFish

I want to create an application that is not aware of the environment it runs in.
The environment-specific configuration I want to leave to the GlassFish configuration.
So, for example, I have a persistence.xml that 'points' to a JTA data source:
<jta-data-source>jdbc/DB_PRODUCTSUPPLIER</jta-data-source>
In glassfish this datasource is configured to 'point' to a connection pool.
This connection pool is configured to connect to a database.
I would like to have a mechanism that lets me define these resources for a production and an acceptance environment without having to change the JNDI name, because changing it would make my application environment-aware.
Do I need to create two domains for this? Or do I need two completely separate glassfish installations?
One way to do this is to use the clustering features (GF 2.1's default install is often in developer mode, so you'll have to enable clustering; in GF 3.1, clustering seems to be on by default).
As part of clustering, you can create stand-alone instances that do not participate in a cluster. Each instance can have its own config. These instances share everything under the Resources section, but each instance can have separate values for its system properties; most importantly, these include separate port numbers.
So a usage scenario would be that your acceptance/beta environment runs on its own instance with different ports (the defaults being 38080, 38181, etc., assuming you're running an HTTP app). When run this way, your new instance lives in a separate JVM. With GF 2.1, you need to learn how to manage the node agent. With GF 3.1, you won't have to worry about that.
When you deploy an application, you must choose the destination, called a target, so you can have the acceptance/beta version on one instance and the production version on the other.
This is how I run beta deployments with our current GF 2.1 non-clustered setup, and it works pretty well.
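On the application side, the code then stays environment-unaware: it only ever references the logical JNDI name, and each instance's configuration decides which pool that name resolves to. A minimal sketch (the DAO class is invented; the resource name is the one from the question; the lookup attribute assumes Java EE 6 / GF 3.x):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

@Stateless
public class SupplierDao {
    // Same logical name in acceptance and production; each GlassFish
    // instance maps it to its own connection pool.
    @Resource(lookup = "jdbc/DB_PRODUCTSUPPLIER")
    private DataSource ds;

    public void ping() throws SQLException {
        Connection c = ds.getConnection();
        try {
            // Connected to whichever database this instance is configured for.
        } finally {
            c.close();
        }
    }
}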

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and the applications that use them. My applications uniformly show a Warning status. Looking at Monitoring for the deployment, I see that the MDB application connects to server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to server #1 and has a cluster identified. Messages sent to either server all flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below:
<start-mdbs-with-application>false</start-mdbs-with-application>
In weblogic-application.xml (the element goes inside the <ejb> element), setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html (the linked documentation is for 8.1, but this is applicable to versions through 12c and later).
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while), the JNDI system throws errors and the apps are truly in a Warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.

Windows Service Container

For my projects I quite often need to create Windows services.
I need them for scheduling operations, file-system watching, and asynchronous or long-running side tasks (backing up files, sending messages, processing incoming mail, notifications, etc.).
I also use them to expose WCF services that are shared across applications in the enterprise.
The self-hosted scenario seems more appropriate to me, as we are still on IIS 6, which is quite limited (HTTP only) for exposing WCF.
(Most of) the services also need to expose some kind of administration interface (web or desktop) for reporting, and for starting and stopping the various services.
It seems strange to me that a "host container" that covers most of these features (hosting, installing new services, a remote admin UI, exposing WCF, scheduling, etc.) with some kind of MEF plugins doesn't already exist.
What are the options if I do not want to start from scratch?
I am a developer of an open-source Windows service hosting framework called Daemoniq. I understand how installers can be an inconvenience, so creating installers on the fly is one of its features. You can download it from http://daemoniq.org
Current features include:
container-agnostic service location via the CommonServiceLocator
set common service properties like serviceName, displayName, description, and serviceStartMode via app.config
run multiple Windows services in the same process
set recovery options via app.config
set service dependencies via app.config
set service process credentials via the command line
install, uninstall, and debug services via the command line
Please feel free to have a look at it. Code contributions are also welcome =D
Thanks!
There is one host server in development at Microsoft, codenamed Dublin.
A possible option would be to create one Windows service, a host application, which loads all of your WCF services and creates a ServiceHost for each of them (for instance, through reflection), as sketched below.
Having only one Windows service makes it easy to administer all the service hosts (you wouldn't have to administer each Windows service, only the in-process hosts).
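A minimal sketch of that idea (type and file names are invented; in a real Windows service, Start and Stop would be called from ServiceBase.OnStart/OnStop, and error handling is omitted):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.ServiceModel;

public class WcfHostContainer
{
    private readonly List<ServiceHost> hosts = new List<ServiceHost>();

    public void Start()
    {
        // Hypothetical plugin assembly; each concrete service type gets its
        // own ServiceHost, with endpoints taken from app.config as usual.
        Assembly plugins = Assembly.LoadFrom("Plugins.dll");
        foreach (Type type in plugins.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract
                     && t.GetCustomAttributes(typeof(ServiceBehaviorAttribute), false).Any()))
        {
            ServiceHost host = new ServiceHost(type);
            host.Open();
            hosts.Add(host);
        }
    }

    public void Stop()
    {
        foreach (ServiceHost host in hosts)
        {
            host.Close();
        }
        hosts.Clear();
    }
}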

WebSphere Application Server EJB Optimization

We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or rather our system, comes in two or three parts.
Part 1: An EAR deployed to one cluster that contains third-party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces.
Part 2: An EAR deployed to the same cluster as the first. This EAR contains EJB 3s that call into the EJB 2s supplied by the vendor and the custom code. These EJB 3s are used by the JSF UI that is also packaged in the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom-code app. These will be EJB 3.0 beans and web services deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can use EJB RMI, but if we are going across clusters and/or cells, then the communication should use web services.
That said, some of us are wondering about performance, and about optimizing communication speed for the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering whether WAS does any optimization between apps in the same node/cluster-node space. If two apps are installed in the same place and call each other via a remote home interface, is WAS smart enough to make it a local call?
Are there other optimization techniques? Should we consider them, or not? What are the costs and benefits? Here is the question from one of our team members, as sent in their email:
The question is: supposing we develop our EJBs as remote EJBs, where our UI controller code talks to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has turned up some very old WebSphere performance-tuning documentation from 2000 that explains a configuration you can set to enable call-by-reference for EJB communication when client and bean are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming
model. Method parameters and return values are serialized over RMI-IIOP and returned
by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs
and clients (typically servlets) in the same application server JVM. The "No Local
Copies" option uses "Call By Reference" and does not create local proxies for called
objects when both the client and the remote object are in the same process. Depending
on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command line parameters to
the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by
changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM.
One side effect of this is that the Java object derived (non-primitive) method parameters
can actually be changed by the called enterprise bean.
We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs applies when they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required to send the request across the wire. There will still be some unavoidable ORB overhead, including object serialization (unless you turn on noLocalCopies, with all the caveats it brings; see the sketch below).
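To make the noLocalCopies caveat concrete, here is a hedged sketch (the interface and method are invented for illustration): with call-by-reference, a colocated caller can observe mutations the bean makes to a passed-in object, which never happens under the default call-by-value semantics.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical remote business interface.
interface InitService extends Remote {
    void addDefaults(List<String> target) throws RemoteException;
}

public class NoLocalCopiesEffect {
    // 'svc' would come from a JNDI lookup of a colocated EJB.
    static void demo(InitService svc) throws RemoteException {
        List<String> names = new ArrayList<String>();
        svc.addDefaults(names);
        // Default RMI-IIOP (call-by-value): 'names' is still empty here,
        // because the bean mutated a deserialized copy.
        // With -Dcom.ibm.CORBA.iiop.noLocalCopies=true (call-by-reference):
        // 'names' now holds whatever the bean added, which is exactly the
        // side effect the tuning document above warns about.
        System.out.println(names);
    }
}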
Alternatively, if you know that the UI controller is colocated, that your method calls do not rely on parameter or return-value copying, and that your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface, which will be much faster than remote access through the ORB.
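A sketch of that last suggestion, with invented names: give the bean a local view alongside the remote one, and let colocated clients (such as the JSF controllers) inject the local interface.

import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Hypothetical business contract shared by both views.
interface ProductService {
    String describe(long productId);
}

// Remote view for cross-JVM (or cross-cluster) clients.
@Remote
interface ProductServiceRemote extends ProductService { }

// Local view for colocated callers: no ORB and no parameter copying,
// but note the pass-by-reference and exception-type differences.
@Local
interface ProductServiceLocal extends ProductService { }

@Stateless
public class ProductServiceBean implements ProductServiceLocal, ProductServiceRemote {
    public String describe(long productId) {
        return "product " + productId; // placeholder logic
    }
}

A colocated JSF controller would then inject the bean as @EJB ProductServiceLocal, while external clients keep using the remote view.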