I have a simple JavaServer Faces application deployed on a WebLogic server. How can I upgrade the package with zero downtime?
For example, can I create a cluster of nodes and switch the active nodes, or can I do this using only one node?
You can specify a version in MANIFEST.MF and WebLogic will transparently switch over to the new version on deployment, waiting for old sessions to close and routing new sessions to the new version.
Bear in mind that certain versions of WLS can't parse the version as a decimal and will treat it as a plain string, so unless you want v1.9.6 to be considered newer than v1.10.0, just use a simple decimal number.
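For reference, a minimal sketch of the manifest entry (the version value 2.0 is just an example):

    Manifest-Version: 1.0
    Weblogic-Application-Version: 2.0

Deploying an archive whose manifest carries a new version then triggers the side-by-side switchover described above.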
Question: I thought WebLogic and GlassFish/Payara were completely different servers that do not share any common code or components. How come I hit a WebLogic CVE while using Payara?
Configuration: Both our development and production systems are under Payara:
Payara 4.1.1.171.1 Full edition
Oracle Java 1.8.0_144
CentOS 7
Symptoms:
We see illegal connections to the URLs /wls-wsat/CoordinatorPortType11 and /wls-wsat/ParticipantPortType, under anonymous authentication, despite having Apache Shiro as our security system.
We have an unknown Python program running in our production environment. Nothing has been found in development so far.
Payara in development has shut down once, and one deployment failed, leaving Payara stopped (start-domain was required). Payara in production has also shut down once. All of this happened for unknown reasons; notably, there were at most one or two users doing nothing special at the moments of shutdown.
What I can (not) do:
After seeing this and reading this, I think the problem is solved for WebLogic systems, but I don't know the mapping between GlassFish and WebLogic versions, if one exists.
Unless I missed something big, I haven't found anything relating CVE-2017-10271 to Payara.
We are planning to upgrade to Payara 4.1.2.174 shortly, but I have no guarantee it will fix this issue.
I'm trying to check how Shiro can block such connections (see the sketch after this list).
I'm asking this question to make sure there is (or is not) a relationship between WebLogic and GlassFish/Payara before opening an issue on the Payara GitHub. I unsuccessfully tried to run the Python script; I don't know Python :(
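For reference, a minimal sketch of how I imagine locking those paths down in shiro.ini; the /wls-wsat/** pattern and the authc filter are my assumptions, not a confirmed fix:

    [urls]
    # These WebLogic-only endpoints should not exist on Payara at all,
    # so require an authenticated user for any request that reaches them
    /wls-wsat/** = authc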
Planning to migrate ActiveMQ from version 5.5.1 to 5.11.2. How do I migrate the existing messages from the older version (5.5.1) to the newer version (5.11.2)?
Thanks in advance.
This assumes you have already taken care of any migration issues noted in each release note from 5.6.0 to 5.11.2.
There are essentially two ways to upgrade/migrate a broker.
Simply install the new broker and point it at the old (KahaDB) database. The store will automatically be upgraded to the new version. This may cause some downtime during the store upgrade (at least if there are a lot of messages in the store).
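For this first option, the relevant bit of the new broker's activemq.xml would look something like this (the directory path is a placeholder):

    <persistenceAdapter>
        <!-- point the new broker at the directory holding the old broker's KahaDB store -->
        <kahaDB directory="/path/to/old/kahadb"/>
    </persistenceAdapter>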
Have two parallel brokers running at once and let the old one "fade out". You can set up a shiny new 5.11 broker side by side. This also makes it possible to migrate to other store types (JDBC or LevelDB). It's a little more work but will keep your uptime maximized. If you depend on message order, I would not recommend this method.
Set up the new broker.
Remove the transportConnector from the old broker and add a network connector from old to new (see the snippet after these steps).
Stop old, start new, start old.
Now, clients (using failover, right?) will fail over to the new broker, and messages from the old broker will be copied over to the new one as long as there are connected consumers on all queues.
When no more messages are left on the old broker, shut it down and uninstall it.
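A sketch of the network connector from step 2, added to the old broker's activemq.xml (host and port are placeholders):

    <networkConnectors>
        <!-- forward messages from the old broker to the new one -->
        <networkConnector uri="static:(tcp://new-broker-host:61616)"/>
    </networkConnectors>

Clients would then use a failover URL that lists both brokers, e.g. failover:(tcp://old-broker-host:61616,tcp://new-broker-host:61616).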
As with all upgrades, skipping a lot of versions makes the upgrade less reliable. I would try a dry-run upgrade on a production replica to ensure that everything goes as planned.
I have my web application currently hosted on WebSphere Application Server 7.0.0.0.
Now I want to migrate my application to JBoss EAP 6.2.0.GA.
Has anyone done this before? I need help with the issues below.
I want to create the following equivalent components in JBoss:
1) Oracle data source
--> To create an Oracle data source, we first need to create an Oracle JDBC provider, so I also need to know how to create the equivalent of this in JBoss.
2) Queue
3) Activation Specification
4) Shared library to contain configuration files and third-party JARs.
Knowledge of how to deploy applications on JBoss would be an added advantage.
Yes, I have done some googling and found the links below:
http://www.redhat.com/f/pdf/jboss/JBoss_WebSphereMigrationGuide.pdf
https://docs.jboss.org/author/display/AS72/How+do+I+migrate+my+application+from+WebSphere+to+AS+7
But the links don't have any practical knowledge.
I have tried migrating from WebSphere to JBoss/WildFly 10.
Not sure about the other versions of JBoss, but WildFly 10 has a configuration XML file which you can use to configure your server.
I configured the database, queues, queue connection factories, and namespace bindings using this configuration XML, which is available as part of the server installation itself.
The file is present at this location:
YOUR_SERVER_INSTALLATION_HOME/standalone/configuration/standalone.xml
(for example, /opt/jboss/wildfly/standalone/configuration/standalone.xml)
There are multiple possible configurations, and you can customize them to your needs as well. You can refer to the documentation below for customization.
https://docs.jboss.org/author/display/WFLY10/Subsystem+configuration
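As an illustration, an Oracle datasource in standalone.xml looks roughly like this (the JNDI name, URL, credentials, and driver module are placeholders, and the subsystem namespace version may differ between WildFly releases):

    <subsystem xmlns="urn:jboss:domain:datasources:4.0">
        <datasources>
            <datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS">
                <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
                <driver>oracle</driver>
                <security>
                    <user-name>appuser</user-name>
                    <password>secret</password>
                </security>
            </datasource>
            <drivers>
                <!-- the module must be created under WILDFLY_HOME/modules with the Oracle JDBC jar -->
                <driver name="oracle" module="com.oracle.ojdbc"/>
            </drivers>
        </datasources>
    </subsystem>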
My application, which uses persistent Ehcache, gets deployed as an EAR file on the WebLogic server. The deployment strategy used is the production redeployment strategy.
(Production redeployment strategy involves deploying a new version of an updated application alongside an older version of the same application. WebLogic Server automatically manages client connections so that only new client requests are directed to the new version. Clients already connected to the application during the redeployment continue to use the older version of the application until they complete their work, at which point WebLogic Server automatically retires the older application.)
Since the Ehcache of the new application version is configured before the Ehcache of the older version is shut down (the older application is still running), the index and data files are never created and used.
Hence persistence doesn't work.
What could I do to make the cache persistent? I want the Ehcache manager to somehow stop while the new version is being deployed in WebLogic.
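One thing I am experimenting with (a sketch assuming the Ehcache 2.x API; the listener class name is made up for illustration) is shutting the CacheManager down explicitly when a version is retired, so the disk store gets flushed and released:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import net.sf.ehcache.CacheManager;

    // Hypothetical listener, registered in the web module's web.xml
    public class EhcacheShutdownListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // nothing to do on startup
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // flush and release the disk store when WebLogic retires this version
            CacheManager.getInstance().shutdown();
        }
    }

I'm not sure this fully solves it, though, since with production redeployment the new version's cache manager still starts before the old version's contextDestroyed fires.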
Regards
I want to create an application that is not aware of the environment it runs in.
I want to leave the environment-specific configuration up to the GlassFish configuration.
So, e.g., I have a persistence.xml which 'points' to a JTA data source:
<jta-data-source>jdbc/DB_PRODUCTSUPPLIER</jta-data-source>
In GlassFish this data source is configured to 'point' to a connection pool.
This connection pool is configured to connect to a database.
I would like to have a mechanism such that I can define these resources for a production and an accept environment without having to change the JNDI name, because changing it would make my application environment-aware.
Do I need to create two domains for this? Or do I need two completely separate GlassFish installations?
One way to do this is to use the clustering features (the GF 2.1 default install is often in developer mode, so you'll have to enable clustering; in GF 3.1 clustering seems to be on by default).
As part of clustering, you can create stand-alone instances that do not participate in a cluster. Each instance can have its own config. These instances share everything under the Resources section, and each instance can have separate values for the system properties, most importantly separate port numbers.
So a usage scenario would be that your accept/beta environment runs on its own instance with different ports (defaults being 38080, 38181, etc., assuming you're doing an HTTP app). When running this way, your new instance will be running in a separate JVM. With GF 2.1, you need to learn how to manage the node agent. With GF 3.1, you won't have to worry about that.
When you deploy an application, you must choose the destination, called a Target, so you can have an accept/beta version on one instance and a production version on the other instance.
This is how I run beta deployments with our current GF 2.1 non-clustered setup and it works pretty well.
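For reference, a rough sketch of the asadmin side of this (instance names, the DB_HOST property, and hostnames are made up; exact flags differ between GF versions):

    # stand-alone instances, one per environment
    asadmin create-instance --node localhost-domain1 accept-instance
    asadmin create-instance --node localhost-domain1 prod-instance

    # per-instance values for a system property that the shared connection pool
    # references, e.g. a pool whose serverName is set to ${DB_HOST}
    asadmin create-system-properties --target accept-instance DB_HOST=accept-db.example.com
    asadmin create-system-properties --target prod-instance DB_HOST=prod-db.example.com

    # deploy each version to its own target
    asadmin deploy --target accept-instance myapp.war
    asadmin deploy --target prod-instance myapp.war

The jdbc/DB_PRODUCTSUPPLIER JNDI name stays identical everywhere; only the instance-level property values differ.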