I would prefer to serialize new data objects without restarting the production cluster.
Please keep in mind that you don't have to do a full restart; you can do a rolling restart (restart the members one by one, adding the new class to each) before you start working on the newly added object.
On the other hand, there is the user code deployment feature (however, this is offered in the Enterprise edition) that lets you load classes into the cluster from members/clients: http://docs.hazelcast.org/docs/3.10.3/manual/html-single/index.html#member-user-code-deployment-beta
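If that feature is an option for you, here is a minimal sketch of enabling member user code deployment programmatically (the class-cache and provider modes shown are just reasonable choices, not requirements; the linked docs show the XML equivalent):

import com.hazelcast.config.Config;
import com.hazelcast.config.UserCodeDeploymentConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MemberWithUserCodeDeployment {
    public static void main(String[] args) {
        Config config = new Config();
        // Enable the (beta) user code deployment feature on this member,
        // so classes can be loaded from other members/clients.
        config.getUserCodeDeploymentConfig()
              .setEnabled(true)
              .setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.ETERNAL)
              .setProviderMode(UserCodeDeploymentConfig.ProviderMode.LOCAL_AND_CACHED_CLASSES);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}

Every member that should serve or accept classes needs this enabled; clients have an analogous setting on their ClientConfig.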
I have a Nimbus + Storm cluster using ZooKeeper, and I wish to move my cluster to point at a new ZooKeeper ensemble. Do you know if this is possible? Can I keep all the information from the old ZooKeeper and save it in the new one? Is it possible to do this without downtime?
I have looked on the internet for this procedure, but I have not found much.
Would it be as simple as changing the storm.yaml file on both the master and worker nodes? Do I need a restart afterwards?
# storm.zookeeper.servers:
# - "server1"
# - "server2"
If you just change storm.yaml, you'd be pointing Storm at a new, empty ZooKeeper cluster, and it would be as if you had just installed Storm from scratch. More likely, you want to grow your ZooKeeper cluster to include your new machines, then update storm.yaml to point at the new machines, then shrink the cluster to exclude the machines you want to move away from. That way, your ZooKeeper quorum is preserved even though you've moved to other physical machines.
This is easier to do on ZooKeeper 3.5 with dynamic reconfiguration: http://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html. I'm unsure whether Storm will run on ZooKeeper 3.5, but you may want to investigate whether you can upgrade to 3.5 before growing/shrinking the cluster.
Otherwise you will have to do a rolling restart to add the new ZooKeeper nodes, then do another one to remove the old machines once the cluster has stabilized.
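For the ZooKeeper 3.5 route, a rough sketch of the grow/shrink using the dynamic reconfiguration API (the server names, ports, and IDs below are placeholders, and the ensemble must have reconfigEnabled=true):

import java.util.Arrays;
import org.apache.zookeeper.admin.ZooKeeperAdmin;
import org.apache.zookeeper.data.Stat;

public class EnsembleMigration {
    public static void main(String[] args) throws Exception {
        // Connect through the existing ensemble.
        ZooKeeperAdmin admin = new ZooKeeperAdmin("server1:2181,server2:2181", 30000, event -> { });

        // Grow: add the new machines to the quorum.
        admin.reconfigure(
                Arrays.asList("server.3=newserver1:2888:3888;2181",
                              "server.4=newserver2:2888:3888;2181"),
                null, null, -1, new Stat());

        // ...update storm.yaml on all Storm nodes to point at the new servers,
        // then shrink: remove the old servers (by ID) once the cluster has stabilized.
        admin.reconfigure(null, Arrays.asList("1", "2"), null, -1, new Stat());

        admin.close();
    }
}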
Let me suggest a hack here. This is a script provided by Microsoft for migration on HDInsight clusters, but you can adapt it to your needs.
The script can be downloaded from https://github.com/hdinsight/hdinsight-storm-examples/tree/master/tools/zkdatatool-1.0 and you can read more about it here:
https://blogs.msdn.microsoft.com/azuredatalake/2017/02/24/restarting-storm-eventhub/
I have used it in the past when I had to migrate some things between PaaS clusters, and I can confirm it works fine!
I am dealing with the infrastructure for a new project. It is a standard Laravel stack: PHP, an SQL server, and Nginx. For the PHP + Nginx part, we are using a Kubernetes cluster, so scaling and blue/green deployments are taken care of.
When it comes to the database, I am a bit unsure. We don't want to use Kubernetes for SQL, so the current idea is to go for the Google Cloud SQL managed service (are the competitors better for blue/green deployment of SQL?). The question is whether it can sync the data between old and new versions of the database nodes.
Let's say that we have 3 active Pods and at least 2 active database nodes (and a load balancer).
So the standard deployment should look like this:
1. A Pod with the new code is created.
2. A new database node is created with the current data.
3. The new Pod gets new environment variables to connect to the new database.
4. Database migrations are run on the new database node.
5. A health check for the new Pod is run; if it passes, the Pod starts to receive traffic.
6. One of the old Pods is taken offline.
This iteration repeats until all of the Pods and database nodes are replaced.
The question is whether this can work with the database. Let's imagine a user on the website is writing some data through the last OLD database node; when they are switched to the NEW database node, that data is simply not there until the last database node is upgraded. Can the nodes be synced behind the scenes? Does the Google Cloud SQL managed service provide that?
Or is there a completely different and better solution to this problem?
Thank you!
I'm not 100% sure if this is what you are looking for, but to my understanding, Cloud SQL replicas would be a better solution. You can have read replicas [1], which are copies of the master instance, and they come with different options [2]:
A read replica is a copy of the master that reflects changes to the master instance in almost real time. You create a replica to offload read requests or analytics traffic from the master. You can create multiple read replicas for a single master instance.
or a failover replica [3], so that if the master goes down, the data continues to be available:
If an instance configured for high availability experiences an outage or becomes unresponsive, Cloud SQL automatically fails over to the failover replica, and your data continues to be available to clients. This is called a failover.
You can combine those if you need.
I am developing a web application based on Spring. I added Apache Ignite as a Maven dependency.
It is a very simple application with only 2 REST APIs. One queries by key and returns an object; the other puts data.
But I have a problem: when I develop additional functionality, I don't know how to deploy the application.
The application should always be available, but if I deploy it to one node, that node may become unavailable.
Is there a good deployment method for a distributed in-memory application?
In your case you would typically start an Ignite server node embedded in your application. You can then start multiple instances of the application, and as long as the nodes discover each other, they will share the data. For more information about discovery configuration, see https://apacheignite.readme.io/docs/cluster-config.
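For illustration, a minimal sketch of such an embedded server node (the cache name and backup count are assumptions; with the default discovery SPI, nodes on the same network find each other via multicast):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmbeddedIgniteNode {
    public static void main(String[] args) {
        // Start an embedded server node with default discovery settings.
        Ignite ignite = Ignition.start(new IgniteConfiguration());

        // A cache backing the two REST endpoints; one backup copy means
        // the data survives the loss of a single node.
        CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setBackups(1);
        IgniteCache<String, Object> cache = ignite.getOrCreateCache(cacheCfg);

        cache.put("key1", "value1");               // what the "put data" endpoint would do
        System.out.println(cache.get("key1"));     // what the "query by key" endpoint would do
    }
}

You can then deploy the application to the nodes one at a time; as long as at least one node stays up and the nodes discover each other, the data remains available during the rolling deploy.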
I am using GlassFish 3.1.2, and I have set up a cluster with one node and two instances.
I have a message-driven bean in my application that subscribes to a topic, and I deployed the application to the cluster.
When I publish a message to the topic I want both instances to receive the message.
However, in practice I am finding that only one instance receives the message.
I believe I am running into a feature called "shared subscriptions":
http://docs.oracle.com/cd/E18930_01/html/821-2438/gjzpg.html#MQAGgjzpg
The feature (which is enabled by default) says that beans in the cluster with the same client id are shared, and are effectively only one subscription.
It says that by default the client id of an MDB is its name, which means that both my instances are using the same client id.
So, other than completely disabling this feature, I would like to know if it is possible to set up an MDB so that each instance subscribes with a different client ID. This seems a bit tricky, since both instances are using the same WAR file. I think you can set the client ID in an annotation, but I'm not sure whether that can be changed at runtime...
I'm not sure why you would completely disable this feature. In the link you provided, it states clearly that you configure this per ActivationSpec/MDB. So as far as I understand it, it would affect only the MDB you have at hand.
For an MDB, set the ActivationSpec property useSharedSubscriptionInClusteredContainer to false. Do this in exactly the same way as with other ActivationSpec properties, using annotations in the MDB itself or in the deployment descriptor ejb-jar.xml or glassfish-ejb-jar.xml.
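As a sketch, doing it with annotations in the MDB itself might look like this (the topic name and class are placeholders; the property name comes from the GlassFish documentation quoted above):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(mappedName = "jms/MyTopic", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Topic"),
    // Give each clustered instance its own subscription instead of sharing one.
    @ActivationConfigProperty(propertyName = "useSharedSubscriptionInClusteredContainer",
                              propertyValue = "false")
})
public class MyTopicMDB implements MessageListener {
    public void onMessage(Message message) {
        // handle the message
    }
}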
But you can of course set the client ID on a connection dynamically at runtime. Please note that you would probably have to manage the JMS connection yourself a bit more, rather than relying on the features managed by the container.
http://docs.oracle.com/javaee/6/api/javax/jms/Connection.html#setClientID(java.lang.String)
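A rough sketch of that approach (the connection factory JNDI name is a placeholder; deriving the ID from the GlassFish instance-name system property is just one way to make it unique per instance):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;

public class PerInstanceSubscriber {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");

        Connection connection = cf.createConnection();
        // setClientID must be called before the connection is otherwise used.
        String instanceName = System.getProperty("com.sun.aas.instanceName", "instance1");
        connection.setClientID("myApp-" + instanceName);
        connection.start();
        // ... create a session and a topic subscriber, then close when done.
        connection.close();
    }
}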
I want to create an application that is not aware of the environment it runs in.
The environment-specific configuration I want to leave up to the configuration of GlassFish.
So, e.g., I have a persistence.xml which 'points' to a JTA data source:
<jta-data-source>jdbc/DB_PRODUCTSUPPLIER</jta-data-source>
In GlassFish this data source is configured to 'point' to a connection pool.
This connection pool is configured to connect to a database.
I would like to have a mechanism such that I can define these resources for a production and an acceptance environment without having to change the JNDI name, because changing it would mean that my application is environment-aware.
Do I need to create two domains for this? Or do I need two completely separate GlassFish installations?
One way to do this is to use the clustering features (the GF 2.1 default install is often in developer mode, so you'll have to enable clustering; GF 3.1 clustering seems to be on by default).
As part of clustering, you can create stand-alone instances that do not participate in a cluster. Each instance can have its own config. These instances share everything under the Resources section, and each instance can have separate values in its system properties, most importantly separate port numbers.
So a usage scenario would be that your accept/beta environment runs on its own instance with different ports (the defaults being 38080, 38181, etc., assuming you're running an HTTP app). When running this way, your new instance will run in a separate JVM. With GF 2.1, you need to learn how to manage the node agent; with GF 3.1, you won't have to worry about that.
When you deploy an application, you must choose the destination, called a Target, so you can have an accept/beta version on one instance, and a production version on the other instance.
This is how I run beta deployments with our current GF 2.1 non-clustered setup and it works pretty well.