Tools to update dynamic properties without restarting the application/server - Apache

In my project I am trying to set things up so that I can update dynamic properties on the server/application without restarting it.
We face the problem that whenever we have to update or change properties that are dynamic in nature, we have to restart the server/application every time, and this makes the server unavailable for that period.
I have already found one tool for this, Archaius-ZooKeeper: https://github.com/Netflix/archaius/
We are trying to do this for JBoss servers, where we deploy the application as a WAR file.
Please suggest any other methods, tools, or technologies that can be used for this.
Thanks in advance.

You could consider JRebel, which allows you to redeploy your app without any downtime; you can then use JRebel Remoting to redeploy from Eclipse to a remote server.

You may use ZooKeeper. Create a znode and store the properties in it. All your servers/applications should read from this znode and also put a watch on it for data changes.
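A minimal sketch of that pattern with the plain ZooKeeper Java client, assuming the properties are stored as a java.util.Properties payload in a single znode (the path, connect string, and timeout are placeholders):

import java.io.ByteArrayInputStream;
import java.util.Properties;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkPropertyWatcher implements Watcher {

    private static final String CONFIG_PATH = "/config/myapp";   // placeholder znode path

    private final ZooKeeper zk;
    private volatile Properties current = new Properties();

    public ZkPropertyWatcher(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 30000, this);
        reload();
    }

    private void reload() throws Exception {
        // passing 'this' re-registers the watch; ZooKeeper watches are one-shot
        byte[] data = zk.getData(CONFIG_PATH, this, null);
        Properties fresh = new Properties();
        fresh.load(new ByteArrayInputStream(data));
        current = fresh;
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
            try {
                reload();   // pick up the new values without any restart
            } catch (Exception e) {
                // in real code: log and retry with back-off
            }
        }
    }

    public String get(String key, String defaultValue) {
        return current.getProperty(key, defaultValue);
    }
}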
Alternatively, you may use a database to store the properties along with their modification time. Whenever you change the value of a property, the corresponding modification time is updated. All your applications/servers poll for the delta at some interval (say every 2 or 5 seconds).
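A bare-bones sketch of that polling approach, assuming a hypothetical APP_PROPS(name, value, modified_at) table; the JDBC details and interval are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DbPropertyPoller {

    private final ConcurrentMap<String, String> props = new ConcurrentHashMap<>();
    private volatile Timestamp lastSeen = new Timestamp(0L);

    public void start(String jdbcUrl, String user, String password) {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name, value, modified_at FROM APP_PROPS WHERE modified_at > ?")) {
                ps.setTimestamp(1, lastSeen);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        props.put(rs.getString("name"), rs.getString("value"));
                        Timestamp t = rs.getTimestamp("modified_at");
                        if (t.after(lastSeen)) {
                            lastSeen = t;   // remember the newest change we have seen
                        }
                    }
                }
            } catch (SQLException e) {
                // keep the previously loaded values and try again on the next tick
            }
        }, 0, 5, TimeUnit.SECONDS);   // poll every 5 seconds, as suggested above
    }

    public String get(String key) {
        return props.get(key);
    }
}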
Or you may host the properties on a web server, on NFS, or in some distributed cache. All your applications/servers re-read them at some interval to detect changes.

You can use Spring Cloud Zookeeper. I shared a little example here.
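For anyone looking for the shape of it, a minimal sketch of the consuming side, assuming spring-cloud-starter-zookeeper-config is on the classpath and the value is stored under the default /config/<application-name> znode; the property key and endpoint are made up:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope   // the bean is rebuilt when Spring Cloud picks up a change from Zookeeper
public class GreetingController {

    @Value("${greeting.message:Hello}")   // 'greeting.message' is a hypothetical property key
    private String message;

    @GetMapping("/greeting")
    public String greeting() {
        return message;
    }
}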

Related

Camel readlock strategy in cluster

We are trying to move to a cluster with Apache Camel. So far we have had it on one node and it worked well.
One node:
I have the readLock strategy set to 'changed', which keeps track of file changes with a camelLock file, and only when the file has finished downloading will it be picked up for processing. But the Camel readLock strategy 'changed' is discouraged in clustering; according to the Camel documentation, 'idempotent' is recommended. This is what happens when I am testing with a 5 GB file.
Two nodes:
I have the readLock strategy set to 'idempotent', which distributes files to one of the nodes, but Camel starts processing the file even before it has finished downloading.
Is there a way to stop Camel from processing the file before it has finished downloading when the readLock strategy is 'idempotent'?
Even though both "readLock=changed" and "readLock=idempotent" cause the file-consumer to wait, they really address quite different use-cases: while "readLock=changed" guards against the file being incomplete (i.e. still being written by some producer/sender), "readLock=idempotent" guards against a file being read by two consumer routes. It's a bit confusing that they're addressed by the same option.
First, to address the "changed" scenario: can the sender be changed so that it writes the file to one directory and then, when it is done writing, copies it into the directory being monitored by your file consumer? If this is under your control, this is a good way of letting the OS handle things instead of trying to deal with it yourself. (This does not address the issue of multiple readers.) Otherwise, I suggest you revert to readLock=changed.
Next, on multiple readers: one workaround is to have this route run on only one node of your cluster. This might seem to defeat the purpose of clustering, but it is quite possible that you're starting up additional nodes to help with other routes, and you're fine with this particular route running on just one node. It's a bit of a hack, because all nodes are no longer equal, but it is still an option to consider. The simplest approach would be to start one node with some environment property that flags it as the node that will handle file reading, or something similar.
If you do want the route on multiple nodes, you can start by using the option "idempotent=true" but this is not good enough on its own. The option uses a repository, where it records what files have been read before, and the default repository is in-memory (i.e. each node has its own). So, the default implementation is helpful if the same file is actually being received more than once, and you wish to skip it. However, if you want it to work across nodes, you have to use a different repository.
One central repository could be a database. In that case you can use Camel's JDBC- or JPA-based repositories. Or you could use something like Hazelcast. See here for your options: http://camel.apache.org/idempotent-consumer.html
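For example, a route along these lines; this is only a sketch, assuming Camel 3, camel-sql's JdbcMessageIdRepository, and a DataSource already bound in the registry as "myDataSource" (the directory, bean, and processor names are placeholders):

import javax.sql.DataSource;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository;

public class ClusteredFileRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        DataSource ds = getContext().getRegistry()
                .lookupByNameAndType("myDataSource", DataSource.class);

        // one repository table shared by every node, so a file claimed on node A is skipped on node B
        getContext().getRegistry().bind("fileRepo",
                new JdbcMessageIdRepository(ds, "clustered-file-consumer"));

        from("file:/data/inbox"
                + "?readLock=idempotent"
                + "&idempotentRepository=#fileRepo"   // use the shared repository instead of the in-memory default
                + "&move=.done")
            .to("direct:process");
    }
}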
You can use readLock=idempotent-changed.
idempotent-changed uses an idempotentRepository and 'changed' as the combined read lock. This allows you to use read locks that support clustering, if the idempotent repository implementation supports that.
You can read more about these idempotent-changed options here: https://camel.apache.org/components/3.13.x/file-component.html
We also used readLock=changed in Docker clustered mode and it worked perfectly, since we set readLockMinAge to a certain interval.
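For illustration, the endpoint described above could look roughly like this (directory, age, and interval are placeholder values):

import org.apache.camel.builder.RouteBuilder;

public class ChangedWithMinAgeRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("file:/data/inbox"
                + "?readLock=changed"
                + "&readLockMinAge=300000"           // only pick up files at least 5 minutes old
                + "&readLockCheckInterval=10000")    // re-check size/modification every 10 seconds
            .to("direct:process");
    }
}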

Migrating Transformations in Pentaho PDI

We are using two servers, one as preprod and the other as production. When we migrate jobs or transformations from preprod to prod, their connection properties are copied as well, and this affects our production job execution.
Can someone let me know how to migrate transformations without copying their connections to the other server?
From the Tools->Options menu, there are two checkboxes that affect PDI's import behavior: "Replace existing objects on open/import" and "Ask before replacing objects".
Normally when migrating between environments, I uncheck the first option. That way, if a connection definition already exists, it is silently left in place rather than replaced. The other way to go is to check both options and answer 'No' when asked to replace an existing definition.
In this way, a transform/job that runs on pre-prod can simply be exported and imported into prod without changing anything, and it runs against prod in the new environment as long as the connections are named the same.
The only thing to watch out for is importing a new connection definition for the first time. There will be no warning that a new connection object is being created, and after import it will still point to pre-prod. After each new connection import, you need to change the connection definition to point to the new environment. The good news is you only have to do that once.
I wish they had an option, or just an info dialog to show all new connection objects created as a result of the import; that way you would know exactly what you need to change. But alas -- earwax.
If by 'connection' you mean 'database connection', JNDI allows you to give them a symbolic name that is independent of your environment: it is when you configure your environment (e.g. biserver or baserver) that you specify which database (JDBC driver, IP and port, ...) this symbolic name refers to.
So your transformations don't contain any reference to a server address and you can deploy them "as is".
I use JNDI for my CDE dashboards in biserver too: to deploy a dashboard, I just export it from the dev environment and import it into the preprod environment without modifying anything.
There are a lot of resources on the web about JNDI. Check the Pentaho documentation too.
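For example, with PDI (Spoon/Kitchen) the JNDI names usually live in data-integration/simple-jndi/jdbc.properties (the BI server has its own JNDI configuration). A sketch of what the dev machine's file could contain, with every name and value purely illustrative:

# simple-jndi/jdbc.properties on the dev machine
warehouse/type=javax.sql.DataSource
warehouse/driver=org.postgresql.Driver
warehouse/url=jdbc:postgresql://dev-db:5432/warehouse
warehouse/user=etl_user
warehouse/password=secret

On the production machine the same symbolic name 'warehouse' simply points to the production host, so the transformation is deployed unchanged.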

Reading different properties for different cluster/node

I have developed a hybrid Worklight app and everything is set up. Now my case is that I have a load balancer and two clusters. These two clusters have been synchronized with only one WAR file. For certain reasons, we have a server-side Java file in the WAR for sharing some global variables with the Worklight adapters.
The problem now is that these 2 clusters work independently (requests are redirected by the load balancer). The global variables in the Java file inside each WAR will not be shared. How can we maintain only one set of global variables in this case?
Or is there any way for the Java code to read the current cluster details (for example the cluster ID or IP address) so that I can write logic to point to different properties in worklight.properties?
[PS: not good at English. I will clarify more if you guys don't understand me]
What you actually need here is not to use static variables to share this information.
I suggest using Redis or Memcached (or some other free solution) to share information across the cluster.
A simpler (but less efficient) solution is to use an SQL database to store/load those shared properties. You can create a "configuration" adapter (an SQL adapter) that the other adapters call to read/write the configuration properties.
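As a rough sketch of the Redis option mentioned above (the host, port, and key prefix are placeholders), the server-side code would call something like this instead of reading static variables:

import redis.clients.jedis.Jedis;

public class SharedConfig {

    private static final String REDIS_HOST = "config-redis.internal";   // placeholder host
    private static final int REDIS_PORT = 6379;

    // every node in both clusters reads/writes the same Redis keys, so the values stay in sync
    public static String get(String key) {
        try (Jedis jedis = new Jedis(REDIS_HOST, REDIS_PORT)) {
            return jedis.get("worklight:config:" + key);
        }
    }

    public static void put(String key, String value) {
        try (Jedis jedis = new Jedis(REDIS_HOST, REDIS_PORT)) {
            jedis.set("worklight:config:" + key, value);
        }
    }
}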

How to change CommitBatchSize and CommitBatchThreshold

We have transactional replication set up and we want to change CommitBatchThreshold, but we couldn't find proper documentation on how to change it. Would adding '-CommitBatchThreshold 10' to the Steps in the Agent Properties work?
I want to be sure before changing anything.
Yes, adding the parameters and parameter values to the Steps page in the Agent Properties will work. Note that if you are running the Distribution Agent continuously, you will need to restart the replication agent for the changes to take effect.
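As an illustration only, the command on the Run Agent step would then look something like this; the bracketed server, database, and publication names are placeholders, and only the last two switches are the ones being added:

-Subscriber [SUBSCRIBER] -SubscriberDB [SubscriberDB] -Publisher [PUBLISHER] -PublisherDB [PublisherDB] -Distributor [DISTRIBUTOR] -Publication [Publication] -Continuous -CommitBatchSize 100 -CommitBatchThreshold 10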

Changing createDate on Liferay Journal Article (Web Content) via Liferay API

So here's the situation. I want to add 'old' news from our previous website into an Asset Publisher portlet on our new Liferay 6.1 site. The problem is that I want them to show up as if I had added them in the past.
So, I figure, how hard can it be to modify the createDate? I've since been able to directly access the MySQL database and perform updates on the article object's createDate field. However, it doesn't seem to propagate to my Liferay deployment, regardless of clearing caches, reindexing search indices, and restarting Liferay. The web content still shows its 'original' createDate even though the database shows the value I changed it to.
Here's the query I used:
mysql> UPDATE JournalArticle SET createDate='2012-03-08 15:17:12' WHERE ArticleID = 16332;
I have since learned that it is a no-no to directly manipulate the database, as the interaction between the database and Liferay isn't as straightforward as Liferay simply performing lookups. So it looks like I might need to use the Liferay API, namely setCreateDate as seen here.
But I have absolutely no idea where and how to leverage the API. Do I need to create a dummy portlet with the sole purpose of using this API call? Or can I create a .java file somewhere on the server running my Liferay deployment and run it to leverage this method?
I only have like 15 articles I need to do this to. I can find them by referencing the ArticleID and GroupID.
Any help would be greatly appreciated. I've grepped the Liferay deployment and found setCreateDate being used heavily within .java files inside the knowledge-base-portlet, but I can't tell how else to directly use them without creating a portlet.
On the other hand, if anybody knows how to get my database to propagate its changes to the Liferay deployment, even though I know it's a dirty hack, that would probably be the easiest.
Thanks; I really appreciate it.
Using the Liferay API is of course the cleaner and better way, but for only 15 articles I would try to change it directly in the database.
I checked the database and it seems that Liferay stores the data in these tables: JOURNALARTICLE and ASSETENTRY.
Try to change the created date in both these tables.
Then reload the cache: Control Panel -> Server Administration -> Clear Database Cache.
You can write a hook for the application startup event. That way, whenever Liferay starts, it will change the create date as you desire. Later, if you want to remove the hook, it can be done easily. See here how to create a hook and deploy it.
http://www.liferay.com/community/wiki/-/wiki/Main/Portal+Hook+Plugins
Also, changing the database directly is not recommended at all, even for one value/article. Always use the Liferay-provided service API to make modifications.
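For completeness, a rough sketch of the kind of call such a startup hook (or any server-side plugin code) could make, assuming the Liferay 6.1 journal service API; the group ID, article ID, and date are placeholders for your 15 articles:

import java.text.SimpleDateFormat;
import java.util.Date;

import com.liferay.portlet.journal.model.JournalArticle;
import com.liferay.portlet.journal.service.JournalArticleLocalServiceUtil;

public class BackdateArticles {

    public void backdate(long groupId, String articleId, String newDate) throws Exception {
        Date createDate = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(newDate);

        // getArticle(groupId, articleId) returns the latest version of the article
        JournalArticle article = JournalArticleLocalServiceUtil.getArticle(groupId, articleId);
        article.setCreateDate(createDate);

        // persist through the service layer so caches and the search index stay consistent
        JournalArticleLocalServiceUtil.updateJournalArticle(article);
    }
}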