AWS Neptune - How Can I Migrate My 1.0.5.1 Based Cluster to Serverless? - amazon-neptune

The Problem
I have an Amazon Neptune cluster with an instance running in the db.t3.medium DB instance class. I do not see an option to move this to a Serverless instance class.
How can I migrate this instance?

Root Cause
You can only migrate an instance whose cluster is running Neptune engine version 1.2 or later.
How to Fix
You need to upgrade your Neptune engine version to 1.2 first. Once that is done, you will get the option to migrate to Serverless.
The engine version is controlled not on the individual instances but at the cluster level. If you are running an older version of the engine, you may need to upgrade incrementally: first to the highest version within your current major version group, then up to the next major version. If you are running 1.0.x, you will first need to go to 1.1.0 R7 and then move on to 1.2.
As with any major version upgrade, you may incur some downtime during the migration.
To change the engine version, "Modify" the cluster (not instance) settings (the top right button on the console page) and select the latest possible DB engine version. You can keep the rest of the settings, and you can apply the change immediately if you can afford a short downtime right after. Continue upgrading to the next higher version until you reach 1.2. Each upgrade can take a while.
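If you prefer the AWS CLI, here is a minimal sketch of the same steps. The cluster and instance identifiers and the capacity values are placeholders; verify each intermediate target version with aws neptune describe-db-engine-versions before applying it.

# 1. Upgrade the cluster engine version (repeat per intermediate version until you reach 1.2)
aws neptune modify-db-cluster \
    --db-cluster-identifier my-neptune-cluster \
    --engine-version 1.2.0.0 \
    --allow-major-version-upgrade \
    --apply-immediately

# 2. Add a Serverless scaling configuration to the cluster
aws neptune modify-db-cluster \
    --db-cluster-identifier my-neptune-cluster \
    --serverless-v2-scaling-configuration MinCapacity=1.0,MaxCapacity=8.0 \
    --apply-immediately

# 3. Switch the instance to the serverless instance class
aws neptune modify-db-instance \
    --db-instance-identifier my-neptune-instance \
    --db-instance-class db.serverless \
    --apply-immediately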

Related

Apache geode gemfire pulse

We are using Spring Data GemFire and are planning to migrate to the latest Apache Geode version. In the VMware GemFire version, we had to explicitly set the path of the GemFire installation for Pulse to work properly. If we use the Apache Geode jar, will we be able to get Pulse up and running without specifying the installation location?
We are not using gfsh in our project, and we want to ensure that we have minimal dependency on the installed distribution when we upgrade GemFire.
You don't need to set the GEODE_HOME environment variable when using spring-boot-data-geode; you just need to make sure the correct dependencies are on the classpath of your application (see here for more details).
I've written a very basic example showing how to start a Locator with the Pulse application embedded; you can find it here.
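For reference, a minimal sketch of the Geode properties that trigger the embedded Pulse (the member name and port are illustrative; the same properties apply however you launch the Locator, as long as the Pulse dependencies are on the classpath):

gfsh start locator --name=locator1 \
  --J=-Dgemfire.jmx-manager=true \
  --J=-Dgemfire.jmx-manager-start=true \
  --J=-Dgemfire.http-service-port=7070

Once the JMX manager starts, Pulse should be reachable at http://localhost:7070/pulse.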
As a side note, and regarding the following:
We are using Spring data gemfire, we are planning to migrate to Apache geode latest version.
In order to avoid weird and hard-to-fix runtime issues, please always make sure to use a combination of versions fully supported in the Spring Boot for Apache Geode and VMware Tanzu GemFire Version Compatibility Matrix.
After going through various answers and documentation, I was able to start Pulse with the help of the following article:
Start Gemfire Pulse

What is the best strategy to select which parameters to use for Geode server and locator startup script

Our company uses Geode services for some of our applications, and we also make use of Geode member group configurations for maintaining different regions.
We have been undergoing an effort of migrating our applications from Geode version 1.6 to the latest version 1.12.
We have seen a dramatic performance decrease after the upgrade if we use the older parameters for the server and locator startup scripts; things work fine when we remove those parameters.
We are now planning to review the parameters (both those used earlier and those currently available) to determine the most optimal configuration for the server and locator, so we can get the best out of the new Geode version.
I was wondering if someone has any best practices or recommendations to follow for this task.
Below are the configurations for the Geode locator and server startup scripts for old and new versions.
Locator startup command
---Old configuration (works great with Geode 1.6 but not with any version after Geode 1.8)
gfsh start locator --locators=$locators_str \
  --name=${EC2_HOSTNAME}.aws.compnaynamedigital.net \
  --initial-heap=2G --max-heap=2G \
  --dir=/opt/compnayname/geode/locator \
  --J=-Dlog4j.configurationFile=/opt/compnayname/geode/log4j2-locator.xml \
  --J=-DCLUSTER=${ECS_CLUSTER} \
  --J='-javaagent:/opt/compnayname/geode/jmxtrans-agent-1.2.6.jar=/opt/compnayname/geode/jmxtrans-agent-locator.xml' \
  --J=-Dgemfire.distributed-system-id=${DISTRIBUTED_SYSTEM_ID} \
  --J=-Dgemfire.member-timeout=30000 \
  --J=-Dgemfire.max-num-reconnect-tries=0 \
  --J=-Dgemfire.jmx-manager=true \
  --J=-Dgemfire.jmx-manager-start=true \
  --J=-Dgemfire.jmx-manager-port=1099 \
  --J=-Dgemfire.http-service-port=0 \
  --J=-Dgemfire.log-level=info \
  --J=-Dgemfire.log-file-size-limit=10 \
  --J=-Dgemfire.log-disk-space-limit=10 \
  --J=-Dgemfire.disable-auto-reconnect=true
---New configuration (works great with all versions)
gfsh start locator --locators=$locators_str \
  --name=${EC2_HOSTNAME}.aws.compnaynamedigital.net \
  --J=-Xmx2048m \
  --dir=/opt/compnayname/geode/locator \
  --J=-Dlog4j.configurationFile=/opt/compnayname/geode/log4j2-locator.xml \
  --J='-javaagent:/opt/compnayname/geode/jmxtrans-agent-1.2.6.jar=/opt/compnayname/geode/jmxtrans-agent-locator.xml'
Server startup command
---Old configuration (works great with Geode 1.6 but not with any version after Geode 1.8)
gfsh start server --locators=$locators_str \
  --name=${EC2_HOSTNAME}.aws.compnaynamedigital.net \
  --initial-heap=${GEODE_INIT_HEAP} --max-heap=${GEODE_MAX_HEAP} \
  --group=${SERVER_GROUP} \
  --dir=/opt/compnayname/geode/server \
  --classpath=/opt/compnayname/geode/services-geode.jar \
  --J=-Dlog4j.configurationFile=/opt/compnayname/geode/log4j2-server.xml \
  --J=-DCLUSTER=${ECS_CLUSTER} \
  --J='-javaagent:/opt/compnayname/geode/jmxtrans-agent-1.2.6.jar=/opt/compnayname/geode/jmxtrans-agent-server.xml' \
  --J=-Dgemfire.distributed-system-id=${DISTRIBUTED_SYSTEM_ID} \
  --J=-Dgemfire.member-timeout=30000 \
  --J=-Dgemfire.max-num-reconnect-tries=0 \
  --J=-Dgemfire.socket-buffer-size=16777215 \
  --J=-Dgemfire.off-heap-memory-size=${GEODE_OFF_HEAP} \
  --J=-XX:+UseParNewGC \
  --J=-XX:+UseConcMarkSweepGC \
  --J=-XX:CMSInitiatingOccupancyFraction=60 \
  --eviction-heap-percentage=70 --critical-heap-percentage=90 \
  --J=-Dgemfire.http-service-port=0 \
  --J=-Dgemfire.log-level=info \
  --J=-Dgemfire.log-file-size-limit=10 \
  --J=-Dgemfire.log-disk-space-limit=10 \
  --J=-Dgemfire.disable-auto-reconnect=true \
  ${ADDTL_GEODE_SERVER_OPTS}
---New configuration (works great with all versions)
gfsh start server --locators=$locators_str \
  --name=${EC2_HOSTNAME}.aws.compnaynamedigital.net \
  --J=-Xmx${GEODE_MAX_HEAP} \
  --group=${SERVER_GROUP} \
  --dir=/opt/compnayname/geode/server \
  --classpath=/opt/compnayname/geode/services-geode.jar \
  --J=-Dlog4j.configurationFile=/opt/compnayname/geode/log4j2-server.xml \
  --J='-javaagent:/opt/compnayname/geode/jmxtrans-agent-1.2.6.jar=/opt/compnayname/geode/jmxtrans-agent-server.xml'
Test Environment Details
We are using the exact same environment (read AWS) for testing the old and new configurations and performing the same test to measure the response time. We are using 3 Geode locators and 3 Geode servers for the different member groups.
The only difference is the Geode version.
We are performing a count operation: we have written a count function, executed on Geode regions, that counts the records in the downloaded data, which consists of data sketches (https://datasketches.apache.org/). This count operation on the same data in the same testing environment gives a drastically slower response with the old configuration on any Geode version beyond 1.8.
Another surprising thing is that if I use the old configuration on my local laptop (which serves as both locator and server) with any Geode version greater than 1.8 (including the latest), I do not see this issue. Somehow these extra configurations cause slowness only in the distributed AWS environment.
Please let me know if more information is required and I will be glad to provide more details.
Any information on this will be appreciated.
The main difference I see is the inclusion of --J='-javaagent:/opt/xyz/geode/jmxtrans-agent-1.2.6.jar=/opt/xyz/geode/jmxtrans-agent-server.xml' in the startup parameters. This seems to be a third-party Java agent that exposes several JVM metrics through JMX.
Do you know if the agent itself is modifying the byte code? I've seen negative effects of that approach for Geode applications in the past (not performance related, though). Have you tried upgrading the agent to the latest available version (1.2.10)? As a side note, Geode already exposes a lot of metrics and information through JMX out of the box; is there any reason why you're relying on yet another external tool for this?
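As an illustration, a quick sketch of pulling Geode's built-in metrics with nothing but gfsh (the locator endpoint and member name are placeholders):

gfsh -e "connect --locator=localhost[10334]" \
     -e "show metrics" \
     -e "show metrics --member=server1 --categories=jvm"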
We have seen dramatic performance decrease after the upgrade
How are you measuring performance? Where do you see the degradation? Are you executing exactly the same workload on exactly the same machines, where the only difference is the Geode version? There are several actors in play here.
That said, diagnosing and troubleshooting performance degradations can be a long and tough process, so my suggestion would be to open a Geode JIRA ticket with all the relevant information and artefacts.
Cheers.

add new datacenter during datastax upgrade 4.8.8 to 5.0.2

I have multiple datacenters: one of them is a Cassandra datacenter, the other a Solr datacenter. I have already started the upgrade process. One node is still being upgraded, as the "upgradesstables" command has been running for 4 days.
I want to add a new Cassandra datacenter, and I don't have time to wait until the upgrade process is done. Can I add a new Cassandra datacenter with version 5.0.2 while the upgrade process is still going on?
Although you can run a cluster in a partially upgraded state, it is a transient state and not a situation you'd want your cluster to be in for any length of time. There are some operations you should avoid while the cluster is partially upgraded, and your cluster will also show a schema disagreement while in this state.
I would say it's best not to add that new DC into the mix. Please see the upgrade limitations here:
https://docs.datastax.com/en/latest-upgrade/upgrade/datastax_enterprise/upgdDSE50.html#upgdDSE50__restrictions
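For what it's worth, you can watch the state of the running upgrade from another shell with standard nodetool commands (the host name is a placeholder): compactionstats lists in-progress sstable upgrade tasks, and describecluster shows whether the schema versions agree across nodes.

nodetool -h node1 version
nodetool -h node1 compactionstats
nodetool -h node1 describecluster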

How to backup and restore deployment application in glassfish

I want to deploy a new version of an application that is already running on my GlassFish server, in a domain containing many applications. Before deploying the new version, I have to back up the running application so I can restore it if something goes wrong. I've found some possible approaches:
Save the application directory and the domain.xml. If there is a problem, I copy these files back to the server. Could I re-deploy the application this way?
Back up and restore the domain that contains the application. However, I only want to upgrade a single application.
Any help?
Thanks,
GlassFish has built-in support for application versioning, which is very convenient for upgrade-and-revert scenarios. It is possible to deploy a new version of the application without removing the old version, and you may later revert to the previous version via the GlassFish console or the asadmin utility. GlassFish even supports rolling upgrades: multiple versions of the application run simultaneously, new sessions are routed to the new version, and live sessions are served by the old version until none remain, at which point the old version is turned off. In this way, users do not experience any downtime.
Have a look at the GlassFish documentation on application versioning - Chapter Module and Application Versions.
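As a sketch of what that workflow looks like with asadmin (application and version names are illustrative):

# deploy the new version; it becomes the enabled one, the old version stays deployed but disabled
asadmin deploy --name myapp:2.0 myapp-2.0.war
# revert: re-enable the previous version (only one version is enabled at a time)
asadmin enable myapp:1.0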
In short, the restrictions when using GlassFish versioning are:
you must assign a version tag by naming your deployment with a version suffix (only new versions need the tag; the current version can stay untagged)
you must remember which version is the previous one, as more than two versions can be deployed (in your case this will be the one without a version tag)
remember to back up the database and other external resources shared by all versions of the app
I believe that you do not need to back up anything except the old application archive (WAR, EAR): as with a usual deployment, you can always undeploy the new version and deploy the old version again (a restart of the server may be required in between). A backup is necessary only if you need to amend the GlassFish configuration during deployment (new datasources, security, etc.).
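A sketch of that fallback without versioning (names are again illustrative):

# roll back by redeploying the archive you kept aside
asadmin undeploy myapp
asadmin deploy --name myapp myapp-1.0.war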

Set up distributed index using Hibernate Search and Lucene

Our application is using Hibernate Search for indexing some of its data. The application is running on two JBoss EAP 6.2 application servers for load distribution and failover. We need changes made on one machine to be immediately visible on the other. The index is a central part of the application and needs to be consistent with the database data. Completely rebuilding it takes a long time so it is important that it remains intact even in the case of a server crash. Also, the index is expected to grow too large to keep all of it in memory.
Our current solution uses the standard filesystem directory on a shared filesystem (NFS), with the JGroups backend ensuring that only one server writes to a given index at any time. This works more or less, but index updates sometimes take very long (up to 20 seconds) or fail completely. For other reasons, we need to migrate away from the file system we currently use, so we are evaluating alternatives to the current setup.
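For context, a minimal sketch of the Hibernate Search configuration behind this kind of setup (property names as in Hibernate Search 4.x/5.x; the paths are illustrative, and the exact backend values differ between versions):

# shared filesystem directory over NFS, JGroups backend coordinating a single writer
hibernate.search.default.directory_provider = filesystem
hibernate.search.default.indexBase = /mnt/nfs/lucene-indexes
# Hibernate Search 5.x value; older 4.x versions split this into jgroupsMaster/jgroupsSlave
hibernate.search.default.worker.backend = jgroups

# the Infinispan directory alternative discussed below
# hibernate.search.default.directory_provider = infinispan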
One thing we tried is the Infinispan directory with a file cache store for persistence, but we had some problems there regarding OutOfMemoryErrors (see also my post in the Infinispan forums https://developer.jboss.org/thread/253732). Also, performance was still not acceptable in our first tests (about 3 seconds for an index update with two clustered servers set up on my developer machine), though that may be due to configuration issues.
I think this is not such an uncommon requirement, but I couldn't find much information on best practices to implement it.
Who has experiences with similar setups? Does the Infinispan directory work for you? Can anybody suggest a working configuration or how to proceed to arrive at one? What alternatives have you tried and which work?
You need to be careful about which versions are being used. The Infinispan version bundled with JBoss EAP is not intended for storing the Lucene index (i.e., it is not tested as extensively for that purpose as for others).
When JBoss EAP 6.2 was released, the bundled Infinispan was considered good to go for the internal needs of the application server, but as you might have discovered, the index storage feature had at least some performance issues.
In recent development of Infinispan we have applied many improvements to the index storage feature, fixing some bugs and getting very significant performance improvements out of it. Would you be willing to try Infinispan 7.2.0.Beta1?
All of these improvements are also being backported to JBoss Data Grid; version 6.5 will make them available as a supported product. Note that storing a Hibernate Search index wasn't supported before - it is going to be a new feature of JDG 6.5.
Modules from JDG 6.5 will be compatible with JBoss EAP; you'll just have to make sure you use the Infinispan build provided by JDG and not the one meant for internal use by EAP.
Performance improvements are still being worked on. It's already much better, especially compared to that older version, but we won't stop working on it yet, so if you could try the latest bleeding-edge versions of Infinispan 7.2.x (another release is scheduled for tomorrow), I'd highly appreciate your feedback to keep pushing it.