Hazelcast backup and restore

I want to perform a Hazelcast backup and restore in a Kubernetes environment, from one AKS cluster to another. Has anyone done this in the past, or is there any documentation available on how to do it? I have just started learning Hazelcast, so your support would be appreciated.
I am using the embedded version, 4.0.

The hot-backup feature does this.
However, it is not available in the free edition of Hazelcast.
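For reference, on the Enterprise edition the backup can be triggered from Management Center, from the Java API, or over a member's REST interface. Below is a minimal sketch of the REST route, assuming hot restart persistence is configured and the CLUSTER_WRITE REST endpoint group is enabled on the member; the cluster name, password, and address are placeholders:

# Trigger a hot backup on a running Hazelcast IMDG Enterprise 4.x member.
# "dev" / "dev-pass" and the address below are placeholders for your cluster.
curl -X POST -d "dev&dev-pass" \
  http://10.0.0.1:5701/hazelcast/rest/management/cluster/hotBackup

The backup is written to the backup directory configured under hot restart persistence on each member; to move data between AKS clusters you would copy those directories (for example, off the persistent volumes) into the new cluster's hot restart base directory before starting it.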

Related

AWS Neptune - How Can I Migrate My 1.0.5.1 Based Cluster to Serverless?

The Problem
I have an Amazon Neptune cluster with an instance running in the db.t3.medium DB instance class. I do not see an option to move this to a Serverless instance.
How can I migrate this instance?
Root Cause
You can only migrate an instance running Neptune Engine version 1.2 or later.
How to Fix
You first need to upgrade your Neptune engine version to 1.2. Once that is done, you will get the option to migrate to Serverless.
The engine version is controlled not on the cluster instance but at the cluster level, and if you are running an older version of the engine, you may need to upgrade incrementally: first to the highest version in your major version group, then up to the next major version. If you are running 1.0.x, you will first need to go to 1.1.0 R7, then move on to 1.2.
As with any major version upgrade, you could incur some downtime during migration.
To change the engine version, "Modify" the cluster (not instance) settings (the top right button on the console page) and select the latest possible DB engine version. You can keep the rest of the settings, and you can apply the change immediately if you can afford a short downtime soon after. Continue upgrading to the next higher level until you reach 1.2. Each upgrade can take a while.
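If you prefer the CLI over the console, the same change can be made with aws neptune modify-db-cluster. A rough sketch, with a placeholder cluster identifier and version strings; use describe-db-engine-versions first to see which upgrade targets are valid from your current version:

# List valid upgrade targets from the current engine version.
aws neptune describe-db-engine-versions --engine neptune --engine-version 1.0.5.1 \
  --query 'DBEngineVersions[].ValidUpgradeTarget[].EngineVersion'
# Step the cluster up one version; repeat until you reach 1.2.
aws neptune modify-db-cluster \
  --db-cluster-identifier my-neptune-cluster \
  --engine-version 1.1.0.0 \
  --allow-major-version-upgrade \
  --apply-immediately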

EKS steps for new infrastructure

How do I set up EKS for new infrastructure? We are currently using KOPS to manage Kubernetes, which is a big problem. We would like to move to AWS EKS.
How do we go about this?
Kubernetes was originally developed by Google; it has been open source since its launch and is managed by a large community of contributors. So it's better to use GKE to deploy it.
Google Cloud Platform supports an easy way to deploy using Deployment Manager + Helm. You can use Terraform to deploy if you want.
To understand GKE deployment step by step, follow the three articles below:
https://codeburst.io/google-kubernetes-engine-by-example-part-1-358dc84d425b
https://codeburst.io/google-kubernetes-engine-by-example-part-2-ee1f519a32f9
https://codeburst.io/google-kubernetes-engine-by-example-part-3-9b7205ad502f
If you want to use Deployment Manager together with Helm, you can follow the article below:
https://medium.com/google-cloud/gitlab-continuous-deployment-pipeline-to-gke-with-helm-69d8a15ed910
Welcome to Stack Overflow!
You will find a super guide here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
I just checked the KOPS docs, and apparently it supports AWS as well. I haven't worked with this tool before, though.
Could you please describe the challenge in a bit more detail? You can set up an EKS cluster and then change your pipeline to go to the new one instead of the Google one.
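If it helps, one common route is eksctl, which creates the VPC, control plane, and a node group in one command. A minimal sketch; the cluster name, region, and node settings below are placeholders:

# Create a basic EKS cluster with a default node group.
eksctl create cluster \
  --name my-cluster \
  --region eu-west-1 \
  --nodes 3 \
  --node-type m5.large
# Point kubectl at the new cluster before repointing your pipeline.
aws eks update-kubeconfig --name my-cluster --region eu-west-1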

WebSphere migration from WAS 7 to WAS 9

We are planning to migrate WebSphere from 7.0 to 9 and from 8.5 to 9.
Can anyone help me with the detailed process?
The migration here is "in place" (it will be done on the same servers where the old installations live).
If any migration tools need to be used, please provide clear info on them.
Any documentation or video references would be appreciated.
OS used: RHEL
Current versions: WAS 7.x and 8.5
Migrating to: WAS 9.0
It sounds like you're in the very beginning stages of doing this migration, so I highly recommend you take some time to plan it out, especially to figure out the exact steps you'll be taking and how you'll handle something going wrong. For WebSphere, there is a collection of documents from IBM that discuss planning and executing the upgrade. Look around there for documentation on the tools and step-by-step guides for different kinds of topologies. The step-by-step guide for an in-place migration of a cell is here.
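In case it helps orient the planning, the traditional in-place tooling centers on the WASPreUpgrade and WASPostUpgrade commands shipped with WAS 9. A hedged sketch of the shape of it; all paths and the profile name are placeholders, and the exact options are in the linked documentation:

# Snapshot the old WAS 7/8.5 configuration, then import it into the WAS 9 install.
/opt/IBM/WebSphere/AppServer90/bin/WASPreUpgrade.sh /tmp/migrationBackup /opt/IBM/WebSphere/AppServer70
/opt/IBM/WebSphere/AppServer90/bin/WASPostUpgrade.sh /tmp/migrationBackup -profileName AppSrv01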
You should make sure to take good backups before you start the process so you can restore back to before the migration if you need to.
In addition to doing the upgrade, an important part is to also make sure your applications are going to work on the new version if you haven't already. IBM provides this tool to scan applications and identify potential issues that developers will have to fix. There is documentation for the tool at that link as well.
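If that link is the Migration Toolkit for Application Binaries, it runs directly against an EAR or WAR without needing source code. A small sketch, assuming the jar has been downloaded and with a placeholder application path; check the tool's documentation for the full option list:

# Analyze a binary application for issues when moving from WAS 7 to WAS 9.
java -jar binaryAppScanner.jar /path/to/myApp.ear \
  --analyze --sourceAppServer=was70 --targetAppServer=was90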
If you are in the planning phase, I'd strongly suggest you consider migrating to WebSphere Liberty instead of traditional WAS v9. All of these migration tools (the toolkit for binaries, the Eclipse migration toolkit) support both migration scenarios.
Choosing Liberty might be a bit more work at the beginning, but you will gain more deployment flexibility and speed up future development. Liberty is also much better suited to cloud/container environments, as it is much more lightweight, so if you would like to move to containers in the future, it will be much easier.
Check the tutorial Migrate traditional WebSphere apps to WebSphere Liberty on IBM Cloud Private by using Kubernetes. Although it shows the steps for migrating to Liberty on ICP, the beginning is the same: analyzing whether the applications are a good fit for Liberty, and then migrating them. If you don't have access to IBM Cloud or ICP, you can use the standalone version of the Transformation Advisor that was released recently - IBM Cloud Transformation Advisor.
Having said all that, some apps use old or proprietary traditional WebSphere APIs, and in that case it may be easier and cheaper to migrate them to WAS v9 temporarily and modernize them in the future.

Is it possible to deploy Spinnaker to an instance smaller than m4.xlarge on AWS?

We are currently following the default deployment instructions for Spinnaker, which specify m4.xlarge as the instance type.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are four core services that you need - gate/deck/orca/clouddriver. You can shut the other ones off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images with about 8 GB of RAM, and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and whatnot, I've been able to just spin up a VM, install Docker, and run the Docker Compose config successfully on m4.large.
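For what it's worth, a sketch of that approach; the repository path reflects the experimental Docker Compose setup in the spinnaker/spinnaker repo at the time and may have moved since:

# Run only the four core services via Docker Compose on a single VM.
git clone https://github.com/spinnaker/spinnaker.git
cd spinnaker/experimental/docker-compose
docker-compose up -d clouddriver orca gate deck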

Spark long deploying time on EC2 with custom Windows AMI

I am trying to run a Spark cluster with some Windows instances on Amazon EC2 infrastructure, but I am facing extremely long deployment times.
My project needs to run in a Windows environment, so I am using an alternative AMI, indicated with the -a flag provided by Spark's spark-ec2 script. When I run the script, the process gets stuck waiting for the instances to be up and running, with the following message:
Waiting for all instances in cluster to enter 'ssh-ready' state.............
When I use the default AMI instead, the cluster launches normally after a few minutes of waiting.
I have searched for similar problems from other users, and so far I have only been able to find this statement about long deployment times with custom AMIs (see Josh Rosen's answer).
I am using version 1.2.0 of Spark. The call that launches the cluster looks something like the following:
./spark-ec2 -k MyKeyPair \
  -i MyKeyPair.pem \
  -s 10 \
  -a ami-905fe9e7 \
  --instance-type=t1.micro \
  --region=eu-west-1 \
  --spark-version=1.2.0 \
  launch MyCluster
The AMI indicated above refers to:
Microsoft Windows Server 2012 R2 Base - ami-905fe9e7
Desc: Microsoft Windows 2012 R2 Standard edition with 64-bit architecture. [English]
Any help or clarification about this issue would be greatly appreciated.
I think I have figured out the problem. It seems Spark does not support the creation of clusters in a Windows environment with its default scripts. I think it is still possible to create a cluster with some manual tweaking, but that goes beyond my limited knowledge. Here is the official post that explains it.
Instead, as a temporary solution, I am considering using a Microsoft Azure cluster; Microsoft has just released an experimental tool that makes it possible to use a variant of Apache Hadoop (Spark) on their HDInsight clusters. Here is the article that explains it in more detail.