Export data from Hive to Couchbase 5.0 using Sqoop

I have a use case where I need to upload 20 million rows from a Hive table into Couchbase using Sqoop.
In the Couchbase documentation, I found the Couchbase Hadoop Connector 1.2, which targets Couchbase 2.x, whereas I am using Couchbase 5.0.
Can anyone suggest whether an upgraded connector is available now, or an alternative way of achieving this?

According to the latest documentation:
The Couchbase Hadoop Connector has reached End-of-Life (EOL) support. We recommend that existing Hadoop integrations migrate to a supported version of the Couchbase Kafka Connector. Additionally, the Couchbase Hadoop Connector is not compatible with Couchbase Server 5.x, because it relies on the TAP feed API, which has been removed in Couchbase Server 5.x in favor of the DCP feed.
If you can't use Kafka for some reason, then one possible solution would be to use the connector with an older version of Couchbase, and then connect that Couchbase cluster to a different cluster running Couchbase Server 5.x using XDCR.
Another solution might be to use Apache NiFi (which can connect just about anything, including Couchbase).
These solutions aren't officially supported by Couchbase (except for the Kafka connector), so you're on your own.
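If you do go the Kafka route, here is a minimal sketch of what a Couchbase Kafka sink connector configuration could look like once the Hive rows have been published to a Kafka topic. The topic name, address, and credentials are placeholders, and the property names follow the 4.x line of kafka-connect-couchbase, so verify them against the connector version you actually use:

    name=couchbase-sink
    connector.class=com.couchbase.connect.kafka.CouchbaseSinkConnector
    tasks.max=2
    # Kafka topic that receives the rows exported from Hive (placeholder)
    topics=hive-export
    # Couchbase Server 5.x connection details (placeholders)
    couchbase.seed.nodes=couchbase1.example.com
    couchbase.bucket=target-bucket
    couchbase.username=Administrator
    couchbase.password=password
    # Hive rows serialized as schemaless JSON
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=false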

Related

AWS Neptune - How Can I Migrate My 1.0.5.1 Based Cluster to Serverless?

The Problem
I have an Amazon Neptune cluster with an instance running in the db.t3.medium DB instance class. I do not see an option to move this to a serverless instance.
How can I migrate this instance?
Root Cause
You can only migrate an instance running Neptune Engine version 1.2 or later.
How to Fix
You first need to upgrade your Neptune engine version to 1.2. Once that is done, you will get the option to migrate to Serverless.
The engine version is controlled not on the individual instance but at the cluster level. If you are running an older version of the engine, you may need to upgrade incrementally: first to the highest version within your current major version group, then up to the next major version. If you are running 1.0.x, you will first need to go to 1.1.0 R7 and then move on to 1.2.
As with any major version upgrade, you could incur some downtime during migration.
To change the engine version, "Modify" the cluster (not the instance) settings (the top-right button on the console page) and select the latest possible DB engine version. You can keep the rest of the settings, and you can apply the change immediately if you can afford a short downtime right after. Continue upgrading to the next higher version until you reach 1.2. Each upgrade can take a while.
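If you prefer scripting the upgrade steps over clicking through the console, the same change can be made with the AWS SDK. Below is a sketch using the AWS SDK for Java v2; the cluster identifier and target engine version are placeholders, and each call performs a single step of the incremental upgrade:

    import software.amazon.awssdk.services.neptune.NeptuneClient;
    import software.amazon.awssdk.services.neptune.model.ModifyDbClusterRequest;

    public class NeptuneEngineUpgrade {
        public static void main(String[] args) {
            try (NeptuneClient neptune = NeptuneClient.create()) {
                // One step of the incremental upgrade, e.g. 1.0.x -> 1.1.0 R7.
                // Repeat with the next target version until the cluster is on 1.2.
                neptune.modifyDBCluster(ModifyDbClusterRequest.builder()
                        .dbClusterIdentifier("my-neptune-cluster") // placeholder
                        .engineVersion("1.1.0.0")                  // placeholder step version
                        .allowMajorVersionUpgrade(true)            // needed across major versions
                        .applyImmediately(true)                    // starts the downtime window now
                        .build());
            }
        }
    }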

Apache Geode GemFire Pulse

We are using Spring Data GemFire, and we are planning to migrate to the latest Apache Geode version. In the VMware GemFire version, we had to explicitly set the path of the GemFire installation for Pulse to work properly. If we use the Apache Geode jar, will we be able to get Pulse up and running without specifying the installation location?
We are not using gfsh in our project, and we want to ensure that we have minimal dependency on the installed distribution when we upgrade GemFire.
You don't need to set the GEODE_HOME environment variable when using spring-boot-data-geode; you just need to make sure the correct dependencies are on the classpath of your application (see here for more details).
I've written a very basic example showing how to start a Locator with the Pulse application embedded; you can find it here.
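In case that example ever becomes unavailable, here is a bare-bones sketch of the same idea using the plain Apache Geode API. It assumes the geode-pulse and geode-http-service artifacts (and their dependencies) are on the classpath; the member name and ports are arbitrary placeholders:

    import org.apache.geode.distributed.LocatorLauncher;

    public class PulseLocator {
        public static void main(String[] args) {
            // Start a Locator that also acts as the JMX manager; with the
            // Pulse artifacts on the classpath, the embedded HTTP service
            // serves Pulse (http://localhost:7070/pulse in this setup).
            LocatorLauncher locator = new LocatorLauncher.Builder()
                    .setMemberName("pulse-locator")       // placeholder name
                    .setPort(10334)                       // default locator port
                    .set("jmx-manager", "true")
                    .set("jmx-manager-start", "true")
                    .set("http-service-port", "7070")
                    .build();
            locator.start();
        }
    }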
As a side note, and regarding the following:
We are using Spring Data GemFire, and we are planning to migrate to the latest Apache Geode version.
To avoid weird and hard-to-fix runtime issues, please always make sure to use a combination of versions fully supported by the Spring Boot for Apache Geode and VMware Tanzu GemFire Version Compatibility Matrix.
After going through various answers and documentation, I was able to start Pulse with the help of the following article:
Start Gemfire Pulse

Hazelcast Backup and restore

I want to perform Hazelcast backup and restore on a Kubernetes environment, from one AKS cluster to another. Has anyone done this in the past, or is there any documentation available for it? I have just started to learn Hazelcast, so your support will be appreciated.
I am using the embedded version 4.0.
The hot-backup feature does this.
However, it is not available in the free edition of Hazelcast.
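For reference, a minimal sketch of triggering a hot backup from an embedded 4.x member. This requires Hazelcast Enterprise with hot-restart persistence (including a backup directory) already configured; on the open source edition the call will fail:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.hotrestart.HotRestartService;

    public class HotBackupTrigger {
        public static void main(String[] args) {
            // Assumes hot-restart persistence and a backup directory are
            // configured for this member (Enterprise feature).
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            HotRestartService hotRestart = hz.getCluster().getHotRestartService();
            hotRestart.backup(); // snapshots hot-restart data cluster-wide
        }
    }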

Redis cluster support on various platforms

Are there any issues in setting up a Redis cluster on platforms like Windows, Mac, or Solaris? Currently the Redis website says there is support for these platforms, but I just want to know whether there are any caveats in cluster deployment on them.
Redis Cluster (i.e., v3) should be runnable on all supported platforms (i.e., *nix). The Windows version is not an official port, and the last time I checked it was still at v2.8, so I don't see how you could use Redis Cluster with it.
The MSOpenTech Windows port of Redis has released a beta version 3.0 supporting Redis Cluster:
https://github.com/MSOpenTech/redis/releases/tag/win-3.0.300-beta1
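Whichever platform you deploy on, each cluster node needs the same handful of settings; here is a minimal sketch of the cluster-related redis.conf directives for one node (port and file names are placeholders, with one such config per node):

    # redis.conf for a single cluster node
    port 7000
    cluster-enabled yes
    cluster-config-file nodes-7000.conf
    cluster-node-timeout 5000
    appendonly yes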

Python replication of a Java web application?

Can this Java web application be replicated in Python and/or a related toolkit (e.g., AI-Labs's Orange)?:
http://www.xjtek.com/anylogic/demo_models/38/
Check out GarlicSim as well.