Cloud Bigtable HBase Client not functioning

After following the steps outlined in the link below, I can get the HBase shell to launch; however, every HBase command throws: ERROR: NPN/ALPN extensions not installed
https://cloud.google.com/bigtable/docs/installing-hbase-client
I have Java version 1.7.0_60-b19 and I used ALPN 7.1.0.v20141016.
What am I missing?
Thanks in advance for any help

In the doc, HBASE_CLASSPATH points to "$(pwd)/lib/bigtable/bigtable-hbase-0.1.5.jar", while in your comment above it is under the mvn folder with a newer version, so I was looking for the alpn-boot file there. I found the issue with your help, though: it was a copy-paste problem while downloading the jars. I truly appreciate your support.
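For anyone who hits the same NPN/ALPN error: the usual cause is that the alpn-boot jar is missing from the JVM boot classpath, or that its version does not exactly match the JDK update in use. A minimal sketch of the relevant hbase-env.sh line, assuming the jar sits next to the Bigtable client jar (path and version are examples; check the Jetty ALPN version table for the exact match for your JDK):
# in hbase-env.sh -- alpn-boot must be prepended to the boot classpath,
# and its version must match the exact JDK update you run
export HBASE_OPTS="$HBASE_OPTS -Xbootclasspath/p:$(pwd)/lib/bigtable/alpn-boot-7.1.0.v20141016.jar"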

Related

How to perform MongoDB CDC using WSO2 Streaming Integrator?

I'm confused and don't know how to perform MongoDB CDC with WSO2 Streaming Integrator. I set up a MongoDB replica set following this doc and configured the CDC source as below,
but it doesn't work; I get the error logs shown. Can anyone help me fix this? Thanks in advance.
It seems like an issue with the extension installer script of WSO2 SI. The mongo-java-driver is already a bundled jar (an OSGi bundle), so it should not be converted into a bundle again.
To fix your problem, follow these steps:
Step 1 - Uninstall the installed MongoDB jar.
Step 2 - Go to the WSO2SI_HOME/wso2/server/resources/extensionsInstaller folder and open the extensionDependencies.json file.
Step 3 - Search for "name": "mongo-java-driver" and, under its configuration, change the usage type from "JAR" to "BUNDLE" (see the fragment below).
Step 4 - Reinstall the MongoDB extension via the extension installer.
This will solve your problem.
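For reference, the edited entry should end up looking roughly like this; the surrounding structure and the exact key name are from memory and may differ between SI versions, but the key change is the usage value:
"name": "mongo-java-driver",
...
"usage": "BUNDLE"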
Have you copied the mongo-java-driver to the <PRODUCT_HOME>/lib directory? It seems like the CDC extension couldn't locate the MongoDB drivers.

requested datatype filelists not available in yum update

In order to patch Red Hat 7 machines to version 7.9, I've created an RPM repository with RPMs extracted from a DVD .iso file of the patch (example source guide) and updated said machines using yum.
The patch has succeeded on many of the machines (RHEL 7.7 only), but the rest (7.0, 7.2, and some 7.7 as well) have failed with the following error:
Error: requested datatype filelists not available
I've also tried to make the process more gradual and patch the 7.0 and 7.2 machines to 7.7 first by the same process, but it yielded the same result. I've made sure I got each and every file in the Packages folder.
It is rather puzzling to me that some succeed and some fail, especially machines on the same version, but I assume they were created differently, as I don't have the information to say otherwise. So my best direction is to go by the error.
In this GitHub post, lr1980 says:
https://blog.packagecloud.io/eng/2015/07/20/yum-repository-internals/
this means the "filelists.xml.gz" is missing on repo => it's a packager.io problem
Indeed, browsing my repository's repodata folder reveals only other.xml.gz and primary.xml.gz files, which are also the only files pointed to in the repomd.xml.
I've tried uploading the filelists.xml.gz file from the DVD.iso and reindexing, but it gets removed (admittedly, I am not at all familiar with this area). What does "it's a packager.io problem" mean?
How can I force the repo to have such a file, assuming that's what I need? Or what can I do to solve this issue otherwise?
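If regenerating the metadata is the right direction, my understanding is that createrepo rebuilds primary, filelists, and other together and re-points repomd.xml at all three; a sketch of what I would run (the repo path is an example):
# run on the repository host; /srv/repo is an example path whose
# Packages/ folder holds the RPMs -- this regenerates repodata/,
# including the missing filelists.xml.gz
createrepo --update /srv/repo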
Many thanks

Migrating Trac Wiki

I am trying to move Trac data from an old server at my workplace to a new server, but I am stuck on the last step: migrating our wiki data. We use Trac 1.0.1 and are trying to update to Trac 1.2. The part I am stuck on is dumping the wiki. I have been trying to use
trac-admin wiki dump
This works in my tests, but when I try to use it on the actual wiki I get an error saying that the filename is too long. This happens because hierarchical pages produce a filename like this
child1%2Fchild2%2Fchild3%2Fchild4%2Fchild5%2F.....
instead of
child1/child2/child3/child4/child5/.....
Since Linux sees this path as a single name, it throws an error saying the file name is too long. Has anyone run into this problem before and found a solution?
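The workaround I am considering is to export the pages one at a time, recreating the hierarchy as real directories so no single file name has to encode the whole path; a rough sketch (the environment path is an example, and the wiki list output may need its header trimmed):
# export every wiki page into dump/, one file per page
trac-admin /path/to/env wiki list | awk 'NR>1 {print $1}' | while read -r page; do
  mkdir -p "dump/$(dirname "$page")"
  trac-admin /path/to/env wiki export "$page" "dump/$page"
done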
I have also tried making a hotcopy of Trac and transferring it, but this doesn't work either. If anyone knows where the wiki pages are stored and how to copy them from our old server to our new server, that would be the optimal solution.
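For what it's worth, the wiki pages live in the wiki table of the Trac database, so a hotcopy of the environment carries them along; the route I would expect to work looks roughly like this (environment paths are examples):
# on the old server: take a consistent snapshot of the whole
# environment, including the database that holds the wiki table
trac-admin /path/to/old-env hotcopy /tmp/trac-backup
# copy /tmp/trac-backup to the new server as the new environment,
# then upgrade it in place for Trac 1.2
trac-admin /path/to/new-env upgrade
trac-admin /path/to/new-env wiki upgrade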

Expired time for acquiring lock in Pentaho

I'm using Pentaho BI (Spoon) and I have a problem with it.
On every action (opening a job/transformation or saving, for example) it shows this window:
http://i.stack.imgur.com/bqmZQ.jpg
Now I can't open existing transformations. Does anyone know about this issue?
I know this is an old post, but the problem is still current (version 8.2 CE), and I was unable to find any help (other than the workaround to delete the .lock files, but this only solves it for a few minutes, possibly even less).
I had exactly the same problem, it basically rendered Spoon useless. In my case, the culprit was the antivirus program. I noticed it only because I installed the AV after I used Spoon for some time. I just removed the AV (and installed another one) - no more problems.
It seems that your repository is having some issue connecting.
Try checking the repository connection, and also check the permissions for accessing the repository.
I faced a similar issue and was able to resolve it by deleting the lock file under
D:\Pentaho-ce\file-repository\ETL\.meta\metastore\.lock
that is, (local-repository root)\.meta\metastore\.lock
Kettle - Spoon version 5.4.0.1-130
I was also facing a similar issue and was able to resolve it by deleting the lock.
The Pentaho tool creates a .pentaho directory in the home directory.
Inside .pentaho, delete the .lock file at
.pentaho/metastore/.lock
and also delete the .lock file under
/home/pdi-ce-9.1.0.0-324/data-integration
Then restart your Pentaho tool.
Hope the above solution works for you.
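A minimal sketch of that cleanup on Linux, assuming the PDI path from this answer (close Spoon first; the second lock may or may not exist in your install):
# remove the metastore lock in the user's home directory
rm ~/.pentaho/metastore/.lock
# remove the lock under the PDI installation, if present
rm -f /home/pdi-ce-9.1.0.0-324/data-integration/.lock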
I faced a similar issue and was able to resolve it by deleting the lock file under C:\Users\myuser\.pentaho\metastore

JGit Performance Issue

I'm using Eclipse JGit; it is a wonderful library that lets Java developers read files and content from a Git-based repository. However, I have observed a performance issue: 22+ seconds to clone and open a repository using the APIs. Has anyone experienced the same issue?
Below is the code:
// Open an existing local repository (no clone happens in these two lines)
String localRepoFolder = "C:\\temp\\some-project";
Git localGit = Git.open(new File(localRepoFolder));
The above two statements take about 22 seconds; I am not sure if there is a solution for this problem or a better library out there.
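For reference, a self-contained version of what I am timing (the path is an example; Git implements AutoCloseable, so try-with-resources releases the repository):
import java.io.File;
import org.eclipse.jgit.api.Git;

public class JGitTiming {
    public static void main(String[] args) throws Exception {
        String localRepoFolder = "C:\\temp\\some-project"; // example path
        long start = System.currentTimeMillis();
        // measures only the open step, to separate it from any clone cost
        try (Git localGit = Git.open(new File(localRepoFolder))) {
            System.out.println("open took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}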
If anybody knows a solution for this, please let me know.
Thanks in advance.
-A