I'm confused about how to perform MongoDB CDC with WSO2 Streaming Integrator. I set up a MongoDB replica set following this doc and configured the CDC source as below,
but it doesn't work; I got these error logs. Can anyone help me fix this? Thanks in advance.
This looks like an issue with the extension installer script of WSO2 SI. The mongo-java-driver is already a bundled jar, so it should not be converted into a bundle again.
To fix your problem, follow the steps below:
Step 1 - Uninstall the installed MongoDB jar.
Step 2 - Go to the WSO2SI_HOME/wso2/server/resources/extensionsInstaller folder and open the extensionDependencies.json file.
Step 3 - Search for "name": "mongo-java-driver" and, in its configuration, change the usage type from "JAR" to "BUNDLE" (see the sketch below).
Step 4 - Reinstall the MongoDB extension via the extension installer.
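For illustration only, this is the kind of entry to look for in extensionDependencies.json; the exact structure and key names in your file may differ, so treat this as a rough sketch rather than something to copy verbatim:
{
  "name": "mongo-java-driver",
  "type": "BUNDLE"
}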
This will solve your problem.
Have you copied the mongo-java-driver to the <PRODUCT_HOME>/lib directory? It seems like the CDC extension couldn't locate the MongoDB driver.
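If not, copying it in looks something like this (the driver version and paths are placeholders; adjust them to your installation):
# Copy the MongoDB Java driver into the SI lib directory (version/path are examples)
cp mongo-java-driver-3.12.10.jar <PRODUCT_HOME>/lib/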
Background
I was planning to use S3 to store Flink's checkpoints using the FsStateBackend, but somehow I was getting the following error.
Error
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Flink version: I am using Flink 1.10.0.
I have found the solution to the above issue, so here are the required steps.
Steps
We need to add some configuration to the flink-conf.yaml file, as listed below.
state.backend: filesystem
state.checkpoints.dir: s3://s3-bucket/checkpoints/ #"s3://<your-bucket>/<endpoint>"
state.backend.fs.checkpointdir: s3://s3-bucket/checkpoints/ #"s3://<your-bucket>/<endpoint>"
s3.access-key: XXXXXXXXXXXXXXXXXXX #your-access-key
s3.secret-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx #your-secret-key
s3.endpoint: http://127.0.0.1:9000 #your-endpoint-hostname (I have used Minio)
After completing the first step, we need to copy the respective JAR files (flink-s3-fs-hadoop-1.10.0.jar and flink-s3-fs-presto-1.10.0.jar) from the opt directory to the plugins directory of your Flink installation.
E.g.:
1. Copy /flink-1.10.0/opt/flink-s3-fs-hadoop-1.10.0.jar to /flink-1.10.0/plugins/s3-fs-hadoop/flink-s3-fs-hadoop-1.10.0.jar (recommended for the StreamingFileSink)
2. Copy /flink-1.10.0/opt/flink-s3-fs-presto-1.10.0.jar to /flink-1.10.0/plugins/s3-fs-presto/flink-s3-fs-presto-1.10.0.jar (recommended for checkpointing)
Add this to your checkpointing code:
env.setStateBackend(new FsStateBackend("s3://s3-bucket/checkpoints/"))
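For context, here is a minimal sketch of how that call typically sits inside a job (the class name, bucket path, and checkpoint interval are placeholders):
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToS3 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Take a checkpoint every 60 seconds (example interval)
        env.enableCheckpointing(60_000);
        // Store checkpoint data under the S3 path configured in flink-conf.yaml
        env.setStateBackend(new FsStateBackend("s3://s3-bucket/checkpoints/"));
        // ... define sources, transformations and sinks here, then call env.execute("my-job");
    }
}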
After completing all the above steps, restart Flink if it is already running.
Note:
If you are using both (flink-s3-fs-hadoop and flink-s3-fs-presto) in Flink, then please use s3p:// specifically for flink-s3-fs-presto and s3a:// for flink-s3-fs-hadoop instead of s3://.
For more details click here.
After following the steps outlined in the link below, I can get the HBase shell to launch; however, all HBase commands throw: ERROR: NPN/ALPN extensions not installed
https://cloud.google.com/bigtable/docs/installing-hbase-client
I have Java version 1.7.0_60-b19 and I used ALPN 7.1.0.v20141016.
What am I missing?
Thanks in advance for any help
In the doc, HBASE_CLASSPATH points to "$(pwd)/lib/bigtable/bigtable-hbase-0.1.5.jar", while in your comment above it is under the mvn folder with a newer version, so I was searching for the alpn-boot file there. I found the issue with your help, though: it was a copy-paste problem while downloading the jars. I truly appreciate your support.
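For anyone else hitting this: the "NPN/ALPN extensions not installed" error typically means the alpn-boot jar is not on the JVM boot classpath. A common fix (the jar path and version below are examples; use the one matching your Java version and download location) is to add it to HBASE_OPTS in hbase-env.sh:
# hbase-env.sh: prepend the ALPN boot jar to the boot classpath (path/version are examples)
export HBASE_OPTS="$HBASE_OPTS -Xbootclasspath/p:$(pwd)/lib/bigtable/alpn-boot-7.1.0.v20141016.jar"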
I am attempting a migration from SP2010 to SP2013; so far, what I have accomplished is below:
1) Created a backup of the content DB from SP2010 and restored it onto SP2013.
2) Added all the WSPs exported from the SP2010 solution store to the SP2013 solution store.
3) When I try to deploy a solution I get an error message saying "A feature with ID 14/5c935448-ed11-4bae-bfff-ef8b307f38ac has already been installed in this farm. Use the force attribute to explicitly re-install the feature."
Most suggestions are to turn on the force attribute on the feature and then do the deployment; in my case I do not have the code for the WSPs, so I am unable to recompile them to turn on the force attribute.
I have used FeatureAdmin for SP2013; it does not find any faulty features in the farm, and it does not list any feature with ID 5c935448-ed11-4bae-bfff-ef8b307f38ac.
Executing select fullurl, description from features join AllWebs on (features.webid = AllWebs.id) where featureid = '5c935448-ed11-4bae-bfff-ef8b307f38ac' lists rows of data from the DB, but I can't find the feature folder in the 14/15 hive.
Stuck at the moment trying to find a way to get the solutions deployed and perform the db upgrade. Any pointers welcome.
Thanks in advance.
You can specify -Force with the PowerShell command as well.
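For example (the solution name and web application URL are placeholders; omit -WebApplication if the solution is not web-application scoped):
# Re-deploy the solution, forcing re-installation of features that are already installed
Install-SPSolution -Identity "yoursolution.wsp" -GACDeployment -WebApplication "http://yourwebapp" -Force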
Salam Santhosh
I faced this problem before, and I resolved it by adding AlwaysForceInstall="TRUE" to the feature.xml of the WSP. After that I went to Central Administration and uninstalled and re-installed the WSP; after that you can activate the feature using STSADM or PowerShell normally.
This is an example of the feature:
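(A minimal sketch with placeholder values, not the original poster's file; replace the Id, Title, Scope, and element manifest with those of the actual feature.)
<?xml version="1.0" encoding="utf-8"?>
<!-- Placeholder feature definition; AlwaysForceInstall="TRUE" is the relevant change -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="5c935448-ed11-4bae-bfff-ef8b307f38ac"
         Title="Example Feature"
         Scope="Web"
         AlwaysForceInstall="TRUE">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>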
I am running a fresh install of Pentaho Data Integration 5.0.1.A Stable from:
http://community.pentaho.com/projects/data-integration/
on my MacBook Pro with Java 1.7.0_25, and I keep seeing this error in the console:
Attempting to load ESAPI.properties via file I/O.
Attempting to load ESAPI.properties as resource file via file I/O.
Not found in 'org.owasp.esapi.resources' directory or file not readable:
/Applications/pdi-ce-5.0.1.A/data-integration/ESAPI.properties
Not found in SystemResource Directory/resourceDirectory: .esapi/ESAPI.properties
What are the ESAPI.properties used for? What should they be set to by default?
thanks, -John
This is a known bug (PDI-10568) that should be fixed in an upcoming release. As a workaround, try putting the default ESAPI and validation properties in your $HOME/.esapi/ folder; create it if it doesn't already exist.
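For example (assuming you have the default ESAPI.properties and validation.properties files available, e.g. from the ESAPI distribution; the source path below is a placeholder):
# Create the per-user ESAPI config folder and copy the default property files into it
mkdir -p "$HOME/.esapi"
cp /path/to/defaults/ESAPI.properties /path/to/defaults/validation.properties "$HOME/.esapi/"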
Background: ESAPI is an Enterprise Level Security library used by Pentaho webservices to properly encode URLs and HTML content, read more at https://www.owasp.org/index.php/ESAPI
I am new to Apache Geronimo. I read the following link about deploying to the repository. I don't yet know where this concept will be useful; I am just learning.
I created a sub-directory and an .xml file according to the above link.
Here I faced a problem at deployment time: the deploy(.bat) deploy <GERONIMO_HOME>/repo2/repo2.xml command is not working.
<GERONIMO_HOME>=C:\Users\Infratab Bangalore\geronimo-tomcat7-javaee6-3.0.1
I ran the following command for deploying, but it's not working:
deploy(.bat) deploy C:/Users/Infratab Bangalore/geronimo-tomcat7-javaee6-3.0.1/repo2/repo2.xml
How can I fix this?
You should follow the steps below:
1. Log in to the Administrative Console
2. Select Advanced in the left bar
3. Go to Resources -> Repository