Install Spark on existing Hadoop cluster (ISSUE with HIVE)

I am trying to get a Spark/Shark cluster up but keep running into the same problem. I have followed the instructions on https://github.com/amplab/shark/wiki/Running-Shark-on-a-Cluster and addressed Hive as stated.
Here are the details, any help would be great.
I have already installed the following packages:
Spark/Shark 1.0.0
Apache Hadoop 2.4.0
Apache Hive 0.13
Scala 2.9.3
Java 7
I configured ~/spark/conf/spark-env.sh as follows:
export HADOOP_HOME=/path/to/hadoop/
export HIVE_HOME=/path/to/hive/
export MASTER=spark://xxx.xxx.xxx.xxx:7077
export SPARK_HOME=/path/to/spark
export SPARK_MEM=4g
export HIVE_CONF_DIR=/path/to/hive/conf/
source $SPARK_HOME/conf/spark-env.sh
When I start Spark with "./spark-withinfo", I get the following errors:
-hiveconf hive.root.logger=INFO,console
Starting the Shark Command Line Client
14/07/07 16:26:57 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
14/07/07 16:26:57 [main]: WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
Logging initialized using configuration in jar:file:/path/to/hive/lib/hive-exec-0.13.0.jar!/hive-log4j.properties
14/07/07 16:26:57 [main]: INFO SessionState:
Logging initialized using configuration in jar:file:/path/to/hive/lib/hive-exec-0.13.0.jar!/hive-log4j.properties
14/07/07 16:26:57 [main]: INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:344)
at shark.SharkCliDriver$.main(SharkCliDriver.scala:128)
at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1139)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:51)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2444)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2456)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:338)
... 2 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1137)
... 7 more
Caused by: java.lang.NoSuchFieldError: METASTOREINTERVAL
at org.apache.hadoop.hive.metastore.RetryingRawStore.init(RetryingRawStore.java:78)
at org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:60)
at org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:285)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4102)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
... 12 more
I guess Spark cannot find the libraries it needs to connect to the Hive metastore, but I have been stuck here for a couple of days and don't know how to solve it. BTW, I use MySQL for the Hive metadata, and everything works well in Hive itself.
Any help is appreciated. Thanks in advance.

You may need to add the MySQL connector jar file to the classpath before you start Spark.
In my case, I appended it in $SPARK_HOME/bin/compute-classpath.sh like below:
CLASSPATH=$CLASSPATH:/opt/big/hive/lib/mysql-connector-java-5.1.25-bin.jar
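If you would rather not edit compute-classpath.sh, an equivalent sketch (the jar path is the one from above and is an assumption for your layout) is to export the connector in spark-env.sh instead:
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/big/hive/lib/mysql-connector-java-5.1.25-bin.jar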

Related

Jrebel is not working with weblogic 12.X

Issue Description:
Unable to bounce the WebLogic server with JRebel.
Error:
I am getting the following error when I try to bounce the server:
JRebel: ERROR Class 'java.lang.ClassLoader' could not be processed by com.zeroturnaround.javarebel.bv#null: org.zeroturnaround.bundled.javassist.bytecode.DuplicateMemberException: duplicate method: _jr$defineClass in java.lang.ClassLoader
at org.zeroturnaround.bundled.javassist.bytecode.ClassFile.testExistingMethod(SourceFile:721)
at org.zeroturnaround.bundled.javassist.bytecode.ClassFile.addMethod(SourceFile:696)
at org.zeroturnaround.bundled.javassist.CtClassType.addMethod(SourceFile:1411)
at com.zeroturnaround.javarebel.bv.process(SourceFile:40)
at org.zeroturnaround.javarebel.integration.support.JavassistClassBytecodeProcessor.process(SourceFile:79)
at com.zeroturnaround.javarebel.vu.a(SourceFile:376)
at com.zeroturnaround.javarebel.vu.a(SourceFile:365)
at com.zeroturnaround.javarebel.vu.a(SourceFile:350)
at com.zeroturnaround.javarebel.f.runBootClassProcessors(SourceFile:245)
at com.zeroturnaround.javarebel.bl.a(SourceFile:115)
at com.zeroturnaround.javarebel.gib.a(SourceFile:63)
at com.zeroturnaround.javarebel.gfc.a(SourceFile:59)
at com.zeroturnaround.javarebel.gfc.doTransform(SourceFile:39)
at com.zeroturnaround.jrebelbase.reorder.a.transform(SourceFile:182)
at com.zeroturnaround.jrebelbase.reorder.a.transform(SourceFile:148)
at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java)
at sun.instrument.InstrumentationImpl.redefineClasses0(Native Method)
at sun.instrument.InstrumentationImpl.redefineClasses(InstrumentationImpl.java:170)
at com.mercury.opal.capture.jdk15.agent.ProbeClassFileTransformer.instrumentAndReplace(ProbeClassFileTransformer.java:369)
at com.mercury.opal.capture.jdk15.agent.ProbeClassFileTransformer.reinstrumentClass(ProbeClassFileTransformer.java:331)
at com.mercury.opal.capture.jdk15.agent.ProbeClassFileTransformer.patchClassLoaders(ProbeClassFileTransformer.java:137)
at com.mercury.opal.capture.jdk15.agent.ProbeClassFileTransformer.<init>(ProbeClassFileTransformer.java:98)
at com.mercury.opal.capture.jdk15.agent.InstrumentationAgent.premain(InstrumentationAgent.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.__invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
at java.lang.reflect.Method.invoke(Method.java:483)
at sun.instrument.InstrumentationImpl._jrLoadClassAndStartAgent(InstrumentationImpl.java:386)
at com.zeroturnaround.jrebelbase.reorder.b.a(SourceFile:31)
at com.zeroturnaround.jrebelbase.reorder.a.c(SourceFile:129)
at com.zeroturnaround.jrebelbase.reorder.a.a(SourceFile:118)
at com.zeroturnaround.javarebel.gec.a(SourceFile:309)
at com.zeroturnaround.javarebel.gec.deferredInitHook(SourceFile:149)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java)
Config Details:
I added the following lines to the startup script, based on the JRebel root folder:
export REBEL_HOME=[JRebel root folder]
export JAVA_OPTIONS="-agentpath:$REBEL_HOME/lib/libjrebel64.so -Drebel.remoting_plugin=true $JAVA_OPTIONS"
Versions:
WebLogic: 12.2.1.2.0
JRebel: 7.1.2
Can anyone help me get past this?
Please try using the latest version of JRebel instead of 7.1.2. Download it at https://zeroturnaround.com/software/jrebel/download/prev-releases/.
If for some reason the result is still the same, also delete REBEL_HOME/bootcache and retry. Should you still see the exception, please write to support@zeroturnaround.com along with the jrebel.log from REBEL_HOME and they'll be able to help you out.
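For example, assuming REBEL_HOME is exported as in the startup script above:
rm -rf $REBEL_HOME/bootcache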

Create Dataframe issue in Pyspark from Windows 10

I am unable to execute the command below from PySpark on Windows 10:
schemaPeople = spark.createDataFrame(people)
I have set HADOOP_HOME to the winutils directory.
I have granted 777 permissions to C:/tmp/hive.
Still, I am getting the error below:
Py4JJavaError: An error occurred while calling o23.applySchemaToPythonRDD.
: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:189)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
I have gone through a lot of similar questions before posting this; I'd appreciate any help here.
I got this error a bunch when trying to set up Spark on Windows using the winutils file. I had to set up Spark differently to get around this.
I ended up downloading the Hadoop binary for my version of Spark and going from there. I documented the whole thing with a walkthrough if you are interested: Spark on windows
The gist is that the official Hadoop release from Apache does not include a Windows binary, and compiling from sources can be tedious, so really helpful people have made compiled distributions available. If you want to use Spark 2.0.2, download the binaries from Steve Loughran's github; for 2.1.0 you can download them from here. From there you should be able to set it up as expected.
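For reference, a sketch of the winutils-based setup the question describes, run from a Windows command prompt (the install path is an assumption; point HADOOP_HOME at the folder whose bin contains winutils.exe):
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
winutils.exe chmod 777 C:\tmp\hive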

zeppelin hive interpreter throws ClassNotFoundException

I have deployed Zeppelin 0.6 and configured Hive under the JDBC interpreter.
I tried executing:
%hive
show databases
Throws:
org.apache.hive.jdbc.HiveDriver class java.lang.ClassNotFoundException
java.net.URLClassLoader.findClass(URLClassLoader.java:381)
java.lang.ClassLoader.loadClass(ClassLoader.java:424)
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
java.lang.ClassLoader.loadClass(ClassLoader.java:357)
java.lang.Class.forName0(Native Method)
java.lang.Class.forName(Class.java:264)
org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:220)
org.apache.zeppelin.jdbc.JDBCInterpreter.getStatement(JDBCInterpreter.java:233)
org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:292)
org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:398)
org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:383)
org.apache.zeppelin.scheduler.Job.run(Job.java:176)
org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I just ran into this issue this morning. I'm not sure if this is the recommended way to fix it, but I downloaded the binary packages for Hive 1.2 and Hadoop 2.6.4, copied the following jars to ./interpreter/jdbc/, and reloaded Zeppelin with ./bin/zeppelin-daemon.sh reload:
cp ~/Dev/Hadoop/apache-hive-1.2.1-bin/lib/hive-jdbc-1.2.1-standalone.jar ./interpreter/jdbc/
cp ~/Dev/Hadoop/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4.jar ./interpreter/jdbc/
1) You could download just the Hive JDBC driver instead of the whole Hive jar set, for example the one from Cloudera:
http://www.cloudera.com/downloads/connectors/hive/jdbc/2-5-17.html
2) Starting with 0.14, Hive ships a standalone jar for the JDBC part, hive-jdbc-standalone.jar, but until https://issues.apache.org/jira/browse/HIVE-9600 is resolved you would need two more jars on the classpath along with hive-jdbc-standalone.jar:
hadoop-common.jar
hadoop-auth.jar
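For Zeppelin specifically, a sketch mirroring the copy shown in the answer above (the source locations are assumptions for wherever you downloaded the jars):
cp hive-jdbc-standalone.jar hadoop-common.jar hadoop-auth.jar ./interpreter/jdbc/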
The top-rated answer given here fixes the issue.
However, I also added HADOOP_HOME to the classpath in interpreter.sh so it picks up the jar files in share/hadoop/common.
Below are the lines I added to bin/interpreter.sh inside Zeppelin:
HADOOP_HOME=/opt/hadoop-2.6.2/
addJarInDirForIntp "${HADOOP_HOME}/share/hadoop/common"
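To confirm the driver class actually landed on the interpreter classpath, a quick sanity check (the jar name is an assumption; match it to the one you copied):
unzip -l interpreter/jdbc/hive-jdbc-1.2.1-standalone.jar | grep HiveDriver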

Spark application development on local cluster by IntelliJ

I have tried many things to execute the application on a local cluster, but it did not work.
I am using CDH 5.7, and the Spark version is 1.6.
I am trying to create a DataFrame from Hive on CDH 5.7.
If I use spark-shell, all the code works really well. However, I have no idea how to set up my IntelliJ configuration for an efficient development environment.
Here is my code:
import org.apache.spark.{SparkConf, SparkContext}

object DataFrame {
  def main(args: Array[String]): Unit = {
    println("Hello DataFrame")
    val conf = new SparkConf() // skip loading external settings
      .setMaster("local") // could be "local[4]" for 4 threads
      .setAppName("DataFrame-Example")
      .set("spark.logConf", "true")
    val sc = new SparkContext(conf)
    sc.setLogLevel("WARN")
    println(s"Running Spark Version ${sc.version}")
    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
    sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
  }
}
When I run this program in IntelliJ, I get the following error messages:
Hello DataFrame
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/05/29 11:30:57 INFO Slf4jLogger: Slf4jLogger started
Running Spark Version 1.6.0
16/05/29 11:31:02 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.1.0
16/05/29 11:31:02 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:329)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:239)
at org.apache.spark.sql.hive.HiveContext$$anon$2.<init>(HiveContext.scala:459)
at org.apache.spark.sql.hive.HiveContext.catalog$lzycompute(HiveContext.scala:459)
at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:458)
at org.apache.spark.sql.hive.HiveContext$$anon$3.<init>(HiveContext.scala:475)
at org.apache.spark.sql.hive.HiveContext.analyzer$lzycompute(HiveContext.scala:475)
at org.apache.spark.sql.hive.HiveContext.analyzer(HiveContext.scala:474)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at org.corus.spark.example.DataFrame$.main(DataFrame.scala:25)
at org.corus.spark.example.DataFrame.main(DataFrame.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:539)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:194)
... 24 more
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:624)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:573)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:517)
... 25 more
Process finished with exit code 1
Does anyone know a solution?
Thanks.
I found several resources about this problem, but none of them worked:
https://www.linkedin.com/pulse/develop-apache-spark-apps-intellij-idea-windows-os-samuel-yee
https://blog.cloudera.com/blog/2014/06/how-to-create-an-intellij-idea-project-for-apache-hadoop/
Thanks all. I solved the problem myself.
The problem was that the local Spark (the Maven version) did not know about the Hive metastore on our cluster.
The solution is very simple. Just add the following lines to your source code:
conf.set("spark.sql.hive.thriftServer.singleSession", "true")
System.setProperty("hive.metastore.uris","thrift://hostname:serviceport")
It works! Let's play with Spark.
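An alternative to hard-coding the metastore URI (a sketch; the config path below is an assumption for a typical CDH layout) is to put the cluster's hive-site.xml on the application classpath, where HiveContext picks it up automatically:
cp /etc/hive/conf/hive-site.xml src/main/resources/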

ClassNotFoundException when using the Mule Amazon SQS connector

I'm using the Amazon SQS connector in my Mule project. When I updated it from version 2.5.5 to 3.0.0 according to the user guide and set the DEBUG logging level for the com.amazonaws package, I noticed the following error right after the project starts:
DEBUG 2015-07-20 15:15:56,927 [Receiving Thread] com.amazonaws.1.9.39.shade.jmx.spi.SdkMBeanRegistry: Failed to load the JMX implementation module - JMX is disabled
java.lang.ClassNotFoundException: com.amazonaws.1.9.39.shade.jmx.SdkMBeanRegistrySupport
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at org.mule.module.launcher.FineGrainedControlClassLoader.findClass(FineGrainedControlClassLoader.java:175)
at org.mule.module.launcher.MuleApplicationClassLoader.findClass(MuleApplicationClassLoader.java:134)
at org.mule.module.launcher.FineGrainedControlClassLoader.loadClass(FineGrainedControlClassLoader.java:119)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at com.amazonaws.1.9.39.shade.jmx.spi.SdkMBeanRegistry$Factory.<clinit>(SdkMBeanRegistry.java:46)
at com.amazonaws.1.9.39.shade.metrics.AwsSdkMetrics.registerMetricAdminMBean(AwsSdkMetrics.java:351)
at com.amazonaws.1.9.39.shade.metrics.AwsSdkMetrics.<clinit>(AwsSdkMetrics.java:316)
at com.amazonaws.1.9.39.shade.AmazonWebServiceClient.requestMetricCollector(AmazonWebServiceClient.java:629)
at com.amazonaws.1.9.39.shade.AmazonWebServiceClient.isRMCEnabledAtClientOrSdkLevel(AmazonWebServiceClient.java:570)
at com.amazonaws.1.9.39.shade.AmazonWebServiceClient.isRequestMetricsEnabled(AmazonWebServiceClient.java:562)
at com.amazonaws.1.9.39.shade.AmazonWebServiceClient.createExecutionContext(AmazonWebServiceClient.java:523)
at com.amazonaws.1.9.39.shade.services.sqs.AmazonSQSClient.listQueues(AmazonSQSClient.java:1163)
at com.amazonaws.1.9.39.shade.services.sqs.AmazonSQSClient.listQueues(AmazonSQSClient.java:1501)
at org.mule.modules.sqs.connection.strategy.SQSConnectionManagement.connect(SQSConnectionManagement.java:173)
at org.mule.modules.sqs.connectivity.SQSConnectionManagementSQSConnectorAdapter.connect(SQSConnectionManagementSQSConnectorAdapter.java:21)
at org.mule.modules.sqs.connectivity.SQSConnectionManagementSQSConnectorAdapter.connect(SQSConnectionManagementSQSConnectorAdapter.java:9)
at org.mule.devkit.3.6.1.shade.connection.management.ConnectionManagementConnectorFactory.makeObject(ConnectionManagementConnectorFactory.java:47)
at org.mule.devkit.3.6.1.shade.connection.management.ConnectionManagementConnectorFactory.makeObject(ConnectionManagementConnectorFactory.java:15)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
at org.mule.modules.sqs.connectivity.SQSConnectorConfigConnectionManagementConnectionManager.acquireConnection(SQSConnectorConfigConnectionManagementConnectionManager.java:407)
at org.mule.modules.sqs.connectivity.SQSConnectorConfigConnectionManagementConnectionManager.acquireConnection(SQSConnectorConfigConnectionManagementConnectionManager.java:55)
at org.mule.devkit.3.6.1.shade.connection.management.ConnectionManagementProcessInterceptor.execute(ConnectionManagementProcessInterceptor.java:47)
at org.mule.devkit.3.6.1.shade.connection.management.ConnectionManagementProcessInterceptor.execute(ConnectionManagementProcessInterceptor.java:19)
at org.mule.security.oauth.process.RetryProcessInterceptor.execute(RetryProcessInterceptor.java:84)
at org.mule.devkit.3.6.1.shade.connection.management.ConnectionManagementProcessTemplate.execute(ConnectionManagementProcessTemplate.java:33)
at org.mule.modules.sqs.sources.ReceiveMessagesMessageSource.run(ReceiveMessagesMessageSource.java:134)
at java.lang.Thread.run(Thread.java:662)
It's true, the mule-module-sqs-3.0.0.jar downloaded by Maven doesn't contain such a class. I rebuilt the Amazon SQS connector from source with one small change in the pom.xml: I set <minimizeJar>false</minimizeJar> for the maven-shade-plugin. The missing class then appeared in the jar, and I managed to run the project without errors.
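For reference, the change amounts to this flag in the maven-shade-plugin configuration (a minimal sketch of the relevant part of the pom.xml):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <minimizeJar>false</minimizeJar>
  </configuration>
</plugin>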
I'm not sure whether this is a bug or not, but I don't like the idea of building the connector manually. I will really appreciate it if you help me sort this out. Thanks.