JDBC client to Hive - No data or no sasl data in the stream Exception - authentication

We have a Kerberised cluster and I'm trying to run a Java action in Oozie where I make a JDBC connection to Hive. This JDBC connection works fine on the Sandbox without Kerberos.
The connection string is as simple as the following, where I'm providing username and password in it:
Connection con = DriverManager.getConnection("jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net#COMPANYNET.NET","user123","passw123");
The Oozie action (strangely) completes successfully, and the Java action log does not show any error:
1742 [main] INFO org.apache.hive.jdbc.Utils - Supplied authorities: W12345:10000
1742 [main] INFO org.apache.hive.jdbc.Utils - Resolved authority: W12345:10000
1766 [main] INFO org.apache.hive.jdbc.HiveConnection - Will try to open client transport with JDBC Uri: jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net#COMPANYNET.NET
<<< Invocation of Main class completed <<<
Oozie Launcher ends
1785 [main] INFO org.apache.hadoop.mapred.Task - Task:attempt_1464245290012_0129_m_000000_0 is done. And is in the process of committing
1847 [main] INFO org.apache.hadoop.mapred.Task - Task attempt_1464245290012_0129_m_000000_0 is allowed to commit now
1854 [main] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_1464245290012_0129_m_000000_0' to hdfs://danskehadoop/user/user123/oozie-oozi/0000013-160527101253015-oozie-oozi-W/JavaAction--java/output/_temporary/1/task_1464245290012_0129_m_000000
1909 [main] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1464245290012_0129_m_000000_0' done.
But in reality the Java main does not complete its execution correctly (and does not execute the needed queries), because the JDBC connection fails with an exception that I can see only in the Hive log:
ERROR [HiveServer2-Handler-Pool: Thread-78363]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I'm already connected to the cluster and have also done a kinit with my username.
Does anybody know what the cause of this exception could be?
Thanks in advance for the help!
Antonio

This happened to me on the MapR Hadoop distribution.
In my case it was Keepalived checking the Hive port every 5 seconds and producing this error. I had simply used the "nc" command to check whether the Hive port was in use, which does not use any authentication. Later I switched to the "maprcli" command, which uses SASL authentication, and the error was gone.
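For the Kerberized JDBC connection in the question above, here is a minimal hedged sketch of logging in from a keytab and opening the connection inside the authenticated context, assuming the hive-jdbc driver and hadoop-common are on the classpath; the keytab path and principal are placeholders, and note that Kerberos principals normally separate the realm with "@" rather than "#". This is a sketch of the usual UserGroupInformation pattern, not a confirmed fix for this particular setup.

import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosHiveJdbc {
    public static void main(String[] args) throws Exception {
        // Tell the Hadoop security layer to use Kerberos.
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in from a keytab instead of relying on a ticket cache being
        // available inside the Oozie launcher (placeholder principal/keytab).
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "user123@COMPANYNET.NET", "/path/to/user123.keytab");

        // Open the JDBC connection inside the authenticated context.
        Connection con = ugi.doAs((PrivilegedExceptionAction<Connection>) () ->
                DriverManager.getConnection(
                        "jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET"));
        System.out.println("Connected: " + !con.isClosed());
        con.close();
    }
}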

Related

X-Ray daemon doesn't receive any data from Envoy

I have a service running a task definition with three containers:
service itself
envoy
x-ray daemon
I want to trace and monitor how my services interact with each other using X-Ray.
But I don't see any data in X-Ray.
I can see the request logs and everything else in the Envoy logs, but there are no error messages about a missing connection to the X-Ray daemon.
The Envoy container has three env variables:
APPMESH_VIRTUAL_NODE_NAME = mesh/mesh-name/virtualNode/service-virtual-node
ENABLE_ENVOY_XRAY_TRACING = 1
ENVOY_LOG_LEVEL = trace
The x-ray daemon is pretty plain and has just a name and an image (amazon/aws-xray-daemon:1).
But when looking at the logs of the x-ray daemon, there is only the following:
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Initializing AWS X-Ray daemon 3.0.0
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Using buffer memory limit of 76 MB
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] 1216 segment buffers allocated
2022-05-31T14:48:05.051+02:00 2022-05-31T12:48:05Z [Info] Using region: eu-central-1
2022-05-31T14:48:05.788+02:00 2022-05-31T12:48:05Z [Error] Get instance id metadata failed: RequestError: send request failed
2022-05-31T14:48:05.788+02:00 caused by: Get http://169.254.169.254/latest/meta-data/instance-id: dial tcp xxx.xxx.xxx.254:80: connect: invalid argument
2022-05-31T14:48:05.789+02:00 2022-05-31T12:48:05Z [Info] Starting proxy http server on 127.0.0.1:2000
As far as I read, the error you can see in these logs doesn't affect the functionality (https://repost.aws/questions/QUr6JJxyeLRUK5M4tadg944w).
I'm pretty sure I'm missing a configuration or access right.
It's already running on staging, but I set that up several weeks ago and I can't find any differences between the configurations.
Thanks in advance!
In my case, I made a copy-paste mistake by copying a trailing line break into the name of the environment variable ENABLE_ENVOY_XRAY_TRACING, which wasn't visible in the overview and only showed up inside the text field.
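As a hedged diagnostic sketch related to the cause above (the class name is hypothetical and this is not part of any AWS tooling): printing each environment variable name between visible delimiters makes a stray trailing newline or space obvious.

import java.util.Map;

public class EnvNameCheck {
    public static void main(String[] args) {
        // Print each environment variable name between brackets so trailing
        // whitespace or line breaks become visible.
        for (Map.Entry<String, String> e : System.getenv().entrySet()) {
            String name = e.getKey();
            String flag = name.equals(name.trim()) ? "" : "  <-- has leading/trailing whitespace";
            System.out.println("[" + name + "]" + flag);
        }
    }
}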

hive metastore read timeout

We are using Hive 2.3.3 along with Hadoop 2.7.7 and Spark 2.4.4.
We are using MariaDB as a backend database for Metastore.
The startup of the Metastore service is fine, and I am able to access Hive CLI and perform query operations. I am not starting up HS2 (to try to isolate the issue).
But after a while, all of a sudden, the Hive Metastore service gets stuck and stops responding, even to simple queries like show databases or show tables.
All the tables against which we execute other queries are empty (no partitions created as of now), and this is a newly created environment.
Error from the hive.log file:
2020-10-09T18:43:56,971 DEBUG [IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser] ipc.Client: IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser: closed
2020-10-09T18:43:56,971 DEBUG [IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser] ipc.Client: IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser: stopped, remaining connections 0
2020-10-09T18:53:53,769 WARN [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 5s. getAllFunctions
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_all_functions(ThriftHiveMetastore.java:3812) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_all_functions(ThriftHiveMetastore.java:3800) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllFunctions(HiveMetaStoreClient.java:2393) ~[hive-exec-2.3.3.jar:2.3.3]
....
2020-10-09T18:53:58,777 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Closed a connection to metastore, current connections: 0
2020-10-09T18:53:58,777 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Trying to connect to metastore with URI thrift://master:9083
2020-10-09T18:53:58,779 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Opened a connection to metastore, current connections: 1
2020-10-09T18:53:58,805 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Connected to metastore.
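To isolate where the hang occurs, a hedged sketch: drive the metastore directly with the standard HiveMetaStoreClient API and an explicit client-side socket timeout. The URI is the one from the logs, the timeout and retry values are illustrative (the same keys can also be set in hive-site.xml), and this is a diagnostic probe rather than a confirmed fix.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class MetastoreProbe {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // Point the client at the metastore from the logs and set an explicit
        // client-side socket timeout and retry count (values are illustrative).
        conf.set("hive.metastore.uris", "thrift://master:9083");
        conf.set("hive.metastore.client.socket.timeout", "600s");
        conf.set("hive.metastore.failure.retries", "3");

        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        System.out.println("Databases: " + client.getAllDatabases());
        client.close();
    }
}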

ActiveMQ Master/Slave on Weblogic - vm transport issue

I am trying to configure an ActiveMQ master/slave setup on a single WebLogic machine. The problem is that when I start Managed Server1 it successfully connects to the vm transport and everything works perfectly, but when I start Managed Server2 I receive the following errors in the broker logs:
INFO 2016-September-27 10:08:00,227 ActiveMQEndpointWorker:124 - Connection attempt already in progress, ignoring connection exception
INFO 2016-September-27 10:08:01,161 TransportConnector:260 - Connector vm://localhost started
INFO 2016-September-27 10:08:30,228 TransportConnector:291 - Connector vm://localhost stopped
INFO 2016-September-27 10:08:30,229 TransportConnector:260 - Connector vm://localhost started
WARN 2016-September-27 10:08:30,228 ActiveMQManagedConnection:385 - Connection failed: javax.jms.JMSException: peer (vm://localhost#61) stopped.
WARN 2016-September-27 10:08:30,231 TransportConnection:823 - Failed to add Connection ID:ndl-wls-300.mydomain.com-52251-1474966937425-65:1 due to java.lang.NullPointerException
ERROR 2016-September-27 10:08:30,233 ActiveMQEndpointWorker:183 - Failed to connect to broker [vm://localhost?create=false]: java.lang.NullPointerException
javax.jms.JMSException: java.lang.NullPointerException
Please help, I am stuck with this.
I still don't see the reason for the slave within the same VM. I suggest you reach out to an ActiveMQ expert consultant to validate your architecture.
However, I think I can help you move a little closer to resolving this issue:
There is a fundamental misunderstanding here: the vm URL breaks down like this:
vm://${brokerName}?option=value,etc
The first time, with vm://localhost?create=true, you create a broker.
The second time, referencing vm://localhost?create=false, you create a client connection to that first broker.
To get two brokers, you'd need two different vm://${brokerName}?create=true URIs, as sketched below.
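A minimal hedged sketch of that point, assuming the plain ActiveMQ client classes are on the classpath (the broker names brokerA and brokerB are placeholders): each distinct broker name with create=true starts its own embedded broker, while create=false only attaches a client to an existing one.

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TwoVmBrokers {
    public static void main(String[] args) throws Exception {
        // Two different broker names, so two embedded brokers are created.
        Connection a = new ActiveMQConnectionFactory("vm://brokerA?create=true").createConnection();
        Connection b = new ActiveMQConnectionFactory("vm://brokerB?create=true").createConnection();
        a.start();
        b.start();

        // This would only attach a client to the already-running brokerA:
        // new ActiveMQConnectionFactory("vm://brokerA?create=false").createConnection();

        a.close();
        b.close();
    }
}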

Spring Session, Embedded Redis Server Error

Unable to start the embedded Redis server; it's giving the following error. What could be the possible reason? I'm working on WildFly on Ubuntu. Following is the stack trace.
... 25 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'redisServer' defined in org.egov.infra.config.session.RedisHttpSessionConfiguration: Invocation of init method failed; nested exception is java.lang.RuntimeException: Can't start redis server. Check logs for details.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1566)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:116)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:606)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:462)
at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:93)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [rt.jar:1.8.0_31]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [rt.jar:1.8.0_31]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [rt.jar:1.8.0_31]
at java.lang.reflect.Constructor.newInstance(Constructor.java:408) [rt.jar:1.8.0_31]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147)
... 27 more
Caused by: java.lang.RuntimeException: Can't start redis server. Check logs for details.
at redis.embedded.AbstractRedisInstance.awaitRedisServerReady(AbstractRedisInstance.java:66)
at redis.embedded.AbstractRedisInstance.start(AbstractRedisInstance.java:37)
at redis.embedded.RedisServer.start(RedisServer.java:11)
at org.egov.infra.config.redis.EmbeddedRedisServer.afterPropertiesSet(EmbeddedRedisServer.java:20)
This is a bug that has been reported here https://github.com/spring-projects/spring-session/issues/150
In my case I was running the embedded redis-server on port 1337, and that port got locked and went into a loop some time back when I was running my test cases in debug mode. After that I also started a Spring Boot app, which created another server connection on port 6379, but I failed to terminate the server running on port 1337. Since then I had been getting the exception "Can't start redis server. Check logs for details." whenever I tried to execute the test cases, since 1337 was locked. Debugging the "AbstractRedisInstance" class and the "awaitRedisServerReady" method line by line revealed "1337 Already in use", which was never logged at all. I killed the process holding that port, re-ran the test cases, and I was flying again. Hope this helps.
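A hedged illustration of that port conflict (the RedisServer API is the embedded-redis library seen in the stack trace; the port number and class name are placeholders): probing the port first surfaces an "already in use" condition up front instead of it being swallowed during startup.

import java.io.IOException;
import java.net.ServerSocket;
import redis.embedded.RedisServer;

public class EmbeddedRedisStarter {
    public static void main(String[] args) throws IOException {
        int port = 6379; // placeholder; use whatever port the tests expect
        // Probe the port first so a conflict is reported immediately instead
        // of surfacing later as "Can't start redis server".
        try (ServerSocket probe = new ServerSocket(port)) {
            // Port is free; the probe socket closes before Redis starts.
        } catch (IOException e) {
            throw new IllegalStateException("Port " + port + " is already in use", e);
        }

        RedisServer server = new RedisServer(port);
        server.start();
        // ... run tests against localhost on the chosen port ...
        server.stop();
    }
}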

HBase Master and Region servers could not be started

Hadoop is successfully running in distributed mode.
Getting the following error while starting HBase in distributed mode.
Tried everything in the hbase-site.xml configuration. No idea how to proceed with this problem.
2014-03-10 13:55:42,493 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server ip-112-11-1-111.ec2.internal/112.11.1.111:2181.
Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2014-03-10 13:55:42,494 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2014-03-10 13:55:42,594 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
2014-03-10 13:55:42,594 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
2014-03-10 13:55:42,595 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2104)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2118)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1069)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:199)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1109)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1099)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1083)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:162)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:345)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2099)
Make sure that ZooKeeper is provisioned and running as expected.
Check zoo.cfg and /etc/hosts to make sure that all zookeeper servers are reachable by the HBase master.
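A hedged sketch of such a reachability check from the client side, using the plain ZooKeeper API that is already on the HBase classpath (the quorum address below is the one from the log; the class name is a placeholder):

import org.apache.zookeeper.ZooKeeper;

public class ZkCheck {
    public static void main(String[] args) throws Exception {
        // Connect to the quorum the HBase master is trying to reach and
        // check whether the /hbase znode exists.
        ZooKeeper zk = new ZooKeeper("ip-112-11-1-111.ec2.internal:2181", 30000,
                event -> System.out.println("ZK event: " + event.getState()));
        System.out.println("/hbase exists: " + (zk.exists("/hbase", false) != null));
        zk.close();
    }
}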