Unable to start spark thrift server in google-cloud-dataproc - apache-spark-sql

I am having difficulties starting the Thrift server for Spark SQL on port 10010 in a Google Cloud Dataproc cluster. It fails with the following error. I tried changing the port number, but still no luck. Can anyone help?
sudo -u spark HIVE_SERVER2_THRIFT_PORT=10010 /usr/lib/spark/sbin/start-thriftserver.sh
Here is the log:
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:ThriftBinaryCLIService is started.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:HiveServer2 is started.
16/11/30 23:47:16 ERROR org.apache.hive.service.cli.thrift.ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10002.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:109)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:91)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:87)
at org.apache.hive.service.auth.HiveAuthFactory.getServerSocket(HiveAuthFactory.java:241)
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:66)
at java.lang.Thread.run(Thread.java:745)
16/11/30 23:47:16 INFO org.apache.hive.service.server.HiveServer2: Shutting down HiveServer2
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:ThriftBinaryCLIService is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:OperationManager is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:SessionManager is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:CLIService is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:HiveServer2 is stopped

I'm not exactly sure why that did not work, but I would recommend running
apt-get install spark-thriftserver instead.
The server is by default configured to come up on port 10002 (as it attempted to do in your case), but you can change that in spark-env.sh.
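For example, a minimal sketch (the spark-env.sh location varies by image; /etc/spark/conf and /usr/lib/spark/conf are common, and the start script picks up the same HIVE_SERVER2_THRIFT_PORT environment variable your command already sets):
# spark-env.sh
export HIVE_SERVER2_THRIFT_PORT=10010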
It's also worth noting that a Thrift Server with an AppMaster and executor can fill a small cluster, or be blocked out by another small Spark job.

Related

server.HiveServer2: Error starting priviledge synchonizer

Hive version 3.1.2.
Hadoop components (HDFS/YARN/job history server) are set up with Kerberos authentication.
hive kerberos config:
hive.server2.authentication=KERBEROS
hive.server2.authentication.kerberos.principal=hiveserver2/_HOST@BDP.COM
hive.server2.authentication.kerberos.keytab=/etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
hive.metastore.sasl.enabled=true
hive.metastore.kerberos.keytab.file=/etc/kerberos/hadoop/metastore.bdp-05.keytab
hive.metastore.kerberos.principal=metastore/_HOST@BDP.COM
First, start the Metastore:
./bin/hive --service metastore > /dev/null &
Nothing abnormal in the log.
Then start hiveserver2 :
./bin/hive --service hiveserver2 > /dev/null &
Here are the startup logs:
2020-12-30T11:28:48,746 INFO [main] server.HiveServer2: Starting HiveServer2
2020-12-30T11:28:49,168 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05#BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:49,171 INFO [main] cli.CLIService: SPNego httpUGI not created, spNegoPrincipal: , ketabFile:
2020-12-30T11:28:49,187 INFO [main] SessionState: Hive Session ID = 0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,052 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,066 INFO [main] session.SessionState: Created local directory: /tmp/hive/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,069 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49/_tmp_space.db
2020-12-30T11:28:50,600 INFO [main] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://bigdata-server-05:9083
2020-12-30T11:28:50,605 INFO [main] metastore.HiveMetaStoreClient: HMSC::open(): Could not find delegation token. Creating KERBEROS-based thrift connection.
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Connected to metastore.
2020-12-30T11:28:50,654 INFO [main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hiveserver2/bigdata-server-05#BDP.COM (auth:KERBEROS) retries=1 delay=1 lifetime=0
2020-12-30T11:28:50,781 INFO [main] service.CompositeService: Operation log root directory is created: /tmp/hive/operation_logs
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread pool size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread wait queue size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread keepalive time: 10 seconds
2020-12-30T11:28:50,784 INFO [main] service.CompositeService: Connections limit are user: 0 ipaddress: 0 user-ipaddress: 0
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:OperationManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:SessionManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:CLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:HiveServer2 is inited.
2020-12-30T11:28:50,835 INFO [pool-7-thread-1] SessionState: Hive Session ID = 693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,838 INFO [main] results.QueryResultsCache: Initializing query results cache at /tmp/hive/_resultscache_
2020-12-30T11:28:50,844 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,844 INFO [main] results.QueryResultsCache: Query results cache: cacheDirectory /tmp/hive/_resultscache_/results-23ae949b-6894-4a17-8141-0eacf5fe5a63, maxCacheSize 2147483648, maxEntrySize 10485760, maxEntryLifetime 3600000
2020-12-30T11:28:50,846 INFO [pool-7-thread-1] session.SessionState: Created local directory: /tmp/hive/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,849 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4/_tmp_space.db
2020-12-30T11:28:50,861 INFO [main] events.NotificationEventPoll: Initializing lastCheckedEventId to 0
2020-12-30T11:28:50,862 INFO [main] server.HiveServer2: Starting Web UI on port 10002
2020-12-30T11:28:50,885 INFO [pool-7-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
2020-12-30T11:28:50,894 INFO [main] util.log: Logging initialized #4380ms
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:OperationManager is started.
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:SessionManager is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:CLIService is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is started.
2020-12-30T11:28:51,013 WARN [main] security.HadoopThriftAuthBridge: Client-facing principal not set. Using server-side setting: hiveserver2/_HOST#BDP.COM
2020-12-30T11:28:51,013 INFO [main] security.HadoopThriftAuthBridge: Logging in via CLIENT based principal
2020-12-30T11:28:51,019 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05#BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,019 INFO [main] security.HadoopThriftAuthBridge: Logging in via SERVER based principal
2020-12-30T11:28:51,023 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05#BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,030 INFO [main] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,033 INFO [main] security.TokenStoreDelegationTokenSecretManager: New master key with key id=0
2020-12-30T11:28:51,034 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: New master key with key id=1
2020-12-30T11:28:51,040 INFO [main] thrift.ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
2020-12-30T11:28:51,040 INFO [main] service.AbstractService: Service:HiveServer2 is started.
2020-12-30T11:28:51,041 ERROR [main] server.HiveServer2: Error starting priviledge synchonizer:
java.lang.NullPointerException: null
at org.apache.hive.service.server.HiveServer2.startPrivilegeSynchonizer(HiveServer2.java:985) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:726) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1037) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) [hive-service-3.1.2.jar:3.1.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_271]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_271]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_271]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_271]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.3.jar:?]
2020-12-30T11:28:51,044 INFO [main] server.HiveServer2: Shutting down HiveServer2
In my case, hiveserver2-site.xml was created by Apache Ranger when I turned the ranger-hive-plugin on. Once I disabled the ranger-hive-plugin, hiveserver2-site.xml was edited by Ranger again.
Here are the remaining configurations:
<configuration>
<property>
<name>hive.security.authorization.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.security.authorization.manager</name>
<value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider</value>
</property>
<property>
<name>hive.security.authenticator.manager</name>
<value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
</property>
<property>
<name>hive.conf.restricted.list</name>
<value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
</property>
</configuration>
Starting HiveServer2 with this file in place hits the previous exception.
Removing hiveserver2-site.xml makes it work fine.
I don't know why. Can somebody explain?
Is this still relevant? If yes, check the logs: you should see that it tries to connect to ZooKeeper. If no quorum is configured, it will try to connect to localhost:2181, so either ZooKeeper must be running there or the ZooKeeper quorum servers should be configured, for example:
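A hive-site.xml fragment along these lines (host names are placeholders for your own quorum) points HiveServer2 at an existing ZooKeeper ensemble:
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>
Alternatively, hive.privilege.synchronizer can reportedly be set to false in Hive 3.x to skip the privilege synchronizer entirely; verify that against your Hive version before relying on it.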

Websockets in HiveMQ not working - nothing in the log

Problem solved: see the bottom of this post.
I can't connect via websockets. Port 1883 works fine.
This is the output from MQTT.fx:
2017-01-21 07:46:26,293 INFO --- BrokerConnectorController : onConnect
2017-01-21 07:46:26,294 INFO --- ScriptsController : Clear console.
2017-01-21 07:46:26,295 INFO --- MqttFX ClientModel : MqttClient with ID MQTT_FX_Client_Websocket assigned.
2017-01-21 07:46:36,314 ERROR --- MqttFX ClientModel : Error when connecting
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:146) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_112]
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(Unknown Source) ~[?:1.8.0_112]
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:65) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:107) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
... 1 more
2017-01-21 07:46:36,315 ERROR --- MqttFX ClientModel:Please verify your Settings (e.g. Broker Address, Broker Port & Client ID) and the user credentials!
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:146) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_112]
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(Unknown Source) ~[?:1.8.0_112]
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:65) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:107) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
... 1 more
2017-01-21 07:46:36,321 INFO --- ScriptsController : Clear console.
2017-01-21 07:46:36,322 ERROR --- BrokerConnectService : Connection lost
I ran a telnet test against the server and port and got a blank terminal.
I guess that means there is a connection, because otherwise I would get a "connect failed". The message log plugin shows nothing, and there is nothing in the log file.
It's Debian and HiveMQ 3.2.2.
JAVA_OPTS: -Djava.net.preferIPv4Stack=true -XX:-UseSplitVerifier -noverify -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/hivemq/heap-dump.hprof
-------------------------------------------------------------------------
2017-01-21 07:37:02,159 INFO - Starting HiveMQ Server
2017-01-21 07:37:02,176 INFO - HiveMQ version: 3.2.2
2017-01-21 07:37:02,178 INFO - HiveMQ home directory: /opt/hivemq
2017-01-21 07:37:02,226 INFO - Log Configuration was overridden by /opt/hivemq/conf/logback.xml
2017-01-21 07:37:12,013 INFO - Loaded Plugin HiveMQ JMX Metrics Reporting Plugin - v3.0.0
2017-01-21 07:37:12,014 INFO - Loaded Plugin HiveMQ MQTT Message Log Plugin - v3.0.0
2017-01-21 07:37:12,014 INFO - Loaded Plugin HiveMQ Sys Topic Plugin - v3.0.0
2017-01-21 07:37:12,014 INFO - Loaded Plugin HiveMQ JVM Metrics Plugin - v3.1.0
2017-01-21 07:37:12,038 INFO - JMX Metrics Reporting started.
2017-01-21 07:37:12,099 INFO - JMX Metrics Reporting started.
2017-01-21 07:37:12,109 INFO - Starting TCP listener on address 192.168.0.12 and port 1883
2017-01-21 07:37:12,131 INFO - Starting Websocket listener on address 192.168.0.12 and port 8000
2017-01-21 07:37:12,139 INFO - Started TCP Listener on address 192.168.0.12 and on port 1883
2017-01-21 07:37:12,139 INFO - Started Websocket Listener on address 192.168.0.12 and on port 8000
2017-01-21 07:37:12,140 INFO - Started HiveMQ in 9967ms
2017-01-21 07:37:12,142 INFO - No valid license file found. Using evaluation license, restricted to 25 connections.
EDIT
OK, it's an Nginx problem, because my JavaScript site works without SSL.
It should be possible to use SSL for HTTP and keep the websockets unencrypted, as I can see here:
Nginx MQTT websocket proxy 1
Nginx MQTT websocket proxy 2
I tried their configuration but had no luck.
Solution:
There is no need for an Nginx proxy with HiveMQ.
The problem is that Firefox does not accept the self-signed certificate for websockets. Setting "network.websocket.allowInsecureFromHTTPS" to true makes it work. In Chrome you get a message about the JavaScript security issue and you can accept it. Since I only use Firefox, where there is no such message, it took hours to find out what was wrong. The Paho onFailure callback didn't fire either.
Unfortunately, websocket support has not been implemented in MQTT.fx yet.
You will have to use a different MQTT client if you insist on connecting via websockets.
Cheers,
Florian, from the HiveMQ Team.

ActiveMQ Master/Slave on Weblogic - vm transport issue

I am trying to configure an ActiveMQ master/slave setup on a single WebLogic machine. The problem is that when I start Managed Server1 it successfully connects to the vm transport and everything works perfectly, but when I start Managed Server2 I receive the following errors in the broker logs:
INFO 2016-September-27 10:08:00,227 ActiveMQEndpointWorker:124 - Connection attempt already in progress, ignoring connection exception
INFO 2016-September-27 10:08:01,161 TransportConnector:260 - Connector vm://localhost started
INFO 2016-September-27 10:08:30,228 TransportConnector:291 - Connector vm://localhost stopped
INFO 2016-September-27 10:08:30,229 TransportConnector:260 - Connector vm://localhost started
WARN 2016-September-27 10:08:30,228 ActiveMQManagedConnection:385 - Connection failed: javax.jms.JMSException: peer (vm://localhost#61) stopped.
WARN 2016-September-27 10:08:30,231 TransportConnection:823 - Failed to add Connection ID:ndl-wls-300.mydomain.com-52251-1474966937425-65:1 due to java.lang.NullPointerException
ERROR 2016-September-27 10:08:30,233 ActiveMQEndpointWorker:183 - Failed to connect to broker [vm://localhost?create=false]: java.lang.NullPointerException
javax.jms.JMSException: java.lang.NullPointerException
Please help, I am stuck with this.
I still don't see the reason for the slave within the same VM. I suggest you reach out to an ActiveMQ expert consultant to validate your architecture.
However, I think I can help you move a little closer to resolving this issue:
There is a fundamental misunderstanding here: the vm URL is broken down like this:
vm://${brokerName}?option=value,etc
The first time, with vm://localhost?create=true, you have created a broker.
The second time, when you reference vm://localhost?create=false, you have created a client connection to the first broker.
To get two brokers, you'd need two different broker names, i.e. two different vm://${brokerName}?create=true URIs, as sketched below.
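As a minimal sketch in plain JMS code (the broker names brokerOne/brokerTwo and the persistent flag are only illustrative, not taken from your setup):
import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TwoVmBrokers {
    public static void main(String[] args) throws JMSException {
        // create=true with a unique broker name starts its own embedded broker
        ActiveMQConnectionFactory brokerA =
                new ActiveMQConnectionFactory("vm://brokerOne?create=true&broker.persistent=false");
        ActiveMQConnectionFactory brokerB =
                new ActiveMQConnectionFactory("vm://brokerTwo?create=true&broker.persistent=false");
        // create=false only attaches to an already running broker with that name
        ActiveMQConnectionFactory clientOfA =
                new ActiveMQConnectionFactory("vm://brokerOne?create=false");

        Connection a = brokerA.createConnection();   // starts embedded broker "brokerOne"
        Connection b = brokerB.createConnection();   // starts embedded broker "brokerTwo"
        Connection c = clientOfA.createConnection(); // client connection to the existing "brokerOne"

        c.close();
        b.close();
        a.close();
    }
}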

JDBC client to Hive - No data or no sasl data in the stream Exception

We have a Kerberised cluster and I'm trying to run a Java action in Oozie where I make a JDBC connection to Hive. This JDBC connection works fine on the Sandbox without Kerberos.
The connection string is as simple as the following, where I provide the username and password in it:
Connection con = DriverManager.getConnection("jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET","user123","passw123");
The Oozie action (strangely) completes successfully, and the Java action log does not show any error:
1742 [main] INFO org.apache.hive.jdbc.Utils - Supplied authorities: W12345:10000
1742 [main] INFO org.apache.hive.jdbc.Utils - Resolved authority: W12345:10000
1766 [main] INFO org.apache.hive.jdbc.HiveConnection - Will try to open client transport with JDBC Uri: jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET
<<< Invocation of Main class completed <<<
Oozie Launcher ends
1785 [main] INFO org.apache.hadoop.mapred.Task - Task:attempt_1464245290012_0129_m_000000_0 is done. And is in the process of committing
1847 [main] INFO org.apache.hadoop.mapred.Task - Task attempt_1464245290012_0129_m_000000_0 is allowed to commit now
1854 [main] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_1464245290012_0129_m_000000_0' to hdfs://danskehadoop/user/user123/oozie-oozi/0000013-160527101253015-oozie-oozi-W/JavaAction--java/output/_temporary/1/task_1464245290012_0129_m_000000
1909 [main] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1464245290012_0129_m_000000_0' done.
But in reality the Java main does not complete its execution correctly (and does not run the needed queries), because the JDBC connection fails with an exception that I can see only in the Hive log:
ERROR [HiveServer2-Handler-Pool: Thread-78363]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I'm actually connected to the cluster, and have already done an additional kinit for my username.
Does anybody know what could the cause of this exception be?
Thanks in advance for the help!
Antonio
This happened to me on the MapR Hadoop distribution.
In my case it was Keepalived checking the Hive port every 5 seconds and producing this error. I had simply used the "nc" command to check whether the Hive port was in use, without any authentication method. Later I switched to the "maprcli" command, which uses SASL authentication, and the error was gone.
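For reference, a probe along these lines (the host name is a placeholder) opens a bare TCP connection, sends no SASL handshake, and disconnects, which is exactly the pattern HiveServer2 logs as "No data or no sasl data in the stream":
# zero-I/O port check: connect, send nothing, disconnect
nc -z -w 2 hiveserver2-host.example.com 10000
echo $?   # 0 means the port accepted the connection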

How can I change which address Datastax agent will try to connect to?

My Cassandra instances are not listening on 127.0.0.1. When I start datastax-agent I find this in logs:
# tail -n 100 /var/log/datastax-agent/agent.log
...
ERROR [Initialization] 2015-05-19 22:35:04,064 Can't connect to Cassandra, retrying soon.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:246)
at clojurewerkz.cassaforte.client$connect_or_close.doInvoke(client.clj:149)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at clojurewerkz.cassaforte.client$connect.invoke(client.clj:165)
at opsagent.cassandra$setup_cassandra$fn__8157.invoke(cassandra.clj:344)
at again.core$with_retries_STAR_$fn__8013.invoke(core.clj:98)
at again.core$with_retries_STAR_.invoke(core.clj:97)
at opsagent.cassandra$setup_cassandra.invoke(cassandra.clj:339)
at opsagent.opsagent$setup_cassandra.invoke(opsagent.clj:153)
at opsagent.jmx$determine_ip.invoke(jmx.clj:276)
at opsagent.jmx$setup_jmx$fn__8438.invoke(jmx.clj:293)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
How can I change which address the Datastax Agent connects to? I have tried setting local_interface in the agent's address.yaml (and restarting the agent), but that doesn't seem to work.
The secret was to set rpc_address to 0.0.0.0. Credit to LHWizard for pointing this out.
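For example, the relevant cassandra.yaml lines could look like this (a sketch; 10.0.0.5 stands in for the node's real, reachable IP, which Cassandra requires once rpc_address is 0.0.0.0):
# cassandra.yaml
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.0.0.5
Restart Cassandra and then the datastax-agent for the change to take effect.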