We are using Hive 2.3.3 along with Hadoop 2.7.7 and Spark 2.4.4.
We are using MariaDB as the backend database for the Metastore.
The Metastore service starts up fine, and I am able to access the Hive CLI and run queries. I am not starting HS2, to try to isolate the issue.
But after a while, all of a sudden, the Hive Metastore service gets stuck and stops responding, even to simple queries such as show databases or show tables.
All the tables we run other queries against are empty (no partitions created as of now), and this environment is newly created.
Error from hive.log:
2020-10-09T18:43:56,971 DEBUG [IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser] ipc.Client: IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser: closed
2020-10-09T18:43:56,971 DEBUG [IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser] ipc.Client: IPC Client (1223050066) connection to master/10.28.66.65:8020 from hadoopuser: stopped, remaining connections 0
2020-10-09T18:53:53,769 WARN [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 5s. getAllFunctions
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_all_functions(ThriftHiveMetastore.java:3812) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_all_functions(ThriftHiveMetastore.java:3800) ~[hive-exec-2.3.3.jar:2.3.3]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllFunctions(HiveMetaStoreClient.java:2393) ~[hive-exec-2.3.3.jar:2.3.3]
....
2020-10-09T18:53:58,777 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Closed a connection to metastore, current connections: 0
2020-10-09T18:53:58,777 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Trying to connect to metastore with URI thrift://master:9083
2020-10-09T18:53:58,779 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Opened a connection to metastore, current connections: 1
2020-10-09T18:53:58,805 INFO [9edaa669-999e-41a9-a7b3-bcee9d6198f4 main] hive.metastore: Connected to metastore.
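One detail I notice: the gap between the last successful IPC activity (18:43:56) and the read timeout (18:53:53) is almost exactly 600 seconds, which I believe is the Hive 2.x default for hive.metastore.client.socket.timeout. For reference, a sketch of how that client-side timeout could be raised in hive-site.xml (the value is illustrative, and raising it would presumably only delay the symptom rather than cure whatever makes the Metastore stop responding):
<!-- Sketch of a hive-site.xml excerpt (illustrative value): raises the
     Metastore client read timeout whose 600s default matches the gap above. -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>1800s</value>
</property>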
I have installed WSO2 Identity Server on my machine, which is running Windows 10. I am trying to start the server using the command wso2server.bat --run; however, I get the error WSO2 Carbon initialization Failed. The following is the complete log from the terminal:
C:\Users\USER\Downloads\wso2is-5.4.0\bin>wso2server.bat --run
JAVA_HOME environment variable is set to C:\Program Files\Java\jdk1.8.0_152
CARBON_HOME environment variable is set to C:\Users\USER\Downloads\wso2is-5.4.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Starting WSO2 Carbon...
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Operating System : Windows 10 10.0, amd64
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Home : C:\Program Files\Java\jdk1.8.0_152\jre
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Version : 1.8.0_152
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java VM : Java HotSpot(TM) 64-Bit Server VM 25.152-b16,Oracle Corporation
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Carbon Home : C:\Users\USER\Downloads\wso2is-5.4.0
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Temp Dir : C:\Users\USER\Downloads\wso2is-5.4.0\tmp
[2017-12-21 13:57:30,161] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - User : USER, en-US, Asia/Calcutta
[2017-12-21 13:57:31,067] INFO {org.wso2.carbon.event.output.adapter.kafka.internal.ds.KafkaEventAdapterServiceDS} - Successfully deployed the Kafka output event adaptor service
[2017-12-21 13:57:31,129] INFO {org.wso2.carbon.event.processor.manager.core.internal.util.ManagementModeConfigurationLoader} - CEP started in Single node mode
[2017-12-21 13:57:32,676] INFO {org.wso2.carbon.ldap.server.configuration.LDAPConfigurationBuilder} - KDC server is disabled.
[2017-12-21 13:57:47,505] INFO {org.wso2.carbon.mex.internal.Office365SupportMexComponent} - Office365Support MexServiceComponent bundle activated successfully..
[2017-12-21 13:57:47,505] INFO {org.wso2.carbon.mex2.internal.DynamicCRMCustomMexComponent} - DynamicCRMSupport MexServiceComponent bundle activated successfully.
[2017-12-21 13:57:49,897] INFO {org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager} - LDAP connection created successfully in read-write mode
[2017-12-21 13:57:53,287] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 59ms
[2017-12-21 13:57:53,350] INFO {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent} - Registry Mode : READ-WRITE
[2017-12-21 13:57:53,365] INFO {org.wso2.carbon.attachment.mgt.server.internal.AttachmentServiceComponent} - Initialising Attachment Server
[2017-12-21 13:57:53,738] INFO {org.wso2.carbon.attachment.mgt.core.dao.impl.jpa.AbstractJPAVendorAdapter} - [Attachment-Mgt OpenJPA] DB Dictionary: h2
[2017-12-21 13:57:53,738] INFO {org.wso2.carbon.attachment.mgt.core.dao.impl.jpa.AbstractJPAVendorAdapter} - [Attachment-Mgt OpenJPA] Generate DDL Enabled.
[2017-12-21 13:57:53,988] INFO {org.wso2.carbon.identity.authenticator.x509Certificate.internal.X509CertificateServiceComponent} - X509 Certificate Servlet activated successfully..
[2017-12-21 13:57:54,738] INFO {org.wso2.carbon.attachment.mgt.server.internal.AttachmentServiceComponent} - Registering AttachmentServerService
[2017-12-21 13:57:55,738] INFO {org.wso2.carbon.bpel.core.internal.BPELServiceComponent} - Initializing BPEL Engine........
[2017-12-21 13:57:55,847] INFO {org.wso2.carbon.bpel.core.ode.integration.BPELServerImpl} - Using DAO Connection Factory class: org.apache.ode.dao.jpa.BPELDAOConnectionFactoryImpl
[2017-12-21 13:57:56,035] INFO {org.wso2.carbon.bpel.core.ode.integration.BPELServerImpl} - Registering E4X Extension...
[2017-12-21 13:57:56,035] INFO {org.wso2.carbon.bpel.core.ode.integration.BPELServerImpl} - Registering B4P Extension...
[2017-12-21 13:57:56,035] INFO {org.wso2.carbon.bpel.core.ode.integration.BPELServerImpl} - Registering B4P Filter...
[2017-12-21 13:57:56,050] INFO {org.wso2.carbon.bpel.core.ode.integration.BPELServerImpl} - Registering MBeans
[2017-12-21 13:57:56,128] INFO {org.wso2.carbon.humantask.core.internal.HumanTaskServiceComponent} - Initialising HumanTask Server
[2017-12-21 13:57:56,160] INFO {org.wso2.carbon.humantask.core.dao.jpa.AbstractJPAVendorAdapter} - [HT OpenJPA] DB Dictionary: h2
[2017-12-21 13:57:56,160] INFO {org.wso2.carbon.humantask.core.dao.jpa.AbstractJPAVendorAdapter} - [HT OpenJPA] Generate DDL Enabled.
[2017-12-21 13:57:56,191] INFO {org.wso2.carbon.humantask.core.internal.HumanTaskServiceComponent} - Registering Axis2ConfigurationContextObserver
[2017-12-21 13:57:56,191] INFO {org.wso2.carbon.humantask.core.internal.HumanTaskServiceComponent} - Registering HT related MBeans
[2017-12-21 13:57:56,206] INFO {org.wso2.carbon.humantask.core.internal.HumanTaskServiceComponent} - MXBean for Human tasks registered successfully
[2017-12-21 13:57:56,347] INFO {org.wso2.carbon.metrics.impl.util.JmxReporterBuilder} - Creating JMX reporter for Metrics with domain 'org.wso2.carbon.metrics'
[2017-12-21 13:57:56,363] INFO {org.wso2.carbon.metrics.impl.util.JDBCReporterBuilder} - Creating JDBC reporter for Metrics with source 'Lenovo-PC', data source 'jdbc/WSO2MetricsDB' and 60 seconds polling period
[2017-12-21 13:57:56,378] INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} - Started JDBC reporter for Metrics
[2017-12-21 13:57:56,378] INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} - Started JMX reporter for Metrics
[2017-12-21 13:58:43,732] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Default Embedded Solr Server Initialized
[2017-12-21 13:58:44,076] INFO {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent} - Carbon UserStoreMgtDSComponent activated successfully.
[2017-12-21 13:58:45,232] INFO {org.wso2.carbon.identity.user.store.configuration.deployer.UserStoreConfigurationDeployer} - User Store Configuration Deployer initiated.
[2017-12-21 13:58:45,232] INFO {org.wso2.carbon.identity.user.store.configuration.deployer.UserStoreConfigurationDeployer} - User Store Configuration Deployer initiated.
[2017-12-21 13:58:45,263] INFO {org.wso2.carbon.bpel.deployer.BPELDeployer} - Initializing BPEL Deployer for tenant -1234.
[2017-12-21 13:58:45,263] INFO {org.wso2.carbon.humantask.deployer.HumanTaskDeployer} - Initializing HumanTask Deployer for tenant -1234.
[2017-12-21 13:58:46,935] FATAL {org.wso2.carbon.core.init.CarbonServerManager} - WSO2 Carbon initialization Failed
org.apache.axiom.om.OMException: com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 middle byte 0x3f (at char #2621, byte #-1)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:296)
at org.apache.axiom.om.impl.llom.OMDocumentImpl.getOMDocumentElement(OMDocumentImpl.java:109)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.getDocumentElement(StAXOMBuilder.java:570)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.getDocumentElement(StAXOMBuilder.java:566)
at org.apache.axis2.util.XMLUtils.toOM(XMLUtils.java:592)
at org.apache.axis2.util.XMLUtils.toOM(XMLUtils.java:575)
at org.apache.axis2.deployment.DescriptionBuilder.buildOM(DescriptionBuilder.java:97)
at org.apache.axis2.deployment.AxisConfigBuilder.populateConfig(AxisConfigBuilder.java:91)
at org.apache.axis2.deployment.DeploymentEngine.populateAxisConfiguration(DeploymentEngine.java:887)
at org.apache.axis2.deployment.FileSystemConfigurator.getAxisConfiguration(FileSystemConfigurator.java:116)
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContext(ConfigurationContextFactory.java:64)
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContextFromFileSystem(ConfigurationContextFactory.java:210)
at org.wso2.carbon.core.init.CarbonServerManager.getClientConfigurationContext(CarbonServerManager.java:573)
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:458)
at org.wso2.carbon.core.init.CarbonServerManager.removePendingItem(CarbonServerManager.java:291)
at org.wso2.carbon.core.init.PreAxis2ConfigItemListener.bundleChanged(PreAxis2ConfigItemListener.java:118)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:847)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 middle byte 0x3f (at char #2621, byte #-1)
at com.ctc.wstx.sr.StreamScanner.constructFromIOE(StreamScanner.java:625)
at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:997)
at com.ctc.wstx.sr.StreamScanner.getNext(StreamScanner.java:754)
at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2000)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1134)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:681)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
... 18 more
Caused by: java.io.CharConversionException: Invalid UTF-8 middle byte 0x3f (at char #2621, byte #-1)
at com.ctc.wstx.io.UTF8Reader.reportInvalidOther(UTF8Reader.java:314)
at com.ctc.wstx.io.UTF8Reader.read(UTF8Reader.java:212)
at com.ctc.wstx.io.ReaderSource.readInto(ReaderSource.java:87)
at com.ctc.wstx.io.BranchingReaderSource.readInto(BranchingReaderSource.java:57)
at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:991)
... 23 more
I referred to the following question: wso2 app server (carbon) startup error; however, that did not help me much. Please advise me on how I should run the WSO2 server.
Try the following:
1. Stop the WSO2 server.
2. Add -Dfile.encoding=UTF8 under CMD_LINE_ARGS in the wso2server.bat file (see the sketch after these steps).
3. Restart the server.
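For illustration, here is roughly what the change could look like (a sketch only; the existing CMD_LINE_ARGS line differs between releases, so one safe way is to append the flag to whatever is already set):
rem Sketch of an excerpt of %CARBON_HOME%\bin\wso2server.bat:
rem append -Dfile.encoding=UTF8 to the existing JVM argument list.
set CMD_LINE_ARGS=%CMD_LINE_ARGS% -Dfile.encoding=UTF8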
Also, note that JDK 8u152 has a known gzip bug which causes failures in WSO2 products. Use 8u144 instead.
I am having difficulty starting the Thrift server for Spark SQL on port 10010 in a Google Cloud Dataproc cluster. It fails with the following error; I tried changing the port number, but still no luck. Can anyone help, please?
sudo -u spark HIVE_SERVER2_THRIFT_PORT=10010 /usr/lib/spark/sbin/start-thriftserver.sh
Here is the log:
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:ThriftBinaryCLIService is started.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:HiveServer2 is started.
16/11/30 23:47:16 ERROR org.apache.hive.service.cli.thrift.ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10002.
at org.apache.thrift.transport.TServerSocket.&lt;init&gt;(TServerSocket.java:109)
at org.apache.thrift.transport.TServerSocket.&lt;init&gt;(TServerSocket.java:91)
at org.apache.thrift.transport.TServerSocket.&lt;init&gt;(TServerSocket.java:87)
at org.apache.hive.service.auth.HiveAuthFactory.getServerSocket(HiveAuthFactory.java:241)
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:66)
at java.lang.Thread.run(Thread.java:745)
16/11/30 23:47:16 INFO org.apache.hive.service.server.HiveServer2: Shutting down HiveServer2
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:ThriftBinaryCLIService is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:OperationManager is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:SessionManager is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:CLIService is stopped.
16/11/30 23:47:16 INFO org.apache.hive.service.AbstractService: Service:HiveServer2 is stopped
I'm not exactly sure why that did not work, but I would recommend running apt-get install spark-thriftserver instead.
The server is by default configured to come up on port 10002 (as it attempted to do in your case), but you can change that in spark-env.sh.
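For example, a sketch of that change (on Dataproc the file typically lives at /usr/lib/spark/conf/spark-env.sh; the variable is the same one you used on the command line):
# Sketch of a spark-env.sh excerpt: move the Spark Thrift server
# off its default port 10002.
export HIVE_SERVER2_THRIFT_PORT=10010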
It's also worth noting that a Thrift Server with an AppMaster and executor can fill a small cluster, or be blocked out by another small Spark job.
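If capacity is the concern, start-thriftserver.sh forwards ordinary spark-submit options, so you could cap the server's footprint; a sketch with illustrative values:
# Sketch of a resource-capped launch; the flags are standard
# spark-submit options forwarded by start-thriftserver.sh.
sudo -u spark /usr/lib/spark/sbin/start-thriftserver.sh \
  --num-executors 1 --executor-memory 1g --driver-memory 512m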
I tried to run a Spark 1.6.0 (spark-1.6.0-bin-hadoop2.6) program in local mode using IntelliJ IDEA. It fails with the error below. (The Chinese text in the stack trace, 无法指定被请求的地址, means "Cannot assign requested address".)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/09/17 16:18:25 INFO SparkContext: Running Spark version 1.6.0
16/09/17 16:18:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/17 16:18:25 INFO SecurityManager: Changing view acls to: ron
16/09/17 16:18:25 INFO SecurityManager: Changing modify acls to: ron
16/09/17 16:18:25 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ron); users with modify permissions: Set(ron)
16/09/17 16:18:26 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/17 16:18:26 ERROR SparkContext: Error initializing SparkContext.
java.net.BindException: 无法指定被请求的地址 (Cannot assign requested address): Service 'sparkDriver' failed after 16 retries!
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/09/17 16:18:26 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.net.BindException: 无法指定被请求的地址 (Cannot assign requested address): Service 'sparkDriver' failed after 16 retries!
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
1. Get your hostname by using the "hostname" command.
2. Make an entry in the /etc/hosts file for your hostname, if one is not present, as follows:
127.0.0.1 your_hostname
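Alternatively, the bind address can be pinned in the application itself instead of editing /etc/hosts. A minimal Java sketch for Spark 1.6 (class and app names are placeholders; spark.driver.host is the standard property the 'sparkDriver' service binds with):
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalDriverExample {
    public static void main(String[] args) {
        // Pin the driver host so the 'sparkDriver' service binds to a
        // resolvable local address instead of an unresolvable hostname.
        SparkConf conf = new SparkConf()
                .setAppName("LocalDriverExample") // placeholder app name
                .setMaster("local[*]")            // local mode, as in the question
                .set("spark.driver.host", "127.0.0.1");
        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println(sc.parallelize(Arrays.asList(1, 2, 3)).count());
        sc.stop();
    }
}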
We have a Kerberised cluster, and I'm trying to run a Java action in Oozie in which I make a JDBC connection to Hive. This JDBC connection works fine on the Sandbox without Kerberos.
The connection string is as simple as the following, where I'm providing the username and password in it:
Connection con = DriverManager.getConnection("jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net#COMPANYNET.NET","user123","passw123");
The Oozie action (strangely) completes successfully, and the Java action log does not show any error:
1742 [main] INFO org.apache.hive.jdbc.Utils - Supplied authorities: W12345:10000
1742 [main] INFO org.apache.hive.jdbc.Utils - Resolved authority: W12345:10000
1766 [main] INFO org.apache.hive.jdbc.HiveConnection - Will try to open client transport with JDBC Uri: jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net#COMPANYNET.NET
<<< Invocation of Main class completed <<<
Oozie Launcher ends
1785 [main] INFO org.apache.hadoop.mapred.Task - Task:attempt_1464245290012_0129_m_000000_0 is done. And is in the process of committing
1847 [main] INFO org.apache.hadoop.mapred.Task - Task attempt_1464245290012_0129_m_000000_0 is allowed to commit now
1854 [main] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_1464245290012_0129_m_000000_0' to hdfs://danskehadoop/user/user123/oozie-oozi/0000013-160527101253015-oozie-oozi-W/JavaAction--java/output/_temporary/1/task_1464245290012_0129_m_000000
1909 [main] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1464245290012_0129_m_000000_0' done.
But in reality the Java main does not complete its execution correctly (and does not run the needed queries), because the JDBC connection fails with an exception that I can see only in the Hive log:
ERROR [HiveServer2-Handler-Pool: Thread-78363]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I'm actually connected to the cluster and have already done a further kinit for my username.
Does anybody know what the cause of this exception could be?
Thanks in advance for the help!
Antonio
This happened to me on the MapR Hadoop distribution.
In my case it was Keepalived checking the Hive port every 5 seconds and producing this error: I had simply used the "nc" command to check whether the Hive port was in use, without any authentication. Later I switched to the "maprcli" command, which uses SASL authentication, and the error was gone.
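For reference, a sketch of that kind of unauthenticated probe (host and port are placeholders): nc in zero-I/O mode opens the TCP connection and closes it without sending any SASL bytes, which is exactly what HiveServer2 logs as "No data or no sasl data in the stream".
# Keepalived-style health check (illustrative): -z opens and closes the
# connection without sending data; -w 2 sets a 2-second timeout.
nc -z -w 2 hiveserver2.example.com 10000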